update vendor directory

This commit is contained in:
Megan Marsh 2020-08-07 10:58:15 -07:00
parent 03220c0b94
commit ecb825ee7a
50 changed files with 4977 additions and 0 deletions

354
vendor/github.com/hashicorp/go-getter/LICENSE generated vendored Normal file
View File

@ -0,0 +1,354 @@
Mozilla Public License, version 2.0
1. Definitions
1.1. “Contributor”
means each individual or legal entity that creates, contributes to the
creation of, or owns Covered Software.
1.2. “Contributor Version”
means the combination of the Contributions of others (if any) used by a
Contributor and that particular Contributor's Contribution.
1.3. “Contribution”
means Covered Software of a particular Contributor.
1.4. “Covered Software”
means Source Code Form to which the initial Contributor has attached the
notice in Exhibit A, the Executable Form of such Source Code Form, and
Modifications of such Source Code Form, in each case including portions
thereof.
1.5. “Incompatible With Secondary Licenses”
means
a. that the initial Contributor has attached the notice described in
Exhibit B to the Covered Software; or
b. that the Covered Software was made available under the terms of version
1.1 or earlier of the License, but not also under the terms of a
Secondary License.
1.6. “Executable Form”
means any form of the work other than Source Code Form.
1.7. “Larger Work”
means a work that combines Covered Software with other material, in a separate
file or files, that is not Covered Software.
1.8. “License”
means this document.
1.9. “Licensable”
means having the right to grant, to the maximum extent possible, whether at the
time of the initial grant or subsequently, any and all of the rights conveyed by
this License.
1.10. “Modifications”
means any of the following:
a. any file in Source Code Form that results from an addition to, deletion
from, or modification of the contents of Covered Software; or
b. any new file in Source Code Form that contains any Covered Software.
1.11. “Patent Claims” of a Contributor
means any patent claim(s), including without limitation, method, process,
and apparatus claims, in any patent Licensable by such Contributor that
would be infringed, but for the grant of the License, by the making,
using, selling, offering for sale, having made, import, or transfer of
either its Contributions or its Contributor Version.
1.12. “Secondary License”
means either the GNU General Public License, Version 2.0, the GNU Lesser
General Public License, Version 2.1, the GNU Affero General Public
License, Version 3.0, or any later versions of those licenses.
1.13. “Source Code Form”
means the form of the work preferred for making modifications.
1.14. “You” (or “Your”)
means an individual or a legal entity exercising rights under this
License. For legal entities, “You” includes any entity that controls, is
controlled by, or is under common control with You. For purposes of this
definition, “control” means (a) the power, direct or indirect, to cause
the direction or management of such entity, whether by contract or
otherwise, or (b) ownership of more than fifty percent (50%) of the
outstanding shares or beneficial ownership of such entity.
2. License Grants and Conditions
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
a. under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or as
part of a Larger Work; and
b. under Patent Claims of such Contributor to make, use, sell, offer for
sale, have made, import, and otherwise transfer either its Contributions
or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution become
effective for each Contribution on the date the Contributor first distributes
such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under this
License. No additional rights or licenses will be implied from the distribution
or licensing of Covered Software under this License. Notwithstanding Section
2.1(b) above, no patent license is granted by a Contributor:
a. for any code that a Contributor has removed from Covered Software; or
b. for infringements caused by: (i) Your and any other third party's
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
c. under Patent Claims infringed by Covered Software in the absence of its
Contributions.
This License does not grant any rights in the trademarks, service marks, or
logos of any Contributor (except as may be necessary to comply with the
notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this License
(see Section 10.2) or under the terms of a Secondary License (if permitted
under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its Contributions
are its original creation(s) or it has sufficient rights to grant the
rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under applicable
copyright doctrines of fair use, fair dealing, or other equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in
Section 2.1.
3. Responsibilities
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under the
terms of this License. You must inform recipients that the Source Code Form
of the Covered Software is governed by the terms of this License, and how
they can obtain a copy of this License. You may not attempt to alter or
restrict the recipients' rights in the Source Code Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
a. such Covered Software must also be made available in Source Code Form,
as described in Section 3.1, and You must inform recipients of the
Executable Form how they can obtain a copy of such Source Code Form by
reasonable means in a timely manner, at a charge no more than the cost
of distribution to the recipient; and
b. You may distribute such Executable Form under the terms of this License,
or sublicense it under different terms, provided that the license for
the Executable Form does not attempt to limit or alter the recipients'
rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for the
Covered Software. If the Larger Work is a combination of Covered Software
with a work governed by one or more Secondary Licenses, and the Covered
Software is not Incompatible With Secondary Licenses, this License permits
You to additionally distribute such Covered Software under the terms of
such Secondary License(s), so that the recipient of the Larger Work may, at
their option, further distribute the Covered Software under the terms of
either this License or such Secondary License(s).
3.4. Notices
You may not remove or alter the substance of any license notices (including
copyright notices, patent notices, disclaimers of warranty, or limitations
of liability) contained within the Source Code Form of the Covered
Software, except that You may alter any license notices to the extent
required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on behalf
of any Contributor. You must make it absolutely clear that any such
warranty, support, indemnity, or liability obligation is offered by You
alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
4. Inability to Comply Due to Statute or Regulation
If it is impossible for You to comply with any of the terms of this License
with respect to some or all of the Covered Software due to statute, judicial
order, or regulation then You must: (a) comply with the terms of this License
to the maximum extent possible; and (b) describe the limitations and the code
they affect. Such description must be placed in a text file included with all
distributions of the Covered Software under this License. Except to the
extent prohibited by statute or regulation, such description must be
sufficiently detailed for a recipient of ordinary skill to be able to
understand it.
5. Termination
5.1. The rights granted under this License will terminate automatically if You
fail to comply with any of its terms. However, if You become compliant,
then the rights granted under this License from a particular Contributor
are reinstated (a) provisionally, unless and until such Contributor
explicitly and finally terminates Your grants, and (b) on an ongoing basis,
if such Contributor fails to notify You of the non-compliance by some
reasonable means prior to 60 days after You have come back into compliance.
Moreover, Your grants from a particular Contributor are reinstated on an
ongoing basis if such Contributor notifies You of the non-compliance by
some reasonable means, this is the first time You have received notice of
non-compliance with this License from such Contributor, and You become
compliant prior to 30 days after Your receipt of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions, counter-claims,
and cross-claims) alleging that a Contributor Version directly or
indirectly infringes any patent, then the rights granted to You by any and
all Contributors for the Covered Software under Section 2.1 of this License
shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user
license agreements (excluding distributors and resellers) which have been
validly granted by You or Your distributors under this License prior to
termination shall survive termination.
6. Disclaimer of Warranty
Covered Software is provided under this License on an “as is” basis, without
warranty of any kind, either expressed, implied, or statutory, including,
without limitation, warranties that the Covered Software is free of defects,
merchantable, fit for a particular purpose or non-infringing. The entire
risk as to the quality and performance of the Covered Software is with You.
Should any Covered Software prove defective in any respect, You (not any
Contributor) assume the cost of any necessary servicing, repair, or
correction. This disclaimer of warranty constitutes an essential part of this
License. No use of any Covered Software is authorized under this License
except under this disclaimer.
7. Limitation of Liability
Under no circumstances and under no legal theory, whether tort (including
negligence), contract, or otherwise, shall any Contributor, or anyone who
distributes Covered Software as permitted above, be liable to You for any
direct, indirect, special, incidental, or consequential damages of any
character including, without limitation, damages for lost profits, loss of
goodwill, work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses, even if such party shall have been
informed of the possibility of such damages. This limitation of liability
shall not apply to liability for death or personal injury resulting from such
party's negligence to the extent applicable law prohibits such limitation.
Some jurisdictions do not allow the exclusion or limitation of incidental or
consequential damages, so this exclusion and limitation may not apply to You.
8. Litigation
Any litigation relating to this License may be brought only in the courts of
a jurisdiction where the defendant maintains its principal place of business
and such litigation shall be governed by laws of that jurisdiction, without
reference to its conflict-of-law provisions. Nothing in this Section shall
prevent a party's ability to bring cross-claims or counter-claims.
9. Miscellaneous
This License represents the complete agreement concerning the subject matter
hereof. If any provision of this License is held to be unenforceable, such
provision shall be reformed only to the extent necessary to make it
enforceable. Any law or regulation which provides that the language of a
contract shall be construed against the drafter shall not be used to construe
this License against a Contributor.
10. Versions of the License
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version of
the License under which You originally received the Covered Software, or
under the terms of any subsequent version published by the license
steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a modified
version of this License if you rename the license and remove any
references to the name of the license steward (except to note that such
modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses
If You choose to distribute Source Code Form that is Incompatible With
Secondary Licenses under the terms of this version of the License, the
notice described in Exhibit B of this License must be attached.
Exhibit A - Source Code Form License Notice
This Source Code Form is subject to the
terms of the Mozilla Public License, v.
2.0. If a copy of the MPL was not
distributed with this file, You can
obtain one at
http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular file, then
You may include the notice in a location (such as a LICENSE file in a relevant
directory) where a recipient would be likely to look for such a notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - “Incompatible With Secondary Licenses” Notice
This Source Code Form is “Incompatible
With Secondary Licenses”, as defined by
the Mozilla Public License, v. 2.0.

362
vendor/github.com/hashicorp/go-getter/README.md generated vendored Normal file
View File

@ -0,0 +1,362 @@
# go-getter
[![CircleCI](https://circleci.com/gh/hashicorp/go-getter/tree/master.svg?style=svg)][circleci]
[![Build status](https://ci.appveyor.com/api/projects/status/ulq3qr43n62croyq/branch/master?svg=true)][appveyor]
[![Go Documentation](http://img.shields.io/badge/go-documentation-blue.svg?style=flat-square)][godocs]
[circleci]: https://circleci.com/gh/hashicorp/go-getter/tree/master
[godocs]: http://godoc.org/github.com/hashicorp/go-getter
[appveyor]: https://ci.appveyor.com/project/hashicorp/go-getter/branch/master
go-getter is a library for Go (golang) for downloading files or directories
from various sources using a URL as the primary form of input.
The power of this library is its flexibility: it can download
from a number of different sources (file paths, Git, HTTP, Mercurial, etc.)
using a single string as input. This removes the burden of knowing how to
download from a variety of sources from the implementer.
The concept of a _detector_ automatically turns invalid URLs into proper
URLs. For example: "github.com/hashicorp/go-getter" would turn into a
Git URL. Or "./foo" would turn into a file URL. These are extensible.
This library is used by [Terraform](https://terraform.io) for
downloading modules and [Nomad](https://nomadproject.io) for downloading
binaries.
## Installation and Usage
Package documentation can be found on
[GoDoc](http://godoc.org/github.com/hashicorp/go-getter).
Installation can be done with a normal `go get`:
```
$ go get github.com/hashicorp/go-getter
```
go-getter also has a command you can use to test URL strings:
```
$ go install github.com/hashicorp/go-getter/cmd/go-getter
...
$ go-getter github.com/foo/bar ./foo
...
```
The command is useful for verifying URL structures.
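Beyond the command, the library can be driven directly from Go. Below is a
minimal sketch using the `Client` type vendored in this commit; the source and
destination values are placeholders:
```
package main

import (
	"context"
	"log"

	getter "github.com/hashicorp/go-getter"
)

func main() {
	// Mirrors what the go-getter command does: run the detectors on the
	// source string, pick a getter, and download into Dst.
	client := &getter.Client{
		Ctx:  context.Background(),
		Src:  "github.com/hashicorp/go-getter", // detector turns this into a Git URL
		Dst:  "./go-getter-src",                // created if it does not exist
		Mode: getter.ClientModeDir,             // we expect a directory, not a single file
	}
	if err := client.Get(); err != nil {
		log.Fatalf("download failed: %s", err)
	}
}
```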
## URL Format
go-getter uses a single string URL as input to download from a variety of
protocols. go-getter has various "tricks" with this URL to do certain things.
This section documents the URL format.
### Supported Protocols and Detectors
**Protocols** are used to download files/directories using a specific
mechanism. Example protocols are Git and HTTP.
**Detectors** are used to transform a valid or invalid URL into another
URL if it matches a certain pattern. Example: "github.com/user/repo" is
automatically transformed into a fully valid Git URL. This allows go-getter
to be very user friendly.
go-getter out of the box supports the following protocols. Additional protocols
can be augmented at runtime by implementing the `Getter` interface.
* Local files
* Git
* Mercurial
* HTTP
* Amazon S3
* Google GCP
In addition to the above protocols, go-getter has what are called "detectors."
These take a URL and attempt to automatically choose the best protocol for
it, which might involve even changing the protocol. The following detection
is built-in by default:
* File paths such as "./foo" are automatically changed to absolute
file URLs.
* GitHub URLs, such as "github.com/mitchellh/vagrant" are automatically
changed to Git protocol over HTTP.
* BitBucket URLs, such as "bitbucket.org/mitchellh/vagrant" are automatically
changed to a Git or mercurial protocol using the BitBucket API.
### Forced Protocol
In some cases, the protocol to use is ambiguous depending on the source
URL. For example, "http://github.com/mitchellh/vagrant.git" could reference
an HTTP URL or a Git URL. Forced protocol syntax is used to disambiguate this
URL.
Forced protocol can be done by prefixing the URL with the protocol followed
by double colons. For example: `git::http://github.com/mitchellh/vagrant.git`
would download the given HTTP URL using the Git protocol.
Forced protocols will also override any detectors.
In the absence of a forced protocol, detectors may be run on the URL, transforming
the protocol anyway. The above example would've used the Git protocol either
way since the Git detector would've detected it was a GitHub URL.
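A short sketch of running the default detector chain by hand via the exported
`Detect` function; the inputs mirror the examples above, and the printed URLs
are what detection is expected to produce:
```
package main

import (
	"fmt"
	"log"
	"os"

	getter "github.com/hashicorp/go-getter"
)

func main() {
	pwd, err := os.Getwd()
	if err != nil {
		log.Fatal(err)
	}
	// Run the default detector chain on short-hand source strings.
	// "github.com/hashicorp/go-getter" should come back as a forced git:: URL;
	// "./foo" should come back as an absolute file URL.
	for _, src := range []string{"github.com/hashicorp/go-getter", "./foo"} {
		resolved, err := getter.Detect(src, pwd, getter.Detectors)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s -> %s\n", src, resolved)
	}
}
```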
### Protocol-Specific Options
Each protocol can support protocol-specific options to configure that
protocol. For example, the `git` protocol supports specifying a `ref`
query parameter that tells it what ref to checkout for that Git
repository.
The options are specified as query parameters on the URL (or URL-like string)
given to go-getter. Using the Git example above, the URL below is a valid
input to go-getter:
github.com/hashicorp/go-getter?ref=abcd1234
The protocol-specific options are documented below the URL format
section. But because they are part of the URL, we point it out here so
you know they exist.
### Subdirectories
If you want to download only a specific subdirectory from a downloaded
directory, you can specify a subdirectory after a double-slash `//`.
go-getter will first download the URL specified _before_ the double-slash
(as if you didn't specify a double-slash), but will then copy the
path after the double slash into the target directory.
For example, if you're downloading this GitHub repository, but you only
want to download the `testdata` directory, you can do the following:
```
https://github.com/hashicorp/go-getter.git//testdata
```
If you downloaded this to the `/tmp` directory, then the file
`/tmp/archive.gz` would exist. Notice that this file is in the `testdata`
directory in this repository, but because we specified a subdirectory,
go-getter automatically copied only that directory's contents.
Subdirectory paths may also use filesystem glob patterns.
The path must match _exactly one_ entry or go-getter will return an error.
This is useful if you're not sure of the exact directory name but it follows
a predictable naming structure.
For example, the following URL would also work:
```
https://github.com/hashicorp/go-getter.git//test-*
```
### Checksumming
For file downloads of any protocol, go-getter can automatically verify
a checksum for you. Note that checksumming only works for downloading files,
not directories, but checksumming will work for any protocol.
To checksum a file, append a `checksum` query parameter to the URL. go-getter
will parse out this query parameter automatically and use it to verify the
checksum. The parameter value can be in the format of `type:value` or just
`value`, where type is "md5", "sha1", "sha256", "sha512" or "file". The
"value" should be the actual checksum value or the download URL for "file". When
the `type` part is omitted, the type will be guessed based on the length of the
checksum string. Examples:
```
./foo.txt?checksum=md5:b7d96c89d09d9e204f5fedc4d5d55b21
```
```
./foo.txt?checksum=b7d96c89d09d9e204f5fedc4d5d55b21
```
```
./foo.txt?checksum=file:./foo.txt.sha256sum
```
When checksumming from a file - e.g. with `checksum=file:url` - go-getter will
fetch the file linked after `file:` using the same configuration. For
example, with `file:http://releases.ubuntu.com/cosmic/MD5SUMS` go-getter will
download the checksum file from that URL using the HTTP protocol.
All protocols supported by go-getter can be used. The checksum file is
downloaded to a temporary file and then parsed. The destination of the temporary
file can be changed by setting system-specific environment variables: `TMPDIR`
on Unix; `TMP`, `TEMP` or `USERPROFILE` on Windows. See the godoc for
[os.TempDir](https://golang.org/pkg/os/#TempDir) for more information on
temporary directory selection. The checksum file's contents are expected to be in
BSD or GNU style. Once go-getter is done with the checksum file, it is deleted.
The checksum query parameter is never sent to the backend protocol
implementation. It is used at a higher level by go-getter itself.
If the destination file exists and the checksums match, the download
will be skipped.
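A hedged sketch of a checksummed file download through `Client`; the URLs are
placeholders, and the `checksum=file:` parameter points at a hypothetical
MD5SUMS file:
```
package main

import (
	"log"

	getter "github.com/hashicorp/go-getter"
)

func main() {
	// The checksum parameter is stripped before the HTTP getter runs; if
	// ./ubuntu.iso already exists and its checksum matches, the download
	// is skipped entirely.
	client := &getter.Client{
		Src:  "https://example.com/ubuntu.iso?checksum=file:https://example.com/MD5SUMS",
		Dst:  "./ubuntu.iso",
		Mode: getter.ClientModeFile,
	}
	if err := client.Get(); err != nil {
		log.Fatalf("download or checksum verification failed: %s", err)
	}
}
```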
### Unarchiving
go-getter will automatically unarchive files into a file or directory
based on the extension of the file being requested (over any protocol).
This works for both file and directory downloads.
go-getter looks for an `archive` query parameter to specify the format of
the archive. If this isn't specified, go-getter will use the extension of
the path to see if it appears archived. Unarchiving can be explicitly
disabled by setting the `archive` query parameter to `false`.
The following archive formats are supported:
* `tar.gz` and `tgz`
* `tar.bz2` and `tbz2`
* `tar.xz` and `txz`
* `zip`
* `gz`
* `bz2`
* `xz`
For example:
```
./foo.zip
```
This will automatically be inferred to be a ZIP file and will be extracted.
You can also be explicit about the archive type:
```
./some/other/path?archive=zip
```
And finally, you can disable archiving completely:
```
./some/path?archive=false
```
You can combine unarchiving with the other features of go-getter such
as checksumming. The special `archive` query parameter will be removed
from the URL before going to the final protocol downloader.
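A sketch of narrowing which archive formats a client will extract by supplying
its own `Decompressors` map; the URL is a placeholder, and anything whose
extension is not in the map is downloaded without unarchiving:
```
package main

import (
	"log"

	getter "github.com/hashicorp/go-getter"
)

func main() {
	// Only allow zip extraction for this client; any other extension is
	// downloaded as-is because it has no matching Decompressor.
	client := &getter.Client{
		Src:  "https://example.com/release.zip", // inferred as a zip from the extension
		Dst:  "./release",
		Mode: getter.ClientModeDir,
		Decompressors: map[string]getter.Decompressor{
			"zip": new(getter.ZipDecompressor),
		},
	}
	if err := client.Get(); err != nil {
		log.Fatalf("download failed: %s", err)
	}
}
```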
## Protocol-Specific Options
This section documents the protocol-specific options that can be specified for
go-getter. These options should be appended to the input as normal query
parameters ([HTTP headers](#headers) are an exception to this, however).
Depending on the usage of go-getter, applications may provide alternate ways of
inputting options. For example, [Nomad](https://www.nomadproject.io) provides a
nice options block for specifying options rather than in the URL.
## General (All Protocols)
The options below are available to all protocols:
* `archive` - The archive format to use to unarchive this file, or "" (empty
string) to disable unarchiving. For more details, see the complete section
on archive support above.
* `checksum` - Checksum to verify the downloaded file or archive. See
the entire section on checksumming above for format and more details.
* `filename` - When in file download mode, allows specifying the name of the
downloaded file on disk. Has no effect in directory mode.
### Local Files (`file`)
None
### Git (`git`)
* `ref` - The Git ref to checkout. This is a ref, so it can point to
a commit SHA, a branch name, etc. If it is a named ref such as a branch
name, go-getter will update it to the latest on each get.
* `sshkey` - An SSH private key to use during clones. The provided key must
be a base64-encoded string. For example, to generate a suitable `sshkey`
from a private key file on disk, you would run `base64 -w0 <file>`.
**Note**: Git 2.3+ is required to use this feature.
* `depth` - The Git clone depth. The provided number specifies the last `n`
revisions to clone from the repository.
The `git` getter accepts both URL-style SSH addresses like
`git::ssh://git@example.com/foo/bar`, and "scp-style" addresses like
`git::git@example.com/foo/bar`. In the latter case, omitting the `git::`
force prefix is allowed if the username prefix is exactly `git@`.
The "scp-style" addresses _cannot_ be used in conjunction with the `ssh://`
scheme prefix, because in that case the colon is used to mark an optional
port number to connect on, rather than to delimit the path from the host.
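A sketch of composing a Git source string with `ref` and `sshkey` query
parameters; the key path, host, and ref are placeholders, and the base64 step
mirrors `base64 -w0 <file>` above:
```
package main

import (
	"encoding/base64"
	"fmt"
	"io/ioutil"
	"log"
	"net/url"
)

func main() {
	// Read a private key and base64-encode it, mirroring `base64 -w0 <file>`.
	raw, err := ioutil.ReadFile("/home/me/.ssh/id_rsa") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	q := url.Values{}
	q.Set("ref", "v1.2.3") // a named ref or a commit SHA
	q.Set("sshkey", base64.StdEncoding.EncodeToString(raw))
	src := "git::ssh://git@example.com/foo/bar.git?" + q.Encode()
	fmt.Println(src) // feed this to a Client or the go-getter command
}
```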
### Mercurial (`hg`)
* `rev` - The Mercurial revision to checkout.
### HTTP (`http`)
#### Basic Authentication
To use HTTP basic authentication with go-getter, simply prepend `username:password@` to the
hostname in the URL such as `https://Aladdin:OpenSesame@www.example.com/index.html`. All special
characters, including the username and password, must be URL encoded.
#### Headers
Optional request headers can be added by supplying them in a custom
[`HttpGetter`](https://godoc.org/github.com/hashicorp/go-getter#HttpGetter)
(_not_ as query parameters like most other options). These headers will be sent
out on every request the getter in question makes.
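A hedged sketch of wiring custom headers through an `HttpGetter`; it assumes
the `Header` field described in the linked godoc, and the token and URL are
placeholders:
```
package main

import (
	"log"
	"net/http"

	getter "github.com/hashicorp/go-getter"
)

func main() {
	// Headers live on the getter itself, not in the URL. This assumes the
	// HttpGetter Header field from the linked godoc.
	httpGetter := &getter.HttpGetter{
		Header: http.Header{
			"Authorization": []string{"Bearer placeholder-token"},
		},
	}
	client := &getter.Client{
		Src:  "https://example.com/artifact.tgz", // placeholder URL
		Dst:  "./artifact",
		Mode: getter.ClientModeDir,
		// Setting Getters limits this client to the protocols listed here;
		// leave it nil to keep the full default set.
		Getters: map[string]getter.Getter{
			"http":  httpGetter,
			"https": httpGetter,
		},
	}
	if err := client.Get(); err != nil {
		log.Fatalf("download failed: %s", err)
	}
}
```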
### S3 (`s3`)
S3 takes various access configurations in the URL. Note that it will also
read these from standard AWS environment variables if they're set. S3 compliant servers like Minio
are also supported. If the query parameters are present, these take priority.
* `aws_access_key_id` - AWS access key.
* `aws_access_key_secret` - AWS access key secret.
* `aws_access_token` - AWS access token if this is being used.
#### Using IAM Instance Profiles with S3
If you use go-getter and want to use an EC2 IAM Instance Profile to avoid
using credentials, then just omit these, and the profile, if available, will
be used automatically.
### Using S3 with Minio
If you use go-getter for Minio support, you must consider the following:
* `aws_access_key_id` (required) - Minio access key.
* `aws_access_key_secret` (required) - Minio access key secret.
* `region` (optional - defaults to us-east-1) - Region identifier to use.
* `version` (optional - defaults to Minio default) - Configuration file format.
#### S3 Bucket Examples
S3 has several addressing schemes used to reference your bucket. These are
listed here: http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro
Some examples for these addressing schemes:
- s3::https://s3.amazonaws.com/bucket/foo
- s3::https://s3-eu-west-1.amazonaws.com/bucket/foo
- bucket.s3.amazonaws.com/foo
- bucket.s3-eu-west-1.amazonaws.com/foo/bar
- "s3::http://127.0.0.1:9000/test-bucket/hello.txt?aws_access_key_id=KEYID&aws_access_key_secret=SECRETKEY&region=us-east-2"
### GCS (`gcs`)
#### GCS Authentication
In order to access GCS, authentication credentials should be provided. More information can be found [here](https://cloud.google.com/docs/authentication/getting-started).
#### GCS Bucket Examples
- gcs::https://www.googleapis.com/storage/v1/bucket
- gcs::https://www.googleapis.com/storage/v1/bucket/foo.zip
- www.googleapis.com/storage/v1/bucket/foo
#### GCS Testing
The tests for `get_gcs.go` require you to have GCP credentials set in your environment. These credentials can have any level of permissions to any project; they just need to exist. This means setting `GOOGLE_APPLICATION_CREDENTIALS="~/path/to/credentials.json"` or `GOOGLE_CREDENTIALS="{stringified-credentials-json}"`. Due to this configuration, `get_gcs_test.go` will fail for external contributors in CircleCI.

16
vendor/github.com/hashicorp/go-getter/appveyor.yml generated vendored Normal file
View File

@ -0,0 +1,16 @@
version: "build-{branch}-{build}"
image: Visual Studio 2017
clone_folder: c:\gopath\github.com\hashicorp\go-getter
environment:
GOPATH: c:\gopath
install:
- cmd: >-
echo %Path%
go version
go env
go get -d -v -t ./...
build_script:
- cmd: go test ./...

314
vendor/github.com/hashicorp/go-getter/checksum.go generated vendored Normal file
View File

@ -0,0 +1,314 @@
package getter
import (
"bufio"
"bytes"
"crypto/md5"
"crypto/sha1"
"crypto/sha256"
"crypto/sha512"
"encoding/hex"
"fmt"
"hash"
"io"
"net/url"
"os"
"path/filepath"
"strings"
urlhelper "github.com/hashicorp/go-getter/helper/url"
)
// FileChecksum helps verify the checksum for a file.
type FileChecksum struct {
Type string
Hash hash.Hash
Value []byte
Filename string
}
// A ChecksumError is returned when a checksum differs
type ChecksumError struct {
Hash hash.Hash
Actual []byte
Expected []byte
File string
}
func (cerr *ChecksumError) Error() string {
if cerr == nil {
return "<nil>"
}
return fmt.Sprintf(
"Checksums did not match for %s.\nExpected: %s\nGot: %s\n%T",
cerr.File,
hex.EncodeToString(cerr.Expected),
hex.EncodeToString(cerr.Actual),
cerr.Hash, // ex: *sha256.digest
)
}
// checksum is a simple method to compute the checksum of a source file
// and compare it to the given expected value.
func (c *FileChecksum) checksum(source string) error {
f, err := os.Open(source)
if err != nil {
return fmt.Errorf("Failed to open file for checksum: %s", err)
}
defer f.Close()
c.Hash.Reset()
if _, err := io.Copy(c.Hash, f); err != nil {
return fmt.Errorf("Failed to hash: %s", err)
}
if actual := c.Hash.Sum(nil); !bytes.Equal(actual, c.Value) {
return &ChecksumError{
Hash: c.Hash,
Actual: actual,
Expected: c.Value,
File: source,
}
}
return nil
}
// extractChecksum will return a FileChecksum based on the 'checksum'
// parameter of u.
// ex:
// http://hashicorp.com/terraform?checksum=<checksumValue>
// http://hashicorp.com/terraform?checksum=<checksumType>:<checksumValue>
// http://hashicorp.com/terraform?checksum=file:<checksum_url>
// when checksumming from a file, extractChecksum will go get checksum_url
// in a temporary directory, parse the content of the file then delete it.
// Content of files are expected to be BSD style or GNU style.
//
// BSD-style checksum:
// MD5 (file1) = <checksum>
// MD5 (file2) = <checksum>
//
// GNU-style:
// <checksum> file1
// <checksum> *file2
//
// see parseChecksumLine for more detail on checksum file parsing
func (c *Client) extractChecksum(u *url.URL) (*FileChecksum, error) {
q := u.Query()
v := q.Get("checksum")
if v == "" {
return nil, nil
}
vs := strings.SplitN(v, ":", 2)
switch len(vs) {
case 2:
break // good
default:
// here, we try to guess the checksum from its length
// if the type was not passed
return newChecksumFromValue(v, filepath.Base(u.EscapedPath()))
}
checksumType, checksumValue := vs[0], vs[1]
switch checksumType {
case "file":
return c.ChecksumFromFile(checksumValue, u)
default:
return newChecksumFromType(checksumType, checksumValue, filepath.Base(u.EscapedPath()))
}
}
func newChecksum(checksumValue, filename string) (*FileChecksum, error) {
c := &FileChecksum{
Filename: filename,
}
var err error
c.Value, err = hex.DecodeString(checksumValue)
if err != nil {
return nil, fmt.Errorf("invalid checksum: %s", err)
}
return c, nil
}
func newChecksumFromType(checksumType, checksumValue, filename string) (*FileChecksum, error) {
c, err := newChecksum(checksumValue, filename)
if err != nil {
return nil, err
}
c.Type = strings.ToLower(checksumType)
switch c.Type {
case "md5":
c.Hash = md5.New()
case "sha1":
c.Hash = sha1.New()
case "sha256":
c.Hash = sha256.New()
case "sha512":
c.Hash = sha512.New()
default:
return nil, fmt.Errorf(
"unsupported checksum type: %s", checksumType)
}
return c, nil
}
func newChecksumFromValue(checksumValue, filename string) (*FileChecksum, error) {
c, err := newChecksum(checksumValue, filename)
if err != nil {
return nil, err
}
switch len(c.Value) {
case md5.Size:
c.Hash = md5.New()
c.Type = "md5"
case sha1.Size:
c.Hash = sha1.New()
c.Type = "sha1"
case sha256.Size:
c.Hash = sha256.New()
c.Type = "sha256"
case sha512.Size:
c.Hash = sha512.New()
c.Type = "sha512"
default:
return nil, fmt.Errorf("Unknown type for checksum %s", checksumValue)
}
return c, nil
}
// ChecksumFromFile will return all the FileChecksums found in file
//
// ChecksumFromFile will try to guess the hashing algorithm based on content
// of checksum file
//
// ChecksumFromFile will only return checksums for files that match file
// behind src
func (c *Client) ChecksumFromFile(checksumFile string, src *url.URL) (*FileChecksum, error) {
checksumFileURL, err := urlhelper.Parse(checksumFile)
if err != nil {
return nil, err
}
tempfile, err := tmpFile("", filepath.Base(checksumFileURL.Path))
if err != nil {
return nil, err
}
defer os.Remove(tempfile)
c2 := &Client{
Ctx: c.Ctx,
Getters: c.Getters,
Decompressors: c.Decompressors,
Detectors: c.Detectors,
Pwd: c.Pwd,
Dir: false,
Src: checksumFile,
Dst: tempfile,
ProgressListener: c.ProgressListener,
}
if err = c2.Get(); err != nil {
return nil, fmt.Errorf(
"Error downloading checksum file: %s", err)
}
filename := filepath.Base(src.Path)
absPath, err := filepath.Abs(src.Path)
if err != nil {
return nil, err
}
checksumFileDir := filepath.Dir(checksumFileURL.Path)
relpath, err := filepath.Rel(checksumFileDir, absPath)
switch {
case err == nil ||
err.Error() == "Rel: can't make "+absPath+" relative to "+checksumFileDir:
// ex: on windows C:\gopath\...\content.txt cannot be relative to \
// which is okay; maybe another expected path will work.
break
default:
return nil, err
}
// possible file identifiers:
options := []string{
filename, // ubuntu-14.04.1-server-amd64.iso
"*" + filename, // *ubuntu-14.04.1-server-amd64.iso Standard checksum
"?" + filename, // ?ubuntu-14.04.1-server-amd64.iso shasum -p
relpath, // dir/ubuntu-14.04.1-server-amd64.iso
"./" + relpath, // ./dir/ubuntu-14.04.1-server-amd64.iso
absPath, // fullpath; set if local
}
f, err := os.Open(tempfile)
if err != nil {
return nil, fmt.Errorf(
"Error opening downloaded file: %s", err)
}
defer f.Close()
rd := bufio.NewReader(f)
for {
line, err := rd.ReadString('\n')
if err != nil {
if err != io.EOF {
return nil, fmt.Errorf(
"Error reading checksum file: %s", err)
}
break
}
checksum, err := parseChecksumLine(line)
if err != nil || checksum == nil {
continue
}
if checksum.Filename == "" {
// the checksum line has no filename; try this checksum
return checksum, nil
}
// make sure the checksum is for the right file
for _, option := range options {
if option != "" && checksum.Filename == option {
// any checksum will work so we return the first one
return checksum, nil
}
}
}
return nil, fmt.Errorf("no checksum found in: %s", checksumFile)
}
// parseChecksumLine takes a line from a checksum file and returns a
// *FileChecksum. parseChecksumLine guesses the style of the checksum
// (BSD vs GNU) by splitting the line and counting its parts.
// For GNU-style sums, which do not name an algorithm, parseChecksumLine
// guesses the hashing algorithm by checking the length of the checksum.
func parseChecksumLine(line string) (*FileChecksum, error) {
parts := strings.Fields(line)
switch len(parts) {
case 4:
// BSD-style checksum:
// MD5 (file1) = <checksum>
// MD5 (file2) = <checksum>
if len(parts[1]) <= 2 ||
parts[1][0] != '(' || parts[1][len(parts[1])-1] != ')' {
return nil, fmt.Errorf(
"Unexpected BSD-style-checksum filename format: %s", line)
}
filename := parts[1][1 : len(parts[1])-1]
return newChecksumFromType(parts[0], parts[3], filename)
case 2:
// GNU-style:
// <checksum> file1
// <checksum> *file2
return newChecksumFromValue(parts[0], parts[1])
case 0:
return nil, nil // empty line
default:
return newChecksumFromValue(parts[0], "")
}
}

298
vendor/github.com/hashicorp/go-getter/client.go generated vendored Normal file
View File

@ -0,0 +1,298 @@
package getter
import (
"context"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"strconv"
"strings"
urlhelper "github.com/hashicorp/go-getter/helper/url"
safetemp "github.com/hashicorp/go-safetemp"
)
// Client is a client for downloading things.
//
// Top-level functions such as Get are shortcuts for interacting with a client.
// Using a client directly allows more fine-grained control over how downloading
// is done, as well as customizing the protocols supported.
type Client struct {
// Ctx for cancellation
Ctx context.Context
// Src is the source URL to get.
//
// Dst is the path to save the downloaded thing as. If Dir is set to
// true, then this should be a directory. If the directory doesn't exist,
// it will be created for you.
//
// Pwd is the working directory for detection. If this isn't set, some
// detection may fail. Client will not default pwd to the current
// working directory for security reasons.
Src string
Dst string
Pwd string
// Mode is the method of download the client will use. See ClientMode
// for documentation.
Mode ClientMode
// Detectors is the list of detectors that are tried on the source.
// If this is nil, then the default Detectors will be used.
Detectors []Detector
// Decompressors is the map of decompressors supported by this client.
// If this is nil, then the default value is the Decompressors global.
Decompressors map[string]Decompressor
// Getters is the map of protocols supported by this client. If this
// is nil, then the default Getters variable will be used.
Getters map[string]Getter
// Dir, if true, tells the Client it is downloading a directory (versus
// a single file). This distinction is necessary since filenames and
// directory names follow the same format so disambiguating is impossible
// without knowing ahead of time.
//
// WARNING: deprecated. If Mode is set, that will take precedence.
Dir bool
// ProgressListener allows tracking file downloads.
// By default a no-op progress listener is used.
ProgressListener ProgressTracker
Options []ClientOption
}
// Get downloads the configured source to the destination.
func (c *Client) Get() error {
if err := c.Configure(c.Options...); err != nil {
return err
}
// Store this locally since there are cases we swap this
mode := c.Mode
if mode == ClientModeInvalid {
if c.Dir {
mode = ClientModeDir
} else {
mode = ClientModeFile
}
}
src, err := Detect(c.Src, c.Pwd, c.Detectors)
if err != nil {
return err
}
// Determine if we have a forced protocol, i.e. "git::http://..."
force, src := getForcedGetter(src)
// If there is a subdir component, then we download the root separately
// and then copy over the proper subdir.
var realDst string
dst := c.Dst
src, subDir := SourceDirSubdir(src)
if subDir != "" {
td, tdcloser, err := safetemp.Dir("", "getter")
if err != nil {
return err
}
defer tdcloser.Close()
realDst = dst
dst = td
}
u, err := urlhelper.Parse(src)
if err != nil {
return err
}
if force == "" {
force = u.Scheme
}
g, ok := c.Getters[force]
if !ok {
return fmt.Errorf(
"download not supported for scheme '%s'", force)
}
// We have magic query parameters that we use to signal different features
q := u.Query()
// Determine if we have an archive type
archiveV := q.Get("archive")
if archiveV != "" {
// Delete the parameter since it is a magic parameter we don't
// want to pass on to the Getter
q.Del("archive")
u.RawQuery = q.Encode()
// If we can parse the value as a bool and it is false, then
// set the archive to "-" which should never map to a decompressor
if b, err := strconv.ParseBool(archiveV); err == nil && !b {
archiveV = "-"
}
}
if archiveV == "" {
// We don't appear to... but is it part of the filename?
matchingLen := 0
for k := range c.Decompressors {
if strings.HasSuffix(u.Path, "."+k) && len(k) > matchingLen {
archiveV = k
matchingLen = len(k)
}
}
}
// If we have a decompressor, then we need to change the destination
// to download to a temporary path. We unarchive this into the final,
// real path.
var decompressDst string
var decompressDir bool
decompressor := c.Decompressors[archiveV]
if decompressor != nil {
// Create a temporary directory to store our archive. We delete
// this at the end of everything.
td, err := ioutil.TempDir("", "getter")
if err != nil {
return fmt.Errorf(
"Error creating temporary directory for archive: %s", err)
}
defer os.RemoveAll(td)
// Swap the download directory to be our temporary path and
// store the old values.
decompressDst = dst
decompressDir = mode != ClientModeFile
dst = filepath.Join(td, "archive")
mode = ClientModeFile
}
// Determine checksum if we have one
checksum, err := c.extractChecksum(u)
if err != nil {
return fmt.Errorf("invalid checksum: %s", err)
}
// Delete the query parameter if we have it.
q.Del("checksum")
u.RawQuery = q.Encode()
if mode == ClientModeAny {
// Ask the getter which client mode to use
mode, err = g.ClientMode(u)
if err != nil {
return err
}
// Destination is the base name of the URL path in "any" mode when
// a file source is detected.
if mode == ClientModeFile {
filename := filepath.Base(u.Path)
// Determine if we have a custom file name
if v := q.Get("filename"); v != "" {
// Delete the query parameter if we have it.
q.Del("filename")
u.RawQuery = q.Encode()
filename = v
}
dst = filepath.Join(dst, filename)
}
}
// If we're not downloading a directory, then just download the file
// and return.
if mode == ClientModeFile {
getFile := true
if checksum != nil {
if err := checksum.checksum(dst); err == nil {
// don't get the file if the checksum of dst is correct
getFile = false
}
}
if getFile {
err := g.GetFile(dst, u)
if err != nil {
return err
}
if checksum != nil {
if err := checksum.checksum(dst); err != nil {
return err
}
}
}
if decompressor != nil {
// We have a decompressor, so decompress the current destination
// into the final destination with the proper mode.
err := decompressor.Decompress(decompressDst, dst, decompressDir)
if err != nil {
return err
}
// Swap the information back
dst = decompressDst
if decompressDir {
mode = ClientModeAny
} else {
mode = ClientModeFile
}
}
// We check the dir value again because it can be switched back
// if we were unarchiving. If we're still only Get-ing a file, then
// we're done.
if mode == ClientModeFile {
return nil
}
}
// If we're at this point we're either downloading a directory or we've
// downloaded and unarchived a directory and we're just checking subdir.
// In the case we have a decompressor we don't Get because it was Get
// above.
if decompressor == nil {
// If we're getting a directory, then this is an error. You cannot
// checksum a directory. TODO: test
if checksum != nil {
return fmt.Errorf(
"checksum cannot be specified for directory download")
}
// We're downloading a directory, which might require a bit more work
// if we're specifying a subdir.
err := g.Get(dst, u)
if err != nil {
err = fmt.Errorf("error downloading '%s': %s", src, err)
return err
}
}
// If we have a subdir, copy that over
if subDir != "" {
if err := os.RemoveAll(realDst); err != nil {
return err
}
if err := os.MkdirAll(realDst, 0755); err != nil {
return err
}
// Process any globs
subDir, err := SubdirGlob(dst, subDir)
if err != nil {
return err
}
return copyDir(c.Ctx, realDst, subDir, false)
}
return nil
}

24
vendor/github.com/hashicorp/go-getter/client_mode.go generated vendored Normal file
View File

@ -0,0 +1,24 @@
package getter
// ClientMode is the mode that the client operates in.
type ClientMode uint
const (
ClientModeInvalid ClientMode = iota
// ClientModeAny downloads anything it can. In this mode, dst must
// be a directory. If src is a file, it is saved into the directory
// with the basename of the URL. If src is a directory or archive,
// it is unpacked directly into dst.
ClientModeAny
// ClientModeFile downloads a single file. In this mode, dst must
// be a file path (doesn't have to exist). src must point to a single
// file. It is saved as dst.
ClientModeFile
// ClientModeDir downloads a directory. In this mode, dst must be
// a directory path (doesn't have to exist). src must point to an
// archive or directory (such as in s3).
ClientModeDir
)
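A small sketch of ClientModeAny combined with the `filename` query parameter
described in the README; the URL is a placeholder, and the HTTP getter is
expected to report file mode for it, so the result lands in Dst under the
overridden name:
```
package main

import (
	"log"

	getter "github.com/hashicorp/go-getter"
)

func main() {
	// In ClientModeAny the getter decides the real mode. For a plain file
	// URL the HTTP getter reports file mode, the URL basename becomes the
	// destination name, and a filename query parameter overrides it.
	client := &getter.Client{
		Src:  "https://example.com/tool_1.2.3_linux_amd64?filename=tool", // placeholder URL
		Dst:  "./bin",                                                    // a directory in this mode
		Mode: getter.ClientModeAny,
	}
	if err := client.Get(); err != nil {
		log.Fatalf("download failed: %s", err)
	}
	// The binary is expected at ./bin/tool.
}
```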

46
vendor/github.com/hashicorp/go-getter/client_option.go generated vendored Normal file
View File

@ -0,0 +1,46 @@
package getter
import "context"
// A ClientOption allows configuring a client
type ClientOption func(*Client) error
// Configure configures a client with options.
func (c *Client) Configure(opts ...ClientOption) error {
if c.Ctx == nil {
c.Ctx = context.Background()
}
c.Options = opts
for _, opt := range opts {
err := opt(c)
if err != nil {
return err
}
}
// Default decompressor values
if c.Decompressors == nil {
c.Decompressors = Decompressors
}
// Default detector values
if c.Detectors == nil {
c.Detectors = Detectors
}
// Default getter values
if c.Getters == nil {
c.Getters = Getters
}
for _, getter := range c.Getters {
getter.SetClient(c)
}
return nil
}
// WithContext allows passing a context to the operation
// in order to be able to cancel a download in progress.
func WithContext(ctx context.Context) func(*Client) error {
return func(c *Client) error {
c.Ctx = ctx
return nil
}
}
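A short sketch of cancellation via WithContext, passed through the Client's
Options slice; the URL is a placeholder:
```
package main

import (
	"context"
	"log"
	"time"

	getter "github.com/hashicorp/go-getter"
)

func main() {
	// Give the whole download a deadline; cancelling the context aborts
	// getters that support it.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	client := &getter.Client{
		Src:  "https://example.com/large-artifact.zip", // placeholder URL
		Dst:  "./large-artifact",
		Mode: getter.ClientModeDir,
		Options: []getter.ClientOption{
			getter.WithContext(ctx),
		},
	}
	if err := client.Get(); err != nil {
		log.Fatalf("download failed or timed out: %s", err)
	}
}
```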

View File

@ -0,0 +1,38 @@
package getter
import (
"io"
)
// WithProgress allows a user to track
// the progress of a download,
// for example by displaying a progress bar for the
// current download.
// Not all getters have progress support yet.
func WithProgress(pl ProgressTracker) func(*Client) error {
return func(c *Client) error {
c.ProgressListener = pl
return nil
}
}
// ProgressTracker allows tracking the progress of downloads.
type ProgressTracker interface {
// TrackProgress should be called when
// a new object is being downloaded.
// src is the location the file is
// downloaded from.
// currentSize is the current size of
// the file in case it is a partial
// download.
// totalSize is the total size in bytes,
// size can be zero if the file size
// is not known.
// stream is the file being downloaded, every
// written byte will add up to processed size.
//
// TrackProgress returns a ReadCloser that wraps the
// download in progress ( stream ).
// When the download is finished, body shall be closed.
TrackProgress(src string, currentSize, totalSize int64, stream io.ReadCloser) (body io.ReadCloser)
}
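A minimal ProgressTracker sketch that satisfies the interface above by wrapping
the stream and logging byte counts; the URL in the usage at the bottom is a
placeholder:
```
package main

import (
	"io"
	"log"

	getter "github.com/hashicorp/go-getter"
)

// logTracker is a minimal ProgressTracker: it wraps the stream handed to it
// and logs the byte count when the download finishes (stream is closed).
type logTracker struct{}

type countingReader struct {
	io.ReadCloser
	src  string
	read int64
}

func (r *countingReader) Read(p []byte) (int, error) {
	n, err := r.ReadCloser.Read(p)
	r.read += int64(n)
	return n, err
}

func (r *countingReader) Close() error {
	log.Printf("%s: %d bytes downloaded", r.src, r.read)
	return r.ReadCloser.Close()
}

func (t *logTracker) TrackProgress(src string, currentSize, totalSize int64, stream io.ReadCloser) io.ReadCloser {
	log.Printf("starting %s (%d of %d bytes already present)", src, currentSize, totalSize)
	return &countingReader{ReadCloser: stream, src: src, read: currentSize}
}

func main() {
	client := &getter.Client{
		Src:     "https://example.com/artifact.zip", // placeholder URL
		Dst:     "./artifact",
		Mode:    getter.ClientModeDir,
		Options: []getter.ClientOption{getter.WithProgress(&logTracker{})},
	}
	if err := client.Get(); err != nil {
		log.Fatalf("download failed: %s", err)
	}
}
```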

14
vendor/github.com/hashicorp/go-getter/common.go generated vendored Normal file
View File

@ -0,0 +1,14 @@
package getter
import (
"io/ioutil"
)
func tmpFile(dir, pattern string) (string, error) {
f, err := ioutil.TempFile(dir, pattern)
if err != nil {
return "", err
}
f.Close()
return f.Name(), nil
}

78
vendor/github.com/hashicorp/go-getter/copy_dir.go generated vendored Normal file
View File

@ -0,0 +1,78 @@
package getter
import (
"context"
"os"
"path/filepath"
"strings"
)
// copyDir copies the src directory contents into dst. Both directories
// should already exist.
//
// If ignoreDot is set to true, then dot-prefixed files/folders are ignored.
func copyDir(ctx context.Context, dst string, src string, ignoreDot bool) error {
src, err := filepath.EvalSymlinks(src)
if err != nil {
return err
}
walkFn := func(path string, info os.FileInfo, err error) error {
if err != nil {
return err
}
if path == src {
return nil
}
if ignoreDot && strings.HasPrefix(filepath.Base(path), ".") {
// Skip any dot files
if info.IsDir() {
return filepath.SkipDir
} else {
return nil
}
}
// The "path" has the src prefixed to it. We need to join our
// destination with the path without the src on it.
dstPath := filepath.Join(dst, path[len(src):])
// If we have a directory, make that subdirectory, then continue
// the walk.
if info.IsDir() {
if path == filepath.Join(src, dst) {
// dst is in src; don't walk it.
return nil
}
if err := os.MkdirAll(dstPath, 0755); err != nil {
return err
}
return nil
}
// If we have a file, copy the contents.
srcF, err := os.Open(path)
if err != nil {
return err
}
defer srcF.Close()
dstF, err := os.Create(dstPath)
if err != nil {
return err
}
defer dstF.Close()
if _, err := Copy(ctx, dstF, srcF); err != nil {
return err
}
// Chmod it
return os.Chmod(dstPath, info.Mode())
}
return filepath.Walk(src, walkFn)
}

58
vendor/github.com/hashicorp/go-getter/decompress.go generated vendored Normal file
View File

@ -0,0 +1,58 @@
package getter
import (
"strings"
)
// Decompressor defines the interface that must be implemented to add
// support for decompressing a type.
//
// Important: if you're implementing a decompressor, please use the
// containsDotDot helper in this file to ensure that files can't be
// decompressed outside of the specified directory.
type Decompressor interface {
// Decompress should decompress src to dst. dir specifies whether dst
// is a directory or single file. src is guaranteed to be a single file
// that exists. dst is not guaranteed to exist already.
Decompress(dst, src string, dir bool) error
}
// Decompressors is the mapping of extension to the Decompressor implementation
// that will decompress that extension/type.
var Decompressors map[string]Decompressor
func init() {
tbzDecompressor := new(TarBzip2Decompressor)
tgzDecompressor := new(TarGzipDecompressor)
txzDecompressor := new(TarXzDecompressor)
Decompressors = map[string]Decompressor{
"bz2": new(Bzip2Decompressor),
"gz": new(GzipDecompressor),
"xz": new(XzDecompressor),
"tar.bz2": tbzDecompressor,
"tar.gz": tgzDecompressor,
"tar.xz": txzDecompressor,
"tbz2": tbzDecompressor,
"tgz": tgzDecompressor,
"txz": txzDecompressor,
"zip": new(ZipDecompressor),
}
}
// containsDotDot checks if the filepath value v contains a ".." entry.
// This will check filepath components by splitting along / or \. This
// function is copied directly from the Go net/http implementation.
func containsDotDot(v string) bool {
if !strings.Contains(v, "..") {
return false
}
for _, ent := range strings.FieldsFunc(v, isSlashRune) {
if ent == ".." {
return true
}
}
return false
}
func isSlashRune(r rune) bool { return r == '/' || r == '\\' }
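A sketch of plugging a custom Decompressor into a Client for a hypothetical
`.raw` extension; the implementation just copies the file through, and the URL
is a placeholder:
```
package main

import (
	"fmt"
	"io"
	"log"
	"os"
	"path/filepath"

	getter "github.com/hashicorp/go-getter"
)

// rawDecompressor is a stand-in Decompressor for a hypothetical ".raw"
// extension: it "decompresses" by copying the file through unchanged.
// A real implementation would wrap the source in its codec's reader and
// reject entry paths containing ".." (see containsDotDot above).
type rawDecompressor struct{}

func (d *rawDecompressor) Decompress(dst, src string, dir bool) error {
	if dir {
		return fmt.Errorf("raw files can only unarchive to a single file")
	}
	if err := os.MkdirAll(filepath.Dir(dst), 0755); err != nil {
		return err
	}
	srcF, err := os.Open(src)
	if err != nil {
		return err
	}
	defer srcF.Close()
	dstF, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer dstF.Close()
	_, err = io.Copy(dstF, srcF)
	return err
}

func main() {
	// Extend the default extension map rather than replacing it, so the
	// built-in formats keep working for this client.
	decompressors := map[string]getter.Decompressor{"raw": &rawDecompressor{}}
	for ext, d := range getter.Decompressors {
		decompressors[ext] = d
	}
	client := &getter.Client{
		Src:           "https://example.com/blob.raw", // placeholder URL
		Dst:           "./blob",
		Mode:          getter.ClientModeFile,
		Decompressors: decompressors,
	}
	if err := client.Get(); err != nil {
		log.Fatalf("download failed: %s", err)
	}
}
```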

View File

@ -0,0 +1,45 @@
package getter
import (
"compress/bzip2"
"fmt"
"io"
"os"
"path/filepath"
)
// Bzip2Decompressor is an implementation of Decompressor that can
// decompress bz2 files.
type Bzip2Decompressor struct{}
func (d *Bzip2Decompressor) Decompress(dst, src string, dir bool) error {
// Directory isn't supported at all
if dir {
return fmt.Errorf("bzip2-compressed files can only unarchive to a single file")
}
// If we're going into a directory we should make that first
if err := os.MkdirAll(filepath.Dir(dst), 0755); err != nil {
return err
}
// File first
f, err := os.Open(src)
if err != nil {
return err
}
defer f.Close()
// Bzip2 compression is second
bzipR := bzip2.NewReader(f)
// Copy it out
dstF, err := os.Create(dst)
if err != nil {
return err
}
defer dstF.Close()
_, err = io.Copy(dstF, bzipR)
return err
}

View File

@ -0,0 +1,49 @@
package getter
import (
"compress/gzip"
"fmt"
"io"
"os"
"path/filepath"
)
// GzipDecompressor is an implementation of Decompressor that can
// decompress gzip files.
type GzipDecompressor struct{}
func (d *GzipDecompressor) Decompress(dst, src string, dir bool) error {
// Directory isn't supported at all
if dir {
return fmt.Errorf("gzip-compressed files can only unarchive to a single file")
}
// If we're going into a directory we should make that first
if err := os.MkdirAll(filepath.Dir(dst), 0755); err != nil {
return err
}
// File first
f, err := os.Open(src)
if err != nil {
return err
}
defer f.Close()
// gzip compression is second
gzipR, err := gzip.NewReader(f)
if err != nil {
return err
}
defer gzipR.Close()
// Copy it out
dstF, err := os.Create(dst)
if err != nil {
return err
}
defer dstF.Close()
_, err = io.Copy(dstF, gzipR)
return err
}

160
vendor/github.com/hashicorp/go-getter/decompress_tar.go generated vendored Normal file
View File

@ -0,0 +1,160 @@
package getter
import (
"archive/tar"
"fmt"
"io"
"os"
"path/filepath"
"time"
)
// untar is a shared helper for untarring an archive. The reader should provide
// an uncompressed view of the tar archive.
func untar(input io.Reader, dst, src string, dir bool) error {
tarR := tar.NewReader(input)
done := false
dirHdrs := []*tar.Header{}
now := time.Now()
for {
hdr, err := tarR.Next()
if err == io.EOF {
if !done {
// Empty archive
return fmt.Errorf("empty archive: %s", src)
}
break
}
if err != nil {
return err
}
if hdr.Typeflag == tar.TypeXGlobalHeader || hdr.Typeflag == tar.TypeXHeader {
// don't unpack extended headers as files
continue
}
path := dst
if dir {
// Disallow parent traversal
if containsDotDot(hdr.Name) {
return fmt.Errorf("entry contains '..': %s", hdr.Name)
}
path = filepath.Join(path, hdr.Name)
}
if hdr.FileInfo().IsDir() {
if !dir {
return fmt.Errorf("expected a single file: %s", src)
}
// A directory, just make the directory and continue unarchiving...
if err := os.MkdirAll(path, 0755); err != nil {
return err
}
// Record the directory information so that we may set its attributes
// after all files have been extracted
dirHdrs = append(dirHdrs, hdr)
continue
} else {
// There is no ordering guarantee that a file in a directory is
// listed before the directory
dstPath := filepath.Dir(path)
// Check that the directory exists, otherwise create it
if _, err := os.Stat(dstPath); os.IsNotExist(err) {
if err := os.MkdirAll(dstPath, 0755); err != nil {
return err
}
}
}
// We have a file. If we already decoded, then it is an error
if !dir && done {
return fmt.Errorf("expected a single file, got multiple: %s", src)
}
// Mark that we're done so a future file in single-file mode errors
done = true
// Open the file for writing
dstF, err := os.Create(path)
if err != nil {
return err
}
_, err = io.Copy(dstF, tarR)
dstF.Close()
if err != nil {
return err
}
// Chmod the file
if err := os.Chmod(path, hdr.FileInfo().Mode()); err != nil {
return err
}
// Set the access and modification time if valid, otherwise default to current time
aTime := now
mTime := now
if hdr.AccessTime.Unix() > 0 {
aTime = hdr.AccessTime
}
if hdr.ModTime.Unix() > 0 {
mTime = hdr.ModTime
}
if err := os.Chtimes(path, aTime, mTime); err != nil {
return err
}
}
// Perform a final pass over extracted directories to update metadata
for _, dirHdr := range dirHdrs {
path := filepath.Join(dst, dirHdr.Name)
// Chmod the directory since they might be created before we know the mode flags
if err := os.Chmod(path, dirHdr.FileInfo().Mode()); err != nil {
return err
}
// Set the mtime/atime attributes since they would have been changed during extraction
aTime := now
mTime := now
if dirHdr.AccessTime.Unix() > 0 {
aTime = dirHdr.AccessTime
}
if dirHdr.ModTime.Unix() > 0 {
mTime = dirHdr.ModTime
}
if err := os.Chtimes(path, aTime, mTime); err != nil {
return err
}
}
return nil
}
// tarDecompressor is an implementation of Decompressor that can
// unpack tar files.
type tarDecompressor struct{}
func (d *tarDecompressor) Decompress(dst, src string, dir bool) error {
// If we're going into a directory we should make that first
mkdir := dst
if !dir {
mkdir = filepath.Dir(dst)
}
if err := os.MkdirAll(mkdir, 0755); err != nil {
return err
}
// File first
f, err := os.Open(src)
if err != nil {
return err
}
defer f.Close()
return untar(f, dst, src, dir)
}

View File

@ -0,0 +1,33 @@
package getter
import (
"compress/bzip2"
"os"
"path/filepath"
)
// TarBzip2Decompressor is an implementation of Decompressor that can
// decompress tar.bz2 files.
type TarBzip2Decompressor struct{}
func (d *TarBzip2Decompressor) Decompress(dst, src string, dir bool) error {
// If we're going into a directory we should make that first
mkdir := dst
if !dir {
mkdir = filepath.Dir(dst)
}
if err := os.MkdirAll(mkdir, 0755); err != nil {
return err
}
// File first
f, err := os.Open(src)
if err != nil {
return err
}
defer f.Close()
// Bzip2 compression is second
bzipR := bzip2.NewReader(f)
return untar(bzipR, dst, src, dir)
}

View File

@ -0,0 +1,171 @@
package getter
import (
"crypto/md5"
"encoding/hex"
"io"
"io/ioutil"
"os"
"path/filepath"
"reflect"
"runtime"
"sort"
"strings"
"time"
"github.com/mitchellh/go-testing-interface"
)
// TestDecompressCase is a single test case for testing decompressors
type TestDecompressCase struct {
Input string // Input is the complete path to the input file
Dir bool // Dir is whether or not we're testing directory mode
Err bool // Err is whether we expect an error or not
DirList []string // DirList is the list of files for Dir mode
FileMD5 string // FileMD5 is the expected MD5 for a single file
Mtime *time.Time // Mtime is the optionally expected mtime for a single file (or all files if in Dir mode)
}
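// Illustrative sketch, not part of the vendored source: a case table for
// TestDecompressor. The fixture paths, checksum, and directory listing are
// assumed placeholders; the standard library's *testing.T satisfies the
// go-testing-interface testing.T used here.
func tarGzCasesSketch(t testing.T) {
    cases := []TestDecompressCase{
        {
            Input:   "./testdata/single.tar.gz", // assumed fixture
            Dir:     false,                      // expect exactly one file
            FileMD5: "d41d8cd98f00b204e9800998ecf8427e", // assumed checksum
        },
        {
            Input:   "./testdata/tree.tar.gz", // assumed fixture
            Dir:     true,
            DirList: []string{"a.txt", "sub/", "sub/b.txt"}, // assumed contents
        },
    }
    TestDecompressor(t, new(TarGzipDecompressor), cases)
}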
// TestDecompressor is a helper function for testing generic decompressors.
func TestDecompressor(t testing.T, d Decompressor, cases []TestDecompressCase) {
t.Helper()
for _, tc := range cases {
t.Logf("Testing: %s", tc.Input)
// Temporary dir to store stuff
td, err := ioutil.TempDir("", "getter")
if err != nil {
t.Fatalf("err: %s", err)
}
// Destination is always joining result so that we have a new path
dst := filepath.Join(td, "subdir", "result")
// We use a function so defers work
func() {
defer os.RemoveAll(td)
// Decompress
err := d.Decompress(dst, tc.Input, tc.Dir)
if (err != nil) != tc.Err {
t.Fatalf("err %s: %s", tc.Input, err)
}
if tc.Err {
return
}
// If it isn't a directory, then check for a single file
if !tc.Dir {
fi, err := os.Stat(dst)
if err != nil {
t.Fatalf("err %s: %s", tc.Input, err)
}
if fi.IsDir() {
t.Fatalf("err %s: expected file, got directory", tc.Input)
}
if tc.FileMD5 != "" {
actual := testMD5(t, dst)
expected := tc.FileMD5
if actual != expected {
t.Fatalf("err %s: expected MD5 %s, got %s", tc.Input, expected, actual)
}
}
if tc.Mtime != nil {
actual := fi.ModTime()
if tc.Mtime.Unix() > 0 {
expected := *tc.Mtime
if actual != expected {
t.Fatalf("err %s: expected mtime '%s' for %s, got '%s'", tc.Input, expected.String(), dst, actual.String())
}
} else if actual.Unix() <= 0 {
t.Fatalf("err %s: expected mtime to be > 0, got '%s'", actual.String())
}
}
return
}
// Convert expected for windows
expected := tc.DirList
if runtime.GOOS == "windows" {
for i, v := range expected {
expected[i] = strings.Replace(v, "/", "\\", -1)
}
}
// Directory, check for the correct contents
actual := testListDir(t, dst)
if !reflect.DeepEqual(actual, expected) {
t.Fatalf("bad %s\n\n%#v\n\n%#v", tc.Input, actual, expected)
}
// Check for correct atime/mtime
for _, dir := range actual {
path := filepath.Join(dst, dir)
if tc.Mtime != nil {
fi, err := os.Stat(path)
if err != nil {
t.Fatalf("err: %s", err)
}
actual := fi.ModTime()
if tc.Mtime.Unix() > 0 {
expected := *tc.Mtime
if actual != expected {
t.Fatalf("err %s: expected mtime '%s' for %s, got '%s'", tc.Input, expected.String(), path, actual.String())
}
} else if actual.Unix() <= 0 {
t.Fatalf("err %s: expected mtime to be > 0, got '%s'", tc.Input, actual.String())
}
}
}
}()
}
}
func testListDir(t testing.T, path string) []string {
var result []string
err := filepath.Walk(path, func(sub string, info os.FileInfo, err error) error {
if err != nil {
return err
}
sub = strings.TrimPrefix(sub, path)
if sub == "" {
return nil
}
sub = sub[1:] // Trim the leading path sep.
// If it is a dir, add trailing sep
if info.IsDir() {
sub += string(os.PathSeparator)
}
result = append(result, sub)
return nil
})
if err != nil {
t.Fatalf("err: %s", err)
}
sort.Strings(result)
return result
}
func testMD5(t testing.T, path string) string {
f, err := os.Open(path)
if err != nil {
t.Fatalf("err: %s", err)
}
defer f.Close()
h := md5.New()
_, err = io.Copy(h, f)
if err != nil {
t.Fatalf("err: %s", err)
}
result := h.Sum(nil)
return hex.EncodeToString(result)
}


@ -0,0 +1,39 @@
package getter
import (
"compress/gzip"
"fmt"
"os"
"path/filepath"
)
// TarGzipDecompressor is an implementation of Decompressor that can
// decompress tar.gzip files.
type TarGzipDecompressor struct{}
func (d *TarGzipDecompressor) Decompress(dst, src string, dir bool) error {
// If we're going into a directory we should make that first
mkdir := dst
if !dir {
mkdir = filepath.Dir(dst)
}
if err := os.MkdirAll(mkdir, 0755); err != nil {
return err
}
// File first
f, err := os.Open(src)
if err != nil {
return err
}
defer f.Close()
// Gzip compression is second
gzipR, err := gzip.NewReader(f)
if err != nil {
return fmt.Errorf("Error opening a gzip reader for %s: %s", src, err)
}
defer gzipR.Close()
return untar(gzipR, dst, src, dir)
}


@ -0,0 +1,39 @@
package getter
import (
"fmt"
"os"
"path/filepath"
"github.com/ulikunitz/xz"
)
// TarXzDecompressor is an implementation of Decompressor that can
// decompress tar.xz files.
type TarXzDecompressor struct{}
func (d *TarXzDecompressor) Decompress(dst, src string, dir bool) error {
// If we're going into a directory we should make that first
mkdir := dst
if !dir {
mkdir = filepath.Dir(dst)
}
if err := os.MkdirAll(mkdir, 0755); err != nil {
return err
}
// File first
f, err := os.Open(src)
if err != nil {
return err
}
defer f.Close()
// xz compression is second
txzR, err := xz.NewReader(f)
if err != nil {
return fmt.Errorf("Error opening an xz reader for %s: %s", src, err)
}
return untar(txzR, dst, src, dir)
}

49
vendor/github.com/hashicorp/go-getter/decompress_xz.go generated vendored Normal file

@ -0,0 +1,49 @@
package getter
import (
"fmt"
"io"
"os"
"path/filepath"
"github.com/ulikunitz/xz"
)
// XzDecompressor is an implementation of Decompressor that can
// decompress xz files.
type XzDecompressor struct{}
func (d *XzDecompressor) Decompress(dst, src string, dir bool) error {
// Directory isn't supported at all
if dir {
return fmt.Errorf("xz-compressed files can only unarchive to a single file")
}
// If we're going into a directory we should make that first
if err := os.MkdirAll(filepath.Dir(dst), 0755); err != nil {
return err
}
// File first
f, err := os.Open(src)
if err != nil {
return err
}
defer f.Close()
// xz compression is second
xzR, err := xz.NewReader(f)
if err != nil {
return err
}
// Copy it out
dstF, err := os.Create(dst)
if err != nil {
return err
}
defer dstF.Close()
_, err = io.Copy(dstF, xzR)
return err
}

101
vendor/github.com/hashicorp/go-getter/decompress_zip.go generated vendored Normal file

@ -0,0 +1,101 @@
package getter
import (
"archive/zip"
"fmt"
"io"
"os"
"path/filepath"
)
// ZipDecompressor is an implementation of Decompressor that can
// decompress zip files.
type ZipDecompressor struct{}
func (d *ZipDecompressor) Decompress(dst, src string, dir bool) error {
// If we're going into a directory we should make that first
mkdir := dst
if !dir {
mkdir = filepath.Dir(dst)
}
if err := os.MkdirAll(mkdir, 0755); err != nil {
return err
}
// Open the zip
zipR, err := zip.OpenReader(src)
if err != nil {
return err
}
defer zipR.Close()
// Check the zip integrity
if len(zipR.File) == 0 {
// Empty archive
return fmt.Errorf("empty archive: %s", src)
}
if !dir && len(zipR.File) > 1 {
return fmt.Errorf("expected a single file: %s", src)
}
// Go through and unarchive
for _, f := range zipR.File {
path := dst
if dir {
// Disallow parent traversal
if containsDotDot(f.Name) {
return fmt.Errorf("entry contains '..': %s", f.Name)
}
path = filepath.Join(path, f.Name)
}
if f.FileInfo().IsDir() {
if !dir {
return fmt.Errorf("expected a single file: %s", src)
}
// A directory, just make the directory and continue unarchiving...
if err := os.MkdirAll(path, 0755); err != nil {
return err
}
continue
}
// Create the enclosing directories if we must. ZIP files aren't
// required to contain entries for just the directories so this
// can happen.
if dir {
if err := os.MkdirAll(filepath.Dir(path), 0755); err != nil {
return err
}
}
// Open the file for reading
srcF, err := f.Open()
if err != nil {
return err
}
// Open the file for writing
dstF, err := os.Create(path)
if err != nil {
srcF.Close()
return err
}
_, err = io.Copy(dstF, srcF)
srcF.Close()
dstF.Close()
if err != nil {
return err
}
// Chmod the file
if err := os.Chmod(path, f.Mode()); err != nil {
return err
}
}
return nil
}

105
vendor/github.com/hashicorp/go-getter/detect.go generated vendored Normal file

@ -0,0 +1,105 @@
package getter
import (
"fmt"
"path/filepath"
"github.com/hashicorp/go-getter/helper/url"
)
// Detector defines the interface that an invalid URL or a URL with a blank
// scheme is passed through in order to determine if it is shorthand for
// something else well-known.
type Detector interface {
// Detect will detect whether the string matches a known pattern to
// turn it into a proper URL.
Detect(string, string) (string, bool, error)
}
// Detectors is the list of detectors that are tried on an invalid URL.
// This is also the order they're tried (index 0 is first).
var Detectors []Detector
func init() {
Detectors = []Detector{
new(GitHubDetector),
new(GitDetector),
new(BitBucketDetector),
new(S3Detector),
new(GCSDetector),
new(FileDetector),
}
}
// Detect turns a source string into another source string if it is
// detected to be of a known pattern.
//
// The third parameter should be the list of detectors to use in the
// order to try them. If you don't want to configure this, just use
// the global Detectors variable.
//
// This is safe to be called with an already valid source string: Detect
// will just return it.
func Detect(src string, pwd string, ds []Detector) (string, error) {
getForce, getSrc := getForcedGetter(src)
// Separate out the subdir if there is one, we don't pass that to detect
getSrc, subDir := SourceDirSubdir(getSrc)
u, err := url.Parse(getSrc)
if err == nil && u.Scheme != "" {
// Valid URL
return src, nil
}
for _, d := range ds {
result, ok, err := d.Detect(getSrc, pwd)
if err != nil {
return "", err
}
if !ok {
continue
}
var detectForce string
detectForce, result = getForcedGetter(result)
result, detectSubdir := SourceDirSubdir(result)
// If we have a subdir from the detection, then prepend it to our
// requested subdir.
if detectSubdir != "" {
if subDir != "" {
subDir = filepath.Join(detectSubdir, subDir)
} else {
subDir = detectSubdir
}
}
if subDir != "" {
u, err := url.Parse(result)
if err != nil {
return "", fmt.Errorf("Error parsing URL: %s", err)
}
u.Path += "//" + subDir
// a subdir may contain wildcards, but in order to support them we
// have to ensure the path isn't escaped.
u.RawPath = u.Path
result = u.String()
}
// Preserve the forced getter if it exists. We try to use the
// original set force first, followed by any force set by the
// detector.
if getForce != "" {
result = fmt.Sprintf("%s::%s", getForce, result)
} else if detectForce != "" {
result = fmt.Sprintf("%s::%s", detectForce, result)
}
return result, nil
}
return "", fmt.Errorf("invalid source string: %s", src)
}
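// Illustrative sketch, not part of the vendored source: resolving a shorthand
// source string through the global detector list. The pwd value is an assumed
// example; per GitHubDetector below, this input resolves to a "git::" URL.
func detectSketch() (string, error) {
    return Detect("github.com/hashicorp/go-getter", "/tmp", Detectors)
}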


@ -0,0 +1,66 @@
package getter
import (
"encoding/json"
"fmt"
"net/http"
"net/url"
"strings"
)
// BitBucketDetector implements Detector to detect BitBucket URLs and turn
// them into URLs that the Git or Hg Getter can understand.
type BitBucketDetector struct{}
func (d *BitBucketDetector) Detect(src, _ string) (string, bool, error) {
if len(src) == 0 {
return "", false, nil
}
if strings.HasPrefix(src, "bitbucket.org/") {
return d.detectHTTP(src)
}
return "", false, nil
}
func (d *BitBucketDetector) detectHTTP(src string) (string, bool, error) {
u, err := url.Parse("https://" + src)
if err != nil {
return "", true, fmt.Errorf("error parsing BitBucket URL: %s", err)
}
// We need to get info on this BitBucket repository to determine whether
// it is Git or Hg.
var info struct {
SCM string `json:"scm"`
}
infoUrl := "https://api.bitbucket.org/2.0/repositories" + u.Path
resp, err := http.Get(infoUrl)
if err != nil {
return "", true, fmt.Errorf("error looking up BitBucket URL: %s", err)
}
if resp.StatusCode == 403 {
// A private repo
return "", true, fmt.Errorf(
"shorthand BitBucket URL can't be used for private repos, " +
"please use a full URL")
}
dec := json.NewDecoder(resp.Body)
if err := dec.Decode(&info); err != nil {
return "", true, fmt.Errorf("error looking up BitBucket URL: %s", err)
}
switch info.SCM {
case "git":
if !strings.HasSuffix(u.Path, ".git") {
u.Path += ".git"
}
return "git::" + u.String(), true, nil
case "hg":
return "hg::" + u.String(), true, nil
default:
return "", true, fmt.Errorf("unknown BitBucket SCM type: %s", info.SCM)
}
}

67
vendor/github.com/hashicorp/go-getter/detect_file.go generated vendored Normal file

@ -0,0 +1,67 @@
package getter
import (
"fmt"
"os"
"path/filepath"
"runtime"
)
// FileDetector implements Detector to detect file paths.
type FileDetector struct{}
func (d *FileDetector) Detect(src, pwd string) (string, bool, error) {
if len(src) == 0 {
return "", false, nil
}
if !filepath.IsAbs(src) {
if pwd == "" {
return "", true, fmt.Errorf(
"relative paths require a module with a pwd")
}
// Stat the pwd to determine if it's a symbolic link. If it is,
// then the pwd becomes the original directory. Otherwise,
// `filepath.Join` below does some weird stuff.
//
// We just ignore if the pwd doesn't exist. That error will be
// caught later when we try to use the URL.
if fi, err := os.Lstat(pwd); !os.IsNotExist(err) {
if err != nil {
return "", true, err
}
if fi.Mode()&os.ModeSymlink != 0 {
pwd, err = filepath.EvalSymlinks(pwd)
if err != nil {
return "", true, err
}
// The symlink itself might be a relative path, so we have to
// resolve this to have a correctly rooted URL.
pwd, err = filepath.Abs(pwd)
if err != nil {
return "", true, err
}
}
}
src = filepath.Join(pwd, src)
}
return fmtFileURL(src), true, nil
}
func fmtFileURL(path string) string {
if runtime.GOOS == "windows" {
// Make sure we're using "/" on Windows. URLs are "/"-based.
path = filepath.ToSlash(path)
return fmt.Sprintf("file://%s", path)
}
// Make sure that we don't start with "/" since we add that below.
if path[0] == '/' {
path = path[1:]
}
return fmt.Sprintf("file:///%s", path)
}
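// Illustrative sketch, not part of the vendored source: FileDetector resolving a
// relative path against a working directory. The paths are assumed examples;
// assuming pwd is not a symlink, on a non-Windows system the result is
// "file:///home/user/project/modules/vpc".
func fileDetectSketch() (string, bool, error) {
    d := new(FileDetector)
    return d.Detect("modules/vpc", "/home/user/project")
}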

43
vendor/github.com/hashicorp/go-getter/detect_gcs.go generated vendored Normal file

@ -0,0 +1,43 @@
package getter
import (
"fmt"
"net/url"
"strings"
)
// GCSDetector implements Detector to detect GCS URLs and turn
// them into URLs that the GCSGetter can understand.
type GCSDetector struct{}
func (d *GCSDetector) Detect(src, _ string) (string, bool, error) {
if len(src) == 0 {
return "", false, nil
}
if strings.Contains(src, "googleapis.com/") {
return d.detectHTTP(src)
}
return "", false, nil
}
func (d *GCSDetector) detectHTTP(src string) (string, bool, error) {
parts := strings.Split(src, "/")
if len(parts) < 5 {
return "", false, fmt.Errorf(
"URL is not a valid GCS URL")
}
version := parts[2]
bucket := parts[3]
object := strings.Join(parts[4:], "/")
url, err := url.Parse(fmt.Sprintf("https://www.googleapis.com/storage/%s/%s/%s",
version, bucket, object))
if err != nil {
return "", false, fmt.Errorf("error parsing GCS URL: %s", err)
}
return "gcs::" + url.String(), true, nil
}

26
vendor/github.com/hashicorp/go-getter/detect_git.go generated vendored Normal file

@ -0,0 +1,26 @@
package getter
// GitDetector implements Detector to detect Git SSH URLs such as
// git@host.com:dir1/dir2 and converts them to proper URLs.
type GitDetector struct{}
func (d *GitDetector) Detect(src, _ string) (string, bool, error) {
if len(src) == 0 {
return "", false, nil
}
u, err := detectSSH(src)
if err != nil {
return "", true, err
}
if u == nil {
return "", false, nil
}
// We require the username to be "git" to assume that this is a Git URL
if u.User.Username() != "git" {
return "", false, nil
}
return "git::" + u.String(), true, nil
}

47
vendor/github.com/hashicorp/go-getter/detect_github.go generated vendored Normal file

@ -0,0 +1,47 @@
package getter
import (
"fmt"
"net/url"
"strings"
)
// GitHubDetector implements Detector to detect GitHub URLs and turn
// them into URLs that the Git Getter can understand.
type GitHubDetector struct{}
func (d *GitHubDetector) Detect(src, _ string) (string, bool, error) {
if len(src) == 0 {
return "", false, nil
}
if strings.HasPrefix(src, "github.com/") {
return d.detectHTTP(src)
}
return "", false, nil
}
func (d *GitHubDetector) detectHTTP(src string) (string, bool, error) {
parts := strings.Split(src, "/")
if len(parts) < 3 {
return "", false, fmt.Errorf(
"GitHub URLs should be github.com/username/repo")
}
urlStr := fmt.Sprintf("https://%s", strings.Join(parts[:3], "/"))
url, err := url.Parse(urlStr)
if err != nil {
return "", true, fmt.Errorf("error parsing GitHub URL: %s", err)
}
if !strings.HasSuffix(url.Path, ".git") {
url.Path += ".git"
}
if len(parts) > 3 {
url.Path += "//" + strings.Join(parts[3:], "/")
}
return "git::" + url.String(), true, nil
}
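// Illustrative sketch, not part of the vendored source: the rewrite performed by
// GitHubDetector. The repository path is an assumed example.
func githubDetectSketch() {
    d := new(GitHubDetector)
    result, ok, err := d.Detect("github.com/hashicorp/go-getter/helper/url", "")
    // Per detectHTTP above: ok is true, err is nil, and result is
    // "git::https://github.com/hashicorp/go-getter.git//helper/url".
    _, _, _ = result, ok, err
}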

61
vendor/github.com/hashicorp/go-getter/detect_s3.go generated vendored Normal file

@ -0,0 +1,61 @@
package getter
import (
"fmt"
"net/url"
"strings"
)
// S3Detector implements Detector to detect S3 URLs and turn
// them into URLs that the S3 getter can understand.
type S3Detector struct{}
func (d *S3Detector) Detect(src, _ string) (string, bool, error) {
if len(src) == 0 {
return "", false, nil
}
if strings.Contains(src, ".amazonaws.com/") {
return d.detectHTTP(src)
}
return "", false, nil
}
func (d *S3Detector) detectHTTP(src string) (string, bool, error) {
parts := strings.Split(src, "/")
if len(parts) < 2 {
return "", false, fmt.Errorf(
"URL is not a valid S3 URL")
}
hostParts := strings.Split(parts[0], ".")
if len(hostParts) == 3 {
return d.detectPathStyle(hostParts[0], parts[1:])
} else if len(hostParts) == 4 {
return d.detectVhostStyle(hostParts[1], hostParts[0], parts[1:])
} else {
return "", false, fmt.Errorf(
"URL is not a valid S3 URL")
}
}
func (d *S3Detector) detectPathStyle(region string, parts []string) (string, bool, error) {
urlStr := fmt.Sprintf("https://%s.amazonaws.com/%s", region, strings.Join(parts, "/"))
url, err := url.Parse(urlStr)
if err != nil {
return "", false, fmt.Errorf("error parsing S3 URL: %s", err)
}
return "s3::" + url.String(), true, nil
}
func (d *S3Detector) detectVhostStyle(region, bucket string, parts []string) (string, bool, error) {
urlStr := fmt.Sprintf("https://%s.amazonaws.com/%s/%s", region, bucket, strings.Join(parts, "/"))
url, err := url.Parse(urlStr)
if err != nil {
return "", false, fmt.Errorf("error parsing S3 URL: %s", err)
}
return "s3::" + url.String(), true, nil
}

49
vendor/github.com/hashicorp/go-getter/detect_ssh.go generated vendored Normal file

@ -0,0 +1,49 @@
package getter
import (
"fmt"
"net/url"
"regexp"
"strings"
)
// Note that we do not have an SSH-getter currently so this file serves
// only to hold the detectSSH helper that is used by other detectors.
// sshPattern matches SCP-like SSH patterns (user@host:path)
var sshPattern = regexp.MustCompile("^(?:([^@]+)@)?([^:]+):/?(.+)$")
// detectSSH determines if the src string matches an SSH-like URL and
// converts it into a *url.URL. This returns nil if the
// string doesn't match the SSH pattern.
//
// This function is tested indirectly via detect_git_test.go
func detectSSH(src string) (*url.URL, error) {
matched := sshPattern.FindStringSubmatch(src)
if matched == nil {
return nil, nil
}
user := matched[1]
host := matched[2]
path := matched[3]
qidx := strings.Index(path, "?")
if qidx == -1 {
qidx = len(path)
}
var u url.URL
u.Scheme = "ssh"
u.User = url.User(user)
u.Host = host
u.Path = path[0:qidx]
if qidx < len(path) {
q, err := url.ParseQuery(path[qidx+1:])
if err != nil {
return nil, fmt.Errorf("error parsing GitHub SSH URL: %s", err)
}
u.RawQuery = q.Encode()
}
return &u, nil
}
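// Illustrative sketch, not part of the vendored source: what the SCP-like pattern
// above matches. The host, path, and query are assumed examples.
func detectSSHSketch() {
    u, err := detectSSH("git@example.com:hashicorp/go-getter.git?ref=v1.0.0")
    // Per the code above: u.Scheme == "ssh", the user is "git",
    // u.Host == "example.com", u.Path == "hashicorp/go-getter.git" and
    // u.RawQuery == "ref=v1.0.0". A non-matching string yields u == nil, err == nil.
    _, _ = u, err
}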


@ -0,0 +1,65 @@
package getter
import (
"crypto/md5"
"encoding/hex"
"fmt"
"os"
"path/filepath"
)
// FolderStorage is an implementation of the Storage interface that manages
// modules on the disk.
type FolderStorage struct {
// StorageDir is the directory where the modules will be stored.
StorageDir string
}
// Dir implements Storage.Dir
func (s *FolderStorage) Dir(key string) (d string, e bool, err error) {
d = s.dir(key)
_, err = os.Stat(d)
if err == nil {
// Directory exists
e = true
return
}
if os.IsNotExist(err) {
// Directory doesn't exist
d = ""
e = false
err = nil
return
}
// An error
d = ""
e = false
return
}
// Get implements Storage.Get
func (s *FolderStorage) Get(key string, source string, update bool) error {
dir := s.dir(key)
if !update {
if _, err := os.Stat(dir); err == nil {
// If the directory already exists, then we're done since
// we're not updating.
return nil
} else if !os.IsNotExist(err) {
// If the error we got wasn't a file-not-exist error, then
// something went wrong and we should report it.
return fmt.Errorf("Error reading module directory: %s", err)
}
}
// Get the source. This always forces an update.
return Get(dir, source)
}
// dir returns the directory name that we'll use internally to map the
// given key to a storage path.
func (s *FolderStorage) dir(key string) string {
sum := md5.Sum([]byte(key))
return filepath.Join(s.StorageDir, hex.EncodeToString(sum[:]))
}
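// Illustrative sketch, not part of the vendored source: caching a module on disk
// with FolderStorage. The storage directory, key, and source are assumed examples.
func folderStorageSketch() (string, error) {
    s := &FolderStorage{StorageDir: "/tmp/modules"} // assumed cache location
    if err := s.Get("vpc", "github.com/hashicorp/go-getter", false); err != nil {
        return "", err
    }
    dir, found, err := s.Dir("vpc")
    if err != nil || !found {
        return "", err
    }
    return dir, nil
}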

152
vendor/github.com/hashicorp/go-getter/get.go generated vendored Normal file

@ -0,0 +1,152 @@
// getter is a package for downloading files or directories from a variety of
// protocols.
//
// getter is unique in its ability to download both directories and files.
// It also detects certain source strings to be protocol-specific URLs. For
// example, "github.com/hashicorp/go-getter" would turn into a Git URL and
// use the Git protocol.
//
// Protocols and detectors are extensible.
//
// To get started, see Client.
package getter
import (
"bytes"
"fmt"
"net/url"
"os/exec"
"regexp"
"syscall"
cleanhttp "github.com/hashicorp/go-cleanhttp"
)
// Getter defines the interface that schemes must implement to download
// things.
type Getter interface {
// Get downloads the given URL into the given directory. This always
// assumes that we're updating and gets the latest version that it can.
//
// The directory may already exist (if we're updating). If it is in a
// format that isn't understood, an error should be returned. Get shouldn't
// simply nuke the directory.
Get(string, *url.URL) error
// GetFile downloads the given URL into the given path. The URL must
// reference a single file. If possible, the Getter should check if
// the remote end contains the same file and no-op this operation.
GetFile(string, *url.URL) error
// ClientMode returns the mode based on the given URL. This is used to
// allow clients to let the getters decide which mode to use.
ClientMode(*url.URL) (ClientMode, error)
// SetClient allows a getter to know its client
// in order to access client's Get functions or
// progress tracking.
SetClient(*Client)
}
// Getters is the mapping of scheme to the Getter implementation that will
// be used to get a dependency.
var Getters map[string]Getter
// forcedRegexp is the regular expression that finds forced getters. This
// syntax is schema::url, example: git::https://foo.com
var forcedRegexp = regexp.MustCompile(`^([A-Za-z0-9]+)::(.+)$`)
// httpClient is the default client to be used by HttpGetters.
var httpClient = cleanhttp.DefaultClient()
func init() {
httpGetter := &HttpGetter{
Netrc: true,
}
Getters = map[string]Getter{
"file": new(FileGetter),
"git": new(GitGetter),
"gcs": new(GCSGetter),
"hg": new(HgGetter),
"s3": new(S3Getter),
"http": httpGetter,
"https": httpGetter,
}
}
// Get downloads the directory specified by src into the folder specified by
// dst. If dst already exists, Get will attempt to update it.
//
// src is a URL, whereas dst is always just a file path to a folder. This
// folder doesn't need to exist. It will be created if it doesn't exist.
func Get(dst, src string, opts ...ClientOption) error {
return (&Client{
Src: src,
Dst: dst,
Dir: true,
Options: opts,
}).Get()
}
// GetAny downloads a URL into the given destination. Unlike Get or
// GetFile, both directories and files are supported.
//
// dst must be a directory. If src is a file, it will be downloaded
// into dst with the basename of the URL. If src is a directory or
// archive, it will be unpacked directly into dst.
func GetAny(dst, src string, opts ...ClientOption) error {
return (&Client{
Src: src,
Dst: dst,
Mode: ClientModeAny,
Options: opts,
}).Get()
}
// GetFile downloads the file specified by src into the path specified by
// dst.
func GetFile(dst, src string, opts ...ClientOption) error {
return (&Client{
Src: src,
Dst: dst,
Dir: false,
Options: opts,
}).Get()
}
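// Illustrative sketch, not part of the vendored source: the package-level helpers
// as a consumer would call them. Destination paths and sources are assumed examples.
func getSketch() error {
    // Download a whole repository; detectors turn the shorthand into a git:: URL.
    if err := Get("./out/go-getter", "github.com/hashicorp/go-getter"); err != nil {
        return err
    }
    // Download a single file over HTTPS into the given path.
    return GetFile("./out/LICENSE", "https://example.com/files/LICENSE")
}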
// getRunCommand is a helper that will run a command and capture the output
// in the case an error happens.
func getRunCommand(cmd *exec.Cmd) error {
var buf bytes.Buffer
cmd.Stdout = &buf
cmd.Stderr = &buf
err := cmd.Run()
if err == nil {
return nil
}
if exiterr, ok := err.(*exec.ExitError); ok {
// The program has exited with an exit code != 0
if status, ok := exiterr.Sys().(syscall.WaitStatus); ok {
return fmt.Errorf(
"%s exited with %d: %s",
cmd.Path,
status.ExitStatus(),
buf.String())
}
}
return fmt.Errorf("error running %s: %s", cmd.Path, buf.String())
}
// getForcedGetter takes a source and returns the tuple of the forced
// getter and the raw URL (without the force syntax).
func getForcedGetter(src string) (string, string) {
var forced string
if ms := forcedRegexp.FindStringSubmatch(src); ms != nil {
forced = ms[1]
src = ms[2]
}
return forced, src
}
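// Illustrative sketch, not part of the vendored source: splitting the forced-getter
// syntax described above. The input is an assumed example.
func forcedGetterSketch() {
    forced, src := getForcedGetter("git::https://example.com/repo.git")
    // forced == "git" and src == "https://example.com/repo.git"; without the
    // "scheme::" prefix, forced is "" and src is returned unchanged.
    _, _ = forced, src
}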

20
vendor/github.com/hashicorp/go-getter/get_base.go generated vendored Normal file

@ -0,0 +1,20 @@
package getter
import "context"
// getter is our base getter; it groups the
// fields all getters have in common.
type getter struct {
client *Client
}
func (g *getter) SetClient(c *Client) { g.client = c }
// Context tries to return the Context from the getter's
// client; otherwise context.Background() is returned.
func (g *getter) Context() context.Context {
if g == nil || g.client == nil {
return context.Background()
}
return g.client.Ctx
}

36
vendor/github.com/hashicorp/go-getter/get_file.go generated vendored Normal file

@ -0,0 +1,36 @@
package getter
import (
"net/url"
"os"
)
// FileGetter is a Getter implementation that will download a module from
// a file scheme.
type FileGetter struct {
getter
// Copy, if set to true, will copy data instead of using a symlink. If
// false, attempts to symlink to speed up the operation and to lower the
// disk space usage. If the symlink fails, a copy may be attempted on Windows.
Copy bool
}
func (g *FileGetter) ClientMode(u *url.URL) (ClientMode, error) {
path := u.Path
if u.RawPath != "" {
path = u.RawPath
}
fi, err := os.Stat(path)
if err != nil {
return 0, err
}
// Check if the source is a directory.
if fi.IsDir() {
return ClientModeDir, nil
}
return ClientModeFile, nil
}

29
vendor/github.com/hashicorp/go-getter/get_file_copy.go generated vendored Normal file

@ -0,0 +1,29 @@
package getter
import (
"context"
"io"
)
// readerFunc is syntactic sugar for the io.Reader interface.
type readerFunc func(p []byte) (n int, err error)
func (rf readerFunc) Read(p []byte) (n int, err error) { return rf(p) }
// Copy is an io.Copy that can be cancelled via context
func Copy(ctx context.Context, dst io.Writer, src io.Reader) (int64, error) {
// Copy will call the Reader and Writer interfaces multiple times, in order
// to copy by chunk (avoiding loading the whole file in memory).
return io.Copy(dst, readerFunc(func(p []byte) (int, error) {
select {
case <-ctx.Done():
// context has been canceled
// stop process and propagate "context canceled" error
return 0, ctx.Err()
default:
// otherwise just run default io.Reader implementation
return src.Read(p)
}
}))
}
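// Illustrative sketch, not part of the vendored source: using the cancellable Copy
// so a long transfer stops when its context expires. The timeout is an assumed
// example and would require importing "time".
func copySketch(dst io.Writer, src io.Reader) (int64, error) {
    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    defer cancel()
    return Copy(ctx, dst, src)
}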

103
vendor/github.com/hashicorp/go-getter/get_file_unix.go generated vendored Normal file

@ -0,0 +1,103 @@
// +build !windows
package getter
import (
"fmt"
"net/url"
"os"
"path/filepath"
)
func (g *FileGetter) Get(dst string, u *url.URL) error {
path := u.Path
if u.RawPath != "" {
path = u.RawPath
}
// The source path must exist and be a directory to be usable.
if fi, err := os.Stat(path); err != nil {
return fmt.Errorf("source path error: %s", err)
} else if !fi.IsDir() {
return fmt.Errorf("source path must be a directory")
}
fi, err := os.Lstat(dst)
if err != nil && !os.IsNotExist(err) {
return err
}
// If the destination already exists, it must be a symlink
if err == nil {
mode := fi.Mode()
if mode&os.ModeSymlink == 0 {
return fmt.Errorf("destination exists and is not a symlink")
}
// Remove the destination
if err := os.Remove(dst); err != nil {
return err
}
}
// Create all the parent directories
if err := os.MkdirAll(filepath.Dir(dst), 0755); err != nil {
return err
}
return os.Symlink(path, dst)
}
func (g *FileGetter) GetFile(dst string, u *url.URL) error {
ctx := g.Context()
path := u.Path
if u.RawPath != "" {
path = u.RawPath
}
// The source path must exist and be a file to be usable.
if fi, err := os.Stat(path); err != nil {
return fmt.Errorf("source path error: %s", err)
} else if fi.IsDir() {
return fmt.Errorf("source path must be a file")
}
_, err := os.Lstat(dst)
if err != nil && !os.IsNotExist(err) {
return err
}
// If the destination already exists, it must be a symlink
if err == nil {
// Remove the destination
if err := os.Remove(dst); err != nil {
return err
}
}
// Create all the parent directories
if err := os.MkdirAll(filepath.Dir(dst), 0755); err != nil {
return err
}
// If we're not copying, just symlink and we're done
if !g.Copy {
return os.Symlink(path, dst)
}
// Copy
srcF, err := os.Open(path)
if err != nil {
return err
}
defer srcF.Close()
dstF, err := os.Create(dst)
if err != nil {
return err
}
defer dstF.Close()
_, err = Copy(ctx, dstF, srcF)
return err
}


@ -0,0 +1,136 @@
// +build windows
package getter
import (
"fmt"
"net/url"
"os"
"os/exec"
"path/filepath"
"strings"
"syscall"
)
func (g *FileGetter) Get(dst string, u *url.URL) error {
ctx := g.Context()
path := u.Path
if u.RawPath != "" {
path = u.RawPath
}
// The source path must exist and be a directory to be usable.
if fi, err := os.Stat(path); err != nil {
return fmt.Errorf("source path error: %s", err)
} else if !fi.IsDir() {
return fmt.Errorf("source path must be a directory")
}
fi, err := os.Lstat(dst)
if err != nil && !os.IsNotExist(err) {
return err
}
// If the destination already exists, it must be a symlink
if err == nil {
mode := fi.Mode()
if mode&os.ModeSymlink == 0 {
return fmt.Errorf("destination exists and is not a symlink")
}
// Remove the destination
if err := os.Remove(dst); err != nil {
return err
}
}
// Create all the parent directories
if err := os.MkdirAll(filepath.Dir(dst), 0755); err != nil {
return err
}
sourcePath := toBackslash(path)
// Use mklink to create a junction point
output, err := exec.CommandContext(ctx, "cmd", "/c", "mklink", "/J", dst, sourcePath).CombinedOutput()
if err != nil {
return fmt.Errorf("failed to run mklink %v %v: %v %q", dst, sourcePath, err, output)
}
return nil
}
func (g *FileGetter) GetFile(dst string, u *url.URL) error {
ctx := g.Context()
path := u.Path
if u.RawPath != "" {
path = u.RawPath
}
// The source path must exist and be a file to be usable.
if fi, err := os.Stat(path); err != nil {
return fmt.Errorf("source path error: %s", err)
} else if fi.IsDir() {
return fmt.Errorf("source path must be a file")
}
_, err := os.Lstat(dst)
if err != nil && !os.IsNotExist(err) {
return err
}
// If the destination already exists, it must be a symlink
if err == nil {
// Remove the destination
if err := os.Remove(dst); err != nil {
return err
}
}
// Create all the parent directories
if err := os.MkdirAll(filepath.Dir(dst), 0755); err != nil {
return err
}
// If we're not copying, just symlink and we're done
if !g.Copy {
if err = os.Symlink(path, dst); err == nil {
return err
}
lerr, ok := err.(*os.LinkError)
if !ok {
return err
}
switch lerr.Err {
case syscall.ERROR_PRIVILEGE_NOT_HELD:
// no symlink privilege, let's
// fallback to a copy to avoid an error.
break
default:
return err
}
}
// Copy
srcF, err := os.Open(path)
if err != nil {
return err
}
defer srcF.Close()
dstF, err := os.Create(dst)
if err != nil {
return err
}
defer dstF.Close()
_, err = Copy(ctx, dstF, srcF)
return err
}
// toBackslash returns the result of replacing each slash character
// in path with a backslash ('\') character. Multiple separators are
// replaced by multiple backslashes.
func toBackslash(path string) string {
return strings.Replace(path, "/", "\\", -1)
}

172
vendor/github.com/hashicorp/go-getter/get_gcs.go generated vendored Normal file

@ -0,0 +1,172 @@
package getter
import (
"context"
"fmt"
"net/url"
"os"
"path/filepath"
"strings"
"cloud.google.com/go/storage"
"google.golang.org/api/iterator"
)
// GCSGetter is a Getter implementation that will download a module from
// a GCS bucket.
type GCSGetter struct {
getter
}
func (g *GCSGetter) ClientMode(u *url.URL) (ClientMode, error) {
ctx := g.Context()
// Parse URL
bucket, object, err := g.parseURL(u)
if err != nil {
return 0, err
}
client, err := storage.NewClient(ctx)
if err != nil {
return 0, err
}
iter := client.Bucket(bucket).Objects(ctx, &storage.Query{Prefix: object})
for {
obj, err := iter.Next()
if err != nil && err != iterator.Done {
return 0, err
}
if err == iterator.Done {
break
}
if strings.HasSuffix(obj.Name, "/") {
// A directory matched the prefix search, so this must be a directory
return ClientModeDir, nil
} else if obj.Name != object {
// A file matched the prefix search and doesn't have the same name
// as the query, so this must be a directory
return ClientModeDir, nil
}
}
// There are no directories or subdirectories, and if a match was returned,
// it was exactly equal to the prefix search. So return File mode
return ClientModeFile, nil
}
func (g *GCSGetter) Get(dst string, u *url.URL) error {
ctx := g.Context()
// Parse URL
bucket, object, err := g.parseURL(u)
if err != nil {
return err
}
// Remove destination if it already exists
_, err = os.Stat(dst)
if err != nil && !os.IsNotExist(err) {
return err
}
if err == nil {
// Remove the destination
if err := os.RemoveAll(dst); err != nil {
return err
}
}
// Create all the parent directories
if err := os.MkdirAll(filepath.Dir(dst), 0755); err != nil {
return err
}
client, err := storage.NewClient(ctx)
if err != nil {
return err
}
// Iterate through all matching objects.
iter := client.Bucket(bucket).Objects(ctx, &storage.Query{Prefix: object})
for {
obj, err := iter.Next()
if err != nil && err != iterator.Done {
return err
}
if err == iterator.Done {
break
}
if !strings.HasSuffix(obj.Name, "/") {
// Get the object destination path
objDst, err := filepath.Rel(object, obj.Name)
if err != nil {
return err
}
objDst = filepath.Join(dst, objDst)
// Download the matching object.
err = g.getObject(ctx, client, objDst, bucket, obj.Name)
if err != nil {
return err
}
}
}
return nil
}
func (g *GCSGetter) GetFile(dst string, u *url.URL) error {
ctx := g.Context()
// Parse URL
bucket, object, err := g.parseURL(u)
if err != nil {
return err
}
client, err := storage.NewClient(ctx)
if err != nil {
return err
}
return g.getObject(ctx, client, dst, bucket, object)
}
func (g *GCSGetter) getObject(ctx context.Context, client *storage.Client, dst, bucket, object string) error {
rc, err := client.Bucket(bucket).Object(object).NewReader(ctx)
if err != nil {
return err
}
defer rc.Close()
// Create all the parent directories
if err := os.MkdirAll(filepath.Dir(dst), 0755); err != nil {
return err
}
f, err := os.Create(dst)
if err != nil {
return err
}
defer f.Close()
_, err = Copy(ctx, f, rc)
return err
}
func (g *GCSGetter) parseURL(u *url.URL) (bucket, path string, err error) {
if strings.Contains(u.Host, "googleapis.com") {
hostParts := strings.Split(u.Host, ".")
if len(hostParts) != 3 {
err = fmt.Errorf("URL is not a valid GCS URL")
return
}
pathParts := strings.SplitN(u.Path, "/", 5)
if len(pathParts) != 5 {
err = fmt.Errorf("URL is not a valid GCS URL")
return
}
bucket = pathParts[3]
path = pathParts[4]
}
return
}

313
vendor/github.com/hashicorp/go-getter/get_git.go generated vendored Normal file

@ -0,0 +1,313 @@
package getter
import (
"bytes"
"context"
"encoding/base64"
"fmt"
"io/ioutil"
"net/url"
"os"
"os/exec"
"path/filepath"
"regexp"
"runtime"
"strconv"
"strings"
urlhelper "github.com/hashicorp/go-getter/helper/url"
safetemp "github.com/hashicorp/go-safetemp"
version "github.com/hashicorp/go-version"
)
// GitGetter is a Getter implementation that will download a module from
// a git repository.
type GitGetter struct {
getter
}
var defaultBranchRegexp = regexp.MustCompile(`\s->\sorigin/(.*)`)
func (g *GitGetter) ClientMode(_ *url.URL) (ClientMode, error) {
return ClientModeDir, nil
}
func (g *GitGetter) Get(dst string, u *url.URL) error {
ctx := g.Context()
if _, err := exec.LookPath("git"); err != nil {
return fmt.Errorf("git must be available and on the PATH")
}
// The port number must be parseable as an integer. If not, the user
// was probably trying to use a scp-style address, in which case the
// ssh:// prefix must be removed to indicate that.
//
// This is not necessary in versions of Go which have patched
// CVE-2019-14809 (e.g. Go 1.12.8+)
if portStr := u.Port(); portStr != "" {
if _, err := strconv.ParseUint(portStr, 10, 16); err != nil {
return fmt.Errorf("invalid port number %q; if using the \"scp-like\" git address scheme where a colon introduces the path instead, remove the ssh:// portion and use just the git:: prefix", portStr)
}
}
// Extract some query parameters we use
var ref, sshKey string
var depth int
q := u.Query()
if len(q) > 0 {
ref = q.Get("ref")
q.Del("ref")
sshKey = q.Get("sshkey")
q.Del("sshkey")
if n, err := strconv.Atoi(q.Get("depth")); err == nil {
depth = n
}
q.Del("depth")
// Copy the URL
var newU url.URL = *u
u = &newU
u.RawQuery = q.Encode()
}
var sshKeyFile string
if sshKey != "" {
// Check that the git version is sufficiently new.
if err := checkGitVersion("2.3"); err != nil {
return fmt.Errorf("Error using ssh key: %v", err)
}
// We have an SSH key - decode it.
raw, err := base64.StdEncoding.DecodeString(sshKey)
if err != nil {
return err
}
// Create a temp file for the key and ensure it is removed.
fh, err := ioutil.TempFile("", "go-getter")
if err != nil {
return err
}
sshKeyFile = fh.Name()
defer os.Remove(sshKeyFile)
// Set the permissions prior to writing the key material.
if err := os.Chmod(sshKeyFile, 0600); err != nil {
return err
}
// Write the raw key into the temp file.
_, err = fh.Write(raw)
fh.Close()
if err != nil {
return err
}
}
// Clone or update the repository
_, err := os.Stat(dst)
if err != nil && !os.IsNotExist(err) {
return err
}
if err == nil {
err = g.update(ctx, dst, sshKeyFile, ref, depth)
} else {
err = g.clone(ctx, dst, sshKeyFile, u, depth)
}
if err != nil {
return err
}
// Next: check out the proper tag/branch if it is specified, and checkout
if ref != "" {
if err := g.checkout(dst, ref); err != nil {
return err
}
}
// Lastly, download any/all submodules.
return g.fetchSubmodules(ctx, dst, sshKeyFile, depth)
}
// GetFile for Git doesn't support updating at this time. It will download
// the file every time.
func (g *GitGetter) GetFile(dst string, u *url.URL) error {
td, tdcloser, err := safetemp.Dir("", "getter")
if err != nil {
return err
}
defer tdcloser.Close()
// Get the filename, and strip the filename from the URL so we can
// just get the repository directly.
filename := filepath.Base(u.Path)
u.Path = filepath.Dir(u.Path)
// Get the full repository
if err := g.Get(td, u); err != nil {
return err
}
// Copy the single file
u, err = urlhelper.Parse(fmtFileURL(filepath.Join(td, filename)))
if err != nil {
return err
}
fg := &FileGetter{Copy: true}
return fg.GetFile(dst, u)
}
func (g *GitGetter) checkout(dst string, ref string) error {
cmd := exec.Command("git", "checkout", ref)
cmd.Dir = dst
return getRunCommand(cmd)
}
func (g *GitGetter) clone(ctx context.Context, dst, sshKeyFile string, u *url.URL, depth int) error {
args := []string{"clone"}
if depth > 0 {
args = append(args, "--depth", strconv.Itoa(depth))
}
args = append(args, u.String(), dst)
cmd := exec.CommandContext(ctx, "git", args...)
setupGitEnv(cmd, sshKeyFile)
return getRunCommand(cmd)
}
func (g *GitGetter) update(ctx context.Context, dst, sshKeyFile, ref string, depth int) error {
// Determine if we're a branch. If we're NOT a branch, then we just
// switch to master prior to checking out
cmd := exec.CommandContext(ctx, "git", "show-ref", "-q", "--verify", "refs/heads/"+ref)
cmd.Dir = dst
if getRunCommand(cmd) != nil {
// Not a branch, switch to default branch. This will also catch
// non-existent branches, in which case we want to switch to default
// and then checkout the proper branch later.
ref = findDefaultBranch(dst)
}
// We have to be on a branch to pull
if err := g.checkout(dst, ref); err != nil {
return err
}
if depth > 0 {
cmd = exec.Command("git", "pull", "--depth", strconv.Itoa(depth), "--ff-only")
} else {
cmd = exec.Command("git", "pull", "--ff-only")
}
cmd.Dir = dst
setupGitEnv(cmd, sshKeyFile)
return getRunCommand(cmd)
}
// fetchSubmodules downloads any configured submodules recursively.
func (g *GitGetter) fetchSubmodules(ctx context.Context, dst, sshKeyFile string, depth int) error {
args := []string{"submodule", "update", "--init", "--recursive"}
if depth > 0 {
args = append(args, "--depth", strconv.Itoa(depth))
}
cmd := exec.CommandContext(ctx, "git", args...)
cmd.Dir = dst
setupGitEnv(cmd, sshKeyFile)
return getRunCommand(cmd)
}
// findDefaultBranch checks the repo's origin remote for its default branch
// (generally "master"). "master" is returned if an origin default branch
// can't be determined.
func findDefaultBranch(dst string) string {
var stdoutbuf bytes.Buffer
cmd := exec.Command("git", "branch", "-r", "--points-at", "refs/remotes/origin/HEAD")
cmd.Dir = dst
cmd.Stdout = &stdoutbuf
err := cmd.Run()
matches := defaultBranchRegexp.FindStringSubmatch(stdoutbuf.String())
if err != nil || matches == nil {
return "master"
}
return matches[len(matches)-1]
}
// setupGitEnv sets up the environment for the given command. This is used to
// pass configuration data to git and ssh and enables advanced cloning methods.
func setupGitEnv(cmd *exec.Cmd, sshKeyFile string) {
const gitSSHCommand = "GIT_SSH_COMMAND="
var sshCmd []string
// If we have an existing GIT_SSH_COMMAND, we need to append our options.
// We will also remove our old entry to make sure the behavior is the same
// with versions of Go < 1.9.
env := os.Environ()
for i, v := range env {
if strings.HasPrefix(v, gitSSHCommand) && len(v) > len(gitSSHCommand) {
sshCmd = []string{v}
env[i], env[len(env)-1] = env[len(env)-1], env[i]
env = env[:len(env)-1]
break
}
}
if len(sshCmd) == 0 {
sshCmd = []string{gitSSHCommand + "ssh"}
}
if sshKeyFile != "" {
// We have an SSH key temp file configured, tell ssh about this.
if runtime.GOOS == "windows" {
sshKeyFile = strings.Replace(sshKeyFile, `\`, `/`, -1)
}
sshCmd = append(sshCmd, "-i", sshKeyFile)
}
env = append(env, strings.Join(sshCmd, " "))
cmd.Env = env
}
// checkGitVersion is used to check the version of git installed on the system
// against a known minimum version. Returns an error if the installed version
// is older than the given minimum.
func checkGitVersion(min string) error {
want, err := version.NewVersion(min)
if err != nil {
return err
}
out, err := exec.Command("git", "version").Output()
if err != nil {
return err
}
fields := strings.Fields(string(out))
if len(fields) < 3 {
return fmt.Errorf("Unexpected 'git version' output: %q", string(out))
}
v := fields[2]
if runtime.GOOS == "windows" && strings.Contains(v, ".windows.") {
// on windows, git version will return for example:
// git version 2.20.1.windows.1
// Which does not follow the semantic versioning specs
// https://semver.org. We remove that part in order for
// go-version to not error.
v = v[:strings.Index(v, ".windows.")]
}
have, err := version.NewVersion(v)
if err != nil {
return err
}
if have.LessThan(want) {
return fmt.Errorf("Required git version = %s, have %s", want, have)
}
return nil
}

135
vendor/github.com/hashicorp/go-getter/get_hg.go generated vendored Normal file

@ -0,0 +1,135 @@
package getter
import (
"context"
"fmt"
"net/url"
"os"
"os/exec"
"path/filepath"
"runtime"
urlhelper "github.com/hashicorp/go-getter/helper/url"
safetemp "github.com/hashicorp/go-safetemp"
)
// HgGetter is a Getter implementation that will download a module from
// a Mercurial repository.
type HgGetter struct {
getter
}
func (g *HgGetter) ClientMode(_ *url.URL) (ClientMode, error) {
return ClientModeDir, nil
}
func (g *HgGetter) Get(dst string, u *url.URL) error {
ctx := g.Context()
if _, err := exec.LookPath("hg"); err != nil {
return fmt.Errorf("hg must be available and on the PATH")
}
newURL, err := urlhelper.Parse(u.String())
if err != nil {
return err
}
if fixWindowsDrivePath(newURL) {
// See valid file path form on http://www.selenic.com/hg/help/urls
newURL.Path = fmt.Sprintf("/%s", newURL.Path)
}
// Extract some query parameters we use
var rev string
q := newURL.Query()
if len(q) > 0 {
rev = q.Get("rev")
q.Del("rev")
newURL.RawQuery = q.Encode()
}
_, err = os.Stat(dst)
if err != nil && !os.IsNotExist(err) {
return err
}
if err != nil {
if err := g.clone(dst, newURL); err != nil {
return err
}
}
if err := g.pull(dst, newURL); err != nil {
return err
}
return g.update(ctx, dst, newURL, rev)
}
// GetFile for Hg doesn't support updating at this time. It will download
// the file every time.
func (g *HgGetter) GetFile(dst string, u *url.URL) error {
// Create a temporary directory to store the full source. This has to be
// a non-existent directory.
td, tdcloser, err := safetemp.Dir("", "getter")
if err != nil {
return err
}
defer tdcloser.Close()
// Get the filename, and strip the filename from the URL so we can
// just get the repository directly.
filename := filepath.Base(u.Path)
u.Path = filepath.ToSlash(filepath.Dir(u.Path))
// If we're on Windows, we need to set the host to "localhost" for hg
if runtime.GOOS == "windows" {
u.Host = "localhost"
}
// Get the full repository
if err := g.Get(td, u); err != nil {
return err
}
// Copy the single file
u, err = urlhelper.Parse(fmtFileURL(filepath.Join(td, filename)))
if err != nil {
return err
}
fg := &FileGetter{Copy: true, getter: g.getter}
return fg.GetFile(dst, u)
}
func (g *HgGetter) clone(dst string, u *url.URL) error {
cmd := exec.Command("hg", "clone", "-U", u.String(), dst)
return getRunCommand(cmd)
}
func (g *HgGetter) pull(dst string, u *url.URL) error {
cmd := exec.Command("hg", "pull")
cmd.Dir = dst
return getRunCommand(cmd)
}
func (g *HgGetter) update(ctx context.Context, dst string, u *url.URL, rev string) error {
args := []string{"update"}
if rev != "" {
args = append(args, rev)
}
cmd := exec.CommandContext(ctx, "hg", args...)
cmd.Dir = dst
return getRunCommand(cmd)
}
func fixWindowsDrivePath(u *url.URL) bool {
// hg assumes a file:/// prefix for Windows drive letter file paths.
// (e.g. file:///c:/foo/bar)
// If the URL Path does not begin with a '/' character, the resulting URL
// path will have a file:// prefix. (e.g. file://c:/foo/bar)
// See http://www.selenic.com/hg/help/urls and the examples listed in
// http://selenic.com/repo/hg-stable/file/1265a3a71d75/mercurial/util.py#l1936
return runtime.GOOS == "windows" && u.Scheme == "file" &&
len(u.Path) > 1 && u.Path[0] != '/' && u.Path[1] == ':'
}

328
vendor/github.com/hashicorp/go-getter/get_http.go generated vendored Normal file

@ -0,0 +1,328 @@
package getter
import (
"context"
"encoding/xml"
"fmt"
"io"
"net/http"
"net/url"
"os"
"path/filepath"
"strings"
safetemp "github.com/hashicorp/go-safetemp"
)
// HttpGetter is a Getter implementation that will download from an HTTP
// endpoint.
//
// For file downloads, HTTP is used directly.
//
// The protocol for downloading a directory from an HTTP endpoint is as follows:
//
// An HTTP GET request is made to the URL with the additional GET parameter
// "terraform-get=1". This lets you handle that scenario specially if you
// wish. The response must be a 2xx.
//
// First, a header is looked for "X-Terraform-Get" which should contain
// a source URL to download.
//
// If the header is not present, then a meta tag is searched for named
// "terraform-get" and the content should be a source URL.
//
// The source URL, whether from the header or meta tag, must be a fully
// formed URL. The shorthand syntax of "github.com/foo/bar" or relative
// paths are not allowed.
type HttpGetter struct {
getter
// Netrc, if true, will lookup and use auth information found
// in the user's netrc file if available.
Netrc bool
// Client is the http.Client to use for Get requests.
// This defaults to a cleanhttp.DefaultClient if left unset.
Client *http.Client
// Header contains optional request header fields that should be included
// with every HTTP request. Note that the zero value of this field is nil,
// and as such it needs to be initialized before use, via something like
// make(http.Header).
Header http.Header
}
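// Illustrative sketch, not part of the vendored source: the server side of the
// directory protocol described above. The archive URL is an assumed example; the
// handler only has to answer 2xx and point at a downloadable source.
func terraformGetHandlerSketch(w http.ResponseWriter, r *http.Request) {
    if r.URL.Query().Get("terraform-get") == "1" {
        w.Header().Set("X-Terraform-Get",
            "https://example.com/archives/module.tar.gz") // assumed source URL
        w.WriteHeader(http.StatusOK)
        return
    }
    http.NotFound(w, r)
}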
func (g *HttpGetter) ClientMode(u *url.URL) (ClientMode, error) {
if strings.HasSuffix(u.Path, "/") {
return ClientModeDir, nil
}
return ClientModeFile, nil
}
func (g *HttpGetter) Get(dst string, u *url.URL) error {
ctx := g.Context()
// Copy the URL so we can modify it
var newU url.URL = *u
u = &newU
if g.Netrc {
// Add auth from netrc if we can
if err := addAuthFromNetrc(u); err != nil {
return err
}
}
if g.Client == nil {
g.Client = httpClient
}
// Add terraform-get to the parameter.
q := u.Query()
q.Add("terraform-get", "1")
u.RawQuery = q.Encode()
// Get the URL
req, err := http.NewRequest("GET", u.String(), nil)
if err != nil {
return err
}
if g.Header != nil {
req.Header = g.Header.Clone()
}
resp, err := g.Client.Do(req)
if err != nil {
return err
}
defer resp.Body.Close()
if resp.StatusCode < 200 || resp.StatusCode >= 300 {
return fmt.Errorf("bad response code: %d", resp.StatusCode)
}
// Extract the source URL
var source string
if v := resp.Header.Get("X-Terraform-Get"); v != "" {
source = v
} else {
source, err = g.parseMeta(resp.Body)
if err != nil {
return err
}
}
if source == "" {
return fmt.Errorf("no source URL was returned")
}
// If there is a subdir component, then we download the root separately
// into a temporary directory, then copy over the proper subdir.
source, subDir := SourceDirSubdir(source)
if subDir == "" {
var opts []ClientOption
if g.client != nil {
opts = g.client.Options
}
return Get(dst, source, opts...)
}
// We have a subdir, time to jump some hoops
return g.getSubdir(ctx, dst, source, subDir)
}
// GetFile fetches the file from src and stores it at dst.
// If the server supports Accept-Ranges, HttpGetter will attempt a range
// request. This means it is the caller's responsibility to ensure that an
// older version of the destination file does not exist, else it will be either
// falsely identified as being replaced, or corrupted with extra bytes
// appended.
func (g *HttpGetter) GetFile(dst string, src *url.URL) error {
ctx := g.Context()
if g.Netrc {
// Add auth from netrc if we can
if err := addAuthFromNetrc(src); err != nil {
return err
}
}
// Create all the parent directories if needed
if err := os.MkdirAll(filepath.Dir(dst), 0755); err != nil {
return err
}
f, err := os.OpenFile(dst, os.O_RDWR|os.O_CREATE, os.FileMode(0666))
if err != nil {
return err
}
defer f.Close()
if g.Client == nil {
g.Client = httpClient
}
var currentFileSize int64
// We first make a HEAD request so we can check
// if the server supports range queries. If the server/URL doesn't
// support HEAD requests, we just fall back to GET.
req, err := http.NewRequest("HEAD", src.String(), nil)
if err != nil {
return err
}
if g.Header != nil {
req.Header = g.Header.Clone()
}
headResp, err := g.Client.Do(req)
if err == nil {
headResp.Body.Close()
if headResp.StatusCode == 200 {
// If the HEAD request succeeded, then attempt to set the range
// query if we can.
if headResp.Header.Get("Accept-Ranges") == "bytes" && headResp.ContentLength >= 0 {
if fi, err := f.Stat(); err == nil {
if _, err = f.Seek(0, io.SeekEnd); err == nil {
currentFileSize = fi.Size()
req.Header.Set("Range", fmt.Sprintf("bytes=%d-", currentFileSize))
if currentFileSize >= headResp.ContentLength {
// file already present
return nil
}
}
}
}
}
}
req.Method = "GET"
resp, err := g.Client.Do(req)
if err != nil {
return err
}
switch resp.StatusCode {
case http.StatusOK, http.StatusPartialContent:
// all good
default:
resp.Body.Close()
return fmt.Errorf("bad response code: %d", resp.StatusCode)
}
body := resp.Body
if g.client != nil && g.client.ProgressListener != nil {
// track download
fn := filepath.Base(src.EscapedPath())
body = g.client.ProgressListener.TrackProgress(fn, currentFileSize, currentFileSize+resp.ContentLength, resp.Body)
}
defer body.Close()
n, err := Copy(ctx, f, body)
if err == nil && n < resp.ContentLength {
err = io.ErrShortWrite
}
return err
}
// getSubdir downloads the source into the destination, but with
// the proper subdir.
func (g *HttpGetter) getSubdir(ctx context.Context, dst, source, subDir string) error {
// Create a temporary directory to store the full source. This has to be
// a non-existent directory.
td, tdcloser, err := safetemp.Dir("", "getter")
if err != nil {
return err
}
defer tdcloser.Close()
var opts []ClientOption
if g.client != nil {
opts = g.client.Options
}
// Download that into the given directory
if err := Get(td, source, opts...); err != nil {
return err
}
// Process any globbing
sourcePath, err := SubdirGlob(td, subDir)
if err != nil {
return err
}
// Make sure the subdir path actually exists
if _, err := os.Stat(sourcePath); err != nil {
return fmt.Errorf(
"Error downloading %s: %s", source, err)
}
// Copy the subdirectory into our actual destination.
if err := os.RemoveAll(dst); err != nil {
return err
}
// Make the final destination
if err := os.MkdirAll(dst, 0755); err != nil {
return err
}
return copyDir(ctx, dst, sourcePath, false)
}
// parseMeta looks for the first meta tag in the given reader that
// will give us the source URL.
func (g *HttpGetter) parseMeta(r io.Reader) (string, error) {
d := xml.NewDecoder(r)
d.CharsetReader = charsetReader
d.Strict = false
var err error
var t xml.Token
for {
t, err = d.Token()
if err != nil {
if err == io.EOF {
err = nil
}
return "", err
}
if e, ok := t.(xml.StartElement); ok && strings.EqualFold(e.Name.Local, "body") {
return "", nil
}
if e, ok := t.(xml.EndElement); ok && strings.EqualFold(e.Name.Local, "head") {
return "", nil
}
e, ok := t.(xml.StartElement)
if !ok || !strings.EqualFold(e.Name.Local, "meta") {
continue
}
if attrValue(e.Attr, "name") != "terraform-get" {
continue
}
if f := attrValue(e.Attr, "content"); f != "" {
return f, nil
}
}
}
// attrValue returns the attribute value for the case-insensitive key
// `name', or the empty string if nothing is found.
func attrValue(attrs []xml.Attr, name string) string {
for _, a := range attrs {
if strings.EqualFold(a.Name.Local, name) {
return a.Value
}
}
return ""
}
// charsetReader returns a reader for the given charset. Currently
// it only supports UTF-8 and ASCII. Otherwise, it returns a meaningful
// error which is printed by go get, so the user can find why the package
// wasn't downloaded if the encoding is not supported. Note that, in
// order to reduce potential errors, ASCII is treated as UTF-8 (i.e. characters
// greater than 0x7f are not rejected).
func charsetReader(charset string, input io.Reader) (io.Reader, error) {
switch strings.ToLower(charset) {
case "ascii":
return input, nil
default:
return nil, fmt.Errorf("can't decode XML document using charset %q", charset)
}
}

54
vendor/github.com/hashicorp/go-getter/get_mock.go generated vendored Normal file

@ -0,0 +1,54 @@
package getter
import (
"net/url"
)
// MockGetter is an implementation of Getter that can be used for tests.
type MockGetter struct {
getter
// Proxy, if set, will be called after recording the calls below.
// If it isn't set, then the *Err values will be returned.
Proxy Getter
GetCalled bool
GetDst string
GetURL *url.URL
GetErr error
GetFileCalled bool
GetFileDst string
GetFileURL *url.URL
GetFileErr error
}
func (g *MockGetter) Get(dst string, u *url.URL) error {
g.GetCalled = true
g.GetDst = dst
g.GetURL = u
if g.Proxy != nil {
return g.Proxy.Get(dst, u)
}
return g.GetErr
}
func (g *MockGetter) GetFile(dst string, u *url.URL) error {
g.GetFileCalled = true
g.GetFileDst = dst
g.GetFileURL = u
if g.Proxy != nil {
return g.Proxy.GetFile(dst, u)
}
return g.GetFileErr
}
func (g *MockGetter) ClientMode(u *url.URL) (ClientMode, error) {
if l := len(u.Path); l > 0 && u.Path[l-1:] == "/" {
return ClientModeDir, nil
}
return ClientModeFile, nil
}
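// Illustrative sketch, not part of the vendored source: asserting in a test that
// code under test called Get with the expected destination. The URL and
// destination are assumed examples.
func mockGetterSketch() bool {
    m := new(MockGetter)
    u, _ := url.Parse("mock://example.com/module")
    _ = m.Get("/tmp/dst", u)
    return m.GetCalled && m.GetDst == "/tmp/dst"
}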

275
vendor/github.com/hashicorp/go-getter/get_s3.go generated vendored Normal file

@ -0,0 +1,275 @@
package getter
import (
"context"
"fmt"
"net/url"
"os"
"path/filepath"
"strings"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/credentials"
"github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds"
"github.com/aws/aws-sdk-go/aws/ec2metadata"
"github.com/aws/aws-sdk-go/aws/session"
"github.com/aws/aws-sdk-go/service/s3"
)
// S3Getter is a Getter implementation that will download a module from
// an S3 bucket.
type S3Getter struct {
getter
}
func (g *S3Getter) ClientMode(u *url.URL) (ClientMode, error) {
// Parse URL
region, bucket, path, _, creds, err := g.parseUrl(u)
if err != nil {
return 0, err
}
// Create client config
config := g.getAWSConfig(region, u, creds)
sess := session.New(config)
client := s3.New(sess)
// List the object(s) at the given prefix
req := &s3.ListObjectsInput{
Bucket: aws.String(bucket),
Prefix: aws.String(path),
}
resp, err := client.ListObjects(req)
if err != nil {
return 0, err
}
for _, o := range resp.Contents {
// Use file mode on exact match.
if *o.Key == path {
return ClientModeFile, nil
}
// Use dir mode if child keys are found.
if strings.HasPrefix(*o.Key, path+"/") {
return ClientModeDir, nil
}
}
// There was no match, so just return file mode. The download is going
// to fail but we will let S3 return the proper error later.
return ClientModeFile, nil
}
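
The decision above is driven purely by the object keys returned from the listing. A tiny standalone sketch of the same rule (the key names are illustrative, not part of the vendored code):

package main

import (
	"fmt"
	"strings"
)

// modeFor mirrors the rule above: file mode on an exact key match, dir mode
// when child keys exist under "path/", file mode otherwise (the later
// download will surface the real S3 error).
func modeFor(keys []string, path string) string {
	for _, k := range keys {
		if k == path {
			return "file"
		}
		if strings.HasPrefix(k, path+"/") {
			return "dir"
		}
	}
	return "file"
}

func main() {
	keys := []string{"archive.zip", "modules/vpc/main.tf"}
	fmt.Println(modeFor(keys, "archive.zip")) // file
	fmt.Println(modeFor(keys, "modules/vpc")) // dir
	fmt.Println(modeFor(keys, "missing"))     // file
}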
func (g *S3Getter) Get(dst string, u *url.URL) error {
ctx := g.Context()
// Parse URL
region, bucket, path, _, creds, err := g.parseUrl(u)
if err != nil {
return err
}
// Remove destination if it already exists
_, err = os.Stat(dst)
if err != nil && !os.IsNotExist(err) {
return err
}
if err == nil {
// Remove the destination
if err := os.RemoveAll(dst); err != nil {
return err
}
}
// Create all the parent directories
if err := os.MkdirAll(filepath.Dir(dst), 0755); err != nil {
return err
}
config := g.getAWSConfig(region, u, creds)
sess := session.New(config)
client := s3.New(sess)
// List files in path, keep listing until no more objects are found
lastMarker := ""
hasMore := true
for hasMore {
req := &s3.ListObjectsInput{
Bucket: aws.String(bucket),
Prefix: aws.String(path),
}
if lastMarker != "" {
req.Marker = aws.String(lastMarker)
}
resp, err := client.ListObjects(req)
if err != nil {
return err
}
hasMore = aws.BoolValue(resp.IsTruncated)
// Download each object, storing each file relative to the destination path
for _, object := range resp.Contents {
lastMarker = aws.StringValue(object.Key)
objPath := aws.StringValue(object.Key)
// If the key ends with a slash, assume it is a directory and skip it
if strings.HasSuffix(objPath, "/") {
continue
}
// Get the object destination path
objDst, err := filepath.Rel(path, objPath)
if err != nil {
return err
}
objDst = filepath.Join(dst, objDst)
if err := g.getObject(ctx, client, objDst, bucket, objPath, ""); err != nil {
return err
}
}
}
return nil
}
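
Each downloaded object lands at dst joined with its key made relative to the requested prefix. A minimal sketch of that mapping (hypothetical helper, not part of the vendored code; the paths are made up):

package main

import (
	"fmt"
	"path/filepath"
)

// localPathFor mirrors the key-to-destination mapping in the loop above:
// the object key is made relative to the requested prefix, then joined
// onto the local destination directory.
func localPathFor(dst, prefix, key string) (string, error) {
	rel, err := filepath.Rel(prefix, key)
	if err != nil {
		return "", err
	}
	return filepath.Join(dst, rel), nil
}

func main() {
	p, err := localPathFor("/tmp/module", "modules/vpc", "modules/vpc/networking/main.tf")
	fmt.Println(p, err) // /tmp/module/networking/main.tf <nil>
}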
func (g *S3Getter) GetFile(dst string, u *url.URL) error {
ctx := g.Context()
region, bucket, path, version, creds, err := g.parseUrl(u)
if err != nil {
return err
}
config := g.getAWSConfig(region, u, creds)
sess := session.New(config)
client := s3.New(sess)
return g.getObject(ctx, client, dst, bucket, path, version)
}
func (g *S3Getter) getObject(ctx context.Context, client *s3.S3, dst, bucket, key, version string) error {
req := &s3.GetObjectInput{
Bucket: aws.String(bucket),
Key: aws.String(key),
}
if version != "" {
req.VersionId = aws.String(version)
}
resp, err := client.GetObject(req)
if err != nil {
return err
}
// Create all the parent directories
if err := os.MkdirAll(filepath.Dir(dst), 0755); err != nil {
return err
}
f, err := os.Create(dst)
if err != nil {
return err
}
defer f.Close()
_, err = Copy(ctx, f, resp.Body)
return err
}
func (g *S3Getter) getAWSConfig(region string, url *url.URL, creds *credentials.Credentials) *aws.Config {
conf := &aws.Config{}
if creds == nil {
// Grab the metadata URL
metadataURL := os.Getenv("AWS_METADATA_URL")
if metadataURL == "" {
metadataURL = "http://169.254.169.254:80/latest"
}
creds = credentials.NewChainCredentials(
[]credentials.Provider{
&credentials.EnvProvider{},
&credentials.SharedCredentialsProvider{Filename: "", Profile: ""},
&ec2rolecreds.EC2RoleProvider{
Client: ec2metadata.New(session.New(&aws.Config{
Endpoint: aws.String(metadataURL),
})),
},
})
}
if creds != nil {
conf.Endpoint = &url.Host
conf.S3ForcePathStyle = aws.Bool(true)
if url.Scheme == "http" {
conf.DisableSSL = aws.Bool(true)
}
}
conf.Credentials = creds
if region != "" {
conf.Region = aws.String(region)
}
return conf
}
func (g *S3Getter) parseUrl(u *url.URL) (region, bucket, path, version string, creds *credentials.Credentials, err error) {
// This just checks whether we are dealing with Amazon S3 or
// another S3-compatible service. S3 itself has a predictable
// URL layout, while other services do not.
if strings.Contains(u.Host, "amazonaws.com") {
// Expected host style: s3.amazonaws.com. They always have 3 parts,
// although the first may differ if we're accessing a specific region.
hostParts := strings.Split(u.Host, ".")
if len(hostParts) != 3 {
err = fmt.Errorf("URL is not a valid S3 URL")
return
}
// Parse the region out of the first part of the host
region = strings.TrimPrefix(strings.TrimPrefix(hostParts[0], "s3-"), "s3")
if region == "" {
region = "us-east-1"
}
pathParts := strings.SplitN(u.Path, "/", 3)
if len(pathParts) != 3 {
err = fmt.Errorf("URL is not a valid S3 URL")
return
}
bucket = pathParts[1]
path = pathParts[2]
version = u.Query().Get("version")
} else {
pathParts := strings.SplitN(u.Path, "/", 3)
if len(pathParts) != 3 {
err = fmt.Errorf("URL is not a valid S3 complaint URL")
return
}
bucket = pathParts[1]
path = pathParts[2]
version = u.Query().Get("version")
region = u.Query().Get("region")
if region == "" {
region = "us-east-1"
}
}
_, hasAwsId := u.Query()["aws_access_key_id"]
_, hasAwsSecret := u.Query()["aws_access_key_secret"]
_, hasAwsToken := u.Query()["aws_access_token"]
if hasAwsId || hasAwsSecret || hasAwsToken {
creds = credentials.NewStaticCredentials(
u.Query().Get("aws_access_key_id"),
u.Query().Get("aws_access_key_secret"),
u.Query().Get("aws_access_token"),
)
}
return
}
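
For illustration, here is how the two URL shapes parseUrl accepts break down: in the AWS form the region is read from the host, while for generic S3-compatible endpoints it must come from a query parameter. The snippet only reproduces the host/path split with the standard library and does not call the unexported parseUrl; hosts, buckets, and keys are made up.

package main

import (
	"fmt"
	"net/url"
	"strings"
)

func main() {
	// AWS-style URL: the region is encoded in the host, bucket and key in the path.
	u, _ := url.Parse("https://s3-eu-west-1.amazonaws.com/my-bucket/modules/vpc?version=3")
	hostParts := strings.Split(u.Host, ".")           // ["s3-eu-west-1", "amazonaws", "com"]
	region := strings.TrimPrefix(hostParts[0], "s3-") // "eu-west-1"
	pathParts := strings.SplitN(u.Path, "/", 3)       // ["", "my-bucket", "modules/vpc"]
	fmt.Println(region, pathParts[1], pathParts[2], u.Query().Get("version"))

	// Generic S3-compatible endpoint: the region must be passed as a query parameter.
	v, _ := url.Parse("http://minio.example.com:9000/my-bucket/modules/vpc?region=us-east-1")
	pathParts = strings.SplitN(v.Path, "/", 3)
	fmt.Println(v.Query().Get("region"), pathParts[1], pathParts[2])
}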

23
vendor/github.com/hashicorp/go-getter/go.mod generated vendored Normal file
View File

@ -0,0 +1,23 @@
module github.com/hashicorp/go-getter
require (
cloud.google.com/go v0.45.1
github.com/aws/aws-sdk-go v1.15.78
github.com/bgentry/go-netrc v0.0.0-20140422174119-9fd32a8b3d3d
github.com/cheggaaa/pb v1.0.27
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/fatih/color v1.7.0 // indirect
github.com/hashicorp/go-cleanhttp v0.5.0
github.com/hashicorp/go-safetemp v1.0.0
github.com/hashicorp/go-version v1.1.0
github.com/mattn/go-colorable v0.0.9 // indirect
github.com/mattn/go-isatty v0.0.4 // indirect
github.com/mattn/go-runewidth v0.0.4 // indirect
github.com/mitchellh/go-homedir v1.0.0
github.com/mitchellh/go-testing-interface v1.0.0
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/stretchr/testify v1.2.2 // indirect
github.com/ulikunitz/xz v0.5.5
google.golang.org/api v0.9.0
gopkg.in/cheggaaa/pb.v1 v1.0.27 // indirect
)

162
vendor/github.com/hashicorp/go-getter/go.sum generated vendored Normal file
View File

@ -0,0 +1,162 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
cloud.google.com/go v0.44.1/go.mod h1:iSa0KzasP4Uvy3f1mN/7PiObzGgflwredwwASm/v6AU=
cloud.google.com/go v0.44.2/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY=
cloud.google.com/go v0.45.1 h1:lRi0CHyU+ytlvylOlFKKq0af6JncuyoRh1J+QJBqQx0=
cloud.google.com/go v0.45.1/go.mod h1:RpBamKRgapWJb87xiFSdk4g1CME7QZg3uwTez+TSTjc=
cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/aws/aws-sdk-go v1.15.78 h1:LaXy6lWR0YK7LKyuU0QWy2ws/LWTPfYV/UgfiBu4tvY=
github.com/aws/aws-sdk-go v1.15.78/go.mod h1:E3/ieXAlvM0XWO57iftYVDLLvQ824smPP3ATZkfNZeM=
github.com/bgentry/go-netrc v0.0.0-20140422174119-9fd32a8b3d3d h1:xDfNPAt8lFiC1UJrqV3uuy861HCTo708pDMbjHHdCas=
github.com/bgentry/go-netrc v0.0.0-20140422174119-9fd32a8b3d3d/go.mod h1:6QX/PXZ00z/TKoufEY6K/a0k6AhaJrQKdFe6OfVXsa4=
github.com/cheggaaa/pb v1.0.27 h1:wIkZHkNfC7R6GI5w7l/PdAdzXzlrbcI3p8OAlnkTsnc=
github.com/cheggaaa/pb v1.0.27/go.mod h1:pQciLPpbU0oxA0h+VJYYLxO+XeDQb5pZijXscXHm81s=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/fatih/color v1.7.0 h1:DkWD4oS2D8LGGgTQ6IvwJJXSL5Vp2ffcQg58nFV38Ys=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y=
github.com/golang/protobuf v1.2.0 h1:P3YflyNX/ehuJFLhxviNdFxQPkGK5cDcApsge1SqnvM=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0 h1:crn/baboCvb5fXaQ0IJ1SGTsTVrWpDsCWC8EGETZijY=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/martian v2.1.0+incompatible h1:/CP5g8u/VJHijgedC/Legn3BAbAaWPgecwXBIDzw5no=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5 h1:sjZBwGj9Jlw33ImPtvFviGYvseOtDM7hkSKB7+Tv3SM=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
github.com/hashicorp/go-cleanhttp v0.5.0 h1:wvCrVc9TjDls6+YGAF2hAifE1E5U1+b4tH6KdvN3Gig=
github.com/hashicorp/go-cleanhttp v0.5.0/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
github.com/hashicorp/go-safetemp v1.0.0 h1:2HR189eFNrjHQyENnQMMpCiBAsRxzbTMIgBhEyExpmo=
github.com/hashicorp/go-safetemp v1.0.0/go.mod h1:oaerMy3BhqiTbVye6QuFhFtIceqFoDHxNAB65b+Rj1I=
github.com/hashicorp/go-version v1.1.0 h1:bPIoEKD27tNdebFGGxxYwcL4nepeY4j1QP23PFRGzg0=
github.com/hashicorp/go-version v1.1.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.1 h1:0hERBMJE1eitiLkihrMvRVBYAkpHzc/J3QdDN+dAcgU=
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/jmespath/go-jmespath v0.0.0-20160202185014-0b12d6b521d8 h1:12VvqtR6Aowv3l/EQUlocDHW2Cp4G9WJVH7uyH8QFJE=
github.com/jmespath/go-jmespath v0.0.0-20160202185014-0b12d6b521d8/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
github.com/mattn/go-colorable v0.0.9 h1:UVL0vNpWh04HeJXV0KLcaT7r06gOH2l4OW6ddYRUIY4=
github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
github.com/mattn/go-isatty v0.0.4 h1:bnP0vzxcAdeI1zdubAl5PjU6zsERjGZb7raWodagDYs=
github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-runewidth v0.0.4 h1:2BvfKmzob6Bmd4YsL0zygOqfdFnK7GR4QL06Do4/p7Y=
github.com/mattn/go-runewidth v0.0.4/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
github.com/mitchellh/go-homedir v1.0.0 h1:vKb8ShqSby24Yrqr/yDYkuFz8d0WUjys40rvnGC8aR0=
github.com/mitchellh/go-homedir v1.0.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/go-testing-interface v1.0.0 h1:fzU/JVNcaqHQEcVFAKeR41fkiLdIPrefOvVG1VZ96U0=
github.com/mitchellh/go-testing-interface v1.0.0/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/ulikunitz/xz v0.5.5 h1:pFrO0lVpTBXLpYw+pnLj6TbvHuyjXMfjGeCwSqCVwok=
github.com/ulikunitz/xz v0.5.5/go.mod h1:2bypXElzHzzJZwzH67Y6wb67pO62Rzfn7BSiF4ABRW8=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
go.opencensus.io v0.22.0 h1:C9hSCOW830chIVkdja34wa6Ky+IzWllkUinR+BtRZd4=
go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190409202823-959b441ac422/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859 h1:R/3boaszxrf1GEUWTVDzSKVwLmSJpwZ1yqXm8j0v2QI=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45 h1:SVwTIAaPC2U/AvvLNZ2a7OVsmBpC8L5BlwK1whH3hm0=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f h1:Bl/8QSvNqXvPGPGXa2z5xUTmV7VDcZyvRZ+QQXkXTZQ=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0 h1:HyfiK1WMnHj5FXFXatD+Qs1A/xC2Run6RzeW1SyHxpc=
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2 h1:z99zHgr7hKfrUcX/KsoJk5FJfjTceCKIp96+biqP4To=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312151545-0bb0c0a6e846/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190425150028-36563e24a262/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
google.golang.org/api v0.9.0 h1:jbyannxz0XFD3zdjgrSUsaJbgpH4eTrkdhRChkHPfO8=
google.golang.org/api v0.9.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.6.1 h1:QzqyMA1tlu6CgqCDUtU9V+ZKhLFT2dkJuANu5QaxI3I=
google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55 h1:gSJIx1SDwno+2ElGhA4+qG2zF97qiUzTM+rQ0klBOcE=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
google.golang.org/grpc v1.21.1 h1:j6XxA85m/6txkUCHvzlV5f+HBNl/1r5cZ2A/3IEFOO8=
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
gopkg.in/cheggaaa/pb.v1 v1.0.27 h1:kJdccidYzt3CaHD1crCFTS1hxyhSi059NhOFUf03YFo=
gopkg.in/cheggaaa/pb.v1 v1.0.27/go.mod h1:V/YB90LKu/1FcN3WVnfiiE5oMCibMjukxqG/qStrOgw=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=

14
vendor/github.com/hashicorp/go-getter/helper/url/url.go generated vendored Normal file
View File

@ -0,0 +1,14 @@
package url
import (
"net/url"
)
// Parse parses rawURL into a URL structure.
// The rawURL may be relative or absolute.
//
// Parse is a wrapper for the Go stdlib net/url Parse function, but returns
// Windows "safe" URLs on Windows platforms.
func Parse(rawURL string) (*url.URL, error) {
return parse(rawURL)
}

11
vendor/github.com/hashicorp/go-getter/helper/url/url_unix.go generated vendored Normal file
View File

@ -0,0 +1,11 @@
// +build !windows
package url
import (
"net/url"
)
func parse(rawURL string) (*url.URL, error) {
return url.Parse(rawURL)
}

39
vendor/github.com/hashicorp/go-getter/helper/url/url_windows.go generated vendored Normal file
View File

@ -0,0 +1,39 @@
package url
import (
"fmt"
"net/url"
"path/filepath"
"strings"
)
func parse(rawURL string) (*url.URL, error) {
// Make sure we're using "/" since URLs are "/"-based.
rawURL = filepath.ToSlash(rawURL)
if len(rawURL) > 1 && rawURL[1] == ':' {
// Assume we're dealing with a drive letter. In that case, force
// the 'file' scheme to avoid "net/url" URL.String() prepending
// our URL with "./".
rawURL = "file://" + rawURL
}
u, err := url.Parse(rawURL)
if err != nil {
return nil, err
}
if len(u.Host) > 1 && u.Host[1] == ':' && strings.HasPrefix(rawURL, "file://") {
// Assume we're dealing with a drive letter file path where the drive
// letter has been parsed into the URL Host.
u.Path = fmt.Sprintf("%s%s", u.Host, u.Path)
u.Host = ""
}
// Remove leading slash for absolute file paths.
if len(u.Path) > 2 && u.Path[0] == '/' && u.Path[2] == ':' {
u.Path = u.Path[1:]
}
return u, err
}
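
A sketch of what the Windows-specific parse above does to a drive-letter path. The helper below duplicates the logic without the build constraint, replacing filepath.ToSlash with an explicit substitution so the transformation can be observed on any platform; the input path is illustrative.

package main

import (
	"fmt"
	"net/url"
	"strings"
)

// demoParse duplicates the drive-letter handling above: slash the path, force
// the "file" scheme, then move the drive letter out of the URL host and back
// into the path.
func demoParse(rawURL string) (*url.URL, error) {
	rawURL = strings.ReplaceAll(rawURL, `\`, "/")
	if len(rawURL) > 1 && rawURL[1] == ':' {
		rawURL = "file://" + rawURL
	}
	u, err := url.Parse(rawURL)
	if err != nil {
		return nil, err
	}
	if len(u.Host) > 1 && u.Host[1] == ':' && strings.HasPrefix(rawURL, "file://") {
		u.Path = u.Host + u.Path
		u.Host = ""
	}
	if len(u.Path) > 2 && u.Path[0] == '/' && u.Path[2] == ':' {
		u.Path = u.Path[1:]
	}
	return u, nil
}

func main() {
	u, err := demoParse(`C:\Users\me\module`)
	fmt.Printf("scheme=%q host=%q path=%q err=%v\n", u.Scheme, u.Host, u.Path, err)
	// scheme="file" host="" path="C:/Users/me/module" err=<nil>
}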

67
vendor/github.com/hashicorp/go-getter/netrc.go generated vendored Normal file
View File

@ -0,0 +1,67 @@
package getter
import (
"fmt"
"net/url"
"os"
"runtime"
"github.com/bgentry/go-netrc/netrc"
"github.com/mitchellh/go-homedir"
)
// addAuthFromNetrc adds auth information to the URL from the user's
// netrc file if it can be found. This will only add the auth info
// if the URL doesn't already have auth info specified and the
// username is blank.
func addAuthFromNetrc(u *url.URL) error {
// If the URL already has auth information, do nothing
if u.User != nil && u.User.Username() != "" {
return nil
}
// Get the netrc file path
path := os.Getenv("NETRC")
if path == "" {
filename := ".netrc"
if runtime.GOOS == "windows" {
filename = "_netrc"
}
var err error
path, err = homedir.Expand("~/" + filename)
if err != nil {
return err
}
}
// If the path doesn't exist or isn't a regular file, do nothing
if fi, err := os.Stat(path); err != nil {
// File doesn't exist, do nothing
if os.IsNotExist(err) {
return nil
}
// Some other error!
return err
} else if fi.IsDir() {
// File is directory, ignore
return nil
}
// Load up the netrc file
net, err := netrc.ParseFile(path)
if err != nil {
return fmt.Errorf("Error parsing netrc file at %q: %s", path, err)
}
machine := net.FindMachine(u.Host)
if machine == nil {
// Machine not found, no problem
return nil
}
// Set the user info
u.User = url.UserPassword(machine.Login, machine.Password)
return nil
}
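
A hedged sketch of the lookup addAuthFromNetrc performs, using the same go-netrc calls shown above. The netrc contents, host, and URL are made up and written to a temporary file; the vendored code resolves the real path from $NETRC or the user's home directory.

package main

import (
	"fmt"
	"net/url"
	"os"

	"github.com/bgentry/go-netrc/netrc"
)

func main() {
	// Illustrative netrc contents written to a temp file.
	tmp, err := os.CreateTemp("", "netrc-demo")
	if err != nil {
		panic(err)
	}
	defer os.Remove(tmp.Name())
	tmp.WriteString("machine example.com\nlogin demo-user\npassword demo-pass\n")
	tmp.Close()

	n, err := netrc.ParseFile(tmp.Name())
	if err != nil {
		panic(err)
	}

	u, _ := url.Parse("https://example.com/module.zip")
	if m := n.FindMachine(u.Host); m != nil {
		u.User = url.UserPassword(m.Login, m.Password)
	}
	fmt.Println(u) // https://demo-user:demo-pass@example.com/module.zip
}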

75
vendor/github.com/hashicorp/go-getter/source.go generated vendored Normal file
View File

@ -0,0 +1,75 @@
package getter
import (
"fmt"
"path/filepath"
"strings"
)
// SourceDirSubdir takes a source URL and returns a tuple of the URL without
// the subdir and the subdir.
//
// ex:
// dom.com/path/?q=p => dom.com/path/?q=p, ""
// proto://dom.com/path//*?q=p => proto://dom.com/path?q=p, "*"
// proto://dom.com/path//path2?q=p => proto://dom.com/path?q=p, "path2"
//
func SourceDirSubdir(src string) (string, string) {
// The URL might contain another URL in its query parameters
stop := len(src)
if idx := strings.Index(src, "?"); idx > -1 {
stop = idx
}
// Calculate an offset to avoid accidentally marking the scheme
// as the dir.
var offset int
if idx := strings.Index(src[:stop], "://"); idx > -1 {
offset = idx + 3
}
// First see if we even have an explicit subdir
idx := strings.Index(src[offset:stop], "//")
if idx == -1 {
return src, ""
}
idx += offset
subdir := src[idx+2:]
src = src[:idx]
// Next, check if we have query parameters and push them onto the
// URL.
if idx = strings.Index(subdir, "?"); idx > -1 {
query := subdir[idx:]
subdir = subdir[:idx]
src += query
}
return src, subdir
}
// SubdirGlob returns the actual subdir with globbing processed.
//
// dst should be a destination directory that is already populated (the
// download is complete) and subDir should be the set subDir. If subDir
// is an empty string, this returns an empty string.
//
// The returned path is the full absolute path.
func SubdirGlob(dst, subDir string) (string, error) {
matches, err := filepath.Glob(filepath.Join(dst, subDir))
if err != nil {
return "", err
}
if len(matches) == 0 {
return "", fmt.Errorf("subdir %q not found", subDir)
}
if len(matches) > 1 {
return "", fmt.Errorf("subdir %q matches multiple paths", subDir)
}
return matches[0], nil
}
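
Example usage of the two exported helpers above. The source string and destination directory are illustrative; SubdirGlob returns a "not found" error unless the resolved directory actually exists on disk.

package main

import (
	"fmt"

	"github.com/hashicorp/go-getter"
)

func main() {
	// Split the "//" subdirectory marker off a source string.
	src, subDir := getter.SourceDirSubdir("git::https://example.com/repo.git//modules/vpc?ref=v1.2.0")
	fmt.Println(src)    // git::https://example.com/repo.git?ref=v1.2.0
	fmt.Println(subDir) // modules/vpc

	// Once the download into a destination directory has finished, resolve
	// the (possibly globbed) subdirectory to a single concrete path.
	path, err := getter.SubdirGlob("/tmp/download", subDir)
	fmt.Println(path, err)
}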

13
vendor/github.com/hashicorp/go-getter/storage.go generated vendored Normal file
View File

@ -0,0 +1,13 @@
package getter
// Storage is an interface that knows how to look up downloaded directories
// as well as download and update directories from their sources into the
// proper location.
type Storage interface {
// Dir returns the directory on local disk where the directory source
// can be loaded from.
Dir(string) (string, bool, error)
// Get will download and optionally update the given directory.
Get(string, string, bool) error
}
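
Storage is only an interface here; concrete implementations live in the consumers of this package. Below is a hypothetical in-memory sketch, purely to illustrate the Dir/Get contract (the type, field names, and behavior are made up and it performs no real downloads).

package main

import (
	"fmt"
	"path/filepath"
)

// dirStorage is a hypothetical Storage implementation that records which
// storage key maps to which directory name under a base path.
type dirStorage struct {
	base string
	dirs map[string]string // storage key -> directory name under base
}

// Dir reports where the directory for key lives and whether it is present.
func (s *dirStorage) Dir(key string) (string, bool, error) {
	name, ok := s.dirs[key]
	if !ok {
		return "", false, nil
	}
	return filepath.Join(s.base, name), true, nil
}

// Get would download (or update, when update is true) source into place;
// this sketch only records that the key is now "downloaded".
func (s *dirStorage) Get(key, source string, update bool) error {
	if _, ok := s.dirs[key]; ok && !update {
		return nil
	}
	s.dirs[key] = key
	return nil
}

func main() {
	s := &dirStorage{base: "/tmp/modules", dirs: map[string]string{}}
	_ = s.Get("vpc", "git::https://example.com/vpc.git", false)
	dir, ok, _ := s.Dir("vpc")
	fmt.Println(dir, ok) // /tmp/modules/vpc true
}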

3
vendor/modules.txt vendored
View File

@ -311,6 +311,9 @@ github.com/hashicorp/go-cty-funcs/crypto
github.com/hashicorp/go-cty-funcs/encoding
github.com/hashicorp/go-cty-funcs/filesystem
github.com/hashicorp/go-cty-funcs/uuid
# github.com/hashicorp/go-getter v1.4.1
github.com/hashicorp/go-getter
github.com/hashicorp/go-getter/helper/url
# github.com/hashicorp/go-getter/gcs/v2 v2.0.0-20200604122502-a6995fa1edad
github.com/hashicorp/go-getter/gcs/v2
# github.com/hashicorp/go-getter/s3/v2 v2.0.0-20200604122502-a6995fa1edad