/*
 *
 * Copyright 2017 gRPC authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 *
 */

package grpc

import (
	"encoding/json"
	"errors"
	"fmt"
	"reflect"
	"strconv"
	"strings"
	"time"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/internal"
	internalserviceconfig "google.golang.org/grpc/internal/serviceconfig"
	"google.golang.org/grpc/serviceconfig"
)

const maxInt = int(^uint(0) >> 1)

// MethodConfig defines the configuration recommended by the service providers for a
// particular method.
//
// Deprecated: Users should not use this struct. Service config should be received
// through name resolver, as specified here
// https://github.com/grpc/grpc/blob/master/doc/service_config.md
type MethodConfig struct {
	// WaitForReady indicates whether RPCs sent to this method should wait until
	// the connection is ready by default (!failfast). The value specified via the
	// gRPC client API will override the value set here.
	WaitForReady *bool
	// Timeout is the default timeout for RPCs sent to this method. The actual
	// deadline used will be the minimum of the value specified here and the value
	// set by the application via the gRPC client API. If either one is not set,
	// then the other will be used. If neither is set, then the RPC has no deadline.
	Timeout *time.Duration
	// MaxReqSize is the maximum allowed payload size for an individual request in a
	// stream (client->server) in bytes. The size which is measured is the serialized
	// payload after per-message compression (but before stream compression) in bytes.
	// The actual value used is the minimum of the value specified here and the value set
	// by the application via the gRPC client API. If either one is not set, then the other
	// will be used. If neither is set, then the built-in default is used.
	MaxReqSize *int
	// MaxRespSize is the maximum allowed payload size for an individual response in a
	// stream (server->client) in bytes.
	MaxRespSize *int
	// RetryPolicy configures retry options for the method.
	retryPolicy *retryPolicy
}

type lbConfig struct {
	name string
	cfg  serviceconfig.LoadBalancingConfig
}

// ServiceConfig is provided by the service provider and contains parameters for how
// clients that connect to the service should behave.
//
// Deprecated: Users should not use this struct. Service config should be received
// through name resolver, as specified here
// https://github.com/grpc/grpc/blob/master/doc/service_config.md
type ServiceConfig struct {
	serviceconfig.Config

	// LB is the load balancer the service provider recommends. The balancer
	// specified via grpc.WithBalancerName will override this. This is deprecated;
	// lbConfig is preferred. If lbConfig and LB are both present, lbConfig
	// will be used.
	LB *string

	// lbConfig is the service config's load balancing configuration. If
	// lbConfig and LB are both present, lbConfig will be used.
	lbConfig *lbConfig

	// Methods contains a map for the methods in this service. If there is an
	// exact match for a method (i.e. /service/method) in the map, use the
	// corresponding MethodConfig. If there's no exact match, look for the
	// default config for the service (/service/) and use the corresponding
	// MethodConfig if it exists. Otherwise, the method has no MethodConfig to
	// use.
	Methods map[string]MethodConfig

	// If a retryThrottlingPolicy is provided, gRPC will automatically throttle
	// retry attempts and hedged RPCs when the client’s ratio of failures to
	// successes exceeds a threshold.
	//
	// For each server name, the gRPC client will maintain a token_count which is
	// initially set to maxTokens, and can take values between 0 and maxTokens.
	//
	// Every outgoing RPC (regardless of service or method invoked) will change
	// token_count as follows:
	//
	//   - Every failed RPC will decrement the token_count by 1.
	//   - Every successful RPC will increment the token_count by tokenRatio.
	//
	// If token_count is less than or equal to maxTokens / 2, then RPCs will not
	// be retried and hedged RPCs will not be sent.
	retryThrottling *retryThrottlingPolicy

	// healthCheckConfig must be set as one of the requirements to enable LB channel
	// health check.
	healthCheckConfig *healthCheckConfig

	// rawJSONString stores the service config JSON string that gets parsed into
	// this service config struct.
	rawJSONString string
}

// healthCheckConfig defines the go-native version of the LB channel health check config.
type healthCheckConfig struct {
	// ServiceName is the service name to use in the health-checking request.
	ServiceName string
}

// retryPolicy defines the go-native version of the retry policy defined by the
// service config here:
// https://github.com/grpc/proposal/blob/master/A6-client-retries.md#integration-with-service-config
type retryPolicy struct {
	// MaxAttempts is the maximum number of attempts, including the original RPC.
	//
	// This field is required and must be two or greater.
	maxAttempts int

	// Exponential backoff parameters. The initial retry attempt will occur at
	// random(0, initialBackoff). In general, the nth attempt will occur at
	// random(0,
	//   min(initialBackoff*backoffMultiplier**(n-1), maxBackoff)).
	//
	// These fields are required and must be greater than zero.
	initialBackoff    time.Duration
	maxBackoff        time.Duration
	backoffMultiplier float64

	// The set of status codes which may be retried.
	//
	// Status codes are specified as strings, e.g., "UNAVAILABLE".
	//
	// This field is required and must be non-empty.
	// Note: a set is used to store this for easy lookup.
	retryableStatusCodes map[codes.Code]bool
}

type jsonRetryPolicy struct {
	MaxAttempts          int
	InitialBackoff       string
	MaxBackoff           string
	BackoffMultiplier    float64
	RetryableStatusCodes []codes.Code
}

// retryThrottlingPolicy defines the go-native version of the retry throttling
// policy defined by the service config here:
// https://github.com/grpc/proposal/blob/master/A6-client-retries.md#integration-with-service-config
type retryThrottlingPolicy struct {
	// The number of tokens starts at maxTokens. The token_count will always be
	// between 0 and maxTokens.
	//
	// This field is required and must be greater than zero.
	MaxTokens float64
	// The amount of tokens to add on each successful RPC. Typically this will
	// be some number between 0 and 1, e.g., 0.1.
	//
	// This field is required and must be greater than zero. Up to 3 decimal
	// places are supported.
	TokenRatio float64
}

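// Illustrative sketch (example numbers, not part of the policy definition
// above): with MaxTokens = 10 and TokenRatio = 0.1, token_count starts at 10;
// every failed RPC subtracts 1 and every successful RPC adds 0.1, with
// token_count staying within [0, 10]. Retries and hedged RPCs are suppressed
// while token_count <= maxTokens/2 = 5, so roughly five net failures pause
// retries until enough successes accumulate again.
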
func parseDuration(s *string) (*time.Duration, error) {
	if s == nil {
		return nil, nil
	}
	if !strings.HasSuffix(*s, "s") {
		return nil, fmt.Errorf("malformed duration %q", *s)
	}
	ss := strings.SplitN((*s)[:len(*s)-1], ".", 3)
	if len(ss) > 2 {
		return nil, fmt.Errorf("malformed duration %q", *s)
	}
	// hasDigits is set if either the whole or fractional part of the number is
	// present, since both are optional but one is required.
	hasDigits := false
	var d time.Duration
	if len(ss[0]) > 0 {
		i, err := strconv.ParseInt(ss[0], 10, 32)
		if err != nil {
			return nil, fmt.Errorf("malformed duration %q: %v", *s, err)
		}
		d = time.Duration(i) * time.Second
		hasDigits = true
	}
	if len(ss) == 2 && len(ss[1]) > 0 {
		if len(ss[1]) > 9 {
			return nil, fmt.Errorf("malformed duration %q", *s)
		}
		f, err := strconv.ParseInt(ss[1], 10, 64)
		if err != nil {
			return nil, fmt.Errorf("malformed duration %q: %v", *s, err)
		}
		for i := 9; i > len(ss[1]); i-- {
			f *= 10
		}
		d += time.Duration(f)
		hasDigits = true
	}
	if !hasDigits {
		return nil, fmt.Errorf("malformed duration %q", *s)
	}

	return &d, nil
}

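// A few illustrative inputs and outputs for parseDuration (a decimal number
// of seconds with at most nine fractional digits and a trailing "s"):
//
//	"3s"           -> 3 * time.Second
//	"0.5s"         -> 500 * time.Millisecond
//	"1.000000001s" -> time.Second + time.Nanosecond
//	"3", "1.5m"    -> error: malformed duration
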
type jsonName struct {
	Service string
	Method  string
}

var (
	errDuplicatedName             = errors.New("duplicated name")
	errEmptyServiceNonEmptyMethod = errors.New("cannot combine empty 'service' and non-empty 'method'")
)

func (j jsonName) generatePath() (string, error) {
	if j.Service == "" {
		if j.Method != "" {
			return "", errEmptyServiceNonEmptyMethod
		}
		return "", nil
	}
	res := "/" + j.Service + "/"
	if j.Method != "" {
		res += j.Method
	}
	return res, nil
}

// TODO(lyuxuan): delete this struct after cleaning up old service config implementation.
type jsonMC struct {
	Name                    *[]jsonName
	WaitForReady            *bool
	Timeout                 *string
	MaxRequestMessageBytes  *int64
	MaxResponseMessageBytes *int64
	RetryPolicy             *jsonRetryPolicy
}

// TODO(lyuxuan): delete this struct after cleaning up old service config implementation.
type jsonSC struct {
	LoadBalancingPolicy *string
	LoadBalancingConfig *internalserviceconfig.BalancerConfig
	MethodConfig        *[]jsonMC
	RetryThrottling     *retryThrottlingPolicy
	HealthCheckConfig   *healthCheckConfig
}

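// exampleServiceConfigJSON is a hypothetical, illustration-only document of
// the shape this parser accepts (field names follow
// https://github.com/grpc/grpc/blob/master/doc/service_config.md). The
// identifier below is an editorial addition and is not referenced anywhere
// else in this package.
const exampleServiceConfigJSON = `{
  "loadBalancingPolicy": "round_robin",
  "methodConfig": [{
    "name": [{"service": "foo.Bar", "method": "Baz"}],
    "waitForReady": true,
    "timeout": "1.5s",
    "maxRequestMessageBytes": 1048576,
    "retryPolicy": {
      "maxAttempts": 3,
      "initialBackoff": "0.1s",
      "maxBackoff": "1s",
      "backoffMultiplier": 2,
      "retryableStatusCodes": ["UNAVAILABLE"]
    }
  }],
  "retryThrottling": {"maxTokens": 10, "tokenRatio": 0.1},
  "healthCheckConfig": {"serviceName": "foo.Bar"}
}`
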
func init() {
	internal.ParseServiceConfigForTesting = parseServiceConfig
}

func parseServiceConfig(js string) *serviceconfig.ParseResult {
	if len(js) == 0 {
		return &serviceconfig.ParseResult{Err: fmt.Errorf("no JSON service config provided")}
	}
	var rsc jsonSC
	err := json.Unmarshal([]byte(js), &rsc)
	if err != nil {
		logger.Warningf("grpc: parseServiceConfig error unmarshaling %s due to %v", js, err)
		return &serviceconfig.ParseResult{Err: err}
	}
	sc := ServiceConfig{
		LB:                rsc.LoadBalancingPolicy,
		Methods:           make(map[string]MethodConfig),
		retryThrottling:   rsc.RetryThrottling,
		healthCheckConfig: rsc.HealthCheckConfig,
		rawJSONString:     js,
	}
	if c := rsc.LoadBalancingConfig; c != nil {
		sc.lbConfig = &lbConfig{
			name: c.Name,
			cfg:  c.Config,
		}
	}

	if rsc.MethodConfig == nil {
		return &serviceconfig.ParseResult{Config: &sc}
	}

	paths := map[string]struct{}{}
	for _, m := range *rsc.MethodConfig {
		if m.Name == nil {
			continue
		}
		d, err := parseDuration(m.Timeout)
		if err != nil {
			logger.Warningf("grpc: parseServiceConfig error unmarshaling %s due to %v", js, err)
			return &serviceconfig.ParseResult{Err: err}
		}

		mc := MethodConfig{
			WaitForReady: m.WaitForReady,
			Timeout:      d,
		}
		if mc.retryPolicy, err = convertRetryPolicy(m.RetryPolicy); err != nil {
			logger.Warningf("grpc: parseServiceConfig error unmarshaling %s due to %v", js, err)
			return &serviceconfig.ParseResult{Err: err}
		}
		if m.MaxRequestMessageBytes != nil {
			if *m.MaxRequestMessageBytes > int64(maxInt) {
				mc.MaxReqSize = newInt(maxInt)
			} else {
				mc.MaxReqSize = newInt(int(*m.MaxRequestMessageBytes))
			}
		}
		if m.MaxResponseMessageBytes != nil {
			if *m.MaxResponseMessageBytes > int64(maxInt) {
				mc.MaxRespSize = newInt(maxInt)
			} else {
				mc.MaxRespSize = newInt(int(*m.MaxResponseMessageBytes))
			}
		}
		for i, n := range *m.Name {
			path, err := n.generatePath()
			if err != nil {
				logger.Warningf("grpc: parseServiceConfig error unmarshaling %s due to methodConfig[%d]: %v", js, i, err)
				return &serviceconfig.ParseResult{Err: err}
			}

			if _, ok := paths[path]; ok {
				err = errDuplicatedName
				logger.Warningf("grpc: parseServiceConfig error unmarshaling %s due to methodConfig[%d]: %v", js, i, err)
				return &serviceconfig.ParseResult{Err: err}
			}
			paths[path] = struct{}{}
			sc.Methods[path] = mc
		}
	}

	if sc.retryThrottling != nil {
		if mt := sc.retryThrottling.MaxTokens; mt <= 0 || mt > 1000 {
			return &serviceconfig.ParseResult{Err: fmt.Errorf("invalid retry throttling config: maxTokens (%v) out of range (0, 1000]", mt)}
		}
		if tr := sc.retryThrottling.TokenRatio; tr <= 0 {
			return &serviceconfig.ParseResult{Err: fmt.Errorf("invalid retry throttling config: tokenRatio (%v) may not be negative", tr)}
		}
	}
	return &serviceconfig.ParseResult{Config: &sc}
}

func convertRetryPolicy(jrp *jsonRetryPolicy) (p *retryPolicy, err error) {
	if jrp == nil {
		return nil, nil
	}
	ib, err := parseDuration(&jrp.InitialBackoff)
	if err != nil {
		return nil, err
	}
	mb, err := parseDuration(&jrp.MaxBackoff)
	if err != nil {
		return nil, err
	}

	if jrp.MaxAttempts <= 1 ||
		*ib <= 0 ||
		*mb <= 0 ||
		jrp.BackoffMultiplier <= 0 ||
		len(jrp.RetryableStatusCodes) == 0 {
		logger.Warningf("grpc: ignoring retry policy %v due to illegal configuration", jrp)
		return nil, nil
	}

	rp := &retryPolicy{
		maxAttempts:          jrp.MaxAttempts,
		initialBackoff:       *ib,
		maxBackoff:           *mb,
		backoffMultiplier:    jrp.BackoffMultiplier,
		retryableStatusCodes: make(map[codes.Code]bool),
	}
	if rp.maxAttempts > 5 {
		// TODO(retry): Make the max maxAttempts configurable.
		rp.maxAttempts = 5
	}
	for _, code := range jrp.RetryableStatusCodes {
		rp.retryableStatusCodes[code] = true
	}
	return rp, nil
}

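// Illustrative backoff schedule (example numbers): a policy with
// initialBackoff 100ms, backoffMultiplier 2 and maxBackoff 1s makes the nth
// retry wait random(0, min(100ms*2^(n-1), 1s)), i.e. up to 100ms, then 200ms,
// 400ms, and so on, capped at 1s.
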
func min(a, b *int) *int {
	if *a < *b {
		return a
	}
	return b
}

func getMaxSize(mcMax, doptMax *int, defaultVal int) *int {
	if mcMax == nil && doptMax == nil {
		return &defaultVal
	}
	if mcMax != nil && doptMax != nil {
		return min(mcMax, doptMax)
	}
	if mcMax != nil {
		return mcMax
	}
	return doptMax
}

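// For example (illustrative values), getMaxSize(newInt(1024), newInt(4096), 4194304)
// returns a pointer to 1024: when both the method-config limit and the
// dial-option limit are set, the smaller of the two wins; when both are nil,
// the built-in default is used.
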
func newInt(b int) *int {
	return &b
}

func init() {
	internal.EqualServiceConfigForTesting = equalServiceConfig
}

// equalServiceConfig compares two configs. The rawJSONString field is ignored,
// because the two strings may differ only in whitespace.
//
// If either argument is not a *ServiceConfig, it returns false.
func equalServiceConfig(a, b serviceconfig.Config) bool {
	aa, ok := a.(*ServiceConfig)
	if !ok {
		return false
	}
	bb, ok := b.(*ServiceConfig)
	if !ok {
		return false
	}
	aaRaw := aa.rawJSONString
	aa.rawJSONString = ""
	bbRaw := bb.rawJSONString
	bb.rawJSONString = ""
	defer func() {
		aa.rawJSONString = aaRaw
		bb.rawJSONString = bbRaw
	}()
	// Using reflect.DeepEqual instead of cmp.Equal because many balancer
	// configs are unexported, and cmp.Equal cannot compare unexported fields
	// from unexported structs.
	return reflect.DeepEqual(aa, bb)
}