// Copyright 2017, OpenCensus Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//

package view

// AggType represents the type of aggregation function used on a View.
type AggType int

// All available aggregation types.
const (
	AggTypeNone         AggType = iota // no aggregation; reserved for future use.
	AggTypeCount                       // the count aggregation, see Count.
	AggTypeSum                         // the sum aggregation, see Sum.
	AggTypeDistribution                // the distribution aggregation, see Distribution.
	AggTypeLastValue                   // the last value aggregation, see LastValue.
)

func (t AggType) String() string {
	return aggTypeName[t]
}

var aggTypeName = map[AggType]string{
	AggTypeNone:         "None",
	AggTypeCount:        "Count",
	AggTypeSum:          "Sum",
	AggTypeDistribution: "Distribution",
	AggTypeLastValue:    "LastValue",
}

// Aggregation represents a data aggregation method. Use one of the functions:
// Count, Sum, or Distribution to construct an Aggregation.
type Aggregation struct {
	Type    AggType   // Type is the AggType of this Aggregation.
	Buckets []float64 // Buckets are the bucket endpoints if this Aggregation represents a distribution, see Distribution.

	newData func() AggregationData
}
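
// exampleFreshAggregationData is an illustrative sketch, not part of the
// upstream OpenCensus file: it shows how the unexported newData constructor
// above is meant to be used. Each call hands back a fresh AggregationData,
// so the collection pipeline can keep independent state (counts, sums,
// histogram buckets, or last values) per recorded time series.
func exampleFreshAggregationData(agg *Aggregation) AggregationData {
	// Hypothetical helper; newData is unexported, so this only
	// compiles inside package view.
	return agg.newData()
}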

var (
	aggCount = &Aggregation{
		Type: AggTypeCount,
		newData: func() AggregationData {
			return &CountData{}
		},
	}
	aggSum = &Aggregation{
		Type: AggTypeSum,
		newData: func() AggregationData {
			return &SumData{}
		},
	}
)

// Count indicates that data collected and aggregated
// with this method will be turned into a count value.
// For example, total number of accepted requests can be
// aggregated by using Count.
func Count() *Aggregation {
	return aggCount
}

// Sum indicates that data collected and aggregated
// with this method will be summed up.
// For example, accumulated request bytes can be aggregated by using
// Sum.
func Sum() *Aggregation {
	return aggSum
}
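
// exampleSharedSingletons is a hedged usage sketch, not in the upstream
// file: Count and Sum return shared, package-level singletons, so repeated
// calls yield the same *Aggregation, while Distribution below allocates per
// call because each result must capture its own bucket bounds.
func exampleSharedSingletons() bool {
	// Hypothetical helper for illustration only.
	return Count() == Count() && Sum() == Sum() // true: same pointers
}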

// Distribution indicates that the desired aggregation is
// a histogram distribution.
//
// A distribution aggregation may contain a histogram of the values in the
// population. The bucket boundaries for that histogram are described
// by the bounds. This defines len(bounds)+1 buckets.
//
// If len(bounds) >= 2 then the boundaries for bucket index i are:
//
//     [-infinity, bounds[i]) for i = 0
//     [bounds[i-1], bounds[i]) for 0 < i < length
//     [bounds[i-1], +infinity) for i = length
//
// If len(bounds) is 0 then there is no histogram associated with the
// distribution. There will be a single bucket with boundaries
// (-infinity, +infinity).
//
// If len(bounds) is 1 then there are no finite buckets, and that single
// element is the common boundary of the overflow and underflow buckets.
func Distribution(bounds ...float64) *Aggregation {
	agg := &Aggregation{
		Type:    AggTypeDistribution,
		Buckets: bounds,
	}
	agg.newData = func() AggregationData {
		return newDistributionData(agg)
	}
	return agg
}
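
// exampleLatencyBuckets is a worked example as a hedged sketch, not part of
// the upstream file: with the three bounds below, len(bounds)+1 = 4 buckets
// are defined per the boundary rules documented on Distribution:
//
//     [-infinity, 25)   index 0
//     [25, 100)         index 1
//     [100, 500)        index 2
//     [500, +infinity)  index 3
func exampleLatencyBuckets() *Aggregation {
	// Hypothetical millisecond boundaries chosen for illustration.
	return Distribution(25, 100, 500)
}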

// LastValue only reports the last value recorded using this
// aggregation. All other measurements will be dropped.
func LastValue() *Aggregation {
	return &Aggregation{
		Type: AggTypeLastValue,
		newData: func() AggregationData {
			return &LastValueData{}
		},
	}
}
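
// exampleGaugeStyleAggregation is a hedged sketch, not in the upstream file:
// unlike Count and Sum, LastValue allocates a new Aggregation on every call,
// mirroring Distribution; only the most recently recorded value survives in
// the resulting LastValueData.
func exampleGaugeStyleAggregation() *Aggregation {
	// Hypothetical helper name; suited to gauge-like measures such as
	// current memory in use.
	return LastValue()
}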