Merge remote-tracking branch 'origin/master' into scrape_doc_to_builder_struct_config

Adrien Delorme 2019-09-10 12:38:47 +02:00
commit 146b88ba1e
74 changed files with 1184 additions and 11975 deletions

View File

@ -292,3 +292,12 @@ make testacc TEST=./builder/amazon/ebs TESTARGS="-run TestBuilderAcc_forceDelete
Acceptance tests typically require other environment variables to be set for
things such as API tokens and keys. Each test should error and tell you which
credentials are missing, so those are not documented here.
#### Debugging Plugins
Each Packer plugin runs in a separate process and communicates with Packer over
RPC on a socket, so attaching a debugger will not work (or will at least be
complicated). But most of the Packer code is really simple and easy to follow
with PACKER_LOG turned on. If that doesn't work, adding some extra debug
print-outs once you have homed in on the problem is usually enough.
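As a minimal, standalone sketch of the kind of throwaway print-out meant here (the message and value are placeholders, not Packer code): run the build with something like `PACKER_LOG=1 packer build template.json`, and lines written with the standard `log` package should show up in the log output.

package main

import "log"

func main() {
	// A temporary debug print-out; with PACKER_LOG=1 set, Packer surfaces the
	// plugin's log output, so this is usually enough to home in on a problem.
	log.Printf("[DEBUG] reached the suspicious code path, retries=%d", 3)
}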

View File

@ -1,11 +1,29 @@
## 1.4.4 (Upcoming)
### IMPROVEMENTS:
* builder/amazon: Add AWS API call retries on AMI prevalidation [GH-8034]
* builder/hcloud: Allow selecting image based on filters [GH-7945]
* builder/hyper-v: Decrease the delay between Hyper-V VM startup and hyper-v
builder's ability to send keystrokes to the target VM. [GH-7970]
* builder/openstack: Store WinRM password for provisioners to use [GH-7940]
* builder/virtualbox-vm: Make target snapshot optional [GH-8011] [GH-8004]
* post-processor/vagrant-cloud: Allow use of the Artifice post-processor with
the Vagrant Cloud post-processor [GH-8018] [GH-8027]
### BUG FIXES:
* builder/azure: Avoid a panic in getObjectIdFromToken [GH-8047]
* builder/hyper-v: Fix when management interface is not part of virtual switch
[GH-8017]
* builder/openstack: Fix race condition created when adding metadata [GH-8016]
* builder/qemu: Fix dropped error when retrieving version [GH-8050]
* builder/vagrant: Fix provisioning boxes, define source and output boxes
[GH-7957]
* builder/virtualbox: Fix windows pathing problem for guest additions checksum
download. [GH-7996]
* core: Fix bug where sensitive variables containing commas were not being
properly sanitized in UI calls. [GH-7997]
* provisioner/ansible: Fix provisioner dropped errors [GH-8045]
* builder/proxmox: Fix panic caused by cancelling build [GH-8067] [GH-8072]
## 1.4.3 (August 14, 2019)

View File

@ -14,6 +14,7 @@
/builder/profitbricks/ @jasmingacic
/builder/triton/ @sean-
/builder/ncloud/ @YuSungDuk
/builder/proxmox/ @carlpett
/builder/scaleway/ @sieben @mvaude @jqueuniet @fflorens @brmzkw
/builder/hcloud/ @LKaemmerling
/builder/hyperone @m110 @gregorybrzeski @ad-m

View File

@ -24,6 +24,21 @@ type Config struct {
common.PackerConfig `mapstructure:",squash"`
awscommon.AccessConfig `mapstructure:",squash"`
awscommon.RunConfig `mapstructure:",squash"`
// Enable enhanced networking (ENA but not SriovNetSupport) on
// HVM-compatible AMIs. If set, add `ec2:ModifyInstanceAttribute` to your
// AWS IAM policy. Note: you must make sure enhanced networking is enabled
// on your instance. See [Amazon's documentation on enabling enhanced
// networking](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking).
AMIENASupport *bool `mapstructure:"ena_support" required:"false"`
// Enable enhanced networking (SriovNetSupport but not ENA) on
// HVM-compatible AMIs. If true, add `ec2:ModifyInstanceAttribute` to your
// AWS IAM policy. Note: you must make sure enhanced networking is enabled
// on your instance. See [Amazon's documentation on enabling enhanced
// networking](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking).
// Default `false`.
AMISriovNetSupport bool `mapstructure:"sriov_support" required:"false"`
// Add the block device mappings to the AMI. If you add instance store
// volumes or EBS volumes in addition to the root device volume, the
// created AMI will contain block device mapping information for those
@ -34,23 +49,23 @@ type Config struct {
// source instance. See the [BlockDevices](#block-devices-configuration)
// documentation for fields.
VolumeMappings BlockDevices `mapstructure:"ebs_volumes" required:"false"`
// Enable enhanced networking (ENA but not SriovNetSupport) on
// HVM-compatible AMIs. If set, add ec2:ModifyInstanceAttribute to your AWS
// IAM policy. If false, this will disable enhanced networking in the final
// AMI as opposed to passing the setting through unchanged from the source.
// Note: you must make sure enhanced networking is enabled on your
// instance. See Amazon's documentation on enabling enhanced networking.
AMIENASupport *bool `mapstructure:"ena_support" required:"false"`
// Enable enhanced networking (SriovNetSupport but not ENA) on
// HVM-compatible AMIs. If true, add `ec2:ModifyInstanceAttribute` to your
// AWS IAM policy. Note: you must make sure enhanced networking is enabled
// on your instance. See [Amazon's documentation on enabling enhanced
// networking](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking).
// Default `false`.
AMISriovNetSupport bool `mapstructure:"sriov_support" required:"false"`
// Tags to apply to the volumes of the instance that is *launched* to
// create EBS Volumes. These tags will *not* appear in the tags of the
// resulting EBS volumes unless they're duplicated under `tags` in the
// `ebs_volumes` setting. This is a [template
// engine](/docs/templates/engine.html), see [Build template
// data](#build-template-data) for more information.
//
// Note: The tags specified here will be *temporarily* applied to volumes
// specified in `ebs_volumes` - but only while the instance is being
// created. Packer will replace all tags on the volume with the tags
// configured in the `ebs_volumes` section as soon as the instance is
// reported as 'ready'.
VolumeRunTags awscommon.TagMap `mapstructure:"run_volume_tags"`
launchBlockDevices BlockDevices
ctx interpolate.Context
ctx interpolate.Context
}
type Builder struct {
@ -142,8 +157,8 @@ func (b *Builder) Run(ctx context.Context, ui packer.Ui, hook packer.Hook) (pack
AssociatePublicIpAddress: b.config.AssociatePublicIpAddress,
LaunchMappings: b.config.launchBlockDevices,
BlockDurationMinutes: b.config.BlockDurationMinutes,
Ctx: b.config.ctx,
Comm: &b.config.RunConfig.Comm,
Ctx: b.config.ctx,
Debug: b.config.PackerDebug,
EbsOptimized: b.config.EbsOptimized,
ExpectedRootDevice: "ebs",
@ -151,12 +166,13 @@ func (b *Builder) Run(ctx context.Context, ui packer.Ui, hook packer.Hook) (pack
InstanceInitiatedShutdownBehavior: b.config.InstanceInitiatedShutdownBehavior,
InstanceType: b.config.InstanceType,
SourceAMI: b.config.SourceAmi,
SpotPrice: b.config.SpotPrice,
SpotInstanceTypes: b.config.SpotInstanceTypes,
SpotPrice: b.config.SpotPrice,
SpotTags: b.config.SpotTags,
Tags: b.config.RunTags,
UserData: b.config.UserData,
UserDataFile: b.config.UserDataFile,
VolumeTags: b.config.VolumeRunTags,
}
} else {
instanceStep = &awscommon.StepRunSourceInstance{
@ -176,6 +192,7 @@ func (b *Builder) Run(ctx context.Context, ui packer.Ui, hook packer.Hook) (pack
Tags: b.config.RunTags,
UserData: b.config.UserData,
UserDataFile: b.config.UserDataFile,
VolumeTags: b.config.VolumeRunTags,
}
}

View File

@ -3,7 +3,9 @@ package ebsvolume
import (
"context"
"fmt"
"strings"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/service/ec2"
"github.com/hashicorp/packer/helper/multistep"
"github.com/hashicorp/packer/packer"
@ -19,6 +21,7 @@ func (s *stepTagEBSVolumes) Run(ctx context.Context, state multistep.StateBag) m
ec2conn := state.Get("ec2").(*ec2.EC2)
instance := state.Get("instance").(*ec2.Instance)
ui := state.Get("ui").(packer.Ui)
config := state.Get("config").(*Config)
volumes := make(EbsVolumes)
for _, instanceBlockDevices := range instance.BlockDeviceMappings {
@ -36,8 +39,50 @@ func (s *stepTagEBSVolumes) Run(ctx context.Context, state multistep.StateBag) m
state.Put("ebsvolumes", volumes)
if len(s.VolumeMapping) > 0 {
ui.Say("Tagging EBS volumes...")
// If run_volume_tags were set in the template any attached EBS
// volume will have had these tags applied when the instance was
// created. We now need to remove these tags to ensure only the EBS
// volume tags are applied (if any)
if config.VolumeRunTags.IsSet() {
ui.Say("Removing any tags applied to EBS volumes when the source instance was created...")
ui.Message("Compiling list of existing tags to remove...")
existingTags, err := config.VolumeRunTags.EC2Tags(s.Ctx, *ec2conn.Config.Region, state)
if err != nil {
err := fmt.Errorf("Error generating list of tags to remove: %s", err)
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
}
existingTags.Report(ui)
// Generate the list of volumes with tags to delete.
// Looping over the instance block device mappings allows us to
// obtain the volumeId
volumeIds := []string{}
for _, mapping := range s.VolumeMapping {
for _, v := range instance.BlockDeviceMappings {
if *v.DeviceName == mapping.DeviceName {
volumeIds = append(volumeIds, *v.Ebs.VolumeId)
}
}
}
// Delete the tags
ui.Message(fmt.Sprintf("Deleting 'run_volume_tags' on EBS Volumes: %s", strings.Join(volumeIds, ", ")))
_, err = ec2conn.DeleteTags(&ec2.DeleteTagsInput{
Resources: aws.StringSlice(volumeIds),
Tags: existingTags,
})
if err != nil {
err := fmt.Errorf("Error deleting tags on EBS Volumes %s: %s", strings.Join(volumeIds, ", "), err)
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
}
}
ui.Say("Tagging EBS volumes...")
toTag := map[string][]*ec2.Tag{}
for _, mapping := range s.VolumeMapping {
if len(mapping.Tags) == 0 {
@ -45,15 +90,19 @@ func (s *stepTagEBSVolumes) Run(ctx context.Context, state multistep.StateBag) m
continue
}
ui.Message(fmt.Sprintf("Compiling list of tags to apply to volume on %s...", mapping.DeviceName))
tags, err := mapping.Tags.EC2Tags(s.Ctx, *ec2conn.Config.Region, state)
if err != nil {
err := fmt.Errorf("Error tagging device %s with %s", mapping.DeviceName, err)
err := fmt.Errorf("Error generating tags for device %s: %s", mapping.DeviceName, err)
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
}
tags.Report(ui)
// Generate the map of volumes and associated tags to apply.
// Looping over the instance block device mappings allows us to
// obtain the volumeId
for _, v := range instance.BlockDeviceMappings {
if *v.DeviceName == mapping.DeviceName {
toTag[*v.Ebs.VolumeId] = tags
@ -61,9 +110,11 @@ func (s *stepTagEBSVolumes) Run(ctx context.Context, state multistep.StateBag) m
}
}
// Apply the tags
for volumeId, tags := range toTag {
ui.Message(fmt.Sprintf("Applying tags to EBS Volume: %s", volumeId))
_, err := ec2conn.CreateTags(&ec2.CreateTagsInput{
Resources: []*string{&volumeId},
Resources: aws.StringSlice([]string{volumeId}),
Tags: tags,
})
if err != nil {
@ -72,7 +123,6 @@ func (s *stepTagEBSVolumes) Run(ctx context.Context, state multistep.StateBag) m
ui.Error(err.Error())
return multistep.ActionHalt
}
}
}

View File

@ -25,15 +25,6 @@ type OMIConfig struct {
SnapshotGroups []string `mapstructure:"snapshot_groups"`
}
func stringInSlice(s []string, searchstr string) bool {
for _, item := range s {
if item == searchstr {
return true
}
}
return false
}
func (c *OMIConfig) Prepare(accessConfig *AccessConfig, ctx *interpolate.Context) []error {
var errs []error

View File

@ -3,6 +3,7 @@ package common
import (
"errors"
"fmt"
"sort"
"time"
"github.com/hashicorp/packer/helper/multistep"
@ -27,35 +28,42 @@ func SSHHost(e oapiDescriber, sshInterface string) func(multistep.StateBag) (str
for j := 0; j <= tries; j++ {
var host string
i := state.Get("vm").(oapi.Vm)
if len(i.Nics) <= 0 {
return "", errors.New("couldn't determine address for vm, nics are empty")
}
nic := i.Nics[0]
if sshInterface != "" {
switch sshInterface {
case "public_ip":
if i.PublicIp != "" {
host = i.PublicIp
}
case "private_ip":
if i.PrivateIp != "" {
host = i.PrivateIp
if nic.LinkPublicIp.PublicIp != "" {
host = nic.LinkPublicIp.PublicIp
}
case "public_dns":
if i.PublicDnsName != "" {
host = i.PublicDnsName
if nic.LinkPublicIp.PublicDnsName != "" {
host = nic.LinkPublicIp.PublicDnsName
}
case "private_ip":
if privateIP, err := getPrivateIP(nic); err == nil {
host = privateIP.PrivateIp
}
case "private_dns":
if i.PrivateDnsName != "" {
host = i.PrivateDnsName
if privateIP, err := getPrivateIP(nic); err == nil {
host = privateIP.PrivateDnsName
}
default:
panic(fmt.Sprintf("Unknown interface type: %s", sshInterface))
}
} else if i.NetId != "" {
if i.PublicIp != "" {
host = i.PublicIp
} else if i.PrivateIp != "" {
host = i.PrivateIp
if nic.LinkPublicIp.PublicIp != "" {
host = nic.LinkPublicIp.PublicIp
} else if privateIP, err := getPrivateIP(nic); err == nil {
host = privateIP.PrivateIp
}
} else if i.PublicDnsName != "" {
host = i.PublicDnsName
} else if nic.LinkPublicIp.PublicDnsName != "" {
host = nic.LinkPublicIp.PublicDnsName
}
if host != "" {
@ -82,3 +90,20 @@ func SSHHost(e oapiDescriber, sshInterface string) func(multistep.StateBag) (str
return "", errors.New("couldn't determine address for vm")
}
}
func getPrivateIP(nic oapi.NicLight) (oapi.PrivateIpLightForVm, error) {
isPrimary := true
i := sort.Search(len(nic.PrivateIps), func(i int) bool { return nic.PrivateIps[i].IsPrimary == isPrimary })
if i < len(nic.PrivateIps) && nic.PrivateIps[i].IsPrimary == isPrimary {
return nic.PrivateIps[i], nil
}
if len(nic.PrivateIps) > 0 {
return nic.PrivateIps[0], nil
}
return oapi.PrivateIpLightForVm{}, fmt.Errorf("couldn't determine private address for vm")
}

View File

@ -248,7 +248,7 @@ func (s *StepRunSourceVm) Run(ctx context.Context, state multistep.StateBag) mul
}
if vm.PrivateIp != "" {
ui.Message(fmt.Sprintf("Private IP: %s", vm.PublicIp))
ui.Message(fmt.Sprintf("Private IP: %s", vm.PrivateIp))
}
}

View File

@ -102,13 +102,10 @@ func (p *proxmoxDriver) SendSpecial(special string, action bootcommand.KeyAction
}
func (p *proxmoxDriver) send(keys string) error {
res, err := p.client.MonitorCmd(p.vmRef, "sendkey "+keys)
err := p.client.Sendkey(p.vmRef, keys)
if err != nil {
return err
}
if data, ok := res["data"].(string); ok && len(data) > 0 {
return fmt.Errorf("failed to send keys: %s", data)
}
time.Sleep(p.interval)
return nil

View File

@ -3,6 +3,7 @@ package proxmox
import (
"context"
"crypto/tls"
"errors"
"fmt"
"github.com/Telmate/proxmox-api-go/proxmox"
@ -45,7 +46,7 @@ func (b *Builder) Run(ctx context.Context, ui packer.Ui, hook packer.Hook) (pack
return nil, err
}
err = b.proxmoxClient.Login(b.config.Username, b.config.Password)
err = b.proxmoxClient.Login(b.config.Username, b.config.Password, "")
if err != nil {
return nil, err
}
@ -89,9 +90,19 @@ func (b *Builder) Run(ctx context.Context, ui packer.Ui, hook packer.Hook) (pack
if rawErr, ok := state.GetOk("error"); ok {
return nil, rawErr.(error)
}
// If we were interrupted or cancelled, then just exit.
if _, ok := state.GetOk(multistep.StateCancelled); ok {
return nil, errors.New("build was cancelled")
}
// Verify that the template_id was set properly, otherwise we didn't progress through the last step
tplID, ok := state.Get("template_id").(int)
if !ok {
return nil, fmt.Errorf("template ID could not be determined")
}
artifact := &Artifact{
templateID: state.Get("template_id").(int),
templateID: tplID,
proxmoxClient: b.proxmoxClient,
}

View File

@ -103,7 +103,7 @@ func NewConfig(raws ...interface{}) (*Config, []string, error) {
c.RawBootKeyInterval = os.Getenv(common.PackerKeyEnv)
}
if c.RawBootKeyInterval == "" {
c.BootKeyInterval = common.PackerKeyDefault
c.BootKeyInterval = 5 * time.Millisecond
} else {
if interval, err := time.ParseDuration(c.RawBootKeyInterval); err == nil {
c.BootKeyInterval = interval
@ -151,6 +151,12 @@ func NewConfig(raws ...interface{}) (*Config, []string, error) {
log.Printf("Disk %d cache mode not set, using default 'none'", idx)
c.Disks[idx].CacheMode = "none"
}
// For any storage pool types which aren't in rxStorageTypes in proxmox-api/proxmox/config_qemu.go:651
// (currently zfspool and lvm), the format parameter is mandatory. Make sure this is still up to date
// when updating the vendored code!
if !contains([]string{"zfspool", "lvm"}, c.Disks[idx].StoragePoolType) && c.Disks[idx].DiskFormat == "" {
errs = packer.MultiErrorAppend(errs, errors.New(fmt.Sprintf("disk format must be specified for pool type %q", c.Disks[idx].StoragePoolType)))
}
}
errs = packer.MultiErrorAppend(errs, c.Comm.Prepare(&c.ctx)...)
@ -197,3 +203,12 @@ func NewConfig(raws ...interface{}) (*Config, []string, error) {
packer.LogSecretFilter.Set(c.Password)
return c, nil, nil
}
func contains(haystack []string, needle string) bool {
for _, candidate := range haystack {
if candidate == needle {
return true
}
}
return false
}

View File

@ -3,6 +3,7 @@ package proxmox
import (
"context"
"fmt"
"log"
"github.com/Telmate/proxmox-api-go/proxmox"
"github.com/hashicorp/packer/helper/multistep"
@ -44,7 +45,7 @@ func (s *stepConvertToTemplate) Run(ctx context.Context, state multistep.StateBa
ui.Error(err.Error())
return multistep.ActionHalt
}
log.Printf("template_id: %d", vmRef.VmId())
state.Put("template_id", vmRef.VmId())
return multistep.ActionContinue

View File

@ -30,6 +30,8 @@ func (s *stepStartVM) Run(ctx context.Context, state multistep.StateBag) multist
config := proxmox.ConfigQemu{
Name: c.VMName,
Agent: agent,
Boot: "cdn", // Boot priority, c:CDROM -> d:Disk -> n:Network
QemuCpu: "host",
Description: "Packer ephemeral build VM",
Memory: c.Memory,
QemuCores: c.Cores,
@ -142,7 +144,7 @@ func (s *stepStartVM) Cleanup(state multistep.StateBag) {
ui.Say("Stopping VM")
_, err := client.StopVm(vmRef)
if err != nil {
ui.Error(fmt.Sprintf("Error stop VM. Please stop and delete it manually: %s", err))
ui.Error(fmt.Sprintf("Error stopping VM. Please stop and delete it manually: %s", err))
return
}

View File

@ -30,7 +30,7 @@ type bootCommandTemplateData struct {
}
type commandTyper interface {
MonitorCmd(*proxmox.VmRef, string) (map[string]interface{}, error)
Sendkey(*proxmox.VmRef, string) error
}
var _ commandTyper = &proxmox.Client{}

View File

@ -13,75 +13,74 @@ import (
)
type commandTyperMock struct {
monitorCmd func(*proxmox.VmRef, string) (map[string]interface{}, error)
sendkey func(*proxmox.VmRef, string) error
}
func (m commandTyperMock) MonitorCmd(ref *proxmox.VmRef, cmd string) (map[string]interface{}, error) {
return m.monitorCmd(ref, cmd)
func (m commandTyperMock) Sendkey(ref *proxmox.VmRef, cmd string) error {
return m.sendkey(ref, cmd)
}
var _ commandTyper = commandTyperMock{}
func TestTypeBootCommand(t *testing.T) {
cs := []struct {
name string
builderConfig *Config
expectCallMonitorCmd bool
monitorCmdErr error
monitorCmdRet map[string]interface{}
expectedKeysSent string
expectedAction multistep.StepAction
name string
builderConfig *Config
expectCallSendkey bool
sendkeyErr error
expectedKeysSent string
expectedAction multistep.StepAction
}{
{
name: "simple boot command is typed",
builderConfig: &Config{BootConfig: bootcommand.BootConfig{BootCommand: []string{"hello"}}},
expectCallMonitorCmd: true,
expectedKeysSent: "hello",
expectedAction: multistep.ActionContinue,
name: "simple boot command is typed",
builderConfig: &Config{BootConfig: bootcommand.BootConfig{BootCommand: []string{"hello"}}},
expectCallSendkey: true,
expectedKeysSent: "hello",
expectedAction: multistep.ActionContinue,
},
{
name: "interpolated boot command",
builderConfig: &Config{BootConfig: bootcommand.BootConfig{BootCommand: []string{"hello<enter>world"}}},
expectCallMonitorCmd: true,
expectedKeysSent: "helloretworld",
expectedAction: multistep.ActionContinue,
name: "interpolated boot command",
builderConfig: &Config{BootConfig: bootcommand.BootConfig{BootCommand: []string{"hello<enter>world"}}},
expectCallSendkey: true,
expectedKeysSent: "helloretworld",
expectedAction: multistep.ActionContinue,
},
{
name: "merge multiple interpolated boot command",
builderConfig: &Config{BootConfig: bootcommand.BootConfig{BootCommand: []string{"Hello World 2.0", "foo!bar@baz"}}},
expectCallMonitorCmd: true,
expectedKeysSent: "shift-hellospcshift-worldspc2dot0fooshift-1barshift-2baz",
expectedAction: multistep.ActionContinue,
name: "merge multiple interpolated boot command",
builderConfig: &Config{BootConfig: bootcommand.BootConfig{BootCommand: []string{"Hello World 2.0", "foo!bar@baz"}}},
expectCallSendkey: true,
expectedKeysSent: "shift-hellospcshift-worldspc2dot0fooshift-1barshift-2baz",
expectedAction: multistep.ActionContinue,
},
{
name: "without boot command monitorcmd should not be called",
builderConfig: &Config{BootConfig: bootcommand.BootConfig{BootCommand: []string{}}},
expectCallMonitorCmd: false,
expectedAction: multistep.ActionContinue,
name: "without boot command sendkey should not be called",
builderConfig: &Config{BootConfig: bootcommand.BootConfig{BootCommand: []string{}}},
expectCallSendkey: false,
expectedAction: multistep.ActionContinue,
},
{
name: "invalid boot command template function",
builderConfig: &Config{BootConfig: bootcommand.BootConfig{BootCommand: []string{"{{ foo }}"}}},
expectCallMonitorCmd: false,
expectedAction: multistep.ActionHalt,
name: "invalid boot command template function",
builderConfig: &Config{BootConfig: bootcommand.BootConfig{BootCommand: []string{"{{ foo }}"}}},
expectCallSendkey: false,
expectedAction: multistep.ActionHalt,
},
{
// When proxmox (or Qemu, really) doesn't recognize the keycode we send, we get no error back, but
// a map {"data": "invalid parameter: X"}, where X is the keycode.
name: "invalid keys sent to proxmox",
builderConfig: &Config{BootConfig: bootcommand.BootConfig{BootCommand: []string{"x"}}},
expectCallMonitorCmd: true,
monitorCmdRet: map[string]interface{}{"data": "invalid parameter: x"},
expectedKeysSent: "x",
expectedAction: multistep.ActionHalt,
name: "invalid keys sent to proxmox",
builderConfig: &Config{BootConfig: bootcommand.BootConfig{BootCommand: []string{"x"}}},
expectCallSendkey: true,
sendkeyErr: fmt.Errorf("invalid parameter: x"),
expectedKeysSent: "x",
expectedAction: multistep.ActionHalt,
},
{
name: "error in typing should return halt",
builderConfig: &Config{BootConfig: bootcommand.BootConfig{BootCommand: []string{"hello"}}},
expectCallMonitorCmd: true,
monitorCmdErr: fmt.Errorf("some error"),
expectedKeysSent: "h",
expectedAction: multistep.ActionHalt,
name: "error in typing should return halt",
builderConfig: &Config{BootConfig: bootcommand.BootConfig{BootCommand: []string{"hello"}}},
expectCallSendkey: true,
sendkeyErr: fmt.Errorf("some error"),
expectedKeysSent: "h",
expectedAction: multistep.ActionHalt,
},
}
@ -89,17 +88,14 @@ func TestTypeBootCommand(t *testing.T) {
t.Run(c.name, func(t *testing.T) {
accumulator := strings.Builder{}
typer := commandTyperMock{
monitorCmd: func(ref *proxmox.VmRef, cmd string) (map[string]interface{}, error) {
if !c.expectCallMonitorCmd {
t.Error("Did not expect MonitorCmd to be called")
}
if !strings.HasPrefix(cmd, "sendkey ") {
t.Errorf("Expected all commands to be sendkey, got %s", cmd)
sendkey: func(ref *proxmox.VmRef, cmd string) error {
if !c.expectCallSendkey {
t.Error("Did not expect sendkey to be called")
}
accumulator.WriteString(strings.TrimPrefix(cmd, "sendkey "))
accumulator.WriteString(cmd)
return c.monitorCmdRet, c.monitorCmdErr
return c.sendkeyErr
},
}

View File

@ -32,20 +32,25 @@ type scriptOptions struct {
func GetHostAdapterIpAddressForSwitch(switchName string) (string, error) {
var script = `
param([string]$switchName, [int]$addressIndex)
$HostVMAdapter = Hyper-V\Get-VMNetworkAdapter -ManagementOS -SwitchName $switchName
if ($HostVMAdapter){
$HostNetAdapter = Get-NetAdapter | ?{ $_.DeviceId -eq $HostVMAdapter.DeviceId }
if ($HostNetAdapter){
$HostNetAdapterIfIndex = @()
$HostNetAdapterIfIndex += $HostNetAdapter.ifIndex
$HostNetAdapterConfiguration = @(get-wmiobject win32_networkadapterconfiguration -filter "IPEnabled = 'TRUE'") | Where-Object { $HostNetAdapterIfIndex.Contains($_.InterfaceIndex) }
if ($HostNetAdapterConfiguration){
return @($HostNetAdapterConfiguration.IpAddress)[$addressIndex]
}
$HostNetAdapter = Get-NetAdapter | Where-Object { $_.DeviceId -eq $HostVMAdapter.DeviceId }
if ($HostNetAdapter){
$HostNetAdapterIfIndex = @()
$HostNetAdapterIfIndex += $HostNetAdapter.ifIndex
$HostNetAdapterConfiguration = @(get-wmiobject win32_networkadapterconfiguration -filter "IPEnabled = 'TRUE'") | Where-Object { $HostNetAdapterIfIndex.Contains($_.InterfaceIndex) }
if ($HostNetAdapterConfiguration){
return @($HostNetAdapterConfiguration.IpAddress)[$addressIndex]
}
}
} else {
$HostNetAdapterConfiguration=@(Get-NetIPAddress -CimSession $env:computername -AddressFamily IPv4 | Where-Object { ( $_.InterfaceAlias -notmatch 'Loopback' ) -and ( $_.SuffixOrigin -notmatch "Link" )})
if ($HostNetAdapterConfiguration) {
return @($HostNetAdapterConfiguration.IpAddress)[$addressIndex]
} else {
return $false
}
}
return $false
`
var ps powershell.PowerShellCmd

go.mod
View File

@ -9,7 +9,7 @@ require (
github.com/ChrisTrenkamp/goxpath v0.0.0-20170625215350-4fe035839290
github.com/NaverCloudPlatform/ncloud-sdk-go v0.0.0-20180110055012-c2e73f942591
github.com/PuerkitoBio/goquery v1.5.0 // indirect
github.com/Telmate/proxmox-api-go v0.0.0-20190614181158-26cd147831a4
github.com/Telmate/proxmox-api-go v0.0.0-20190815172943-ef9222844e60
github.com/abdullin/seq v0.0.0-20160510034733-d5467c17e7af // indirect
github.com/aliyun/alibaba-cloud-sdk-go v0.0.0-20190418113227-25233c783f4e
github.com/aliyun/aliyun-oss-go-sdk v0.0.0-20170113022742-e6dbea820a9f
@ -143,3 +143,7 @@ require (
gopkg.in/ini.v1 v1.42.0 // indirect
gopkg.in/jarcoal/httpmock.v1 v1.0.0-20181117152235-275e9df93516 // indirect
)
replace git.apache.org/thrift.git => github.com/apache/thrift v0.0.0-20180902110319-2566ecd5d999
go 1.13

go.sum
View File

@ -9,7 +9,6 @@ dmitri.shuralyov.com/app/changes v0.0.0-20180602232624-0a106ad413e3/go.mod h1:Yl
dmitri.shuralyov.com/html/belt v0.0.0-20180602232347-f7d459c86be0/go.mod h1:JLBrvjyP0v+ecvNYvCpyZgu5/xkfAUhi6wJj28eUfSU=
dmitri.shuralyov.com/service/change v0.0.0-20181023043359-a85b471d5412/go.mod h1:a1inKt/atXimZ4Mv927x+r7UpyzRUf4emIoiiSC2TN4=
dmitri.shuralyov.com/state v0.0.0-20180228185332-28bcc343414c/go.mod h1:0PRwlb0D6DFvNNtx+9ybjezNCa8XF0xaYcETyp6rHWU=
git.apache.org/thrift.git v0.0.0-20180902110319-2566ecd5d999/go.mod h1:fPE2ZNJGynbRyZ4dJvy6G277gSllfV2HJqblrnkyeyg=
github.com/1and1/oneandone-cloudserver-sdk-go v1.0.1 h1:RMTyvS5bjvSWiUcfqfr/E2pxHEMrALvU+E12n6biymg=
github.com/1and1/oneandone-cloudserver-sdk-go v1.0.1/go.mod h1:61apmbkVJH4kg+38ftT+/l0XxdUCVnHggqcOTqZRSEE=
github.com/Azure/azure-sdk-for-go v30.0.0+incompatible h1:6o1Yzl7wTBYg+xw0pY4qnalaPmEQolubEEdepo1/kmI=
@ -25,8 +24,8 @@ github.com/NaverCloudPlatform/ncloud-sdk-go v0.0.0-20180110055012-c2e73f942591 h
github.com/NaverCloudPlatform/ncloud-sdk-go v0.0.0-20180110055012-c2e73f942591/go.mod h1:EHGzQGbwozJBj/4qj3WGrTJ0FqjgOTOxLQ0VNWvPn08=
github.com/PuerkitoBio/goquery v1.5.0 h1:uGvmFXOA73IKluu/F84Xd1tt/z07GYm8X49XKHP7EJk=
github.com/PuerkitoBio/goquery v1.5.0/go.mod h1:qD2PgZ9lccMbQlc7eEOjaeRlFQON7xY8kdmcsrnKqMg=
github.com/Telmate/proxmox-api-go v0.0.0-20190614181158-26cd147831a4 h1:o//09WenT9BNcQypCYfOBfRe5gtLUvUfTPq0xQqPMEI=
github.com/Telmate/proxmox-api-go v0.0.0-20190614181158-26cd147831a4/go.mod h1:OGWyIMJ87/k/GCz8CGiWB2HOXsOVDM6Lpe/nFPkC4IQ=
github.com/Telmate/proxmox-api-go v0.0.0-20190815172943-ef9222844e60 h1:iEmbIRk4brAP3wevhCr5MGAqxHUbbIDHvE+6D1/7pRA=
github.com/Telmate/proxmox-api-go v0.0.0-20190815172943-ef9222844e60/go.mod h1:OGWyIMJ87/k/GCz8CGiWB2HOXsOVDM6Lpe/nFPkC4IQ=
github.com/abdullin/seq v0.0.0-20160510034733-d5467c17e7af h1:DBNMBMuMiWYu0b+8KMJuWmfCkcxl09JwdlqwDZZ6U14=
github.com/abdullin/seq v0.0.0-20160510034733-d5467c17e7af/go.mod h1:5Jv4cbFiHJMsVxt52+i0Ha45fjshj6wxYr1r19tB9bw=
github.com/aliyun/alibaba-cloud-sdk-go v0.0.0-20190418113227-25233c783f4e h1:/8wOj52pewmIX/8d5eVO3t7Rr3astkBI/ruyg4WNqRo=
@ -46,6 +45,7 @@ github.com/antchfx/xquery v0.0.0-20170730121040-eb8c3c172607 h1:BFFG6KP8ASFBg2pt
github.com/antchfx/xquery v0.0.0-20170730121040-eb8c3c172607/go.mod h1:LzD22aAzDP8/dyiCKFp31He4m2GPjl0AFyzDtZzUu9M=
github.com/antihax/optional v0.0.0-20180407024304-ca021399b1a6 h1:uZuxRZCz65cG1o6K/xUqImNcYKtmk9ylqaH0itMSvzA=
github.com/antihax/optional v0.0.0-20180407024304-ca021399b1a6/go.mod h1:V8iCPQYkqmusNa815XgQio277wI47sdRh1dUOLdyC6Q=
github.com/apache/thrift v0.0.0-20180902110319-2566ecd5d999/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
github.com/approvals/go-approval-tests v0.0.0-20160714161514-ad96e53bea43 h1:ePCAQPf5tUc5IMcUvu6euhSGna7jzs7eiXtJXHig6Zc=
github.com/approvals/go-approval-tests v0.0.0-20160714161514-ad96e53bea43/go.mod h1:S6puKjZ9ZeqUPBv2hEBnMZGcM2J6mOsDRQcmxkMAND0=
github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e/go.mod h1:3U/XgcO3hCbHZ8TKRvWD2dDTCfh9M9ya+I9JpbB7O8o=

View File

@ -19,11 +19,6 @@ import (
"github.com/hashicorp/packer/template/interpolate"
)
var builtins = map[string]string{
"mitchellh.vmware": "vmware",
"mitchellh.vmware-esx": "vmware",
}
var ovftool string = "ovftool"
var (
@ -116,10 +111,6 @@ func (p *PostProcessor) Configure(raws ...interface{}) error {
}
func (p *PostProcessor) PostProcess(ctx context.Context, ui packer.Ui, artifact packer.Artifact) (packer.Artifact, bool, bool, error) {
if _, ok := builtins[artifact.BuilderId()]; !ok {
return nil, false, false, fmt.Errorf("Unknown artifact type, can't build box: %s", artifact.BuilderId())
}
source := ""
for _, path := range artifact.Files() {
if strings.HasSuffix(path, ".vmx") || strings.HasSuffix(path, ".ovf") || strings.HasSuffix(path, ".ova") {

View File

@ -31,6 +31,7 @@ type Client struct {
ApiUrl string
Username string
Password string
Otp string
}
// VmRef - virtual machine ref parts
@ -38,6 +39,7 @@ type Client struct {
type VmRef struct {
vmId int
node string
pool string
vmType string
}
@ -46,11 +48,19 @@ func (vmr *VmRef) SetNode(node string) {
return
}
func (vmr *VmRef) SetPool(pool string) {
vmr.pool = pool
}
func (vmr *VmRef) SetVmType(vmType string) {
vmr.vmType = vmType
return
}
func (vmr *VmRef) GetVmType() (string) {
return vmr.vmType
}
func (vmr *VmRef) VmId() int {
return vmr.vmId
}
@ -59,6 +69,10 @@ func (vmr *VmRef) Node() string {
return vmr.node
}
func (vmr *VmRef) Pool() string {
return vmr.pool
}
func NewVmRef(vmId int) (vmr *VmRef) {
vmr = &VmRef{vmId: vmId, node: "", vmType: ""}
return
@ -73,10 +87,11 @@ func NewClient(apiUrl string, hclient *http.Client, tls *tls.Config) (client *Cl
return client, err
}
func (c *Client) Login(username string, password string) (err error) {
func (c *Client) Login(username string, password string, otp string) (err error) {
c.Username = username
c.Password = password
return c.session.Login(username, password)
c.Otp = otp
return c.session.Login(username, password, otp)
}
func (c *Client) GetJsonRetryable(url string, data *map[string]interface{}, tries int) error {
@ -275,10 +290,26 @@ func (c *Client) MonitorCmd(vmr *VmRef, command string) (monitorRes map[string]i
reqbody := ParamsToBody(map[string]interface{}{"command": command})
url := fmt.Sprintf("/nodes/%s/%s/%d/monitor", vmr.node, vmr.vmType, vmr.vmId)
resp, err := c.session.Post(url, nil, nil, &reqbody)
if err != nil {
return nil, err
}
monitorRes, err = ResponseJSON(resp)
return
}
func (c *Client) Sendkey(vmr *VmRef, qmKey string) error {
err := c.CheckVmRef(vmr)
if err != nil {
return err
}
reqbody := ParamsToBody(map[string]interface{}{"key": qmKey})
url := fmt.Sprintf("/nodes/%s/%s/%d/sendkey", vmr.node, vmr.vmType, vmr.vmId)
// No return, even for errors: https://bugzilla.proxmox.com/show_bug.cgi?id=2275
_, err = c.session.Put(url, nil, nil, &reqbody)
return err
}
// WaitForCompletion - poll the API for task completion
func (c *Client) WaitForCompletion(taskResponse map[string]interface{}) (waitExitStatus string, err error) {
if taskResponse["errors"] != nil {
@ -416,6 +447,29 @@ func (c *Client) CreateQemuVm(node string, vmParams map[string]interface{}) (exi
return
}
func (c *Client) CreateLxcContainer(node string, vmParams map[string]interface{}) (exitStatus string, err error) {
reqbody := ParamsToBody(vmParams)
url := fmt.Sprintf("/nodes/%s/lxc", node)
var resp *http.Response
resp, err = c.session.Post(url, nil, nil, &reqbody)
defer resp.Body.Close()
if err != nil {
// This might not work if we never got a body. We'll ignore errors in trying to read,
// but extract the body if possible to give any error information back in the exitStatus
b, _ := ioutil.ReadAll(resp.Body)
exitStatus = string(b)
return exitStatus, err
}
taskResponse, err := ResponseJSON(resp)
if err != nil {
return "", err
}
exitStatus, err = c.WaitForCompletion(taskResponse)
return
}
func (c *Client) CloneQemuVm(vmr *VmRef, vmParams map[string]interface{}) (exitStatus string, err error) {
reqbody := ParamsToBody(vmParams)
url := fmt.Sprintf("/nodes/%s/qemu/%d/clone", vmr.node, vmr.vmId)
@ -457,6 +511,21 @@ func (c *Client) SetVmConfig(vmr *VmRef, vmParams map[string]interface{}) (exitS
return
}
// SetLxcConfig - send config options
func (c *Client) SetLxcConfig(vmr *VmRef, vmParams map[string]interface{}) (exitStatus interface{}, err error) {
reqbody := ParamsToBody(vmParams)
url := fmt.Sprintf("/nodes/%s/%s/%d/config", vmr.node, vmr.vmType, vmr.vmId)
resp, err := c.session.Put(url, nil, nil, &reqbody)
if err == nil {
taskResponse, err := ResponseJSON(resp)
if err != nil {
return nil, err
}
exitStatus, err = c.WaitForCompletion(taskResponse)
}
return
}
func (c *Client) ResizeQemuDisk(vmr *VmRef, disk string, moreSizeGB int) (exitStatus interface{}, err error) {
// PUT
//disk:virtio0
@ -478,6 +547,23 @@ func (c *Client) ResizeQemuDisk(vmr *VmRef, disk string, moreSizeGB int) (exitSt
return
}
func (c *Client) MoveQemuDisk(vmr *VmRef, disk string, storage string) (exitStatus interface{}, err error) {
if disk == "" {
disk = "virtio0"
}
reqbody := ParamsToBody(map[string]interface{}{"disk": disk, "storage": storage, "delete": true})
url := fmt.Sprintf("/nodes/%s/%s/%d/move_disk", vmr.node, vmr.vmType, vmr.vmId)
resp, err := c.session.Post(url, nil, nil, &reqbody)
if err == nil {
taskResponse, err := ResponseJSON(resp)
if err != nil {
return nil, err
}
exitStatus, err = c.WaitForCompletion(taskResponse)
}
return
}
// GetNextID - Get next free VMID
func (c *Client) GetNextID(currentID int) (nextID int, err error) {
var data map[string]interface{}

View File

@ -0,0 +1,433 @@
package proxmox
import (
"encoding/json"
"fmt"
"io"
"log"
"strings"
"strconv"
)
// LXC options for the Proxmox API
type configLxc struct {
Ostemplate string `json:"ostemplate"`
Arch string `json:"arch"`
BWLimit int `json:"bwlimit,omitempty"`
CMode string `json:"cmode"`
Console bool `json:"console"`
Cores int `json:"cores,omitempty"`
CPULimit int `json:"cpulimit"`
CPUUnits int `json:"cpuunits"`
Description string `json:"description,omitempty"`
Features QemuDevice `json:"features,omitempty"`
Force bool `json:"force,omitempty"`
Hookscript string `json:"hookscript,omitempty"`
Hostname string `json:"hostname,omitempty"`
IgnoreUnpackErrors bool `json:"ignore-unpack-errors,omitempty"`
Lock string `json:"lock,omitempty"`
Memory int `json:"memory"`
Mountpoints QemuDevices `json:"mountpoints,omitempty"`
Nameserver string `json:"nameserver,omitempty"`
Networks QemuDevices `json:"networks,omitempty"`
OnBoot bool `json:"onboot"`
OsType string `json:"ostype,omitempty"`
Password string `json:"password,omitempty"`
Pool string `json:"pool,omitempty"`
Protection bool `json:"protection"`
Restore bool `json:"restore,omitempty"`
RootFs string `json:"rootfs,omitempty"`
SearchDomain string `json:"searchdomain,omitempty"`
SSHPublicKeys string `json:"ssh-public-keys,omitempty"`
Start bool `json:"start"`
Startup string `json:"startup,omitempty"`
Storage string `json:"storage"`
Swap int `json:"swap"`
Template bool `json:"template,omitempty"`
Tty int `json:"tty"`
Unique bool `json:"unique,omitempty"`
Unprivileged bool `json:"unprivileged"`
Unused []string `json:"unused,omitempty"`
}
func NewConfigLxc() (configLxc) {
return configLxc{
Arch: "amd64",
CMode: "tty",
Console: true,
CPULimit: 0,
CPUUnits: 1024,
Memory: 512,
OnBoot: false,
Protection: false,
Start: false,
Storage: "local",
Swap: 512,
Template: false,
Tty: 2,
Unprivileged: false,
}
}
func NewConfigLxcFromJson(io io.Reader) (config configLxc, err error) {
config = NewConfigLxc()
err = json.NewDecoder(io).Decode(&config)
if err != nil {
log.Fatal(err)
return config, err
}
log.Println(config)
return
}
func NewConfigLxcFromApi(vmr *VmRef, client *Client) (config *configLxc, err error) {
// prepare json map to receive the information from the api
var lxcConfig map[string]interface{}
lxcConfig, err = client.GetVmConfig(vmr)
if err != nil {
log.Fatal(err)
return nil, err
}
// prepare a new lxc config to store and return
// the information from api
newConfig := NewConfigLxc()
config = &newConfig
arch := ""
if _, isSet := lxcConfig["arch"]; isSet {
arch = lxcConfig["arch"].(string)
}
cmode := ""
if _, isSet := lxcConfig["cmode"]; isSet {
cmode = lxcConfig["cmode"].(string)
}
console := true
if _, isSet := lxcConfig["console"]; isSet {
console = Itob(int(lxcConfig["console"].(float64)))
}
cores := 1
if _, isSet := lxcConfig["cores"]; isSet {
cores = int(lxcConfig["cores"].(float64))
}
cpulimit := 0
if _, isSet := lxcConfig["cpulimit"]; isSet {
cpulimit, _ = strconv.Atoi(lxcConfig["cpulimit"].(string))
}
cpuunits := 1024
if _, isSet := lxcConfig["cpuunits"]; isSet {
cpuunits = int(lxcConfig["cpuunits"].(float64))
}
description := ""
if _, isSet := lxcConfig["description"]; isSet {
description = lxcConfig["description"].(string)
}
// add features, if any
if features, isSet := lxcConfig["features"]; isSet {
featureList := strings.Split(features.(string), ",")
// create new device map to store features
featureMap := QemuDevice{}
// add all features to device map
featureMap.readDeviceConfig(featureList)
// prepare empty feature map
if config.Features == nil {
config.Features = QemuDevice{}
}
// add device config to features
if len(featureMap) > 0 {
config.Features = featureMap
}
}
hookscript := ""
if _, isSet := lxcConfig["hookscript"]; isSet {
hookscript = lxcConfig["hookscript"].(string)
}
hostname := ""
if _, isSet := lxcConfig["hostname"]; isSet {
hostname = lxcConfig["hostname"].(string)
}
lock := ""
if _, isSet := lxcConfig["lock"]; isSet {
lock = lxcConfig["lock"].(string)
}
memory := 512
if _, isSet := lxcConfig["memory"]; isSet {
memory = int(lxcConfig["memory"].(float64))
}
// add mountpoints
mpNames := []string{}
for k, _ := range lxcConfig {
if mpName:= rxMpName.FindStringSubmatch(k); len(mpName) > 0 {
mpNames = append(mpNames, mpName[0])
}
}
for _, mpName := range mpNames {
mpConfStr := lxcConfig[mpName]
mpConfList := strings.Split(mpConfStr.(string), ",")
id := rxDeviceID.FindStringSubmatch(mpName)
mpID, _ := strconv.Atoi(id[0])
// add mp id
mpConfMap := QemuDevice{
"id": mpID,
}
// add rest of device config
mpConfMap.readDeviceConfig(mpConfList)
// prepare empty mountpoint map
if config.Mountpoints == nil {
config.Mountpoints = QemuDevices{}
}
// add device config to mountpoints
if len(mpConfMap) > 0 {
config.Mountpoints[mpID] = mpConfMap
}
}
nameserver := ""
if _, isSet := lxcConfig["nameserver"]; isSet {
nameserver = lxcConfig["nameserver"].(string)
}
// add networks
nicNames := []string{}
for k, _ := range lxcConfig {
if nicName := rxNicName.FindStringSubmatch(k); len(nicName) > 0 {
nicNames = append(nicNames, nicName[0])
}
}
for _, nicName := range nicNames {
nicConfStr := lxcConfig[nicName]
nicConfList := strings.Split(nicConfStr.(string), ",")
id := rxDeviceID.FindStringSubmatch(nicName)
nicID, _ := strconv.Atoi(id[0])
// add nic id
nicConfMap := QemuDevice{
"id": nicID,
}
// add rest of device config
nicConfMap.readDeviceConfig(nicConfList)
// prepare empty network map
if config.Networks == nil {
config.Networks = QemuDevices{}
}
// add device config to networks
if len(nicConfMap) > 0 {
config.Networks[nicID] = nicConfMap
}
}
onboot := false
if _, isSet := lxcConfig["onboot"]; isSet {
onboot = Itob(int(lxcConfig["onboot"].(float64)))
}
ostype := ""
if _, isSet := lxcConfig["ostype"]; isSet {
ostype = lxcConfig["ostype"].(string)
}
protection := false
if _, isSet := lxcConfig["protection"]; isSet {
protection = Itob(int(lxcConfig["protection"].(float64)))
}
rootfs := ""
if _, isSet := lxcConfig["rootfs"]; isSet {
rootfs = lxcConfig["rootfs"].(string)
}
searchdomain := ""
if _, isSet := lxcConfig["searchdomain"]; isSet {
searchdomain = lxcConfig["searchdomain"].(string)
}
startup := ""
if _, isSet := lxcConfig["startup"]; isSet {
startup = lxcConfig["startup"].(string)
}
swap := 512
if _, isSet := lxcConfig["swap"]; isSet {
swap = int(lxcConfig["swap"].(float64))
}
template := false
if _, isSet := lxcConfig["template"]; isSet {
template = Itob(int(lxcConfig["template"].(float64)))
}
tty := 2
if _, isSet := lxcConfig["tty"]; isSet {
tty = int(lxcConfig["tty"].(float64))
}
unprivileged := false
if _, isset := lxcConfig["unprivileged"]; isset {
unprivileged = Itob(int(lxcConfig["unprivileged"].(float64)))
}
var unused []string
if _, isset := lxcConfig["unused"]; isset {
unused = lxcConfig["unused"].([]string)
}
config.Arch = arch
config.CMode = cmode
config.Console = console
config.Cores = cores
config.CPULimit = cpulimit
config.CPUUnits = cpuunits
config.Description = description
config.OnBoot = onboot
config.Hookscript = hookscript
config.Hostname = hostname
config.Lock = lock
config.Memory = memory
config.Nameserver = nameserver
config.OnBoot = onboot
config.OsType = ostype
config.Protection = protection
config.RootFs = rootfs
config.SearchDomain = searchdomain
config.Startup = startup
config.Swap = swap
config.Template = template
config.Tty = tty
config.Unprivileged = unprivileged
config.Unused = unused
return
}
// create LXC container using the Proxmox API
func (config configLxc) CreateLxc(vmr *VmRef, client *Client) (err error) {
vmr.SetVmType("lxc")
// convert config to map
params, _ := json.Marshal(&config)
var paramMap map[string]interface{}
json.Unmarshal(params, &paramMap)
// build list of features
// add features as parameter list to lxc parameters
// this overwrites the original formatting with a
// comma separated list of "key=value" pairs
featuresParam := QemuDeviceParam{}
featuresParam = featuresParam.createDeviceParam(config.Features, nil)
paramMap["features"] = strings.Join(featuresParam, ",")
// build list of mountpoints
// this does the same as for the feature list
// except that there can be multiple of these mountpoint sets
// and each mountpoint set comes with a new id
for mpID, mpConfMap := range config.Mountpoints {
mpConfParam := QemuDeviceParam{}
mpConfParam = mpConfParam.createDeviceParam(mpConfMap, nil)
// add mp to lxc parameters
mpName := fmt.Sprintf("mp%v", mpID)
paramMap[mpName] = strings.Join(mpConfParam, ",")
}
// build list of network parameters
for nicID, nicConfMap := range config.Networks {
nicConfParam := QemuDeviceParam{}
nicConfParam = nicConfParam.createDeviceParam(nicConfMap, nil)
// add nic to lxc parameters
nicName := fmt.Sprintf("net%v", nicID)
paramMap[nicName] = strings.Join(nicConfParam, ",")
}
// build list of unused volumes for the sake of completeness,
// even if it is not recommended to change these volumes manually
for volID, vol := range config.Unused {
// add volume to lxc parameters
volName := fmt.Sprintf("unused%v", volID)
paramMap[volName] = vol
}
// now that we concatenated the key value parameter
// list for the networks, mountpoints and unused volumes,
// remove the original keys, since the Proxmox API does
// not know how to handle these keys
delete(paramMap, "networks")
delete(paramMap, "mountpoints")
delete(paramMap, "unused")
// amend vmid
paramMap["vmid"] = vmr.vmId
exitStatus, err := client.CreateLxcContainer(vmr.node, paramMap)
if err != nil {
return fmt.Errorf("Error creating LXC container: %v, error status: %s (params: %v)", err, exitStatus, params)
}
return
}
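As a usage sketch (not part of the vendored change), the flow below strings together the client calls shown elsewhere in this diff: NewClient, the three-argument Login, NewVmRef/SetNode, NewConfigLxc and CreateLxc. The endpoint, credentials, template and storage names are hypothetical, and passing a nil *http.Client is assumed to be acceptable here.

package main

import (
	"crypto/tls"
	"log"

	"github.com/Telmate/proxmox-api-go/proxmox"
)

func main() {
	// Hypothetical API endpoint; TLS verification disabled only for this sketch.
	client, err := proxmox.NewClient("https://pve.example.com:8006/api2/json", nil, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatal(err)
	}
	// The third argument is the new OTP parameter; empty when 2FA is not enabled.
	if err := client.Login("root@pam", "secret", ""); err != nil {
		log.Fatal(err)
	}

	vmr := proxmox.NewVmRef(200) // desired VMID
	vmr.SetNode("pve1")          // CreateLxcContainer posts to /nodes/<node>/lxc

	config := proxmox.NewConfigLxc()
	config.Ostemplate = "local:vztmpl/debian-10-standard_10.0-1_amd64.tar.gz"
	config.Hostname = "demo-ct"
	config.Memory = 1024
	config.Storage = "local-lvm"
	config.RootFs = "local-lvm:8"

	// CreateLxc marks the ref as type "lxc", flattens features, mountpoints and
	// networks into the parameter map, and waits for the creation task.
	if err := config.CreateLxc(vmr, client); err != nil {
		log.Fatal(err)
	}
}

UpdateConfig below applies the same flattening logic when modifying an existing container.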
func (config configLxc) UpdateConfig(vmr *VmRef, client *Client) (err error) {
// convert config to map
params, _ := json.Marshal(&config)
var paramMap map[string]interface{}
json.Unmarshal(params, &paramMap)
// build list of features
// add features as parameter list to lxc parameters
// this overwrites the original formatting with a
// comma separated list of "key=value" pairs
featuresParam := QemuDeviceParam{}
featuresParam = featuresParam.createDeviceParam(config.Features, nil)
paramMap["features"] = strings.Join(featuresParam, ",")
// build list of mountpoints
// this does the same as for the feature list
// except that there can be multiple of these mountpoint sets
// and each mountpoint set comes with a new id
for mpID, mpConfMap := range config.Mountpoints {
mpConfParam := QemuDeviceParam{}
mpConfParam = mpConfParam.createDeviceParam(mpConfMap, nil)
// add mp to lxc parameters
mpName := fmt.Sprintf("mp%v", mpID)
paramMap[mpName] = strings.Join(mpConfParam, ",")
}
// build list of network parameters
for nicID, nicConfMap := range config.Networks {
nicConfParam := QemuDeviceParam{}
nicConfParam = nicConfParam.createDeviceParam(nicConfMap, nil)
// add nic to lxc parameters
nicName := fmt.Sprintf("net%v", nicID)
paramMap[nicName] = strings.Join(nicConfParam, ",")
}
// build list of unused volumes for the sake of completeness,
// even if it is not recommended to change these volumes manually
for volID, vol := range config.Unused {
// add volume to lxc parameters
volName := fmt.Sprintf("unused%v", volID)
paramMap[volName] = vol
}
// now that we concatenated the key value parameter
// list for the networks, mountpoints and unused volumes,
// remove the original keys, since the Proxmox API does
// not know how to handle these keys
delete(paramMap, "networks")
delete(paramMap, "mountpoints")
delete(paramMap, "unused")
// delete parameters which are not supported in update operations
delete(paramMap, "pool")
delete(paramMap, "storage")
delete(paramMap, "password")
delete(paramMap, "ostemplate")
delete(paramMap, "start")
// even though it is listed as a PUT option in the API documentation
// we remove it here because "it should not be modified manually";
// also, error "500 unable to modify read-only option: 'unprivileged'"
delete(paramMap, "unprivileged")
_, err = client.SetLxcConfig(vmr, paramMap)
return err
}

View File

@ -31,10 +31,17 @@ type ConfigQemu struct {
QemuOs string `json:"os"`
QemuCores int `json:"cores"`
QemuSockets int `json:"sockets"`
QemuCpu string `json:"cpu"`
QemuNuma bool `json:"numa"`
Hotplug string `json:"hotplug"`
QemuIso string `json:"iso"`
FullClone *int `json:"fullclone"`
Boot string `json:"boot"`
BootDisk string `json:"bootdisk,omitempty"`
Scsihw string `json:"scsihw,omitempty"`
QemuDisks QemuDevices `json:"disk"`
QemuNetworks QemuDevices `json:"network"`
QemuSerials QemuDevices `json:"serial,omitempty"`
// Deprecated single disk.
DiskSize float64 `json:"diskGB"`
@ -50,6 +57,7 @@ type ConfigQemu struct {
// cloud-init options
CIuser string `json:"ciuser"`
CIpassword string `json:"cipassword"`
CIcustom string `json:"cicustom"`
Searchdomain string `json:"searchdomain"`
Nameserver string `json:"nameserver"`
@ -76,10 +84,24 @@ func (config ConfigQemu) CreateVm(vmr *VmRef, client *Client) (err error) {
"ostype": config.QemuOs,
"sockets": config.QemuSockets,
"cores": config.QemuCores,
"cpu": "host",
"cpu": config.QemuCpu,
"numa": config.QemuNuma,
"hotplug": config.Hotplug,
"memory": config.Memory,
"boot": config.Boot,
"description": config.Description,
}
if vmr.pool != "" {
params["pool"] = vmr.pool
}
if config.BootDisk != "" {
params["bootdisk"] = config.BootDisk
}
if config.Scsihw != "" {
params["scsihw"] = config.Scsihw
}
// Create disks config.
config.CreateQemuDisksParams(vmr.vmId, params, false)
@ -87,6 +109,9 @@ func (config ConfigQemu) CreateVm(vmr *VmRef, client *Client) (err error) {
// Create networks config.
config.CreateQemuNetworksParams(vmr.vmId, params)
// Create serial interfaces
config.CreateQemuSerialsParams(vmr.vmId, params)
exitStatus, err := client.CreateQemuVm(vmr.node, params)
if err != nil {
return fmt.Errorf("Error creating VM: %v, error status: %s (params: %v)", err, exitStatus, params)
@ -102,7 +127,8 @@ func (config ConfigQemu) HasCloudInit() bool {
config.Nameserver != "" ||
config.Sshkeys != "" ||
config.Ipconfig0 != "" ||
config.Ipconfig1 != ""
config.Ipconfig1 != "" ||
config.CIcustom != ""
}
/*
@ -136,6 +162,10 @@ func (config ConfigQemu) CloneVm(sourceVmr *VmRef, vmr *VmRef, client *Client) (
"storage": storage,
"full": fullclone,
}
if vmr.pool != "" {
params["pool"] = vmr.pool
}
_, err = client.CloneQemuVm(sourceVmr, params)
if err != nil {
return
@ -151,7 +181,19 @@ func (config ConfigQemu) UpdateConfig(vmr *VmRef, client *Client) (err error) {
"agent": config.Agent,
"sockets": config.QemuSockets,
"cores": config.QemuCores,
"cpu": config.QemuCpu,
"numa": config.QemuNuma,
"hotplug": config.Hotplug,
"memory": config.Memory,
"boot": config.Boot,
}
if config.BootDisk != "" {
configParams["bootdisk"] = config.BootDisk
}
if config.Scsihw != "" {
configParams["scsihw"] = config.Scsihw
}
// Create disks config.
@ -160,6 +202,9 @@ func (config ConfigQemu) UpdateConfig(vmr *VmRef, client *Client) (err error) {
// Create networks config.
config.CreateQemuNetworksParams(vmr.vmId, configParams)
// Create serial interfaces
config.CreateQemuSerialsParams(vmr.vmId, configParams)
// cloud-init options
if config.CIuser != "" {
configParams["ciuser"] = config.CIuser
@ -167,6 +212,9 @@ func (config ConfigQemu) UpdateConfig(vmr *VmRef, client *Client) (err error) {
if config.CIpassword != "" {
configParams["cipassword"] = config.CIpassword
}
if config.CIcustom != "" {
configParams["cicustom"] = config.CIcustom
}
if config.Searchdomain != "" {
configParams["searchdomain"] = config.Searchdomain
}
@ -202,11 +250,13 @@ func NewConfigQemuFromJson(io io.Reader) (config *ConfigQemu, err error) {
}
var (
rxIso = regexp.MustCompile(`(.*?),media`)
rxDeviceID = regexp.MustCompile(`\d+`)
rxDiskName = regexp.MustCompile(`(virtio|scsi)\d+`)
rxDiskType = regexp.MustCompile(`\D+`)
rxNicName = regexp.MustCompile(`net\d+`)
rxIso = regexp.MustCompile(`(.*?),media`)
rxDeviceID = regexp.MustCompile(`\d+`)
rxDiskName = regexp.MustCompile(`(virtio|scsi)\d+`)
rxDiskType = regexp.MustCompile(`\D+`)
rxNicName = regexp.MustCompile(`net\d+`)
rxMpName = regexp.MustCompile(`mp\d+`)
rxSerialName = regexp.MustCompile(`serial\d+`)
)
func NewConfigQemuFromApi(vmr *VmRef, client *Client) (config *ConfigQemu, err error) {
@ -278,6 +328,32 @@ func NewConfigQemuFromApi(vmr *VmRef, client *Client) (config *ConfigQemu, err e
if _, isSet := vmConfig["sockets"]; isSet {
sockets = vmConfig["sockets"].(float64)
}
cpu := "host"
if _, isSet := vmConfig["cpu"]; isSet {
cpu = vmConfig["cpu"].(string)
}
numa := false
if _, isSet := vmConfig["numa"]; isSet {
numa = Itob(int(vmConfig["numa"].(float64)))
}
//Can be network,disk,cpu,memory,usb
hotplug := "network,disk,usb"
if _, isSet := vmConfig["hotplug"]; isSet {
hotplug = vmConfig["hotplug"].(string)
}
//boot by default from hard disk (c), CD-ROM (d), network (n).
boot := "cdn"
if _, isSet := vmConfig["boot"]; isSet {
boot = vmConfig["boot"].(string)
}
bootdisk := ""
if _, isSet := vmConfig["bootdisk"]; isSet {
bootdisk = vmConfig["bootdisk"].(string)
}
scsihw := "lsi"
if _, isSet := vmConfig["scsihw"]; isSet {
scsihw = vmConfig["scsihw"].(string)
}
config = &ConfigQemu{
Name: name,
Description: strings.TrimSpace(description),
@ -287,9 +363,16 @@ func NewConfigQemuFromApi(vmr *VmRef, client *Client) (config *ConfigQemu, err e
Memory: int(memory),
QemuCores: int(cores),
QemuSockets: int(sockets),
QemuCpu: cpu,
QemuNuma: numa,
Hotplug: hotplug,
QemuVlanTag: -1,
Boot: boot,
BootDisk: bootdisk,
Scsihw: scsihw,
QemuDisks: QemuDevices{},
QemuNetworks: QemuDevices{},
QemuSerials: QemuDevices{},
}
if vmConfig["ide2"] != nil {
@ -303,6 +386,9 @@ func NewConfigQemuFromApi(vmr *VmRef, client *Client) (config *ConfigQemu, err e
if _, isSet := vmConfig["cipassword"]; isSet {
config.CIpassword = vmConfig["cipassword"].(string)
}
if _, isSet := vmConfig["cicustom"]; isSet {
config.CIcustom = vmConfig["cicustom"].(string)
}
if _, isSet := vmConfig["searchdomain"]; isSet {
config.Searchdomain = vmConfig["searchdomain"].(string)
}
@ -388,6 +474,30 @@ func NewConfigQemuFromApi(vmr *VmRef, client *Client) (config *ConfigQemu, err e
}
}
// Add serials
serialNames := []string{}
for k, _ := range vmConfig {
if serialName := rxSerialName.FindStringSubmatch(k); len(serialName) > 0 {
serialNames = append(serialNames, serialName[0])
}
}
for _, serialName := range serialNames {
id := rxDeviceID.FindStringSubmatch(serialName)
serialID, _ := strconv.Atoi(id[0])
serialConfMap := QemuDevice{
"id": serialID,
"type": vmConfig[serialName],
}
// Add device config to serials map.
if len(serialConfMap) > 0 {
config.QemuSerials[serialID] = serialConfMap
}
}
return
}
@ -620,7 +730,9 @@ func (c ConfigQemu) CreateQemuDisksParams(
"storage_type": "lvm", // default old style
"cache": "none", // default old value
}
if c.QemuDisks == nil {
c.QemuDisks = make(QemuDevices)
}
c.QemuDisks[0] = deprecatedStyleMap
}
@ -646,9 +758,9 @@ func (c ConfigQemu) CreateQemuDisksParams(
// Disk name.
var diskFile string
// Currently ZFS local, LVM, and Directory are considered.
// Currently ZFS local, LVM, Ceph RBD, and Directory are considered.
// Other formats are not verified, but could be added if they're needed.
rxStorageTypes := `(zfspool|lvm)`
rxStorageTypes := `(zfspool|lvm|rbd)`
storageType := diskConfMap["storage_type"].(string)
if matched, _ := regexp.MatchString(rxStorageTypes, storageType); matched {
diskFile = fmt.Sprintf("file=%v:vm-%v-disk-%v", diskConfMap["storage"], vmID, diskID)
@ -716,3 +828,22 @@ func (c ConfigQemu) String() string {
jsConf, _ := json.Marshal(c)
return string(jsConf)
}
// Create parameters for serial interface
func (c ConfigQemu) CreateQemuSerialsParams(
vmID int,
params map[string]interface{},
) error {
// For new style with multi disk device.
for serialID, serialConfMap := range c.QemuSerials {
// Device name.
deviceType := serialConfMap["type"].(string)
qemuSerialName := "serial" + strconv.Itoa(serialID)
// Add back to Qemu params.
params[qemuSerialName] = deviceType
}
return nil
}

View File

@ -106,8 +106,12 @@ func TypedResponse(resp *http.Response, v interface{}) error {
return nil
}
func (s *Session) Login(username string, password string) (err error) {
reqbody := ParamsToBody(map[string]interface{}{"username": username, "password": password})
func (s *Session) Login(username string, password string, otp string) (err error) {
reqUser := map[string]interface{}{"username": username, "password": password}
if otp != "" {
reqUser["otp"] = otp
}
reqbody := ParamsToBody(reqUser)
olddebug := *Debug
*Debug = false // don't share passwords in debug log
resp, err := s.Post("/access/ticket", nil, nil, &reqbody)
@ -127,6 +131,10 @@ func (s *Session) Login(username string, password string) (err error) {
return fmt.Errorf("Invalid login response:\n-----\n%s\n-----", dr)
}
dat := jbody["data"].(map[string]interface{})
//Check if the 2FA was required
if dat["NeedTFA"] == 1.0 {
return errors.New("Missing TFA code")
}
s.AuthTicket = dat["ticket"].(string)
s.CsrfToken = dat["CSRFPreventionToken"].(string)
return nil

View File

@ -1,265 +0,0 @@
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
// This program generates fixedhuff.go
// Invoke as
//
// go run gen.go -output fixedhuff.go
package main
import (
"bytes"
"flag"
"fmt"
"go/format"
"io/ioutil"
"log"
)
var filename = flag.String("output", "fixedhuff.go", "output file name")
const maxCodeLen = 16
// Note: the definition of the huffmanDecoder struct is copied from
// inflate.go, as it is private to the implementation.
// chunk & 15 is number of bits
// chunk >> 4 is value, including table link
const (
huffmanChunkBits = 9
huffmanNumChunks = 1 << huffmanChunkBits
huffmanCountMask = 15
huffmanValueShift = 4
)
type huffmanDecoder struct {
min int // the minimum code length
chunks [huffmanNumChunks]uint32 // chunks as described above
links [][]uint32 // overflow links
linkMask uint32 // mask the width of the link table
}
// Initialize Huffman decoding tables from array of code lengths.
// Following this function, h is guaranteed to be initialized into a complete
// tree (i.e., neither over-subscribed nor under-subscribed). The exception is a
// degenerate case where the tree has only a single symbol with length 1. Empty
// trees are permitted.
func (h *huffmanDecoder) init(bits []int) bool {
// Sanity enables additional runtime tests during Huffman
// table construction. It's intended to be used during
// development to supplement the currently ad-hoc unit tests.
const sanity = false
if h.min != 0 {
*h = huffmanDecoder{}
}
// Count number of codes of each length,
// compute min and max length.
var count [maxCodeLen]int
var min, max int
for _, n := range bits {
if n == 0 {
continue
}
if min == 0 || n < min {
min = n
}
if n > max {
max = n
}
count[n]++
}
// Empty tree. The decompressor.huffSym function will fail later if the tree
// is used. Technically, an empty tree is only valid for the HDIST tree and
// not the HCLEN and HLIT tree. However, a stream with an empty HCLEN tree
// is guaranteed to fail since it will attempt to use the tree to decode the
// codes for the HLIT and HDIST trees. Similarly, an empty HLIT tree is
// guaranteed to fail later since the compressed data section must be
// composed of at least one symbol (the end-of-block marker).
if max == 0 {
return true
}
code := 0
var nextcode [maxCodeLen]int
for i := min; i <= max; i++ {
code <<= 1
nextcode[i] = code
code += count[i]
}
// Check that the coding is complete (i.e., that we've
// assigned all 2-to-the-max possible bit sequences).
// Exception: To be compatible with zlib, we also need to
// accept degenerate single-code codings. See also
// TestDegenerateHuffmanCoding.
if code != 1<<uint(max) && !(code == 1 && max == 1) {
return false
}
h.min = min
if max > huffmanChunkBits {
numLinks := 1 << (uint(max) - huffmanChunkBits)
h.linkMask = uint32(numLinks - 1)
// create link tables
link := nextcode[huffmanChunkBits+1] >> 1
h.links = make([][]uint32, huffmanNumChunks-link)
for j := uint(link); j < huffmanNumChunks; j++ {
reverse := int(reverseByte[j>>8]) | int(reverseByte[j&0xff])<<8
reverse >>= uint(16 - huffmanChunkBits)
off := j - uint(link)
if sanity && h.chunks[reverse] != 0 {
panic("impossible: overwriting existing chunk")
}
h.chunks[reverse] = uint32(off<<huffmanValueShift | (huffmanChunkBits + 1))
h.links[off] = make([]uint32, numLinks)
}
}
for i, n := range bits {
if n == 0 {
continue
}
code := nextcode[n]
nextcode[n]++
chunk := uint32(i<<huffmanValueShift | n)
reverse := int(reverseByte[code>>8]) | int(reverseByte[code&0xff])<<8
reverse >>= uint(16 - n)
if n <= huffmanChunkBits {
for off := reverse; off < len(h.chunks); off += 1 << uint(n) {
// We should never need to overwrite
// an existing chunk. Also, 0 is
// never a valid chunk, because the
// lower 4 "count" bits should be
// between 1 and 15.
if sanity && h.chunks[off] != 0 {
panic("impossible: overwriting existing chunk")
}
h.chunks[off] = chunk
}
} else {
j := reverse & (huffmanNumChunks - 1)
if sanity && h.chunks[j]&huffmanCountMask != huffmanChunkBits+1 {
// Longer codes should have been
// associated with a link table above.
panic("impossible: not an indirect chunk")
}
value := h.chunks[j] >> huffmanValueShift
linktab := h.links[value]
reverse >>= huffmanChunkBits
for off := reverse; off < len(linktab); off += 1 << uint(n-huffmanChunkBits) {
if sanity && linktab[off] != 0 {
panic("impossible: overwriting existing chunk")
}
linktab[off] = chunk
}
}
}
if sanity {
// Above we've sanity checked that we never overwrote
// an existing entry. Here we additionally check that
// we filled the tables completely.
for i, chunk := range h.chunks {
if chunk == 0 {
// As an exception, in the degenerate
// single-code case, we allow odd
// chunks to be missing.
if code == 1 && i%2 == 1 {
continue
}
panic("impossible: missing chunk")
}
}
for _, linktab := range h.links {
for _, chunk := range linktab {
if chunk == 0 {
panic("impossible: missing chunk")
}
}
}
}
return true
}
func main() {
flag.Parse()
var h huffmanDecoder
var bits [288]int
initReverseByte()
for i := 0; i < 144; i++ {
bits[i] = 8
}
for i := 144; i < 256; i++ {
bits[i] = 9
}
for i := 256; i < 280; i++ {
bits[i] = 7
}
for i := 280; i < 288; i++ {
bits[i] = 8
}
h.init(bits[:])
if h.links != nil {
log.Fatal("Unexpected links table in fixed Huffman decoder")
}
var buf bytes.Buffer
fmt.Fprintf(&buf, `// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.`+"\n\n")
fmt.Fprintln(&buf, "package flate")
fmt.Fprintln(&buf)
fmt.Fprintln(&buf, "// autogenerated by go run gen.go -output fixedhuff.go, DO NOT EDIT")
fmt.Fprintln(&buf)
fmt.Fprintln(&buf, "var fixedHuffmanDecoder = huffmanDecoder{")
fmt.Fprintf(&buf, "\t%d,\n", h.min)
fmt.Fprintln(&buf, "\t[huffmanNumChunks]uint32{")
for i := 0; i < huffmanNumChunks; i++ {
if i&7 == 0 {
fmt.Fprintf(&buf, "\t\t")
} else {
fmt.Fprintf(&buf, " ")
}
fmt.Fprintf(&buf, "0x%04x,", h.chunks[i])
if i&7 == 7 {
fmt.Fprintln(&buf)
}
}
fmt.Fprintln(&buf, "\t},")
fmt.Fprintln(&buf, "\tnil, 0,")
fmt.Fprintln(&buf, "}")
data, err := format.Source(buf.Bytes())
if err != nil {
log.Fatal(err)
}
err = ioutil.WriteFile(*filename, data, 0644)
if err != nil {
log.Fatal(err)
}
}
var reverseByte [256]byte
func initReverseByte() {
for x := 0; x < 256; x++ {
var result byte
for i := uint(0); i < 8; i++ {
result |= byte(((x >> i) & 1) << (7 - i))
}
reverseByte[x] = result
}
}

View File

@ -1,476 +0,0 @@
// +build ignore
package main
import (
"bytes"
"fmt"
"go/ast"
"go/parser"
"go/printer"
"go/token"
"io"
"io/ioutil"
"log"
"os"
"reflect"
"strings"
"unicode"
"unicode/utf8"
)
var inFiles = []string{"cpuid.go", "cpuid_test.go"}
var copyFiles = []string{"cpuid_amd64.s", "cpuid_386.s", "detect_ref.go", "detect_intel.go"}
var fileSet = token.NewFileSet()
var reWrites = []rewrite{
initRewrite("CPUInfo -> cpuInfo"),
initRewrite("Vendor -> vendor"),
initRewrite("Flags -> flags"),
initRewrite("Detect -> detect"),
initRewrite("CPU -> cpu"),
}
var excludeNames = map[string]bool{"string": true, "join": true, "trim": true,
// cpuid_test.go
"t": true, "println": true, "logf": true, "log": true, "fatalf": true, "fatal": true,
}
var excludePrefixes = []string{"test", "benchmark"}
func main() {
Package := "private"
parserMode := parser.ParseComments
exported := make(map[string]rewrite)
for _, file := range inFiles {
in, err := os.Open(file)
if err != nil {
log.Fatalf("opening input", err)
}
src, err := ioutil.ReadAll(in)
if err != nil {
log.Fatalf("reading input", err)
}
astfile, err := parser.ParseFile(fileSet, file, src, parserMode)
if err != nil {
log.Fatalf("parsing input", err)
}
for _, rw := range reWrites {
astfile = rw(astfile)
}
// Inspect the AST and print all identifiers and literals.
var startDecl token.Pos
var endDecl token.Pos
ast.Inspect(astfile, func(n ast.Node) bool {
var s string
switch x := n.(type) {
case *ast.Ident:
if x.IsExported() {
t := strings.ToLower(x.Name)
for _, pre := range excludePrefixes {
if strings.HasPrefix(t, pre) {
return true
}
}
if excludeNames[t] != true {
//if x.Pos() > startDecl && x.Pos() < endDecl {
exported[x.Name] = initRewrite(x.Name + " -> " + t)
}
}
case *ast.GenDecl:
if x.Tok == token.CONST && x.Lparen > 0 {
startDecl = x.Lparen
endDecl = x.Rparen
// fmt.Printf("Decl:%s -> %s\n", fileSet.Position(startDecl), fileSet.Position(endDecl))
}
}
if s != "" {
fmt.Printf("%s:\t%s\n", fileSet.Position(n.Pos()), s)
}
return true
})
for _, rw := range exported {
astfile = rw(astfile)
}
var buf bytes.Buffer
printer.Fprint(&buf, fileSet, astfile)
// Remove package documentation and insert information
s := buf.String()
ind := strings.Index(buf.String(), "\npackage cpuid")
s = s[ind:]
s = "// Generated, DO NOT EDIT,\n" +
"// but copy it to your own project and rename the package.\n" +
"// See more at http://github.com/klauspost/cpuid\n" +
s
outputName := Package + string(os.PathSeparator) + file
err = ioutil.WriteFile(outputName, []byte(s), 0644)
if err != nil {
log.Fatalf("writing output: %s", err)
}
log.Println("Generated", outputName)
}
for _, file := range copyFiles {
dst := ""
if strings.HasPrefix(file, "cpuid") {
dst = Package + string(os.PathSeparator) + file
} else {
dst = Package + string(os.PathSeparator) + "cpuid_" + file
}
err := copyFile(file, dst)
if err != nil {
log.Fatalf("copying file: %s", err)
}
log.Println("Copied", dst)
}
}
// CopyFile copies a file from src to dst. If src and dst files exist, and are
// the same, then return success. Copy the file contents from src to dst.
func copyFile(src, dst string) (err error) {
sfi, err := os.Stat(src)
if err != nil {
return
}
if !sfi.Mode().IsRegular() {
// cannot copy non-regular files (e.g., directories,
// symlinks, devices, etc.)
return fmt.Errorf("CopyFile: non-regular source file %s (%q)", sfi.Name(), sfi.Mode().String())
}
dfi, err := os.Stat(dst)
if err != nil {
if !os.IsNotExist(err) {
return
}
} else {
if !(dfi.Mode().IsRegular()) {
return fmt.Errorf("CopyFile: non-regular destination file %s (%q)", dfi.Name(), dfi.Mode().String())
}
if os.SameFile(sfi, dfi) {
return
}
}
err = copyFileContents(src, dst)
return
}
// copyFileContents copies the contents of the file named src to the file named
// by dst. The file will be created if it does not already exist. If the
destination file exists, all its contents will be replaced by the contents
// of the source file.
func copyFileContents(src, dst string) (err error) {
in, err := os.Open(src)
if err != nil {
return
}
defer in.Close()
out, err := os.Create(dst)
if err != nil {
return
}
defer func() {
cerr := out.Close()
if err == nil {
err = cerr
}
}()
if _, err = io.Copy(out, in); err != nil {
return
}
err = out.Sync()
return
}
type rewrite func(*ast.File) *ast.File
// Mostly copied from gofmt
func initRewrite(rewriteRule string) rewrite {
f := strings.Split(rewriteRule, "->")
if len(f) != 2 {
fmt.Fprintf(os.Stderr, "rewrite rule must be of the form 'pattern -> replacement'\n")
os.Exit(2)
}
pattern := parseExpr(f[0], "pattern")
replace := parseExpr(f[1], "replacement")
return func(p *ast.File) *ast.File { return rewriteFile(pattern, replace, p) }
}
// parseExpr parses s as an expression.
// It might make sense to expand this to allow statement patterns,
// but there are problems with preserving formatting and also
// with what a wildcard for a statement looks like.
func parseExpr(s, what string) ast.Expr {
x, err := parser.ParseExpr(s)
if err != nil {
fmt.Fprintf(os.Stderr, "parsing %s %s at %s\n", what, s, err)
os.Exit(2)
}
return x
}
// Keep this function for debugging.
/*
func dump(msg string, val reflect.Value) {
fmt.Printf("%s:\n", msg)
ast.Print(fileSet, val.Interface())
fmt.Println()
}
*/
// rewriteFile applies the rewrite rule 'pattern -> replace' to an entire file.
func rewriteFile(pattern, replace ast.Expr, p *ast.File) *ast.File {
cmap := ast.NewCommentMap(fileSet, p, p.Comments)
m := make(map[string]reflect.Value)
pat := reflect.ValueOf(pattern)
repl := reflect.ValueOf(replace)
var rewriteVal func(val reflect.Value) reflect.Value
rewriteVal = func(val reflect.Value) reflect.Value {
// don't bother if val is invalid to start with
if !val.IsValid() {
return reflect.Value{}
}
for k := range m {
delete(m, k)
}
val = apply(rewriteVal, val)
if match(m, pat, val) {
val = subst(m, repl, reflect.ValueOf(val.Interface().(ast.Node).Pos()))
}
return val
}
r := apply(rewriteVal, reflect.ValueOf(p)).Interface().(*ast.File)
r.Comments = cmap.Filter(r).Comments() // recreate comments list
return r
}
// set is a wrapper for x.Set(y); it protects the caller from panics if x cannot be changed to y.
func set(x, y reflect.Value) {
// don't bother if x cannot be set or y is invalid
if !x.CanSet() || !y.IsValid() {
return
}
defer func() {
if x := recover(); x != nil {
if s, ok := x.(string); ok &&
(strings.Contains(s, "type mismatch") || strings.Contains(s, "not assignable")) {
// x cannot be set to y - ignore this rewrite
return
}
panic(x)
}
}()
x.Set(y)
}
// Values/types for special cases.
var (
objectPtrNil = reflect.ValueOf((*ast.Object)(nil))
scopePtrNil = reflect.ValueOf((*ast.Scope)(nil))
identType = reflect.TypeOf((*ast.Ident)(nil))
objectPtrType = reflect.TypeOf((*ast.Object)(nil))
positionType = reflect.TypeOf(token.NoPos)
callExprType = reflect.TypeOf((*ast.CallExpr)(nil))
scopePtrType = reflect.TypeOf((*ast.Scope)(nil))
)
// apply replaces each AST field x in val with f(x), returning val.
// To avoid extra conversions, f operates on the reflect.Value form.
func apply(f func(reflect.Value) reflect.Value, val reflect.Value) reflect.Value {
if !val.IsValid() {
return reflect.Value{}
}
// *ast.Objects introduce cycles and are likely incorrect after
// rewrite; don't follow them but replace with nil instead
if val.Type() == objectPtrType {
return objectPtrNil
}
// similarly for scopes: they are likely incorrect after a rewrite;
// replace them with nil
if val.Type() == scopePtrType {
return scopePtrNil
}
switch v := reflect.Indirect(val); v.Kind() {
case reflect.Slice:
for i := 0; i < v.Len(); i++ {
e := v.Index(i)
set(e, f(e))
}
case reflect.Struct:
for i := 0; i < v.NumField(); i++ {
e := v.Field(i)
set(e, f(e))
}
case reflect.Interface:
e := v.Elem()
set(v, f(e))
}
return val
}
func isWildcard(s string) bool {
rune, size := utf8.DecodeRuneInString(s)
return size == len(s) && unicode.IsLower(rune)
}
// match returns true if pattern matches val,
// recording wildcard submatches in m.
// If m == nil, match checks whether pattern == val.
func match(m map[string]reflect.Value, pattern, val reflect.Value) bool {
// Wildcard matches any expression. If it appears multiple
// times in the pattern, it must match the same expression
// each time.
if m != nil && pattern.IsValid() && pattern.Type() == identType {
name := pattern.Interface().(*ast.Ident).Name
if isWildcard(name) && val.IsValid() {
// wildcards only match valid (non-nil) expressions.
if _, ok := val.Interface().(ast.Expr); ok && !val.IsNil() {
if old, ok := m[name]; ok {
return match(nil, old, val)
}
m[name] = val
return true
}
}
}
// Otherwise, pattern and val must match recursively.
if !pattern.IsValid() || !val.IsValid() {
return !pattern.IsValid() && !val.IsValid()
}
if pattern.Type() != val.Type() {
return false
}
// Special cases.
switch pattern.Type() {
case identType:
// For identifiers, only the names need to match
// (and none of the other *ast.Object information).
// This is a common case, handle it all here instead
// of recursing down any further via reflection.
p := pattern.Interface().(*ast.Ident)
v := val.Interface().(*ast.Ident)
return p == nil && v == nil || p != nil && v != nil && p.Name == v.Name
case objectPtrType, positionType:
// object pointers and token positions always match
return true
case callExprType:
// For calls, the Ellipsis fields (token.Position) must
// match since that is how f(x) and f(x...) are different.
// Check them here but fall through for the remaining fields.
p := pattern.Interface().(*ast.CallExpr)
v := val.Interface().(*ast.CallExpr)
if p.Ellipsis.IsValid() != v.Ellipsis.IsValid() {
return false
}
}
p := reflect.Indirect(pattern)
v := reflect.Indirect(val)
if !p.IsValid() || !v.IsValid() {
return !p.IsValid() && !v.IsValid()
}
switch p.Kind() {
case reflect.Slice:
if p.Len() != v.Len() {
return false
}
for i := 0; i < p.Len(); i++ {
if !match(m, p.Index(i), v.Index(i)) {
return false
}
}
return true
case reflect.Struct:
for i := 0; i < p.NumField(); i++ {
if !match(m, p.Field(i), v.Field(i)) {
return false
}
}
return true
case reflect.Interface:
return match(m, p.Elem(), v.Elem())
}
// Handle token integers, etc.
return p.Interface() == v.Interface()
}
// subst returns a copy of pattern with values from m substituted in place
// of wildcards and pos used as the position of tokens from the pattern.
// if m == nil, subst returns a copy of pattern and doesn't change the line
// number information.
func subst(m map[string]reflect.Value, pattern reflect.Value, pos reflect.Value) reflect.Value {
if !pattern.IsValid() {
return reflect.Value{}
}
// Wildcard gets replaced with map value.
if m != nil && pattern.Type() == identType {
name := pattern.Interface().(*ast.Ident).Name
if isWildcard(name) {
if old, ok := m[name]; ok {
return subst(nil, old, reflect.Value{})
}
}
}
if pos.IsValid() && pattern.Type() == positionType {
// use new position only if old position was valid in the first place
if old := pattern.Interface().(token.Pos); !old.IsValid() {
return pattern
}
return pos
}
// Otherwise copy.
switch p := pattern; p.Kind() {
case reflect.Slice:
v := reflect.MakeSlice(p.Type(), p.Len(), p.Len())
for i := 0; i < p.Len(); i++ {
v.Index(i).Set(subst(m, p.Index(i), pos))
}
return v
case reflect.Struct:
v := reflect.New(p.Type()).Elem()
for i := 0; i < p.NumField(); i++ {
v.Field(i).Set(subst(m, p.Field(i), pos))
}
return v
case reflect.Ptr:
v := reflect.New(p.Type()).Elem()
if elem := p.Elem(); elem.IsValid() {
v.Set(subst(m, elem, pos).Addr())
}
return v
case reflect.Interface:
v := reflect.New(p.Type()).Elem()
if elem := p.Elem(); elem.IsValid() {
v.Set(subst(m, elem, pos))
}
return v
}
return pattern
}

View File

@ -1,167 +0,0 @@
// +build ignore
package linodego
/*
- replace "Template" with "NameOfResource"
- replace "template" with "nameOfResource"
- copy template_test.go and do the same
- When updating Template structs,
- use pointers wherever nullable would have a different meaning if the wrapper
supplied "" or 0 instead
- Add "NameOfResource" to client.go, resources.go, pagination.go
*/
import (
"context"
"encoding/json"
"fmt"
)
// Template represents a Template object
type Template struct {
ID int `json:"id"`
// UpdatedStr string `json:"updated"`
// Updated *time.Time `json:"-"`
}
// TemplateCreateOptions fields are those accepted by CreateTemplate
type TemplateCreateOptions struct {
}
// TemplateUpdateOptions fields are those accepted by UpdateTemplate
type TemplateUpdateOptions struct {
}
// GetCreateOptions converts a Template to TemplateCreateOptions for use in CreateTemplate
func (i Template) GetCreateOptions() (o TemplateCreateOptions) {
// o.Label = i.Label
// o.Description = copyString(i.Description)
return
}
// GetUpdateOptions converts a Template to TemplateUpdateOptions for use in UpdateTemplate
func (i Template) GetUpdateOptions() (o TemplateUpdateOptions) {
// o.Label = i.Label
// o.Description = copyString(i.Description)
return
}
// TemplatesPagedResponse represents a paginated Template API response
type TemplatesPagedResponse struct {
*PageOptions
Data []Template `json:"data"`
}
// endpoint gets the endpoint URL for Template
func (TemplatesPagedResponse) endpoint(c *Client) string {
endpoint, err := c.Templates.Endpoint()
if err != nil {
panic(err)
}
return endpoint
}
// appendData appends Templates when processing paginated Template responses
func (resp *TemplatesPagedResponse) appendData(r *TemplatesPagedResponse) {
resp.Data = append(resp.Data, r.Data...)
}
// ListTemplates lists Templates
func (c *Client) ListTemplates(ctx context.Context, opts *ListOptions) ([]Template, error) {
response := TemplatesPagedResponse{}
err := c.listHelper(ctx, &response, opts)
for i := range response.Data {
response.Data[i].fixDates()
}
if err != nil {
return nil, err
}
return response.Data, nil
}
// fixDates converts JSON timestamps to Go time.Time values
func (i *Template) fixDates() *Template {
// i.Created, _ = parseDates(i.CreatedStr)
// i.Updated, _ = parseDates(i.UpdatedStr)
return i
}
// GetTemplate gets the template with the provided ID
func (c *Client) GetTemplate(ctx context.Context, id int) (*Template, error) {
e, err := c.Templates.Endpoint()
if err != nil {
return nil, err
}
e = fmt.Sprintf("%s/%d", e, id)
r, err := coupleAPIErrors(c.R(ctx).SetResult(&Template{}).Get(e))
if err != nil {
return nil, err
}
return r.Result().(*Template).fixDates(), nil
}
// CreateTemplate creates a Template
func (c *Client) CreateTemplate(ctx context.Context, createOpts TemplateCreateOptions) (*Template, error) {
var body string
e, err := c.Templates.Endpoint()
if err != nil {
return nil, err
}
req := c.R(ctx).SetResult(&Template{})
if bodyData, err := json.Marshal(createOpts); err == nil {
body = string(bodyData)
} else {
return nil, NewError(err)
}
r, err := coupleAPIErrors(req.
SetBody(body).
Post(e))
if err != nil {
return nil, err
}
return r.Result().(*Template).fixDates(), nil
}
// UpdateTemplate updates the Template with the specified id
func (c *Client) UpdateTemplate(ctx context.Context, id int, updateOpts TemplateUpdateOptions) (*Template, error) {
var body string
e, err := c.Templates.Endpoint()
if err != nil {
return nil, err
}
e = fmt.Sprintf("%s/%d", e, id)
req := c.R(ctx).SetResult(&Template{})
if bodyData, err := json.Marshal(updateOpts); err == nil {
body = string(bodyData)
} else {
return nil, NewError(err)
}
r, err := coupleAPIErrors(req.
SetBody(body).
Put(e))
if err != nil {
return nil, err
}
return r.Result().(*Template).fixDates(), nil
}
// DeleteTemplate deletes the Template with the specified id
func (c *Client) DeleteTemplate(ctx context.Context, id int) error {
e, err := c.Templates.Endpoint()
if err != nil {
return err
}
e = fmt.Sprintf("%s/%d", e, id)
_, err = coupleAPIErrors(c.R(ctx).Delete(e))
return err
}

View File

@ -1,141 +0,0 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
package main
import (
"bytes"
"expvar"
"flag"
"fmt"
"io"
"log"
"net/http"
"os"
"os/exec"
"strconv"
"sync"
)
// hello world, the web server
var helloRequests = expvar.NewInt("hello-requests")
func HelloServer(w http.ResponseWriter, req *http.Request) {
helloRequests.Add(1)
io.WriteString(w, "hello, world!\n")
}
// Simple counter server. POSTing to it will set the value.
type Counter struct {
mu sync.Mutex // protects n
n int
}
// This makes Counter satisfy the expvar.Var interface, so we can export
// it directly.
func (ctr *Counter) String() string {
ctr.mu.Lock()
defer ctr.mu.Unlock()
return fmt.Sprintf("%d", ctr.n)
}
func (ctr *Counter) ServeHTTP(w http.ResponseWriter, req *http.Request) {
ctr.mu.Lock()
defer ctr.mu.Unlock()
switch req.Method {
case "GET":
ctr.n++
case "POST":
buf := new(bytes.Buffer)
io.Copy(buf, req.Body)
body := buf.String()
if n, err := strconv.Atoi(body); err != nil {
fmt.Fprintf(w, "bad POST: %v\nbody: [%v]\n", err, body)
} else {
ctr.n = n
fmt.Fprint(w, "counter reset\n")
}
}
fmt.Fprintf(w, "counter = %d\n", ctr.n)
}
// simple flag server
var booleanflag = flag.Bool("boolean", true, "another flag for testing")
func FlagServer(w http.ResponseWriter, req *http.Request) {
w.Header().Set("Content-Type", "text/plain; charset=utf-8")
fmt.Fprint(w, "Flags:\n")
flag.VisitAll(func(f *flag.Flag) {
if f.Value.String() != f.DefValue {
fmt.Fprintf(w, "%s = %s [default = %s]\n", f.Name, f.Value.String(), f.DefValue)
} else {
fmt.Fprintf(w, "%s = %s\n", f.Name, f.Value.String())
}
})
}
// simple argument server
func ArgServer(w http.ResponseWriter, req *http.Request) {
for _, s := range os.Args {
fmt.Fprint(w, s, " ")
}
}
// a channel (just for the fun of it)
type Chan chan int
func ChanCreate() Chan {
c := make(Chan)
go func(c Chan) {
for x := 0; ; x++ {
c <- x
}
}(c)
return c
}
func (ch Chan) ServeHTTP(w http.ResponseWriter, req *http.Request) {
io.WriteString(w, fmt.Sprintf("channel send #%d\n", <-ch))
}
// exec a program, redirecting output
func DateServer(rw http.ResponseWriter, req *http.Request) {
rw.Header().Set("Content-Type", "text/plain; charset=utf-8")
date, err := exec.Command("/bin/date").Output()
if err != nil {
http.Error(rw, err.Error(), 500)
return
}
rw.Write(date)
}
func Logger(w http.ResponseWriter, req *http.Request) {
log.Print(req.URL)
http.Error(w, "oops", 404)
}
var webroot = flag.String("root", os.Getenv("HOME"), "web root directory")
func main() {
flag.Parse()
// The counter is published as a variable directly.
ctr := new(Counter)
expvar.Publish("counter", ctr)
http.Handle("/counter", ctr)
http.Handle("/", http.HandlerFunc(Logger))
http.Handle("/go/", http.StripPrefix("/go/", http.FileServer(http.Dir(*webroot))))
http.Handle("/chan", ChanCreate())
http.HandleFunc("/flags", FlagServer)
http.HandleFunc("/args", ArgServer)
http.HandleFunc("/go/hello", HelloServer)
http.HandleFunc("/date", DateServer)
err := http.ListenAndServe(":12345", nil)
if err != nil {
log.Panicln("ListenAndServe:", err)
}
}

View File

@ -1,118 +0,0 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
// Generate a self-signed X.509 certificate for a TLS server. Outputs to
// 'cert.pem' and 'key.pem' and will overwrite existing files.
package main
import (
"crypto/rand"
"crypto/rsa"
"crypto/x509"
"crypto/x509/pkix"
"encoding/pem"
"flag"
"fmt"
"log"
"math/big"
"net"
"os"
"strings"
"time"
)
var (
host = flag.String("host", "", "Comma-separated hostnames and IPs to generate a certificate for")
validFrom = flag.String("start-date", "", "Creation date formatted as Jan 1 15:04:05 2011")
validFor = flag.Duration("duration", 365*24*time.Hour, "Duration that certificate is valid for")
isCA = flag.Bool("ca", false, "whether this cert should be its own Certificate Authority")
rsaBits = flag.Int("rsa-bits", 2048, "Size of RSA key to generate")
)
func main() {
flag.Parse()
if len(*host) == 0 {
log.Fatalf("Missing required --host parameter")
}
priv, err := rsa.GenerateKey(rand.Reader, *rsaBits)
if err != nil {
log.Fatalf("failed to generate private key: %s", err)
return
}
var notBefore time.Time
if len(*validFrom) == 0 {
notBefore = time.Now()
} else {
notBefore, err = time.Parse("Jan 2 15:04:05 2006", *validFrom)
if err != nil {
fmt.Fprintf(os.Stderr, "Failed to parse creation date: %s\n", err)
os.Exit(1)
}
}
notAfter := notBefore.Add(*validFor)
// end of ASN.1 time
endOfTime := time.Date(2049, 12, 31, 23, 59, 59, 0, time.UTC)
if notAfter.After(endOfTime) {
notAfter = endOfTime
}
template := x509.Certificate{
SerialNumber: new(big.Int).SetInt64(0),
Subject: pkix.Name{
Organization: []string{"Acme Co"},
},
NotBefore: notBefore,
NotAfter: notAfter,
KeyUsage: x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
BasicConstraintsValid: true,
}
hosts := strings.Split(*host, ",")
for _, h := range hosts {
if ip := net.ParseIP(h); ip != nil {
template.IPAddresses = append(template.IPAddresses, ip)
} else {
template.DNSNames = append(template.DNSNames, h)
}
}
if *isCA {
template.IsCA = true
template.KeyUsage |= x509.KeyUsageCertSign
}
derBytes, err := x509.CreateCertificate(rand.Reader, &template, &template, &priv.PublicKey, priv)
if err != nil {
log.Fatalf("Failed to create certificate: %s", err)
return
}
certOut, err := os.Create("cert.pem")
if err != nil {
log.Fatalf("failed to open cert.pem for writing: %s", err)
return
}
pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: derBytes})
certOut.Close()
log.Print("written cert.pem\n")
keyOut, err := os.OpenFile("key.pem", os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
if err != nil {
log.Print("failed to open key.pem for writing:", err)
return
}
pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(priv)})
keyOut.Close()
log.Print("written key.pem\n")
}

View File

@ -1,40 +0,0 @@
// Copyright 2014-2017 Ulrich Kunitz. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
package main
import (
"bytes"
"io"
"log"
"os"
"github.com/ulikunitz/xz"
)
func main() {
const text = "The quick brown fox jumps over the lazy dog.\n"
var buf bytes.Buffer
// compress text
w, err := xz.NewWriter(&buf)
if err != nil {
log.Fatalf("xz.NewWriter error %s", err)
}
if _, err := io.WriteString(w, text); err != nil {
log.Fatalf("WriteString error %s", err)
}
if err := w.Close(); err != nil {
log.Fatalf("w.Close error %s", err)
}
// decompress buffer and write output to stdout
r, err := xz.NewReader(&buf)
if err != nil {
log.Fatalf("NewReader error %s", err)
}
if _, err = io.Copy(os.Stdout, r); err != nil {
log.Fatalf("io.Copy error %s", err)
}
}

View File

@ -1,712 +0,0 @@
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
//go:generate go run gen.go
//go:generate go run gen.go -test
package main
import (
"bytes"
"flag"
"fmt"
"go/format"
"io/ioutil"
"math/rand"
"os"
"sort"
"strings"
)
// identifier converts s to a Go exported identifier.
// It converts "div" to "Div" and "accept-charset" to "AcceptCharset".
func identifier(s string) string {
b := make([]byte, 0, len(s))
cap := true
for _, c := range s {
if c == '-' {
cap = true
continue
}
if cap && 'a' <= c && c <= 'z' {
c -= 'a' - 'A'
}
cap = false
b = append(b, byte(c))
}
return string(b)
}
var test = flag.Bool("test", false, "generate table_test.go")
func genFile(name string, buf *bytes.Buffer) {
b, err := format.Source(buf.Bytes())
if err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
if err := ioutil.WriteFile(name, b, 0644); err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
}
func main() {
flag.Parse()
var all []string
all = append(all, elements...)
all = append(all, attributes...)
all = append(all, eventHandlers...)
all = append(all, extra...)
sort.Strings(all)
// uniq - lists have dups
w := 0
for _, s := range all {
if w == 0 || all[w-1] != s {
all[w] = s
w++
}
}
all = all[:w]
if *test {
var buf bytes.Buffer
fmt.Fprintln(&buf, "// Code generated by go generate gen.go; DO NOT EDIT.\n")
fmt.Fprintln(&buf, "//go:generate go run gen.go -test\n")
fmt.Fprintln(&buf, "package atom\n")
fmt.Fprintln(&buf, "var testAtomList = []string{")
for _, s := range all {
fmt.Fprintf(&buf, "\t%q,\n", s)
}
fmt.Fprintln(&buf, "}")
genFile("table_test.go", &buf)
return
}
// Find hash that minimizes table size.
var best *table
for i := 0; i < 1000000; i++ {
if best != nil && 1<<(best.k-1) < len(all) {
break
}
h := rand.Uint32()
for k := uint(0); k <= 16; k++ {
if best != nil && k >= best.k {
break
}
var t table
if t.init(h, k, all) {
best = &t
break
}
}
}
if best == nil {
fmt.Fprintf(os.Stderr, "failed to construct string table\n")
os.Exit(1)
}
// Lay out strings, using overlaps when possible.
layout := append([]string{}, all...)
// Remove strings that are substrings of other strings
for changed := true; changed; {
changed = false
for i, s := range layout {
if s == "" {
continue
}
for j, t := range layout {
if i != j && t != "" && strings.Contains(s, t) {
changed = true
layout[j] = ""
}
}
}
}
// Join strings where one suffix matches another prefix.
for {
// Find best i, j, k such that layout[i][len-k:] == layout[j][:k],
// maximizing overlap length k.
besti := -1
bestj := -1
bestk := 0
for i, s := range layout {
if s == "" {
continue
}
for j, t := range layout {
if i == j {
continue
}
for k := bestk + 1; k <= len(s) && k <= len(t); k++ {
if s[len(s)-k:] == t[:k] {
besti = i
bestj = j
bestk = k
}
}
}
}
if bestk > 0 {
layout[besti] += layout[bestj][bestk:]
layout[bestj] = ""
continue
}
break
}
text := strings.Join(layout, "")
atom := map[string]uint32{}
for _, s := range all {
off := strings.Index(text, s)
if off < 0 {
panic("lost string " + s)
}
atom[s] = uint32(off<<8 | len(s))
}
var buf bytes.Buffer
// Generate the Go code.
fmt.Fprintln(&buf, "// Code generated by go generate gen.go; DO NOT EDIT.\n")
fmt.Fprintln(&buf, "//go:generate go run gen.go\n")
fmt.Fprintln(&buf, "package atom\n\nconst (")
// compute max len
maxLen := 0
for _, s := range all {
if maxLen < len(s) {
maxLen = len(s)
}
fmt.Fprintf(&buf, "\t%s Atom = %#x\n", identifier(s), atom[s])
}
fmt.Fprintln(&buf, ")\n")
fmt.Fprintf(&buf, "const hash0 = %#x\n\n", best.h0)
fmt.Fprintf(&buf, "const maxAtomLen = %d\n\n", maxLen)
fmt.Fprintf(&buf, "var table = [1<<%d]Atom{\n", best.k)
for i, s := range best.tab {
if s == "" {
continue
}
fmt.Fprintf(&buf, "\t%#x: %#x, // %s\n", i, atom[s], s)
}
fmt.Fprintf(&buf, "}\n")
datasize := (1 << best.k) * 4
fmt.Fprintln(&buf, "const atomText =")
textsize := len(text)
for len(text) > 60 {
fmt.Fprintf(&buf, "\t%q +\n", text[:60])
text = text[60:]
}
fmt.Fprintf(&buf, "\t%q\n\n", text)
genFile("table.go", &buf)
fmt.Fprintf(os.Stdout, "%d atoms; %d string bytes + %d tables = %d total data\n", len(all), textsize, datasize, textsize+datasize)
}
type byLen []string
func (x byLen) Less(i, j int) bool { return len(x[i]) > len(x[j]) }
func (x byLen) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
func (x byLen) Len() int { return len(x) }
// fnv computes the FNV hash with an arbitrary starting value h.
func fnv(h uint32, s string) uint32 {
for i := 0; i < len(s); i++ {
h ^= uint32(s[i])
h *= 16777619
}
return h
}
// A table represents an attempt at constructing the lookup table.
// The lookup table uses cuckoo hashing, meaning that each string
// can be found in one of two positions.
type table struct {
h0 uint32
k uint
mask uint32
tab []string
}
// hash returns the two hashes for s.
func (t *table) hash(s string) (h1, h2 uint32) {
h := fnv(t.h0, s)
h1 = h & t.mask
h2 = (h >> 16) & t.mask
return
}
// init initializes the table with the given parameters.
// h0 is the initial hash value,
// k is the number of bits of hash value to use, and
// x is the list of strings to store in the table.
// init returns false if the table cannot be constructed.
func (t *table) init(h0 uint32, k uint, x []string) bool {
t.h0 = h0
t.k = k
t.tab = make([]string, 1<<k)
t.mask = 1<<k - 1
for _, s := range x {
if !t.insert(s) {
return false
}
}
return true
}
// insert inserts s in the table.
func (t *table) insert(s string) bool {
h1, h2 := t.hash(s)
if t.tab[h1] == "" {
t.tab[h1] = s
return true
}
if t.tab[h2] == "" {
t.tab[h2] = s
return true
}
if t.push(h1, 0) {
t.tab[h1] = s
return true
}
if t.push(h2, 0) {
t.tab[h2] = s
return true
}
return false
}
// push attempts to push aside the entry in slot i.
func (t *table) push(i uint32, depth int) bool {
if depth > len(t.tab) {
return false
}
s := t.tab[i]
h1, h2 := t.hash(s)
j := h1 + h2 - i
if t.tab[j] != "" && !t.push(j, depth+1) {
return false
}
t.tab[j] = s
return true
}
// The lists of element names and attribute keys were taken from
// https://html.spec.whatwg.org/multipage/indices.html#index
// as of the "HTML Living Standard - Last Updated 16 April 2018" version.
// "command", "keygen" and "menuitem" have been removed from the spec,
// but are kept here for backwards compatibility.
var elements = []string{
"a",
"abbr",
"address",
"area",
"article",
"aside",
"audio",
"b",
"base",
"bdi",
"bdo",
"blockquote",
"body",
"br",
"button",
"canvas",
"caption",
"cite",
"code",
"col",
"colgroup",
"command",
"data",
"datalist",
"dd",
"del",
"details",
"dfn",
"dialog",
"div",
"dl",
"dt",
"em",
"embed",
"fieldset",
"figcaption",
"figure",
"footer",
"form",
"h1",
"h2",
"h3",
"h4",
"h5",
"h6",
"head",
"header",
"hgroup",
"hr",
"html",
"i",
"iframe",
"img",
"input",
"ins",
"kbd",
"keygen",
"label",
"legend",
"li",
"link",
"main",
"map",
"mark",
"menu",
"menuitem",
"meta",
"meter",
"nav",
"noscript",
"object",
"ol",
"optgroup",
"option",
"output",
"p",
"param",
"picture",
"pre",
"progress",
"q",
"rp",
"rt",
"ruby",
"s",
"samp",
"script",
"section",
"select",
"slot",
"small",
"source",
"span",
"strong",
"style",
"sub",
"summary",
"sup",
"table",
"tbody",
"td",
"template",
"textarea",
"tfoot",
"th",
"thead",
"time",
"title",
"tr",
"track",
"u",
"ul",
"var",
"video",
"wbr",
}
// https://html.spec.whatwg.org/multipage/indices.html#attributes-3
//
// "challenge", "command", "contextmenu", "dropzone", "icon", "keytype", "mediagroup",
// "radiogroup", "spellcheck", "scoped", "seamless", "sortable" and "sorted" have been removed from the spec,
// but are kept here for backwards compatibility.
var attributes = []string{
"abbr",
"accept",
"accept-charset",
"accesskey",
"action",
"allowfullscreen",
"allowpaymentrequest",
"allowusermedia",
"alt",
"as",
"async",
"autocomplete",
"autofocus",
"autoplay",
"challenge",
"charset",
"checked",
"cite",
"class",
"color",
"cols",
"colspan",
"command",
"content",
"contenteditable",
"contextmenu",
"controls",
"coords",
"crossorigin",
"data",
"datetime",
"default",
"defer",
"dir",
"dirname",
"disabled",
"download",
"draggable",
"dropzone",
"enctype",
"for",
"form",
"formaction",
"formenctype",
"formmethod",
"formnovalidate",
"formtarget",
"headers",
"height",
"hidden",
"high",
"href",
"hreflang",
"http-equiv",
"icon",
"id",
"inputmode",
"integrity",
"is",
"ismap",
"itemid",
"itemprop",
"itemref",
"itemscope",
"itemtype",
"keytype",
"kind",
"label",
"lang",
"list",
"loop",
"low",
"manifest",
"max",
"maxlength",
"media",
"mediagroup",
"method",
"min",
"minlength",
"multiple",
"muted",
"name",
"nomodule",
"nonce",
"novalidate",
"open",
"optimum",
"pattern",
"ping",
"placeholder",
"playsinline",
"poster",
"preload",
"radiogroup",
"readonly",
"referrerpolicy",
"rel",
"required",
"reversed",
"rows",
"rowspan",
"sandbox",
"spellcheck",
"scope",
"scoped",
"seamless",
"selected",
"shape",
"size",
"sizes",
"sortable",
"sorted",
"slot",
"span",
"spellcheck",
"src",
"srcdoc",
"srclang",
"srcset",
"start",
"step",
"style",
"tabindex",
"target",
"title",
"translate",
"type",
"typemustmatch",
"updateviacache",
"usemap",
"value",
"width",
"workertype",
"wrap",
}
// "onautocomplete", "onautocompleteerror", "onmousewheel",
// "onshow" and "onsort" have been removed from the spec,
// but are kept here for backwards compatibility.
var eventHandlers = []string{
"onabort",
"onautocomplete",
"onautocompleteerror",
"onauxclick",
"onafterprint",
"onbeforeprint",
"onbeforeunload",
"onblur",
"oncancel",
"oncanplay",
"oncanplaythrough",
"onchange",
"onclick",
"onclose",
"oncontextmenu",
"oncopy",
"oncuechange",
"oncut",
"ondblclick",
"ondrag",
"ondragend",
"ondragenter",
"ondragexit",
"ondragleave",
"ondragover",
"ondragstart",
"ondrop",
"ondurationchange",
"onemptied",
"onended",
"onerror",
"onfocus",
"onhashchange",
"oninput",
"oninvalid",
"onkeydown",
"onkeypress",
"onkeyup",
"onlanguagechange",
"onload",
"onloadeddata",
"onloadedmetadata",
"onloadend",
"onloadstart",
"onmessage",
"onmessageerror",
"onmousedown",
"onmouseenter",
"onmouseleave",
"onmousemove",
"onmouseout",
"onmouseover",
"onmouseup",
"onmousewheel",
"onwheel",
"onoffline",
"ononline",
"onpagehide",
"onpageshow",
"onpaste",
"onpause",
"onplay",
"onplaying",
"onpopstate",
"onprogress",
"onratechange",
"onreset",
"onresize",
"onrejectionhandled",
"onscroll",
"onsecuritypolicyviolation",
"onseeked",
"onseeking",
"onselect",
"onshow",
"onsort",
"onstalled",
"onstorage",
"onsubmit",
"onsuspend",
"ontimeupdate",
"ontoggle",
"onunhandledrejection",
"onunload",
"onvolumechange",
"onwaiting",
}
// extra are ad-hoc values not covered by any of the lists above.
var extra = []string{
"acronym",
"align",
"annotation",
"annotation-xml",
"applet",
"basefont",
"bgsound",
"big",
"blink",
"center",
"color",
"desc",
"face",
"font",
"foreignObject", // HTML is case-insensitive, but SVG-embedded-in-HTML is case-sensitive.
"foreignobject",
"frame",
"frameset",
"image",
"isindex",
"listing",
"malignmark",
"marquee",
"math",
"mglyph",
"mi",
"mn",
"mo",
"ms",
"mtext",
"nobr",
"noembed",
"noframes",
"plaintext",
"prompt",
"public",
"rb",
"rtc",
"spacer",
"strike",
"svg",
"system",
"tt",
"xmp",
}

View File

@ -1,717 +0,0 @@
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
package main
// This program generates table.go and table_test.go based on the authoritative
// public suffix list at https://publicsuffix.org/list/effective_tld_names.dat
//
// The version is derived from
// https://api.github.com/repos/publicsuffix/list/commits?path=public_suffix_list.dat
// and a human-readable form is at
// https://github.com/publicsuffix/list/commits/master/public_suffix_list.dat
//
// To fetch a particular git revision, such as 5c70ccd250, pass
// -url "https://raw.githubusercontent.com/publicsuffix/list/5c70ccd250/public_suffix_list.dat"
// and -version "an explicit version string".
import (
"bufio"
"bytes"
"flag"
"fmt"
"go/format"
"io"
"io/ioutil"
"net/http"
"os"
"regexp"
"sort"
"strings"
"golang.org/x/net/idna"
)
const (
// The sum of these four values must be no greater than 32.
nodesBitsChildren = 10
nodesBitsICANN = 1
nodesBitsTextOffset = 15
nodesBitsTextLength = 6
// The sum of these four values must be no greater than 32.
childrenBitsWildcard = 1
childrenBitsNodeType = 2
childrenBitsHi = 14
childrenBitsLo = 14
)
var (
maxChildren int
maxTextOffset int
maxTextLength int
maxHi uint32
maxLo uint32
)
func max(a, b int) int {
if a < b {
return b
}
return a
}
func u32max(a, b uint32) uint32 {
if a < b {
return b
}
return a
}
const (
nodeTypeNormal = 0
nodeTypeException = 1
nodeTypeParentOnly = 2
numNodeType = 3
)
func nodeTypeStr(n int) string {
switch n {
case nodeTypeNormal:
return "+"
case nodeTypeException:
return "!"
case nodeTypeParentOnly:
return "o"
}
panic("unreachable")
}
const (
defaultURL = "https://publicsuffix.org/list/effective_tld_names.dat"
gitCommitURL = "https://api.github.com/repos/publicsuffix/list/commits?path=public_suffix_list.dat"
)
var (
labelEncoding = map[string]uint32{}
labelsList = []string{}
labelsMap = map[string]bool{}
rules = []string{}
numICANNRules = 0
// validSuffixRE is used to check that the entries in the public suffix
// list are in canonical form (after Punycode encoding). Specifically,
// capital letters are not allowed.
validSuffixRE = regexp.MustCompile(`^[a-z0-9_\!\*\-\.]+$`)
shaRE = regexp.MustCompile(`"sha":"([^"]+)"`)
dateRE = regexp.MustCompile(`"committer":{[^{]+"date":"([^"]+)"`)
comments = flag.Bool("comments", false, "generate table.go comments, for debugging")
subset = flag.Bool("subset", false, "generate only a subset of the full table, for debugging")
url = flag.String("url", defaultURL, "URL of the publicsuffix.org list. If empty, stdin is read instead")
v = flag.Bool("v", false, "verbose output (to stderr)")
version = flag.String("version", "", "the effective_tld_names.dat version")
)
func main() {
if err := main1(); err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
}
func main1() error {
flag.Parse()
if nodesBitsTextLength+nodesBitsTextOffset+nodesBitsICANN+nodesBitsChildren > 32 {
return fmt.Errorf("not enough bits to encode the nodes table")
}
if childrenBitsLo+childrenBitsHi+childrenBitsNodeType+childrenBitsWildcard > 32 {
return fmt.Errorf("not enough bits to encode the children table")
}
if *version == "" {
if *url != defaultURL {
return fmt.Errorf("-version was not specified, and the -url is not the default one")
}
sha, date, err := gitCommit()
if err != nil {
return err
}
*version = fmt.Sprintf("publicsuffix.org's public_suffix_list.dat, git revision %s (%s)", sha, date)
}
var r io.Reader = os.Stdin
if *url != "" {
res, err := http.Get(*url)
if err != nil {
return err
}
if res.StatusCode != http.StatusOK {
return fmt.Errorf("bad GET status for %s: %d", *url, res.Status)
}
r = res.Body
defer res.Body.Close()
}
var root node
icann := false
br := bufio.NewReader(r)
for {
s, err := br.ReadString('\n')
if err != nil {
if err == io.EOF {
break
}
return err
}
s = strings.TrimSpace(s)
if strings.Contains(s, "BEGIN ICANN DOMAINS") {
if len(rules) != 0 {
return fmt.Errorf(`expected no rules before "BEGIN ICANN DOMAINS"`)
}
icann = true
continue
}
if strings.Contains(s, "END ICANN DOMAINS") {
icann, numICANNRules = false, len(rules)
continue
}
if s == "" || strings.HasPrefix(s, "//") {
continue
}
s, err = idna.ToASCII(s)
if err != nil {
return err
}
if !validSuffixRE.MatchString(s) {
return fmt.Errorf("bad publicsuffix.org list data: %q", s)
}
if *subset {
switch {
case s == "ac.jp" || strings.HasSuffix(s, ".ac.jp"):
case s == "ak.us" || strings.HasSuffix(s, ".ak.us"):
case s == "ao" || strings.HasSuffix(s, ".ao"):
case s == "ar" || strings.HasSuffix(s, ".ar"):
case s == "arpa" || strings.HasSuffix(s, ".arpa"):
case s == "cy" || strings.HasSuffix(s, ".cy"):
case s == "dyndns.org" || strings.HasSuffix(s, ".dyndns.org"):
case s == "jp":
case s == "kobe.jp" || strings.HasSuffix(s, ".kobe.jp"):
case s == "kyoto.jp" || strings.HasSuffix(s, ".kyoto.jp"):
case s == "om" || strings.HasSuffix(s, ".om"):
case s == "uk" || strings.HasSuffix(s, ".uk"):
case s == "uk.com" || strings.HasSuffix(s, ".uk.com"):
case s == "tw" || strings.HasSuffix(s, ".tw"):
case s == "zw" || strings.HasSuffix(s, ".zw"):
case s == "xn--p1ai" || strings.HasSuffix(s, ".xn--p1ai"):
// xn--p1ai is Russian-Cyrillic "рф".
default:
continue
}
}
rules = append(rules, s)
nt, wildcard := nodeTypeNormal, false
switch {
case strings.HasPrefix(s, "*."):
s, nt = s[2:], nodeTypeParentOnly
wildcard = true
case strings.HasPrefix(s, "!"):
s, nt = s[1:], nodeTypeException
}
labels := strings.Split(s, ".")
for n, i := &root, len(labels)-1; i >= 0; i-- {
label := labels[i]
n = n.child(label)
if i == 0 {
if nt != nodeTypeParentOnly && n.nodeType == nodeTypeParentOnly {
n.nodeType = nt
}
n.icann = n.icann && icann
n.wildcard = n.wildcard || wildcard
}
labelsMap[label] = true
}
}
labelsList = make([]string, 0, len(labelsMap))
for label := range labelsMap {
labelsList = append(labelsList, label)
}
sort.Strings(labelsList)
if err := generate(printReal, &root, "table.go"); err != nil {
return err
}
if err := generate(printTest, &root, "table_test.go"); err != nil {
return err
}
return nil
}
func generate(p func(io.Writer, *node) error, root *node, filename string) error {
buf := new(bytes.Buffer)
if err := p(buf, root); err != nil {
return err
}
b, err := format.Source(buf.Bytes())
if err != nil {
return err
}
return ioutil.WriteFile(filename, b, 0644)
}
func gitCommit() (sha, date string, retErr error) {
res, err := http.Get(gitCommitURL)
if err != nil {
return "", "", err
}
if res.StatusCode != http.StatusOK {
return "", "", fmt.Errorf("bad GET status for %s: %d", gitCommitURL, res.Status)
}
defer res.Body.Close()
b, err := ioutil.ReadAll(res.Body)
if err != nil {
return "", "", err
}
if m := shaRE.FindSubmatch(b); m != nil {
sha = string(m[1])
}
if m := dateRE.FindSubmatch(b); m != nil {
date = string(m[1])
}
if sha == "" || date == "" {
retErr = fmt.Errorf("could not find commit SHA and date in %s", gitCommitURL)
}
return sha, date, retErr
}
func printTest(w io.Writer, n *node) error {
fmt.Fprintf(w, "// generated by go run gen.go; DO NOT EDIT\n\n")
fmt.Fprintf(w, "package publicsuffix\n\nconst numICANNRules = %d\n\nvar rules = [...]string{\n", numICANNRules)
for _, rule := range rules {
fmt.Fprintf(w, "%q,\n", rule)
}
fmt.Fprintf(w, "}\n\nvar nodeLabels = [...]string{\n")
if err := n.walk(w, printNodeLabel); err != nil {
return err
}
fmt.Fprintf(w, "}\n")
return nil
}
func printReal(w io.Writer, n *node) error {
const header = `// generated by go run gen.go; DO NOT EDIT
package publicsuffix
const version = %q
const (
nodesBitsChildren = %d
nodesBitsICANN = %d
nodesBitsTextOffset = %d
nodesBitsTextLength = %d
childrenBitsWildcard = %d
childrenBitsNodeType = %d
childrenBitsHi = %d
childrenBitsLo = %d
)
const (
nodeTypeNormal = %d
nodeTypeException = %d
nodeTypeParentOnly = %d
)
// numTLD is the number of top level domains.
const numTLD = %d
`
fmt.Fprintf(w, header, *version,
nodesBitsChildren, nodesBitsICANN, nodesBitsTextOffset, nodesBitsTextLength,
childrenBitsWildcard, childrenBitsNodeType, childrenBitsHi, childrenBitsLo,
nodeTypeNormal, nodeTypeException, nodeTypeParentOnly, len(n.children))
text := combineText(labelsList)
if text == "" {
return fmt.Errorf("internal error: makeText returned no text")
}
for _, label := range labelsList {
offset, length := strings.Index(text, label), len(label)
if offset < 0 {
return fmt.Errorf("internal error: could not find %q in text %q", label, text)
}
maxTextOffset, maxTextLength = max(maxTextOffset, offset), max(maxTextLength, length)
if offset >= 1<<nodesBitsTextOffset {
return fmt.Errorf("text offset %d is too large, or nodeBitsTextOffset is too small", offset)
}
if length >= 1<<nodesBitsTextLength {
return fmt.Errorf("text length %d is too large, or nodeBitsTextLength is too small", length)
}
labelEncoding[label] = uint32(offset)<<nodesBitsTextLength | uint32(length)
}
fmt.Fprintf(w, "// Text is the combined text of all labels.\nconst text = ")
for len(text) > 0 {
n, plus := len(text), ""
if n > 64 {
n, plus = 64, " +"
}
fmt.Fprintf(w, "%q%s\n", text[:n], plus)
text = text[n:]
}
if err := n.walk(w, assignIndexes); err != nil {
return err
}
fmt.Fprintf(w, `
// nodes is the list of nodes. Each node is represented as a uint32, which
// encodes the node's children, wildcard bit and node type (as an index into
// the children array), ICANN bit and text.
//
// If the table was generated with the -comments flag, there is a //-comment
// after each node's data. In it is the nodes-array indexes of the children,
// formatted as (n0x1234-n0x1256), with * denoting the wildcard bit. The
// nodeType is printed as + for normal, ! for exception, and o for parent-only
// nodes that have children but don't match a domain label in their own right.
// An I denotes an ICANN domain.
//
// The layout within the uint32, from MSB to LSB, is:
// [%2d bits] unused
// [%2d bits] children index
// [%2d bits] ICANN bit
// [%2d bits] text index
// [%2d bits] text length
var nodes = [...]uint32{
`,
32-nodesBitsChildren-nodesBitsICANN-nodesBitsTextOffset-nodesBitsTextLength,
nodesBitsChildren, nodesBitsICANN, nodesBitsTextOffset, nodesBitsTextLength)
if err := n.walk(w, printNode); err != nil {
return err
}
fmt.Fprintf(w, `}
// children is the list of nodes' children, the parent's wildcard bit and the
// parent's node type. If a node has no children then their children index
// will be in the range [0, 6), depending on the wildcard bit and node type.
//
// The layout within the uint32, from MSB to LSB, is:
// [%2d bits] unused
// [%2d bits] wildcard bit
// [%2d bits] node type
// [%2d bits] high nodes index (exclusive) of children
// [%2d bits] low nodes index (inclusive) of children
var children=[...]uint32{
`,
32-childrenBitsWildcard-childrenBitsNodeType-childrenBitsHi-childrenBitsLo,
childrenBitsWildcard, childrenBitsNodeType, childrenBitsHi, childrenBitsLo)
for i, c := range childrenEncoding {
s := "---------------"
lo := c & (1<<childrenBitsLo - 1)
hi := (c >> childrenBitsLo) & (1<<childrenBitsHi - 1)
if lo != hi {
s = fmt.Sprintf("n0x%04x-n0x%04x", lo, hi)
}
nodeType := int(c>>(childrenBitsLo+childrenBitsHi)) & (1<<childrenBitsNodeType - 1)
wildcard := c>>(childrenBitsLo+childrenBitsHi+childrenBitsNodeType) != 0
if *comments {
fmt.Fprintf(w, "0x%08x, // c0x%04x (%s)%s %s\n",
c, i, s, wildcardStr(wildcard), nodeTypeStr(nodeType))
} else {
fmt.Fprintf(w, "0x%x,\n", c)
}
}
fmt.Fprintf(w, "}\n\n")
fmt.Fprintf(w, "// max children %d (capacity %d)\n", maxChildren, 1<<nodesBitsChildren-1)
fmt.Fprintf(w, "// max text offset %d (capacity %d)\n", maxTextOffset, 1<<nodesBitsTextOffset-1)
fmt.Fprintf(w, "// max text length %d (capacity %d)\n", maxTextLength, 1<<nodesBitsTextLength-1)
fmt.Fprintf(w, "// max hi %d (capacity %d)\n", maxHi, 1<<childrenBitsHi-1)
fmt.Fprintf(w, "// max lo %d (capacity %d)\n", maxLo, 1<<childrenBitsLo-1)
return nil
}
type node struct {
label string
nodeType int
icann bool
wildcard bool
// nodesIndex and childrenIndex are the index of this node in the nodes
// and the index of its children offset/length in the children arrays.
nodesIndex, childrenIndex int
// firstChild is the index of this node's first child, or zero if this
// node has no children.
firstChild int
// children are the node's children, in strictly increasing node label order.
children []*node
}
func (n *node) walk(w io.Writer, f func(w1 io.Writer, n1 *node) error) error {
if err := f(w, n); err != nil {
return err
}
for _, c := range n.children {
if err := c.walk(w, f); err != nil {
return err
}
}
return nil
}
// child returns the child of n with the given label. The child is created if
// it did not exist beforehand.
func (n *node) child(label string) *node {
for _, c := range n.children {
if c.label == label {
return c
}
}
c := &node{
label: label,
nodeType: nodeTypeParentOnly,
icann: true,
}
n.children = append(n.children, c)
sort.Sort(byLabel(n.children))
return c
}
type byLabel []*node
func (b byLabel) Len() int { return len(b) }
func (b byLabel) Swap(i, j int) { b[i], b[j] = b[j], b[i] }
func (b byLabel) Less(i, j int) bool { return b[i].label < b[j].label }
var nextNodesIndex int
// childrenEncoding are the encoded entries in the generated children array.
// All these pre-defined entries have no children.
var childrenEncoding = []uint32{
0 << (childrenBitsLo + childrenBitsHi), // Without wildcard bit, nodeTypeNormal.
1 << (childrenBitsLo + childrenBitsHi), // Without wildcard bit, nodeTypeException.
2 << (childrenBitsLo + childrenBitsHi), // Without wildcard bit, nodeTypeParentOnly.
4 << (childrenBitsLo + childrenBitsHi), // With wildcard bit, nodeTypeNormal.
5 << (childrenBitsLo + childrenBitsHi), // With wildcard bit, nodeTypeException.
6 << (childrenBitsLo + childrenBitsHi), // With wildcard bit, nodeTypeParentOnly.
}
var firstCallToAssignIndexes = true
func assignIndexes(w io.Writer, n *node) error {
if len(n.children) != 0 {
// Assign nodesIndex.
n.firstChild = nextNodesIndex
for _, c := range n.children {
c.nodesIndex = nextNodesIndex
nextNodesIndex++
}
// The root node's children is implicit.
if firstCallToAssignIndexes {
firstCallToAssignIndexes = false
return nil
}
// Assign childrenIndex.
maxChildren = max(maxChildren, len(childrenEncoding))
if len(childrenEncoding) >= 1<<nodesBitsChildren {
return fmt.Errorf("children table size %d is too large, or nodeBitsChildren is too small", len(childrenEncoding))
}
n.childrenIndex = len(childrenEncoding)
lo := uint32(n.firstChild)
hi := lo + uint32(len(n.children))
maxLo, maxHi = u32max(maxLo, lo), u32max(maxHi, hi)
if lo >= 1<<childrenBitsLo {
return fmt.Errorf("children lo %d is too large, or childrenBitsLo is too small", lo)
}
if hi >= 1<<childrenBitsHi {
return fmt.Errorf("children hi %d is too large, or childrenBitsHi is too small", hi)
}
enc := hi<<childrenBitsLo | lo
enc |= uint32(n.nodeType) << (childrenBitsLo + childrenBitsHi)
if n.wildcard {
enc |= 1 << (childrenBitsLo + childrenBitsHi + childrenBitsNodeType)
}
childrenEncoding = append(childrenEncoding, enc)
} else {
n.childrenIndex = n.nodeType
if n.wildcard {
n.childrenIndex += numNodeType
}
}
return nil
}
func printNode(w io.Writer, n *node) error {
for _, c := range n.children {
s := "---------------"
if len(c.children) != 0 {
s = fmt.Sprintf("n0x%04x-n0x%04x", c.firstChild, c.firstChild+len(c.children))
}
encoding := labelEncoding[c.label]
if c.icann {
encoding |= 1 << (nodesBitsTextLength + nodesBitsTextOffset)
}
encoding |= uint32(c.childrenIndex) << (nodesBitsTextLength + nodesBitsTextOffset + nodesBitsICANN)
if *comments {
fmt.Fprintf(w, "0x%08x, // n0x%04x c0x%04x (%s)%s %s %s %s\n",
encoding, c.nodesIndex, c.childrenIndex, s, wildcardStr(c.wildcard),
nodeTypeStr(c.nodeType), icannStr(c.icann), c.label,
)
} else {
fmt.Fprintf(w, "0x%x,\n", encoding)
}
}
return nil
}
func printNodeLabel(w io.Writer, n *node) error {
for _, c := range n.children {
fmt.Fprintf(w, "%q,\n", c.label)
}
return nil
}
func icannStr(icann bool) string {
if icann {
return "I"
}
return " "
}
func wildcardStr(wildcard bool) string {
if wildcard {
return "*"
}
return " "
}
// combineText combines all the strings in labelsList to form one giant string.
// Overlapping strings will be merged: "arpa" and "parliament" could yield
// "arparliament".
func combineText(labelsList []string) string {
beforeLength := 0
for _, s := range labelsList {
beforeLength += len(s)
}
text := crush(removeSubstrings(labelsList))
if *v {
fmt.Fprintf(os.Stderr, "crushed %d bytes to become %d bytes\n", beforeLength, len(text))
}
return text
}
type byLength []string
func (s byLength) Len() int { return len(s) }
func (s byLength) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s byLength) Less(i, j int) bool { return len(s[i]) < len(s[j]) }
// removeSubstrings returns a copy of its input with any strings removed
// that are substrings of other provided strings.
func removeSubstrings(input []string) []string {
// Make a copy of input.
ss := append(make([]string, 0, len(input)), input...)
sort.Sort(byLength(ss))
for i, shortString := range ss {
// For each string, only consider strings higher than it in sort order, i.e.
// of equal length or greater.
for _, longString := range ss[i+1:] {
if strings.Contains(longString, shortString) {
ss[i] = ""
break
}
}
}
// Remove the empty strings.
sort.Strings(ss)
for len(ss) > 0 && ss[0] == "" {
ss = ss[1:]
}
return ss
}
// crush combines a list of strings, taking advantage of overlaps. It returns a
// single string that contains each input string as a substring.
func crush(ss []string) string {
maxLabelLen := 0
for _, s := range ss {
if maxLabelLen < len(s) {
maxLabelLen = len(s)
}
}
for prefixLen := maxLabelLen; prefixLen > 0; prefixLen-- {
prefixes := makePrefixMap(ss, prefixLen)
for i, s := range ss {
if len(s) <= prefixLen {
continue
}
mergeLabel(ss, i, prefixLen, prefixes)
}
}
return strings.Join(ss, "")
}
// mergeLabel merges the label at ss[i] with the first available matching label
// in prefixMap, where the last "prefixLen" characters in ss[i] match the first
// "prefixLen" characters in the matching label.
// It will merge ss[i] repeatedly until no more matches are available.
// All matching labels merged into ss[i] are replaced by "".
func mergeLabel(ss []string, i, prefixLen int, prefixes prefixMap) {
s := ss[i]
suffix := s[len(s)-prefixLen:]
for _, j := range prefixes[suffix] {
// Empty strings mean "already used." Also avoid merging with self.
if ss[j] == "" || i == j {
continue
}
if *v {
fmt.Fprintf(os.Stderr, "%d-length overlap at (%4d,%4d): %q and %q share %q\n",
prefixLen, i, j, ss[i], ss[j], suffix)
}
ss[i] += ss[j][prefixLen:]
ss[j] = ""
// ss[i] has a new suffix, so merge again if possible.
// Note: we only have to merge again at the same prefix length. Shorter
// prefix lengths will be handled in the next iteration of crush's for loop.
// Can there be matches for longer prefix lengths, introduced by the merge?
// I believe that any such matches would by necessity have been eliminated
// during substring removal or merged at a higher prefix length. For
// instance, in crush("abc", "cde", "bcdef"), combining "abc" and "cde"
// would yield "abcde", which could be merged with "bcdef." However, in
// practice "cde" would already have been elimintated by removeSubstrings.
mergeLabel(ss, i, prefixLen, prefixes)
return
}
}
// prefixMap maps from a prefix to a list of strings containing that prefix. The
// list of strings is represented as indexes into a slice of strings stored
// elsewhere.
type prefixMap map[string][]int
// makePrefixMap constructs a prefixMap from a slice of strings.
func makePrefixMap(ss []string, prefixLen int) prefixMap {
prefixes := make(prefixMap)
for i, s := range ss {
// We use < rather than <= because if a label matches on a prefix equal to
// its full length, that's actually a substring match handled by
// removeSubstrings.
if prefixLen < len(s) {
prefix := s[:prefixLen]
prefixes[prefix] = append(prefixes[prefix], i)
}
}
return prefixes
}
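// As a small illustration, makePrefixMap([]string{"arpa", "parliament"}, 2)
// yields {"ar": [0], "pa": [1]}: every string longer than the prefix length
// contributes its 2-byte prefix, keyed back to its index in ss.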

View File

@ -1,61 +0,0 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
// mkasm_darwin.go generates assembly trampolines to call libSystem routines from Go.
// This program must be run after mksyscall.go.
package main
import (
"bytes"
"fmt"
"io/ioutil"
"log"
"os"
"strings"
)
func main() {
in1, err := ioutil.ReadFile("syscall_darwin.go")
if err != nil {
log.Fatalf("can't open syscall_darwin.go: %s", err)
}
arch := os.Args[1]
in2, err := ioutil.ReadFile(fmt.Sprintf("syscall_darwin_%s.go", arch))
if err != nil {
log.Fatalf("can't open syscall_darwin_%s.go: %s", arch, err)
}
in3, err := ioutil.ReadFile(fmt.Sprintf("zsyscall_darwin_%s.go", arch))
if err != nil {
log.Fatalf("can't open zsyscall_darwin_%s.go: %s", arch, err)
}
in := string(in1) + string(in2) + string(in3)
trampolines := map[string]bool{}
var out bytes.Buffer
fmt.Fprintf(&out, "// go run mkasm_darwin.go %s\n", strings.Join(os.Args[1:], " "))
fmt.Fprintf(&out, "// Code generated by the command above; DO NOT EDIT.\n")
fmt.Fprintf(&out, "\n")
fmt.Fprintf(&out, "// +build go1.12\n")
fmt.Fprintf(&out, "\n")
fmt.Fprintf(&out, "#include \"textflag.h\"\n")
for _, line := range strings.Split(in, "\n") {
if !strings.HasPrefix(line, "func ") || !strings.HasSuffix(line, "_trampoline()") {
continue
}
fn := line[5 : len(line)-13]
if !trampolines[fn] {
trampolines[fn] = true
fmt.Fprintf(&out, "TEXT ·%s_trampoline(SB),NOSPLIT,$0-0\n", fn)
fmt.Fprintf(&out, "\tJMP\t%s(SB)\n", fn)
}
}
err = ioutil.WriteFile(fmt.Sprintf("zsyscall_darwin_%s.s", arch), out.Bytes(), 0644)
if err != nil {
log.Fatalf("can't write zsyscall_darwin_%s.s: %s", arch, err)
}
}
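// As an illustration (libc_read is just an example name), a declaration such as
// func libc_read_trampoline()
// in the scanned sources causes the following assembly to be emitted:
// TEXT ·libc_read_trampoline(SB),NOSPLIT,$0-0
// JMP libc_read(SB)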

View File

@ -1,106 +0,0 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
// mkpost processes the output of cgo -godefs to
// modify the generated types. It is used to clean up
// the sys API in an architecture specific manner.
//
// mkpost is run after cgo -godefs; see README.md.
package main
import (
"bytes"
"fmt"
"go/format"
"io/ioutil"
"log"
"os"
"regexp"
)
func main() {
// Get the OS and architecture (using GOARCH_TARGET if it exists)
goos := os.Getenv("GOOS")
goarch := os.Getenv("GOARCH_TARGET")
if goarch == "" {
goarch = os.Getenv("GOARCH")
}
// Check that we are using the Docker-based build system if we should be.
if goos == "linux" {
if os.Getenv("GOLANG_SYS_BUILD") != "docker" {
os.Stderr.WriteString("In the Docker-based build system, mkpost should not be called directly.\n")
os.Stderr.WriteString("See README.md\n")
os.Exit(1)
}
}
b, err := ioutil.ReadAll(os.Stdin)
if err != nil {
log.Fatal(err)
}
// Intentionally export __val fields in Fsid and Sigset_t
valRegex := regexp.MustCompile(`type (Fsid|Sigset_t) struct {(\s+)X__val(\s+\S+\s+)}`)
b = valRegex.ReplaceAll(b, []byte("type $1 struct {${2}Val$3}"))
// Intentionally export __fds_bits field in FdSet
fdSetRegex := regexp.MustCompile(`type (FdSet) struct {(\s+)X__fds_bits(\s+\S+\s+)}`)
b = fdSetRegex.ReplaceAll(b, []byte("type $1 struct {${2}Bits$3}"))
// If we have empty Ptrace structs, we should delete them. Only s390x emits
// nonempty Ptrace structs.
ptraceRexexp := regexp.MustCompile(`type Ptrace((Psw|Fpregs|Per) struct {\s*})`)
b = ptraceRexexp.ReplaceAll(b, nil)
// Replace the control_regs union with a blank identifier for now.
controlRegsRegex := regexp.MustCompile(`(Control_regs)\s+\[0\]uint64`)
b = controlRegsRegex.ReplaceAll(b, []byte("_ [0]uint64"))
// Remove fields that are added by glibc
// Note that this is unstable as the identifiers are private.
removeFieldsRegex := regexp.MustCompile(`X__glibc\S*`)
b = removeFieldsRegex.ReplaceAll(b, []byte("_"))
// Convert [65]int8 to [65]byte in Utsname members to simplify
// conversion to string; see golang.org/issue/20753
convertUtsnameRegex := regexp.MustCompile(`((Sys|Node|Domain)name|Release|Version|Machine)(\s+)\[(\d+)\]u?int8`)
b = convertUtsnameRegex.ReplaceAll(b, []byte("$1$3[$4]byte"))
// Convert [1024]int8 to [1024]byte in Ptmget members
convertPtmget := regexp.MustCompile(`([SC]n)(\s+)\[(\d+)\]u?int8`)
b = convertPtmget.ReplaceAll(b, []byte("$1[$3]byte"))
// Remove spare fields (e.g. in Statx_t)
spareFieldsRegex := regexp.MustCompile(`X__spare\S*`)
b = spareFieldsRegex.ReplaceAll(b, []byte("_"))
// Remove cgo padding fields
removePaddingFieldsRegex := regexp.MustCompile(`Pad_cgo_\d+`)
b = removePaddingFieldsRegex.ReplaceAll(b, []byte("_"))
// Remove padding, hidden, or unused fields
removeFieldsRegex = regexp.MustCompile(`\b(X_\S+|Padding)`)
b = removeFieldsRegex.ReplaceAll(b, []byte("_"))
// Remove the first line of warning from cgo
b = b[bytes.IndexByte(b, '\n')+1:]
// Modify the command in the header to include:
// mkpost, our own warning, and a build tag.
replacement := fmt.Sprintf(`$1 | go run mkpost.go
// Code generated by the command above; see README.md. DO NOT EDIT.
// +build %s,%s`, goarch, goos)
cgoCommandRegex := regexp.MustCompile(`(cgo -godefs .*)`)
b = cgoCommandRegex.ReplaceAll(b, []byte(replacement))
// gofmt
b, err = format.Source(b)
if err != nil {
log.Fatal(err)
}
os.Stdout.Write(b)
}
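// As an example of the rewrites above, a cgo-generated definition such as
// type Fsid struct {
// X__val [2]int32
// }
// comes out as
// type Fsid struct {
// Val [2]int32
// }
// once the Fsid/Sigset_t rule has been applied.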

View File

@ -1,407 +0,0 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
/*
This program reads a file containing function prototypes
(like syscall_darwin.go) and generates system call bodies.
The prototypes are marked by lines beginning with "//sys"
and read like func declarations if //sys is replaced by func, but:
* The parameter lists must give a name for each argument.
This includes return parameters.
* The parameter lists must give a type for each argument:
the (x, y, z int) shorthand is not allowed.
* If the return parameter is an error number, it must be named errno.
A line beginning with //sysnb is like //sys, except that the
goroutine will not be suspended during the execution of the system
call. This must only be used for system calls which can never
block, as otherwise the system call could cause all goroutines to
hang.
*/
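// As a sketch of the generator's output, a prototype such as
// //sys Mkdir(path string, mode uint32) (err error)
// ends up, on Linux with default build tags, as roughly:
// func Mkdir(path string, mode uint32) (err error) {
// var _p0 *byte
// _p0, err = BytePtrFromString(path)
// if err != nil {
// return
// }
// _, _, e1 := Syscall(SYS_MKDIR, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0)
// if e1 != 0 {
// err = errnoErr(e1)
// }
// return
// }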
package main
import (
"bufio"
"flag"
"fmt"
"os"
"regexp"
"strings"
)
var (
b32 = flag.Bool("b32", false, "32bit big-endian")
l32 = flag.Bool("l32", false, "32bit little-endian")
plan9 = flag.Bool("plan9", false, "plan9")
openbsd = flag.Bool("openbsd", false, "openbsd")
netbsd = flag.Bool("netbsd", false, "netbsd")
dragonfly = flag.Bool("dragonfly", false, "dragonfly")
arm = flag.Bool("arm", false, "arm") // 64-bit value should use (even, odd)-pair
tags = flag.String("tags", "", "build tags")
filename = flag.String("output", "", "output file name (standard output if omitted)")
)
// cmdLine returns this program's command-line arguments
func cmdLine() string {
return "go run mksyscall.go " + strings.Join(os.Args[1:], " ")
}
// buildTags returns build tags
func buildTags() string {
return *tags
}
// Param is function parameter
type Param struct {
Name string
Type string
}
// usage prints the program usage
func usage() {
fmt.Fprintf(os.Stderr, "usage: go run mksyscall.go [-b32 | -l32] [-tags x,y] [file ...]\n")
os.Exit(1)
}
// parseParamList parses parameter list and returns a slice of parameters
func parseParamList(list string) []string {
list = strings.TrimSpace(list)
if list == "" {
return []string{}
}
return regexp.MustCompile(`\s*,\s*`).Split(list, -1)
}
// parseParam splits a parameter into name and type
func parseParam(p string) Param {
ps := regexp.MustCompile(`^(\S*) (\S*)$`).FindStringSubmatch(p)
if ps == nil {
fmt.Fprintf(os.Stderr, "malformed parameter: %s\n", p)
os.Exit(1)
}
return Param{ps[1], ps[2]}
}
func main() {
// Get the OS and architecture (using GOARCH_TARGET if it exists)
goos := os.Getenv("GOOS")
if goos == "" {
fmt.Fprintln(os.Stderr, "GOOS not defined in environment")
os.Exit(1)
}
goarch := os.Getenv("GOARCH_TARGET")
if goarch == "" {
goarch = os.Getenv("GOARCH")
}
// Check that we are using the Docker-based build system if we should
if goos == "linux" {
if os.Getenv("GOLANG_SYS_BUILD") != "docker" {
fmt.Fprintf(os.Stderr, "In the Docker-based build system, mksyscall should not be called directly.\n")
fmt.Fprintf(os.Stderr, "See README.md\n")
os.Exit(1)
}
}
flag.Usage = usage
flag.Parse()
if len(flag.Args()) <= 0 {
fmt.Fprintf(os.Stderr, "no files to parse provided\n")
usage()
}
endianness := ""
if *b32 {
endianness = "big-endian"
} else if *l32 {
endianness = "little-endian"
}
libc := false
if goos == "darwin" && strings.Contains(buildTags(), ",go1.12") {
libc = true
}
trampolines := map[string]bool{}
text := ""
for _, path := range flag.Args() {
file, err := os.Open(path)
if err != nil {
fmt.Fprintf(os.Stderr, err.Error())
os.Exit(1)
}
s := bufio.NewScanner(file)
for s.Scan() {
t := s.Text()
t = strings.TrimSpace(t)
t = regexp.MustCompile(`\s+`).ReplaceAllString(t, ` `)
nonblock := regexp.MustCompile(`^\/\/sysnb `).FindStringSubmatch(t)
if regexp.MustCompile(`^\/\/sys `).FindStringSubmatch(t) == nil && nonblock == nil {
continue
}
// Line must be of the form
// func Open(path string, mode int, perm int) (fd int, errno error)
// Split into name, in params, out params.
f := regexp.MustCompile(`^\/\/sys(nb)? (\w+)\(([^()]*)\)\s*(?:\(([^()]+)\))?\s*(?:=\s*((?i)SYS_[A-Z0-9_]+))?$`).FindStringSubmatch(t)
if f == nil {
fmt.Fprintf(os.Stderr, "%s:%s\nmalformed //sys declaration\n", path, t)
os.Exit(1)
}
funct, inps, outps, sysname := f[2], f[3], f[4], f[5]
// ClockGettime doesn't have a syscall number on Darwin, so only generate libc wrappers.
if goos == "darwin" && !libc && funct == "ClockGettime" {
continue
}
// Split argument lists on comma.
in := parseParamList(inps)
out := parseParamList(outps)
// Try in vain to keep people from editing this file.
// The theory is that they jump into the middle of the file
// without reading the header.
text += "// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT\n\n"
// Go function header.
outDecl := ""
if len(out) > 0 {
outDecl = fmt.Sprintf(" (%s)", strings.Join(out, ", "))
}
text += fmt.Sprintf("func %s(%s)%s {\n", funct, strings.Join(in, ", "), outDecl)
// Check if err return available
errvar := ""
for _, param := range out {
p := parseParam(param)
if p.Type == "error" {
errvar = p.Name
break
}
}
// Prepare arguments to Syscall.
var args []string
n := 0
for _, param := range in {
p := parseParam(param)
if regexp.MustCompile(`^\*`).FindStringSubmatch(p.Type) != nil {
args = append(args, "uintptr(unsafe.Pointer("+p.Name+"))")
} else if p.Type == "string" && errvar != "" {
text += fmt.Sprintf("\tvar _p%d *byte\n", n)
text += fmt.Sprintf("\t_p%d, %s = BytePtrFromString(%s)\n", n, errvar, p.Name)
text += fmt.Sprintf("\tif %s != nil {\n\t\treturn\n\t}\n", errvar)
args = append(args, fmt.Sprintf("uintptr(unsafe.Pointer(_p%d))", n))
n++
} else if p.Type == "string" {
fmt.Fprintf(os.Stderr, path+":"+funct+" uses string arguments, but has no error return\n")
text += fmt.Sprintf("\tvar _p%d *byte\n", n)
text += fmt.Sprintf("\t_p%d, _ = BytePtrFromString(%s)\n", n, p.Name)
args = append(args, fmt.Sprintf("uintptr(unsafe.Pointer(_p%d))", n))
n++
} else if regexp.MustCompile(`^\[\](.*)`).FindStringSubmatch(p.Type) != nil {
// Convert slice into pointer, length.
// Have to be careful not to take address of &a[0] if len == 0:
// pass dummy pointer in that case.
// Used to pass nil, but some OSes or simulators reject write(fd, nil, 0).
text += fmt.Sprintf("\tvar _p%d unsafe.Pointer\n", n)
text += fmt.Sprintf("\tif len(%s) > 0 {\n\t\t_p%d = unsafe.Pointer(&%s[0])\n\t}", p.Name, n, p.Name)
text += fmt.Sprintf(" else {\n\t\t_p%d = unsafe.Pointer(&_zero)\n\t}\n", n)
args = append(args, fmt.Sprintf("uintptr(_p%d)", n), fmt.Sprintf("uintptr(len(%s))", p.Name))
n++
} else if p.Type == "int64" && (*openbsd || *netbsd) {
args = append(args, "0")
if endianness == "big-endian" {
args = append(args, fmt.Sprintf("uintptr(%s>>32)", p.Name), fmt.Sprintf("uintptr(%s)", p.Name))
} else if endianness == "little-endian" {
args = append(args, fmt.Sprintf("uintptr(%s)", p.Name), fmt.Sprintf("uintptr(%s>>32)", p.Name))
} else {
args = append(args, fmt.Sprintf("uintptr(%s)", p.Name))
}
} else if p.Type == "int64" && *dragonfly {
if regexp.MustCompile(`^(?i)extp(read|write)`).FindStringSubmatch(funct) == nil {
args = append(args, "0")
}
if endianness == "big-endian" {
args = append(args, fmt.Sprintf("uintptr(%s>>32)", p.Name), fmt.Sprintf("uintptr(%s)", p.Name))
} else if endianness == "little-endian" {
args = append(args, fmt.Sprintf("uintptr(%s)", p.Name), fmt.Sprintf("uintptr(%s>>32)", p.Name))
} else {
args = append(args, fmt.Sprintf("uintptr(%s)", p.Name))
}
} else if (p.Type == "int64" || p.Type == "uint64") && endianness != "" {
if len(args)%2 == 1 && *arm {
// The arm ABI specifies that a 64-bit argument uses an
// (even, odd) register pair.
args = append(args, "0")
}
if endianness == "big-endian" {
args = append(args, fmt.Sprintf("uintptr(%s>>32)", p.Name), fmt.Sprintf("uintptr(%s)", p.Name))
} else {
args = append(args, fmt.Sprintf("uintptr(%s)", p.Name), fmt.Sprintf("uintptr(%s>>32)", p.Name))
}
} else {
args = append(args, fmt.Sprintf("uintptr(%s)", p.Name))
}
}
// Determine which form to use; pad args with zeros.
asm := "Syscall"
if nonblock != nil {
if errvar == "" && goos == "linux" {
asm = "RawSyscallNoError"
} else {
asm = "RawSyscall"
}
} else {
if errvar == "" && goos == "linux" {
asm = "SyscallNoError"
}
}
if len(args) <= 3 {
for len(args) < 3 {
args = append(args, "0")
}
} else if len(args) <= 6 {
asm += "6"
for len(args) < 6 {
args = append(args, "0")
}
} else if len(args) <= 9 {
asm += "9"
for len(args) < 9 {
args = append(args, "0")
}
} else {
fmt.Fprintf(os.Stderr, "%s:%s too many arguments to system call\n", path, funct)
}
// System call number.
if sysname == "" {
sysname = "SYS_" + funct
sysname = regexp.MustCompile(`([a-z])([A-Z])`).ReplaceAllString(sysname, `${1}_$2`)
sysname = strings.ToUpper(sysname)
}
var libcFn string
if libc {
asm = "syscall_" + strings.ToLower(asm[:1]) + asm[1:] // internal syscall call
sysname = strings.TrimPrefix(sysname, "SYS_") // remove SYS_
sysname = strings.ToLower(sysname) // lowercase
if sysname == "getdirentries64" {
// Special case - libSystem name and
// raw syscall name don't match.
sysname = "__getdirentries64"
}
libcFn = sysname
sysname = "funcPC(libc_" + sysname + "_trampoline)"
}
// Actual call.
arglist := strings.Join(args, ", ")
call := fmt.Sprintf("%s(%s, %s)", asm, sysname, arglist)
// Assign return values.
body := ""
ret := []string{"_", "_", "_"}
doErrno := false
for i := 0; i < len(out); i++ {
p := parseParam(out[i])
reg := ""
if p.Name == "err" && !*plan9 {
reg = "e1"
ret[2] = reg
doErrno = true
} else if p.Name == "err" && *plan9 {
ret[0] = "r0"
ret[2] = "e1"
break
} else {
reg = fmt.Sprintf("r%d", i)
ret[i] = reg
}
if p.Type == "bool" {
reg = fmt.Sprintf("%s != 0", reg)
}
if p.Type == "int64" && endianness != "" {
// 64-bit number in r1:r0 or r0:r1.
if i+2 > len(out) {
fmt.Fprintf(os.Stderr, "%s:%s not enough registers for int64 return\n", path, funct)
}
if endianness == "big-endian" {
reg = fmt.Sprintf("int64(r%d)<<32 | int64(r%d)", i, i+1)
} else {
reg = fmt.Sprintf("int64(r%d)<<32 | int64(r%d)", i+1, i)
}
ret[i] = fmt.Sprintf("r%d", i)
ret[i+1] = fmt.Sprintf("r%d", i+1)
}
if reg != "e1" || *plan9 {
body += fmt.Sprintf("\t%s = %s(%s)\n", p.Name, p.Type, reg)
}
}
if ret[0] == "_" && ret[1] == "_" && ret[2] == "_" {
text += fmt.Sprintf("\t%s\n", call)
} else {
if errvar == "" && goos == "linux" {
// raw syscall without error on Linux, see golang.org/issue/22924
text += fmt.Sprintf("\t%s, %s := %s\n", ret[0], ret[1], call)
} else {
text += fmt.Sprintf("\t%s, %s, %s := %s\n", ret[0], ret[1], ret[2], call)
}
}
text += body
if *plan9 && ret[2] == "e1" {
text += "\tif int32(r0) == -1 {\n"
text += "\t\terr = e1\n"
text += "\t}\n"
} else if doErrno {
text += "\tif e1 != 0 {\n"
text += "\t\terr = errnoErr(e1)\n"
text += "\t}\n"
}
text += "\treturn\n"
text += "}\n\n"
if libc && !trampolines[libcFn] {
// some system calls share a trampoline, like read and readlen.
trampolines[libcFn] = true
// Declare assembly trampoline.
text += fmt.Sprintf("func libc_%s_trampoline()\n", libcFn)
// Assembly trampoline calls the libc_* function, which this magic
// redirects to use the function from libSystem.
text += fmt.Sprintf("//go:linkname libc_%s libc_%s\n", libcFn, libcFn)
text += fmt.Sprintf("//go:cgo_import_dynamic libc_%s %s \"/usr/lib/libSystem.B.dylib\"\n", libcFn, libcFn)
text += "\n"
}
}
if err := s.Err(); err != nil {
fmt.Fprintf(os.Stderr, err.Error())
os.Exit(1)
}
file.Close()
}
fmt.Printf(srcTemplate, cmdLine(), buildTags(), text)
}
const srcTemplate = `// %s
// Code generated by the command above; see README.md. DO NOT EDIT.
// +build %s
package unix
import (
"syscall"
"unsafe"
)
var _ syscall.Errno
%s
`

View File

@ -1,415 +0,0 @@
// Copyright 2019 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
/*
This program reads a file containing function prototypes
(like syscall_aix.go) and generates system call bodies.
The prototypes are marked by lines beginning with "//sys"
and read like func declarations if //sys is replaced by func, but:
* The parameter lists must give a name for each argument.
This includes return parameters.
* The parameter lists must give a type for each argument:
the (x, y, z int) shorthand is not allowed.
* If the return parameter is an error number, it must be named err.
* If the Go func name needs to be different from its libc name,
* or the function is not in libc, the name can be specified
* at the end, after the "=" sign, like
//sys getsockopt(s int, level int, name int, val uintptr, vallen *_Socklen) (err error) = libsocket.getsockopt
*/
package main
import (
"bufio"
"flag"
"fmt"
"os"
"regexp"
"strings"
)
var (
b32 = flag.Bool("b32", false, "32bit big-endian")
l32 = flag.Bool("l32", false, "32bit little-endian")
aix = flag.Bool("aix", false, "aix")
tags = flag.String("tags", "", "build tags")
)
// cmdLine returns this program's command-line arguments
func cmdLine() string {
return "go run mksyscall_aix_ppc.go " + strings.Join(os.Args[1:], " ")
}
// buildTags returns build tags
func buildTags() string {
return *tags
}
// Param is function parameter
type Param struct {
Name string
Type string
}
// usage prints the program usage
func usage() {
fmt.Fprintf(os.Stderr, "usage: go run mksyscall_aix_ppc.go [-b32 | -l32] [-tags x,y] [file ...]\n")
os.Exit(1)
}
// parseParamList parses parameter list and returns a slice of parameters
func parseParamList(list string) []string {
list = strings.TrimSpace(list)
if list == "" {
return []string{}
}
return regexp.MustCompile(`\s*,\s*`).Split(list, -1)
}
// parseParam splits a parameter into name and type
func parseParam(p string) Param {
ps := regexp.MustCompile(`^(\S*) (\S*)$`).FindStringSubmatch(p)
if ps == nil {
fmt.Fprintf(os.Stderr, "malformed parameter: %s\n", p)
os.Exit(1)
}
return Param{ps[1], ps[2]}
}
func main() {
flag.Usage = usage
flag.Parse()
if len(flag.Args()) <= 0 {
fmt.Fprintf(os.Stderr, "no files to parse provided\n")
usage()
}
endianness := ""
if *b32 {
endianness = "big-endian"
} else if *l32 {
endianness = "little-endian"
}
pack := ""
text := ""
cExtern := "/*\n#include <stdint.h>\n#include <stddef.h>\n"
for _, path := range flag.Args() {
file, err := os.Open(path)
if err != nil {
fmt.Fprintf(os.Stderr, err.Error())
os.Exit(1)
}
s := bufio.NewScanner(file)
for s.Scan() {
t := s.Text()
t = strings.TrimSpace(t)
t = regexp.MustCompile(`\s+`).ReplaceAllString(t, ` `)
if p := regexp.MustCompile(`^package (\S+)$`).FindStringSubmatch(t); p != nil && pack == "" {
pack = p[1]
}
nonblock := regexp.MustCompile(`^\/\/sysnb `).FindStringSubmatch(t)
if regexp.MustCompile(`^\/\/sys `).FindStringSubmatch(t) == nil && nonblock == nil {
continue
}
// Line must be of the form
// func Open(path string, mode int, perm int) (fd int, err error)
// Split into name, in params, out params.
f := regexp.MustCompile(`^\/\/sys(nb)? (\w+)\(([^()]*)\)\s*(?:\(([^()]+)\))?\s*(?:=\s*(?:(\w*)\.)?(\w*))?$`).FindStringSubmatch(t)
if f == nil {
fmt.Fprintf(os.Stderr, "%s:%s\nmalformed //sys declaration\n", path, t)
os.Exit(1)
}
funct, inps, outps, modname, sysname := f[2], f[3], f[4], f[5], f[6]
// Split argument lists on comma.
in := parseParamList(inps)
out := parseParamList(outps)
inps = strings.Join(in, ", ")
outps = strings.Join(out, ", ")
// Try in vain to keep people from editing this file.
// The theory is that they jump into the middle of the file
// without reading the header.
text += "// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT\n\n"
// Check if value return, err return available
errvar := ""
retvar := ""
rettype := ""
for _, param := range out {
p := parseParam(param)
if p.Type == "error" {
errvar = p.Name
} else {
retvar = p.Name
rettype = p.Type
}
}
// System call name.
if sysname == "" {
sysname = funct
}
sysname = regexp.MustCompile(`([a-z])([A-Z])`).ReplaceAllString(sysname, `${1}_$2`)
sysname = strings.ToLower(sysname) // All libc functions are lowercase.
cRettype := ""
if rettype == "unsafe.Pointer" {
cRettype = "uintptr_t"
} else if rettype == "uintptr" {
cRettype = "uintptr_t"
} else if regexp.MustCompile(`^_`).FindStringSubmatch(rettype) != nil {
cRettype = "uintptr_t"
} else if rettype == "int" {
cRettype = "int"
} else if rettype == "int32" {
cRettype = "int"
} else if rettype == "int64" {
cRettype = "long long"
} else if rettype == "uint32" {
cRettype = "unsigned int"
} else if rettype == "uint64" {
cRettype = "unsigned long long"
} else {
cRettype = "int"
}
if sysname == "exit" {
cRettype = "void"
}
// Change p.Types to c
var cIn []string
for _, param := range in {
p := parseParam(param)
if regexp.MustCompile(`^\*`).FindStringSubmatch(p.Type) != nil {
cIn = append(cIn, "uintptr_t")
} else if p.Type == "string" {
cIn = append(cIn, "uintptr_t")
} else if regexp.MustCompile(`^\[\](.*)`).FindStringSubmatch(p.Type) != nil {
cIn = append(cIn, "uintptr_t", "size_t")
} else if p.Type == "unsafe.Pointer" {
cIn = append(cIn, "uintptr_t")
} else if p.Type == "uintptr" {
cIn = append(cIn, "uintptr_t")
} else if regexp.MustCompile(`^_`).FindStringSubmatch(p.Type) != nil {
cIn = append(cIn, "uintptr_t")
} else if p.Type == "int" {
cIn = append(cIn, "int")
} else if p.Type == "int32" {
cIn = append(cIn, "int")
} else if p.Type == "int64" {
cIn = append(cIn, "long long")
} else if p.Type == "uint32" {
cIn = append(cIn, "unsigned int")
} else if p.Type == "uint64" {
cIn = append(cIn, "unsigned long long")
} else {
cIn = append(cIn, "int")
}
}
if funct != "fcntl" && funct != "FcntlInt" && funct != "readlen" && funct != "writelen" {
if sysname == "select" {
// select is a keyword of Go. Its name is
// changed to c_select.
cExtern += "#define c_select select\n"
}
// Imports of system calls from libc
cExtern += fmt.Sprintf("%s %s", cRettype, sysname)
cIn := strings.Join(cIn, ", ")
cExtern += fmt.Sprintf("(%s);\n", cIn)
}
// Shared object (.so) file name.
if *aix {
if modname == "" {
modname = "libc.a/shr_64.o"
} else {
fmt.Fprintf(os.Stderr, "%s: only syscall using libc are available\n", funct)
os.Exit(1)
}
}
strconvfunc := "C.CString"
// Go function header.
if outps != "" {
outps = fmt.Sprintf(" (%s)", outps)
}
if text != "" {
text += "\n"
}
text += fmt.Sprintf("func %s(%s)%s {\n", funct, strings.Join(in, ", "), outps)
// Prepare arguments to Syscall.
var args []string
n := 0
argN := 0
for _, param := range in {
p := parseParam(param)
if regexp.MustCompile(`^\*`).FindStringSubmatch(p.Type) != nil {
args = append(args, "C.uintptr_t(uintptr(unsafe.Pointer("+p.Name+")))")
} else if p.Type == "string" && errvar != "" {
text += fmt.Sprintf("\t_p%d := uintptr(unsafe.Pointer(%s(%s)))\n", n, strconvfunc, p.Name)
args = append(args, fmt.Sprintf("C.uintptr_t(_p%d)", n))
n++
} else if p.Type == "string" {
fmt.Fprintf(os.Stderr, path+":"+funct+" uses string arguments, but has no error return\n")
text += fmt.Sprintf("\t_p%d := uintptr(unsafe.Pointer(%s(%s)))\n", n, strconvfunc, p.Name)
args = append(args, fmt.Sprintf("C.uintptr_t(_p%d)", n))
n++
} else if m := regexp.MustCompile(`^\[\](.*)`).FindStringSubmatch(p.Type); m != nil {
// Convert slice into pointer, length.
// Have to be careful not to take address of &a[0] if len == 0:
// pass nil in that case.
text += fmt.Sprintf("\tvar _p%d *%s\n", n, m[1])
text += fmt.Sprintf("\tif len(%s) > 0 {\n\t\t_p%d = &%s[0]\n\t}\n", p.Name, n, p.Name)
args = append(args, fmt.Sprintf("C.uintptr_t(uintptr(unsafe.Pointer(_p%d)))", n))
n++
text += fmt.Sprintf("\tvar _p%d int\n", n)
text += fmt.Sprintf("\t_p%d = len(%s)\n", n, p.Name)
args = append(args, fmt.Sprintf("C.size_t(_p%d)", n))
n++
} else if p.Type == "int64" && endianness != "" {
if endianness == "big-endian" {
args = append(args, fmt.Sprintf("uintptr(%s>>32)", p.Name), fmt.Sprintf("uintptr(%s)", p.Name))
} else {
args = append(args, fmt.Sprintf("uintptr(%s)", p.Name), fmt.Sprintf("uintptr(%s>>32)", p.Name))
}
n++
} else if p.Type == "bool" {
text += fmt.Sprintf("\tvar _p%d uint32\n", n)
text += fmt.Sprintf("\tif %s {\n\t\t_p%d = 1\n\t} else {\n\t\t_p%d = 0\n\t}\n", p.Name, n, n)
args = append(args, fmt.Sprintf("_p%d", n))
} else if regexp.MustCompile(`^_`).FindStringSubmatch(p.Type) != nil {
args = append(args, fmt.Sprintf("C.uintptr_t(uintptr(%s))", p.Name))
} else if p.Type == "unsafe.Pointer" {
args = append(args, fmt.Sprintf("C.uintptr_t(uintptr(%s))", p.Name))
} else if p.Type == "int" {
if (argN == 2) && ((funct == "readlen") || (funct == "writelen")) {
args = append(args, fmt.Sprintf("C.size_t(%s)", p.Name))
} else if argN == 0 && funct == "fcntl" {
args = append(args, fmt.Sprintf("C.uintptr_t(%s)", p.Name))
} else if (argN == 2) && ((funct == "fcntl") || (funct == "FcntlInt")) {
args = append(args, fmt.Sprintf("C.uintptr_t(%s)", p.Name))
} else {
args = append(args, fmt.Sprintf("C.int(%s)", p.Name))
}
} else if p.Type == "int32" {
args = append(args, fmt.Sprintf("C.int(%s)", p.Name))
} else if p.Type == "int64" {
args = append(args, fmt.Sprintf("C.longlong(%s)", p.Name))
} else if p.Type == "uint32" {
args = append(args, fmt.Sprintf("C.uint(%s)", p.Name))
} else if p.Type == "uint64" {
args = append(args, fmt.Sprintf("C.ulonglong(%s)", p.Name))
} else if p.Type == "uintptr" {
args = append(args, fmt.Sprintf("C.uintptr_t(%s)", p.Name))
} else {
args = append(args, fmt.Sprintf("C.int(%s)", p.Name))
}
argN++
}
// Actual call.
arglist := strings.Join(args, ", ")
call := ""
if sysname == "exit" {
if errvar != "" {
call += "er :="
} else {
call += ""
}
} else if errvar != "" {
call += "r0,er :="
} else if retvar != "" {
call += "r0,_ :="
} else {
call += ""
}
if sysname == "select" {
// select is a keyword of Go. Its name is
// changed to c_select.
call += fmt.Sprintf("C.c_%s(%s)", sysname, arglist)
} else {
call += fmt.Sprintf("C.%s(%s)", sysname, arglist)
}
// Assign return values.
body := ""
for i := 0; i < len(out); i++ {
p := parseParam(out[i])
reg := ""
if p.Name == "err" {
reg = "e1"
} else {
reg = "r0"
}
if reg != "e1" {
body += fmt.Sprintf("\t%s = %s(%s)\n", p.Name, p.Type, reg)
}
}
// verify return
if sysname != "exit" && errvar != "" {
if regexp.MustCompile(`^uintptr`).FindStringSubmatch(cRettype) != nil {
body += "\tif (uintptr(r0) ==^uintptr(0) && er != nil) {\n"
body += fmt.Sprintf("\t\t%s = er\n", errvar)
body += "\t}\n"
} else {
body += "\tif (r0 ==-1 && er != nil) {\n"
body += fmt.Sprintf("\t\t%s = er\n", errvar)
body += "\t}\n"
}
} else if errvar != "" {
body += "\tif (er != nil) {\n"
body += fmt.Sprintf("\t\t%s = er\n", errvar)
body += "\t}\n"
}
text += fmt.Sprintf("\t%s\n", call)
text += body
text += "\treturn\n"
text += "}\n"
}
if err := s.Err(); err != nil {
fmt.Fprintf(os.Stderr, err.Error())
os.Exit(1)
}
file.Close()
}
imp := ""
if pack != "unix" {
imp = "import \"golang.org/x/sys/unix\"\n"
}
fmt.Printf(srcTemplate, cmdLine(), buildTags(), pack, cExtern, imp, text)
}
const srcTemplate = `// %s
// Code generated by the command above; see README.md. DO NOT EDIT.
// +build %s
package %s
%s
*/
import "C"
import (
"unsafe"
)
%s
%s
`

View File

@ -1,614 +0,0 @@
// Copyright 2019 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
/*
This program reads a file containing function prototypes
(like syscall_aix.go) and generates system call bodies.
The prototypes are marked by lines beginning with "//sys"
and read like func declarations if //sys is replaced by func, but:
* The parameter lists must give a name for each argument.
This includes return parameters.
* The parameter lists must give a type for each argument:
the (x, y, z int) shorthand is not allowed.
* If the return parameter is an error number, it must be named err.
* If the Go func name needs to be different from its libc name,
* or the function is not in libc, the name can be specified
* at the end, after the "=" sign, like
//sys getsockopt(s int, level int, name int, val uintptr, vallen *_Socklen) (err error) = libsocket.getsockopt
This program will generate three files and handle both gc and gccgo implementations:
- zsyscall_aix_ppc64.go: the common part of each implementation (error handler, pointer creation)
- zsyscall_aix_ppc64_gc.go: gc part with //go:cgo_import_dynamic and a call to syscall6
- zsyscall_aix_ppc64_gccgo.go: gccgo part with C function and conversion to C type.
The generated code looks like this
zsyscall_aix_ppc64.go
func asyscall(...) (n int, err error) {
// Pointer Creation
r1, e1 := callasyscall(...)
// Type Conversion
// Error Handler
return
}
zsyscall_aix_ppc64_gc.go
//go:cgo_import_dynamic libc_asyscall asyscall "libc.a/shr_64.o"
//go:linkname libc_asyscall libc_asyscall
var asyscall syscallFunc
func callasyscall(...) (r1 uintptr, e1 Errno) {
r1, _, e1 = syscall6(uintptr(unsafe.Pointer(&libc_asyscall)), "nb_args", ... )
return
}
zsyscall_aix_ppc64_gccgo.go
// int asyscall(...)
import "C"
func callasyscall(...) (r1 uintptr, e1 Errno) {
r1 = uintptr(C.asyscall(...))
e1 = syscall.GetErrno()
return
}
*/
package main
import (
"bufio"
"flag"
"fmt"
"io/ioutil"
"os"
"regexp"
"strings"
)
var (
b32 = flag.Bool("b32", false, "32bit big-endian")
l32 = flag.Bool("l32", false, "32bit little-endian")
aix = flag.Bool("aix", false, "aix")
tags = flag.String("tags", "", "build tags")
)
// cmdLine returns this program's command-line arguments
func cmdLine() string {
return "go run mksyscall_aix_ppc64.go " + strings.Join(os.Args[1:], " ")
}
// buildTags returns build tags
func buildTags() string {
return *tags
}
// Param is function parameter
type Param struct {
Name string
Type string
}
// usage prints the program usage
func usage() {
fmt.Fprintf(os.Stderr, "usage: go run mksyscall_aix_ppc64.go [-b32 | -l32] [-tags x,y] [file ...]\n")
os.Exit(1)
}
// parseParamList parses parameter list and returns a slice of parameters
func parseParamList(list string) []string {
list = strings.TrimSpace(list)
if list == "" {
return []string{}
}
return regexp.MustCompile(`\s*,\s*`).Split(list, -1)
}
// parseParam splits a parameter into name and type
func parseParam(p string) Param {
ps := regexp.MustCompile(`^(\S*) (\S*)$`).FindStringSubmatch(p)
if ps == nil {
fmt.Fprintf(os.Stderr, "malformed parameter: %s\n", p)
os.Exit(1)
}
return Param{ps[1], ps[2]}
}
func main() {
flag.Usage = usage
flag.Parse()
if len(flag.Args()) <= 0 {
fmt.Fprintf(os.Stderr, "no files to parse provided\n")
usage()
}
endianness := ""
if *b32 {
endianness = "big-endian"
} else if *l32 {
endianness = "little-endian"
}
pack := ""
// GCCGO
textgccgo := ""
cExtern := "/*\n#include <stdint.h>\n"
// GC
textgc := ""
dynimports := ""
linknames := ""
var vars []string
// COMMON
textcommon := ""
for _, path := range flag.Args() {
file, err := os.Open(path)
if err != nil {
fmt.Fprintf(os.Stderr, err.Error())
os.Exit(1)
}
s := bufio.NewScanner(file)
for s.Scan() {
t := s.Text()
t = strings.TrimSpace(t)
t = regexp.MustCompile(`\s+`).ReplaceAllString(t, ` `)
if p := regexp.MustCompile(`^package (\S+)$`).FindStringSubmatch(t); p != nil && pack == "" {
pack = p[1]
}
nonblock := regexp.MustCompile(`^\/\/sysnb `).FindStringSubmatch(t)
if regexp.MustCompile(`^\/\/sys `).FindStringSubmatch(t) == nil && nonblock == nil {
continue
}
// Line must be of the form
// func Open(path string, mode int, perm int) (fd int, err error)
// Split into name, in params, out params.
f := regexp.MustCompile(`^\/\/sys(nb)? (\w+)\(([^()]*)\)\s*(?:\(([^()]+)\))?\s*(?:=\s*(?:(\w*)\.)?(\w*))?$`).FindStringSubmatch(t)
if f == nil {
fmt.Fprintf(os.Stderr, "%s:%s\nmalformed //sys declaration\n", path, t)
os.Exit(1)
}
funct, inps, outps, modname, sysname := f[2], f[3], f[4], f[5], f[6]
// Split argument lists on comma.
in := parseParamList(inps)
out := parseParamList(outps)
inps = strings.Join(in, ", ")
outps = strings.Join(out, ", ")
if sysname == "" {
sysname = funct
}
onlyCommon := false
if funct == "readlen" || funct == "writelen" || funct == "FcntlInt" || funct == "FcntlFlock" {
// This function calls another syscall which is already implemented.
// Therefore, the gc and gccgo parts must not be generated.
onlyCommon = true
}
// Try in vain to keep people from editing this file.
// The theory is that they jump into the middle of the file
// without reading the header.
textcommon += "// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT\n\n"
if !onlyCommon {
textgccgo += "// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT\n\n"
textgc += "// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT\n\n"
}
// Check if value return, err return available
errvar := ""
rettype := ""
for _, param := range out {
p := parseParam(param)
if p.Type == "error" {
errvar = p.Name
} else {
rettype = p.Type
}
}
sysname = regexp.MustCompile(`([a-z])([A-Z])`).ReplaceAllString(sysname, `${1}_$2`)
sysname = strings.ToLower(sysname) // All libc functions are lowercase.
// GCCGO Prototype return type
cRettype := ""
if rettype == "unsafe.Pointer" {
cRettype = "uintptr_t"
} else if rettype == "uintptr" {
cRettype = "uintptr_t"
} else if regexp.MustCompile(`^_`).FindStringSubmatch(rettype) != nil {
cRettype = "uintptr_t"
} else if rettype == "int" {
cRettype = "int"
} else if rettype == "int32" {
cRettype = "int"
} else if rettype == "int64" {
cRettype = "long long"
} else if rettype == "uint32" {
cRettype = "unsigned int"
} else if rettype == "uint64" {
cRettype = "unsigned long long"
} else {
cRettype = "int"
}
if sysname == "exit" {
cRettype = "void"
}
// GCCGO Prototype arguments type
var cIn []string
for i, param := range in {
p := parseParam(param)
if regexp.MustCompile(`^\*`).FindStringSubmatch(p.Type) != nil {
cIn = append(cIn, "uintptr_t")
} else if p.Type == "string" {
cIn = append(cIn, "uintptr_t")
} else if regexp.MustCompile(`^\[\](.*)`).FindStringSubmatch(p.Type) != nil {
cIn = append(cIn, "uintptr_t", "size_t")
} else if p.Type == "unsafe.Pointer" {
cIn = append(cIn, "uintptr_t")
} else if p.Type == "uintptr" {
cIn = append(cIn, "uintptr_t")
} else if regexp.MustCompile(`^_`).FindStringSubmatch(p.Type) != nil {
cIn = append(cIn, "uintptr_t")
} else if p.Type == "int" {
if (i == 0 || i == 2) && funct == "fcntl" {
// These fcntl arguments need to be uintptr to be able to call FcntlInt and FcntlFlock
cIn = append(cIn, "uintptr_t")
} else {
cIn = append(cIn, "int")
}
} else if p.Type == "int32" {
cIn = append(cIn, "int")
} else if p.Type == "int64" {
cIn = append(cIn, "long long")
} else if p.Type == "uint32" {
cIn = append(cIn, "unsigned int")
} else if p.Type == "uint64" {
cIn = append(cIn, "unsigned long long")
} else {
cIn = append(cIn, "int")
}
}
if !onlyCommon {
// GCCGO Prototype Generation
// Imports of system calls from libc
if sysname == "select" {
// select is a keyword of Go. Its name is
// changed to c_select.
cExtern += "#define c_select select\n"
}
cExtern += fmt.Sprintf("%s %s", cRettype, sysname)
cIn := strings.Join(cIn, ", ")
cExtern += fmt.Sprintf("(%s);\n", cIn)
}
// GC Library name
if modname == "" {
modname = "libc.a/shr_64.o"
} else {
fmt.Fprintf(os.Stderr, "%s: only syscall using libc are available\n", funct)
os.Exit(1)
}
sysvarname := fmt.Sprintf("libc_%s", sysname)
if !onlyCommon {
// GC Runtime import of function to allow cross-platform builds.
dynimports += fmt.Sprintf("//go:cgo_import_dynamic %s %s \"%s\"\n", sysvarname, sysname, modname)
// GC Link symbol to proc address variable.
linknames += fmt.Sprintf("//go:linkname %s %s\n", sysvarname, sysvarname)
// GC Library proc address variable.
vars = append(vars, sysvarname)
}
strconvfunc := "BytePtrFromString"
strconvtype := "*byte"
// Go function header.
if outps != "" {
outps = fmt.Sprintf(" (%s)", outps)
}
if textcommon != "" {
textcommon += "\n"
}
textcommon += fmt.Sprintf("func %s(%s)%s {\n", funct, strings.Join(in, ", "), outps)
// Prepare arguments to call.
var argscommon []string // Arguments in the common part
var argscall []string // Arguments for call prototype
var argsgc []string // Arguments for gc call (with syscall6)
var argsgccgo []string // Arguments for gccgo call (with C.name_of_syscall)
n := 0
argN := 0
for _, param := range in {
p := parseParam(param)
if regexp.MustCompile(`^\*`).FindStringSubmatch(p.Type) != nil {
argscommon = append(argscommon, fmt.Sprintf("uintptr(unsafe.Pointer(%s))", p.Name))
argscall = append(argscall, fmt.Sprintf("%s uintptr", p.Name))
argsgc = append(argsgc, p.Name)
argsgccgo = append(argsgccgo, fmt.Sprintf("C.uintptr_t(%s)", p.Name))
} else if p.Type == "string" && errvar != "" {
textcommon += fmt.Sprintf("\tvar _p%d %s\n", n, strconvtype)
textcommon += fmt.Sprintf("\t_p%d, %s = %s(%s)\n", n, errvar, strconvfunc, p.Name)
textcommon += fmt.Sprintf("\tif %s != nil {\n\t\treturn\n\t}\n", errvar)
argscommon = append(argscommon, fmt.Sprintf("uintptr(unsafe.Pointer(_p%d))", n))
argscall = append(argscall, fmt.Sprintf("_p%d uintptr ", n))
argsgc = append(argsgc, fmt.Sprintf("_p%d", n))
argsgccgo = append(argsgccgo, fmt.Sprintf("C.uintptr_t(_p%d)", n))
n++
} else if p.Type == "string" {
fmt.Fprintf(os.Stderr, path+":"+funct+" uses string arguments, but has no error return\n")
textcommon += fmt.Sprintf("\tvar _p%d %s\n", n, strconvtype)
textcommon += fmt.Sprintf("\t_p%d, %s = %s(%s)\n", n, errvar, strconvfunc, p.Name)
textcommon += fmt.Sprintf("\tif %s != nil {\n\t\treturn\n\t}\n", errvar)
argscommon = append(argscommon, fmt.Sprintf("uintptr(unsafe.Pointer(_p%d))", n))
argscall = append(argscall, fmt.Sprintf("_p%d uintptr", n))
argsgc = append(argsgc, fmt.Sprintf("_p%d", n))
argsgccgo = append(argsgccgo, fmt.Sprintf("C.uintptr_t(_p%d)", n))
n++
} else if m := regexp.MustCompile(`^\[\](.*)`).FindStringSubmatch(p.Type); m != nil {
// Convert slice into pointer, length.
// Have to be careful not to take address of &a[0] if len == 0:
// pass nil in that case.
textcommon += fmt.Sprintf("\tvar _p%d *%s\n", n, m[1])
textcommon += fmt.Sprintf("\tif len(%s) > 0 {\n\t\t_p%d = &%s[0]\n\t}\n", p.Name, n, p.Name)
argscommon = append(argscommon, fmt.Sprintf("uintptr(unsafe.Pointer(_p%d))", n), fmt.Sprintf("len(%s)", p.Name))
argscall = append(argscall, fmt.Sprintf("_p%d uintptr", n), fmt.Sprintf("_lenp%d int", n))
argsgc = append(argsgc, fmt.Sprintf("_p%d", n), fmt.Sprintf("uintptr(_lenp%d)", n))
argsgccgo = append(argsgccgo, fmt.Sprintf("C.uintptr_t(_p%d)", n), fmt.Sprintf("C.size_t(_lenp%d)", n))
n++
} else if p.Type == "int64" && endianness != "" {
fmt.Fprintf(os.Stderr, path+":"+funct+" uses int64 with 32 bits mode. Case not yet implemented\n")
} else if p.Type == "bool" {
fmt.Fprintf(os.Stderr, path+":"+funct+" uses bool. Case not yet implemented\n")
} else if regexp.MustCompile(`^_`).FindStringSubmatch(p.Type) != nil || p.Type == "unsafe.Pointer" {
argscommon = append(argscommon, fmt.Sprintf("uintptr(%s)", p.Name))
argscall = append(argscall, fmt.Sprintf("%s uintptr", p.Name))
argsgc = append(argsgc, p.Name)
argsgccgo = append(argsgccgo, fmt.Sprintf("C.uintptr_t(%s)", p.Name))
} else if p.Type == "int" {
if (argN == 0 || argN == 2) && ((funct == "fcntl") || (funct == "FcntlInt") || (funct == "FcntlFlock")) {
// These fcntl arguments need to be uintptr to be able to call FcntlInt and FcntlFlock
argscommon = append(argscommon, fmt.Sprintf("uintptr(%s)", p.Name))
argscall = append(argscall, fmt.Sprintf("%s uintptr", p.Name))
argsgc = append(argsgc, p.Name)
argsgccgo = append(argsgccgo, fmt.Sprintf("C.uintptr_t(%s)", p.Name))
} else {
argscommon = append(argscommon, p.Name)
argscall = append(argscall, fmt.Sprintf("%s int", p.Name))
argsgc = append(argsgc, fmt.Sprintf("uintptr(%s)", p.Name))
argsgccgo = append(argsgccgo, fmt.Sprintf("C.int(%s)", p.Name))
}
} else if p.Type == "int32" {
argscommon = append(argscommon, p.Name)
argscall = append(argscall, fmt.Sprintf("%s int32", p.Name))
argsgc = append(argsgc, fmt.Sprintf("uintptr(%s)", p.Name))
argsgccgo = append(argsgccgo, fmt.Sprintf("C.int(%s)", p.Name))
} else if p.Type == "int64" {
argscommon = append(argscommon, p.Name)
argscall = append(argscall, fmt.Sprintf("%s int64", p.Name))
argsgc = append(argsgc, fmt.Sprintf("uintptr(%s)", p.Name))
argsgccgo = append(argsgccgo, fmt.Sprintf("C.longlong(%s)", p.Name))
} else if p.Type == "uint32" {
argscommon = append(argscommon, p.Name)
argscall = append(argscall, fmt.Sprintf("%s uint32", p.Name))
argsgc = append(argsgc, fmt.Sprintf("uintptr(%s)", p.Name))
argsgccgo = append(argsgccgo, fmt.Sprintf("C.uint(%s)", p.Name))
} else if p.Type == "uint64" {
argscommon = append(argscommon, p.Name)
argscall = append(argscall, fmt.Sprintf("%s uint64", p.Name))
argsgc = append(argsgc, fmt.Sprintf("uintptr(%s)", p.Name))
argsgccgo = append(argsgccgo, fmt.Sprintf("C.ulonglong(%s)", p.Name))
} else if p.Type == "uintptr" {
argscommon = append(argscommon, p.Name)
argscall = append(argscall, fmt.Sprintf("%s uintptr", p.Name))
argsgc = append(argsgc, p.Name)
argsgccgo = append(argsgccgo, fmt.Sprintf("C.uintptr_t(%s)", p.Name))
} else {
argscommon = append(argscommon, fmt.Sprintf("int(%s)", p.Name))
argscall = append(argscall, fmt.Sprintf("%s int", p.Name))
argsgc = append(argsgc, fmt.Sprintf("uintptr(%s)", p.Name))
argsgccgo = append(argsgccgo, fmt.Sprintf("C.int(%s)", p.Name))
}
argN++
}
nargs := len(argsgc)
// COMMON function generation
argscommonlist := strings.Join(argscommon, ", ")
callcommon := fmt.Sprintf("call%s(%s)", sysname, argscommonlist)
ret := []string{"_", "_"}
body := ""
doErrno := false
for i := 0; i < len(out); i++ {
p := parseParam(out[i])
reg := ""
if p.Name == "err" {
reg = "e1"
ret[1] = reg
doErrno = true
} else {
reg = "r0"
ret[0] = reg
}
if p.Type == "bool" {
reg = fmt.Sprintf("%s != 0", reg)
}
if reg != "e1" {
body += fmt.Sprintf("\t%s = %s(%s)\n", p.Name, p.Type, reg)
}
}
if ret[0] == "_" && ret[1] == "_" {
textcommon += fmt.Sprintf("\t%s\n", callcommon)
} else {
textcommon += fmt.Sprintf("\t%s, %s := %s\n", ret[0], ret[1], callcommon)
}
textcommon += body
if doErrno {
textcommon += "\tif e1 != 0 {\n"
textcommon += "\t\terr = errnoErr(e1)\n"
textcommon += "\t}\n"
}
textcommon += "\treturn\n"
textcommon += "}\n"
if onlyCommon {
continue
}
// CALL Prototype
callProto := fmt.Sprintf("func call%s(%s) (r1 uintptr, e1 Errno) {\n", sysname, strings.Join(argscall, ", "))
// GC function generation
asm := "syscall6"
if nonblock != nil {
asm = "rawSyscall6"
}
if len(argsgc) <= 6 {
for len(argsgc) < 6 {
argsgc = append(argsgc, "0")
}
} else {
fmt.Fprintf(os.Stderr, "%s: too many arguments to system call", funct)
os.Exit(1)
}
argsgclist := strings.Join(argsgc, ", ")
callgc := fmt.Sprintf("%s(uintptr(unsafe.Pointer(&%s)), %d, %s)", asm, sysvarname, nargs, argsgclist)
textgc += callProto
textgc += fmt.Sprintf("\tr1, _, e1 = %s\n", callgc)
textgc += "\treturn\n}\n"
// GCCGO function generation
argsgccgolist := strings.Join(argsgccgo, ", ")
var callgccgo string
if sysname == "select" {
// select is a keyword of Go. Its name is
// changed to c_select.
callgccgo = fmt.Sprintf("C.c_%s(%s)", sysname, argsgccgolist)
} else {
callgccgo = fmt.Sprintf("C.%s(%s)", sysname, argsgccgolist)
}
textgccgo += callProto
textgccgo += fmt.Sprintf("\tr1 = uintptr(%s)\n", callgccgo)
textgccgo += "\te1 = syscall.GetErrno()\n"
textgccgo += "\treturn\n}\n"
}
if err := s.Err(); err != nil {
fmt.Fprintf(os.Stderr, err.Error())
os.Exit(1)
}
file.Close()
}
imp := ""
if pack != "unix" {
imp = "import \"golang.org/x/sys/unix\"\n"
}
// Print zsyscall_aix_ppc64.go
err := ioutil.WriteFile("zsyscall_aix_ppc64.go",
[]byte(fmt.Sprintf(srcTemplate1, cmdLine(), buildTags(), pack, imp, textcommon)),
0644)
if err != nil {
fmt.Fprintf(os.Stderr, err.Error())
os.Exit(1)
}
// Print zsyscall_aix_ppc64_gc.go
vardecls := "\t" + strings.Join(vars, ",\n\t")
vardecls += " syscallFunc"
err = ioutil.WriteFile("zsyscall_aix_ppc64_gc.go",
[]byte(fmt.Sprintf(srcTemplate2, cmdLine(), buildTags(), pack, imp, dynimports, linknames, vardecls, textgc)),
0644)
if err != nil {
fmt.Fprintf(os.Stderr, err.Error())
os.Exit(1)
}
// Print zsyscall_aix_ppc64_gccgo.go
err = ioutil.WriteFile("zsyscall_aix_ppc64_gccgo.go",
[]byte(fmt.Sprintf(srcTemplate3, cmdLine(), buildTags(), pack, cExtern, imp, textgccgo)),
0644)
if err != nil {
fmt.Fprintf(os.Stderr, err.Error())
os.Exit(1)
}
}
const srcTemplate1 = `// %s
// Code generated by the command above; see README.md. DO NOT EDIT.
// +build %s
package %s
import (
"unsafe"
)
%s
%s
`
const srcTemplate2 = `// %s
// Code generated by the command above; see README.md. DO NOT EDIT.
// +build %s
// +build !gccgo
package %s
import (
"unsafe"
)
%s
%s
%s
type syscallFunc uintptr
var (
%s
)
// Implemented in runtime/syscall_aix.go.
func rawSyscall6(trap, nargs, a1, a2, a3, a4, a5, a6 uintptr) (r1, r2 uintptr, err Errno)
func syscall6(trap, nargs, a1, a2, a3, a4, a5, a6 uintptr) (r1, r2 uintptr, err Errno)
%s
`
const srcTemplate3 = `// %s
// Code generated by the command above; see README.md. DO NOT EDIT.
// +build %s
// +build gccgo
package %s
%s
*/
import "C"
import (
"syscall"
)
%s
%s
`

View File

@ -1,335 +0,0 @@
// Copyright 2019 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
/*
This program reads a file containing function prototypes
(like syscall_solaris.go) and generates system call bodies.
The prototypes are marked by lines beginning with "//sys"
and read like func declarations if //sys is replaced by func, but:
* The parameter lists must give a name for each argument.
This includes return parameters.
* The parameter lists must give a type for each argument:
the (x, y, z int) shorthand is not allowed.
* If the return parameter is an error number, it must be named err.
* If the Go func name needs to be different from its libc name,
* or the function is not in libc, the name can be specified
* at the end, after the "=" sign, like
//sys getsockopt(s int, level int, name int, val uintptr, vallen *_Socklen) (err error) = libsocket.getsockopt
*/
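// As a sketch of the generated code, a prototype like
// //sys Getpid() (pid int)
// produces, roughly,
// //go:cgo_import_dynamic libc_getpid getpid "libc.so"
// //go:linkname procGetpid libc_getpid
// a procGetpid entry in the var block, and a wrapper of the form
// func Getpid() (pid int) {
// r0, _, _ := sysvicall6(uintptr(unsafe.Pointer(&procGetpid)), 0, 0, 0, 0, 0, 0, 0)
// pid = int(r0)
// return
// }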
package main
import (
"bufio"
"flag"
"fmt"
"os"
"regexp"
"strings"
)
var (
b32 = flag.Bool("b32", false, "32bit big-endian")
l32 = flag.Bool("l32", false, "32bit little-endian")
tags = flag.String("tags", "", "build tags")
)
// cmdLine returns this program's command-line arguments
func cmdLine() string {
return "go run mksyscall_solaris.go " + strings.Join(os.Args[1:], " ")
}
// buildTags returns build tags
func buildTags() string {
return *tags
}
// Param is function parameter
type Param struct {
Name string
Type string
}
// usage prints the program usage
func usage() {
fmt.Fprintf(os.Stderr, "usage: go run mksyscall_solaris.go [-b32 | -l32] [-tags x,y] [file ...]\n")
os.Exit(1)
}
// parseParamList parses parameter list and returns a slice of parameters
func parseParamList(list string) []string {
list = strings.TrimSpace(list)
if list == "" {
return []string{}
}
return regexp.MustCompile(`\s*,\s*`).Split(list, -1)
}
// parseParam splits a parameter into name and type
func parseParam(p string) Param {
ps := regexp.MustCompile(`^(\S*) (\S*)$`).FindStringSubmatch(p)
if ps == nil {
fmt.Fprintf(os.Stderr, "malformed parameter: %s\n", p)
os.Exit(1)
}
return Param{ps[1], ps[2]}
}
func main() {
flag.Usage = usage
flag.Parse()
if len(flag.Args()) <= 0 {
fmt.Fprintf(os.Stderr, "no files to parse provided\n")
usage()
}
endianness := ""
if *b32 {
endianness = "big-endian"
} else if *l32 {
endianness = "little-endian"
}
pack := ""
text := ""
dynimports := ""
linknames := ""
var vars []string
for _, path := range flag.Args() {
file, err := os.Open(path)
if err != nil {
fmt.Fprintf(os.Stderr, err.Error())
os.Exit(1)
}
s := bufio.NewScanner(file)
for s.Scan() {
t := s.Text()
t = strings.TrimSpace(t)
t = regexp.MustCompile(`\s+`).ReplaceAllString(t, ` `)
if p := regexp.MustCompile(`^package (\S+)$`).FindStringSubmatch(t); p != nil && pack == "" {
pack = p[1]
}
nonblock := regexp.MustCompile(`^\/\/sysnb `).FindStringSubmatch(t)
if regexp.MustCompile(`^\/\/sys `).FindStringSubmatch(t) == nil && nonblock == nil {
continue
}
// Line must be of the form
// func Open(path string, mode int, perm int) (fd int, err error)
// Split into name, in params, out params.
f := regexp.MustCompile(`^\/\/sys(nb)? (\w+)\(([^()]*)\)\s*(?:\(([^()]+)\))?\s*(?:=\s*(?:(\w*)\.)?(\w*))?$`).FindStringSubmatch(t)
if f == nil {
fmt.Fprintf(os.Stderr, "%s:%s\nmalformed //sys declaration\n", path, t)
os.Exit(1)
}
funct, inps, outps, modname, sysname := f[2], f[3], f[4], f[5], f[6]
// Split argument lists on comma.
in := parseParamList(inps)
out := parseParamList(outps)
inps = strings.Join(in, ", ")
outps = strings.Join(out, ", ")
// Try in vain to keep people from editing this file.
// The theory is that they jump into the middle of the file
// without reading the header.
text += "// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT\n\n"
// Shared object (.so) file name.
if modname == "" {
modname = "libc"
}
// System call name.
if sysname == "" {
sysname = funct
}
// System call pointer variable name.
sysvarname := fmt.Sprintf("proc%s", sysname)
strconvfunc := "BytePtrFromString"
strconvtype := "*byte"
sysname = strings.ToLower(sysname) // All libc functions are lowercase.
// Runtime import of function to allow cross-platform builds.
dynimports += fmt.Sprintf("//go:cgo_import_dynamic libc_%s %s \"%s.so\"\n", sysname, sysname, modname)
// Link symbol to proc address variable.
linknames += fmt.Sprintf("//go:linkname %s libc_%s\n", sysvarname, sysname)
// Library proc address variable.
vars = append(vars, sysvarname)
// Go function header.
outlist := strings.Join(out, ", ")
if outlist != "" {
outlist = fmt.Sprintf(" (%s)", outlist)
}
if text != "" {
text += "\n"
}
text += fmt.Sprintf("func %s(%s)%s {\n", funct, strings.Join(in, ", "), outlist)
// Check if err return available
errvar := ""
for _, param := range out {
p := parseParam(param)
if p.Type == "error" {
errvar = p.Name
continue
}
}
// Prepare arguments to Syscall.
var args []string
n := 0
for _, param := range in {
p := parseParam(param)
if regexp.MustCompile(`^\*`).FindStringSubmatch(p.Type) != nil {
args = append(args, "uintptr(unsafe.Pointer("+p.Name+"))")
} else if p.Type == "string" && errvar != "" {
text += fmt.Sprintf("\tvar _p%d %s\n", n, strconvtype)
text += fmt.Sprintf("\t_p%d, %s = %s(%s)\n", n, errvar, strconvfunc, p.Name)
text += fmt.Sprintf("\tif %s != nil {\n\t\treturn\n\t}\n", errvar)
args = append(args, fmt.Sprintf("uintptr(unsafe.Pointer(_p%d))", n))
n++
} else if p.Type == "string" {
fmt.Fprintf(os.Stderr, path+":"+funct+" uses string arguments, but has no error return\n")
text += fmt.Sprintf("\tvar _p%d %s\n", n, strconvtype)
text += fmt.Sprintf("\t_p%d, _ = %s(%s)\n", n, strconvfunc, p.Name)
args = append(args, fmt.Sprintf("uintptr(unsafe.Pointer(_p%d))", n))
n++
} else if s := regexp.MustCompile(`^\[\](.*)`).FindStringSubmatch(p.Type); s != nil {
// Convert slice into pointer, length.
// Have to be careful not to take address of &a[0] if len == 0:
// pass nil in that case.
text += fmt.Sprintf("\tvar _p%d *%s\n", n, s[1])
text += fmt.Sprintf("\tif len(%s) > 0 {\n\t\t_p%d = &%s[0]\n\t}\n", p.Name, n, p.Name)
args = append(args, fmt.Sprintf("uintptr(unsafe.Pointer(_p%d))", n), fmt.Sprintf("uintptr(len(%s))", p.Name))
n++
} else if p.Type == "int64" && endianness != "" {
if endianness == "big-endian" {
args = append(args, fmt.Sprintf("uintptr(%s>>32)", p.Name), fmt.Sprintf("uintptr(%s)", p.Name))
} else {
args = append(args, fmt.Sprintf("uintptr(%s)", p.Name), fmt.Sprintf("uintptr(%s>>32)", p.Name))
}
} else if p.Type == "bool" {
text += fmt.Sprintf("\tvar _p%d uint32\n", n)
text += fmt.Sprintf("\tif %s {\n\t\t_p%d = 1\n\t} else {\n\t\t_p%d = 0\n\t}\n", p.Name, n, n)
args = append(args, fmt.Sprintf("uintptr(_p%d)", n))
n++
} else {
args = append(args, fmt.Sprintf("uintptr(%s)", p.Name))
}
}
nargs := len(args)
// Determine which form to use; pad args with zeros.
asm := "sysvicall6"
if nonblock != nil {
asm = "rawSysvicall6"
}
if len(args) <= 6 {
for len(args) < 6 {
args = append(args, "0")
}
} else {
fmt.Fprintf(os.Stderr, "%s: too many arguments to system call\n", path)
os.Exit(1)
}
// Actual call.
arglist := strings.Join(args, ", ")
call := fmt.Sprintf("%s(uintptr(unsafe.Pointer(&%s)), %d, %s)", asm, sysvarname, nargs, arglist)
// Assign return values.
body := ""
ret := []string{"_", "_", "_"}
doErrno := false
for i := 0; i < len(out); i++ {
p := parseParam(out[i])
reg := ""
if p.Name == "err" {
reg = "e1"
ret[2] = reg
doErrno = true
} else {
reg = fmt.Sprintf("r%d", i)
ret[i] = reg
}
if p.Type == "bool" {
reg = fmt.Sprintf("%d != 0", reg)
}
if p.Type == "int64" && endianness != "" {
// 64-bit number in r1:r0 or r0:r1.
if i+2 > len(out) {
fmt.Fprintf(os.Stderr, "%s: not enough registers for int64 return\n", path)
os.Exit(1)
}
if endianness == "big-endian" {
reg = fmt.Sprintf("int64(r%d)<<32 | int64(r%d)", i, i+1)
} else {
reg = fmt.Sprintf("int64(r%d)<<32 | int64(r%d)", i+1, i)
}
ret[i] = fmt.Sprintf("r%d", i)
ret[i+1] = fmt.Sprintf("r%d", i+1)
}
if reg != "e1" {
body += fmt.Sprintf("\t%s = %s(%s)\n", p.Name, p.Type, reg)
}
}
if ret[0] == "_" && ret[1] == "_" && ret[2] == "_" {
text += fmt.Sprintf("\t%s\n", call)
} else {
text += fmt.Sprintf("\t%s, %s, %s := %s\n", ret[0], ret[1], ret[2], call)
}
text += body
if doErrno {
text += "\tif e1 != 0 {\n"
text += "\t\terr = e1\n"
text += "\t}\n"
}
text += "\treturn\n"
text += "}\n"
}
if err := s.Err(); err != nil {
fmt.Fprintf(os.Stderr, err.Error())
os.Exit(1)
}
file.Close()
}
imp := ""
if pack != "unix" {
imp = "import \"golang.org/x/sys/unix\"\n"
}
vardecls := "\t" + strings.Join(vars, ",\n\t")
vardecls += " syscallFunc"
fmt.Printf(srcTemplate, cmdLine(), buildTags(), pack, imp, dynimports, linknames, vardecls, text)
}
const srcTemplate = `// %s
// Code generated by the command above; see README.md. DO NOT EDIT.
// +build %s
package %s
import (
"syscall"
"unsafe"
)
%s
%s
%s
var (
%s
)
%s
`
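To make the template concrete, below is a rough sketch of the wrapper this generator would emit for a prototype such as //sys Getcwd(buf []byte) (n int, err error). The libc name, proc variable, and pragma lines are assumptions derived from the generator logic above, not output copied from this commit.

//go:cgo_import_dynamic libc_getcwd getcwd "libc.so"
//go:linkname procGetcwd libc_getcwd

var procGetcwd syscallFunc

func Getcwd(buf []byte) (n int, err error) {
	var _p0 *byte
	if len(buf) > 0 {
		_p0 = &buf[0]
	}
	// The slice becomes a pointer/length pair; unused slots are padded with zeros.
	r0, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procGetcwd)), 2, uintptr(unsafe.Pointer(_p0)), uintptr(len(buf)), 0, 0, 0, 0)
	n = int(r0)
	if e1 != 0 {
		err = e1
	}
	return
}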


@@ -1,190 +0,0 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
// Generate system call table for DragonFly, NetBSD,
// FreeBSD, OpenBSD or Darwin from master list
// (for example, /usr/src/sys/kern/syscalls.master or
// sys/syscall.h).
package main
import (
"bufio"
"fmt"
"io"
"io/ioutil"
"net/http"
"os"
"regexp"
"strings"
)
var (
goos, goarch string
)
// cmdLine returns this program's command-line arguments
func cmdLine() string {
return "go run mksysnum.go " + strings.Join(os.Args[1:], " ")
}
// buildTags returns build tags
func buildTags() string {
return fmt.Sprintf("%s,%s", goarch, goos)
}
func checkErr(err error) {
if err != nil {
fmt.Fprintf(os.Stderr, "%v\n", err)
os.Exit(1)
}
}
// source string and substring slice for regexp
type re struct {
str string // source string
sub []string // matched sub-string
}
// Match performs regular expression match
func (r *re) Match(exp string) bool {
r.sub = regexp.MustCompile(exp).FindStringSubmatch(r.str)
if r.sub != nil {
return true
}
return false
}
// fetchFile fetches a text file from URL
func fetchFile(URL string) io.Reader {
resp, err := http.Get(URL)
checkErr(err)
defer resp.Body.Close()
body, err := ioutil.ReadAll(resp.Body)
checkErr(err)
return strings.NewReader(string(body))
}
// readFile reads a text file from path
func readFile(path string) io.Reader {
file, err := os.Open(path)
checkErr(err)
return file
}
func format(name, num, proto string) string {
name = strings.ToUpper(name)
// There are multiple entries for enosys and nosys, so comment them out.
nm := re{str: name}
if nm.Match(`^SYS_E?NOSYS$`) {
name = fmt.Sprintf("// %s", name)
}
if name == `SYS_SYS_EXIT` {
name = `SYS_EXIT`
}
return fmt.Sprintf(" %s = %s; // %s\n", name, num, proto)
}
func main() {
// Get the OS (using GOOS_TARGET if it exists)
goos = os.Getenv("GOOS_TARGET")
if goos == "" {
goos = os.Getenv("GOOS")
}
// Get the architecture (using GOARCH_TARGET if it exists)
goarch = os.Getenv("GOARCH_TARGET")
if goarch == "" {
goarch = os.Getenv("GOARCH")
}
// Check if GOOS and GOARCH environment variables are defined
if goarch == "" || goos == "" {
fmt.Fprintf(os.Stderr, "GOARCH or GOOS not defined in environment\n")
os.Exit(1)
}
file := strings.TrimSpace(os.Args[1])
var syscalls io.Reader
if strings.HasPrefix(file, "https://") || strings.HasPrefix(file, "http://") {
// Download syscalls.master file
syscalls = fetchFile(file)
} else {
syscalls = readFile(file)
}
var text, line string
s := bufio.NewScanner(syscalls)
for s.Scan() {
t := re{str: line}
if t.Match(`^(.*)\\$`) {
// Handle continuation
line = t.sub[1]
line += strings.TrimLeft(s.Text(), " \t")
} else {
// New line
line = s.Text()
}
t = re{str: line}
if t.Match(`\\$`) {
continue
}
t = re{str: line}
switch goos {
case "dragonfly":
if t.Match(`^([0-9]+)\s+STD\s+({ \S+\s+(\w+).*)$`) {
num, proto := t.sub[1], t.sub[2]
name := fmt.Sprintf("SYS_%s", t.sub[3])
text += format(name, num, proto)
}
case "freebsd":
if t.Match(`^([0-9]+)\s+\S+\s+(?:NO)?STD\s+({ \S+\s+(\w+).*)$`) {
num, proto := t.sub[1], t.sub[2]
name := fmt.Sprintf("SYS_%s", t.sub[3])
text += format(name, num, proto)
}
case "openbsd":
if t.Match(`^([0-9]+)\s+STD\s+(NOLOCK\s+)?({ \S+\s+\*?(\w+).*)$`) {
num, proto, name := t.sub[1], t.sub[3], t.sub[4]
text += format(name, num, proto)
}
case "netbsd":
if t.Match(`^([0-9]+)\s+((STD)|(NOERR))\s+(RUMP\s+)?({\s+\S+\s*\*?\s*\|(\S+)\|(\S*)\|(\w+).*\s+})(\s+(\S+))?$`) {
num, proto, compat := t.sub[1], t.sub[6], t.sub[8]
name := t.sub[7] + "_" + t.sub[9]
if t.sub[11] != "" {
name = t.sub[7] + "_" + t.sub[11]
}
name = strings.ToUpper(name)
if compat == "" || compat == "13" || compat == "30" || compat == "50" {
text += fmt.Sprintf(" %s = %s; // %s\n", name, num, proto)
}
}
case "darwin":
if t.Match(`^#define\s+SYS_(\w+)\s+([0-9]+)`) {
name, num := t.sub[1], t.sub[2]
name = strings.ToUpper(name)
text += fmt.Sprintf(" SYS_%s = %s;\n", name, num)
}
default:
fmt.Fprintf(os.Stderr, "unrecognized GOOS=%s\n", goos)
os.Exit(1)
}
}
err := s.Err()
checkErr(err)
fmt.Printf(template, cmdLine(), buildTags(), text)
}
const template = `// %s
// Code generated by the command above; see README.md. DO NOT EDIT.
// +build %s
package unix
const(
%s)`
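As an illustration of the input/output pair this program handles, here is an assumed sample entry from FreeBSD's syscalls.master (not part of this commit) and the constant the freebsd case would emit for it:

// assumed input line from syscalls.master:
//   4	AUE_NULL	STD	{ ssize_t write(int fd, const void *buf, size_t nbyte); }
// constant written into the generated file:
SYS_WRITE = 4; // { ssize_t write(int fd, const void *buf, size_t nbyte); }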


@@ -1,236 +0,0 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
// +build aix
/*
Input to cgo -godefs. See also mkerrors.sh and mkall.sh
*/
// +godefs map struct_in_addr [4]byte /* in_addr */
// +godefs map struct_in6_addr [16]byte /* in6_addr */
package unix
/*
#include <sys/types.h>
#include <sys/time.h>
#include <sys/limits.h>
#include <sys/un.h>
#include <utime.h>
#include <sys/utsname.h>
#include <sys/poll.h>
#include <sys/resource.h>
#include <sys/stat.h>
#include <sys/statfs.h>
#include <sys/termio.h>
#include <sys/ioctl.h>
#include <termios.h>
#include <net/if.h>
#include <net/if_dl.h>
#include <netinet/in.h>
#include <netinet/icmp6.h>
#include <dirent.h>
#include <fcntl.h>
enum {
sizeofPtr = sizeof(void*),
};
union sockaddr_all {
struct sockaddr s1; // this one gets used for fields
struct sockaddr_in s2; // these pad it out
struct sockaddr_in6 s3;
struct sockaddr_un s4;
struct sockaddr_dl s5;
};
struct sockaddr_any {
struct sockaddr addr;
char pad[sizeof(union sockaddr_all) - sizeof(struct sockaddr)];
};
*/
import "C"
// Machine characteristics
const (
SizeofPtr = C.sizeofPtr
SizeofShort = C.sizeof_short
SizeofInt = C.sizeof_int
SizeofLong = C.sizeof_long
SizeofLongLong = C.sizeof_longlong
PathMax = C.PATH_MAX
)
// Basic types
type (
_C_short C.short
_C_int C.int
_C_long C.long
_C_long_long C.longlong
)
type off64 C.off64_t
type off C.off_t
type Mode_t C.mode_t
// Time
type Timespec C.struct_timespec
type StTimespec C.struct_st_timespec
type Timeval C.struct_timeval
type Timeval32 C.struct_timeval32
type Timex C.struct_timex
type Time_t C.time_t
type Tms C.struct_tms
type Utimbuf C.struct_utimbuf
type Timezone C.struct_timezone
// Processes
type Rusage C.struct_rusage
type Rlimit C.struct_rlimit64
type Pid_t C.pid_t
type _Gid_t C.gid_t
type dev_t C.dev_t
// Files
type Stat_t C.struct_stat
type StatxTimestamp C.struct_statx_timestamp
type Statx_t C.struct_statx
type Dirent C.struct_dirent
// Sockets
type RawSockaddrInet4 C.struct_sockaddr_in
type RawSockaddrInet6 C.struct_sockaddr_in6
type RawSockaddrUnix C.struct_sockaddr_un
type RawSockaddr C.struct_sockaddr
type RawSockaddrAny C.struct_sockaddr_any
type _Socklen C.socklen_t
type Cmsghdr C.struct_cmsghdr
type ICMPv6Filter C.struct_icmp6_filter
type Iovec C.struct_iovec
type IPMreq C.struct_ip_mreq
type IPv6Mreq C.struct_ipv6_mreq
type IPv6MTUInfo C.struct_ip6_mtuinfo
type Linger C.struct_linger
type Msghdr C.struct_msghdr
const (
SizeofSockaddrInet4 = C.sizeof_struct_sockaddr_in
SizeofSockaddrInet6 = C.sizeof_struct_sockaddr_in6
SizeofSockaddrAny = C.sizeof_struct_sockaddr_any
SizeofSockaddrUnix = C.sizeof_struct_sockaddr_un
SizeofLinger = C.sizeof_struct_linger
SizeofIPMreq = C.sizeof_struct_ip_mreq
SizeofIPv6Mreq = C.sizeof_struct_ipv6_mreq
SizeofIPv6MTUInfo = C.sizeof_struct_ip6_mtuinfo
SizeofMsghdr = C.sizeof_struct_msghdr
SizeofCmsghdr = C.sizeof_struct_cmsghdr
SizeofICMPv6Filter = C.sizeof_struct_icmp6_filter
)
// Routing and interface messages
const (
SizeofIfMsghdr = C.sizeof_struct_if_msghdr
)
type IfMsgHdr C.struct_if_msghdr
// Misc
type FdSet C.fd_set
type Utsname C.struct_utsname
type Ustat_t C.struct_ustat
type Sigset_t C.sigset_t
const (
AT_FDCWD = C.AT_FDCWD
AT_REMOVEDIR = C.AT_REMOVEDIR
AT_SYMLINK_NOFOLLOW = C.AT_SYMLINK_NOFOLLOW
)
// Terminal handling
type Termios C.struct_termios
type Termio C.struct_termio
type Winsize C.struct_winsize
//poll
type PollFd struct {
Fd int32
Events uint16
Revents uint16
}
const (
POLLERR = C.POLLERR
POLLHUP = C.POLLHUP
POLLIN = C.POLLIN
POLLNVAL = C.POLLNVAL
POLLOUT = C.POLLOUT
POLLPRI = C.POLLPRI
POLLRDBAND = C.POLLRDBAND
POLLRDNORM = C.POLLRDNORM
POLLWRBAND = C.POLLWRBAND
POLLWRNORM = C.POLLWRNORM
)
//flock_t
type Flock_t C.struct_flock64
// Statfs
type Fsid_t C.struct_fsid_t
type Fsid64_t C.struct_fsid64_t
type Statfs_t C.struct_statfs
const RNDGETENTCNT = 0x80045200
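Files like this are inputs, not final code: running go tool cgo -godefs resolves each C. reference into a plain Go declaration for the target platform. As a hedged sketch (field widths are platform-dependent and assumed here, not taken from this commit), the Timespec alias above would come out roughly as:

type Timespec struct {
	Sec  int64
	Nsec int64
}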


@@ -1,283 +0,0 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
/*
Input to cgo -godefs. See README.md
*/
// +godefs map struct_in_addr [4]byte /* in_addr */
// +godefs map struct_in6_addr [16]byte /* in6_addr */
package unix
/*
#define __DARWIN_UNIX03 0
#define KERNEL
#define _DARWIN_USE_64_BIT_INODE
#include <dirent.h>
#include <fcntl.h>
#include <poll.h>
#include <signal.h>
#include <termios.h>
#include <unistd.h>
#include <mach/mach.h>
#include <mach/message.h>
#include <sys/event.h>
#include <sys/mman.h>
#include <sys/mount.h>
#include <sys/param.h>
#include <sys/ptrace.h>
#include <sys/resource.h>
#include <sys/select.h>
#include <sys/signal.h>
#include <sys/socket.h>
#include <sys/stat.h>
#include <sys/time.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <sys/un.h>
#include <sys/utsname.h>
#include <sys/wait.h>
#include <net/bpf.h>
#include <net/if.h>
#include <net/if_dl.h>
#include <net/if_var.h>
#include <net/route.h>
#include <netinet/in.h>
#include <netinet/icmp6.h>
#include <netinet/tcp.h>
enum {
sizeofPtr = sizeof(void*),
};
union sockaddr_all {
struct sockaddr s1; // this one gets used for fields
struct sockaddr_in s2; // these pad it out
struct sockaddr_in6 s3;
struct sockaddr_un s4;
struct sockaddr_dl s5;
};
struct sockaddr_any {
struct sockaddr addr;
char pad[sizeof(union sockaddr_all) - sizeof(struct sockaddr)];
};
*/
import "C"
// Machine characteristics
const (
SizeofPtr = C.sizeofPtr
SizeofShort = C.sizeof_short
SizeofInt = C.sizeof_int
SizeofLong = C.sizeof_long
SizeofLongLong = C.sizeof_longlong
)
// Basic types
type (
_C_short C.short
_C_int C.int
_C_long C.long
_C_long_long C.longlong
)
// Time
type Timespec C.struct_timespec
type Timeval C.struct_timeval
type Timeval32 C.struct_timeval32
// Processes
type Rusage C.struct_rusage
type Rlimit C.struct_rlimit
type _Gid_t C.gid_t
// Files
type Stat_t C.struct_stat64
type Statfs_t C.struct_statfs64
type Flock_t C.struct_flock
type Fstore_t C.struct_fstore
type Radvisory_t C.struct_radvisory
type Fbootstraptransfer_t C.struct_fbootstraptransfer
type Log2phys_t C.struct_log2phys
type Fsid C.struct_fsid
type Dirent C.struct_dirent
// Sockets
type RawSockaddrInet4 C.struct_sockaddr_in
type RawSockaddrInet6 C.struct_sockaddr_in6
type RawSockaddrUnix C.struct_sockaddr_un
type RawSockaddrDatalink C.struct_sockaddr_dl
type RawSockaddr C.struct_sockaddr
type RawSockaddrAny C.struct_sockaddr_any
type _Socklen C.socklen_t
type Linger C.struct_linger
type Iovec C.struct_iovec
type IPMreq C.struct_ip_mreq
type IPv6Mreq C.struct_ipv6_mreq
type Msghdr C.struct_msghdr
type Cmsghdr C.struct_cmsghdr
type Inet4Pktinfo C.struct_in_pktinfo
type Inet6Pktinfo C.struct_in6_pktinfo
type IPv6MTUInfo C.struct_ip6_mtuinfo
type ICMPv6Filter C.struct_icmp6_filter
const (
SizeofSockaddrInet4 = C.sizeof_struct_sockaddr_in
SizeofSockaddrInet6 = C.sizeof_struct_sockaddr_in6
SizeofSockaddrAny = C.sizeof_struct_sockaddr_any
SizeofSockaddrUnix = C.sizeof_struct_sockaddr_un
SizeofSockaddrDatalink = C.sizeof_struct_sockaddr_dl
SizeofLinger = C.sizeof_struct_linger
SizeofIPMreq = C.sizeof_struct_ip_mreq
SizeofIPv6Mreq = C.sizeof_struct_ipv6_mreq
SizeofMsghdr = C.sizeof_struct_msghdr
SizeofCmsghdr = C.sizeof_struct_cmsghdr
SizeofInet4Pktinfo = C.sizeof_struct_in_pktinfo
SizeofInet6Pktinfo = C.sizeof_struct_in6_pktinfo
SizeofIPv6MTUInfo = C.sizeof_struct_ip6_mtuinfo
SizeofICMPv6Filter = C.sizeof_struct_icmp6_filter
)
// Ptrace requests
const (
PTRACE_TRACEME = C.PT_TRACE_ME
PTRACE_CONT = C.PT_CONTINUE
PTRACE_KILL = C.PT_KILL
)
// Events (kqueue, kevent)
type Kevent_t C.struct_kevent
// Select
type FdSet C.fd_set
// Routing and interface messages
const (
SizeofIfMsghdr = C.sizeof_struct_if_msghdr
SizeofIfData = C.sizeof_struct_if_data
SizeofIfaMsghdr = C.sizeof_struct_ifa_msghdr
SizeofIfmaMsghdr = C.sizeof_struct_ifma_msghdr
SizeofIfmaMsghdr2 = C.sizeof_struct_ifma_msghdr2
SizeofRtMsghdr = C.sizeof_struct_rt_msghdr
SizeofRtMetrics = C.sizeof_struct_rt_metrics
)
type IfMsghdr C.struct_if_msghdr
type IfData C.struct_if_data
type IfaMsghdr C.struct_ifa_msghdr
type IfmaMsghdr C.struct_ifma_msghdr
type IfmaMsghdr2 C.struct_ifma_msghdr2
type RtMsghdr C.struct_rt_msghdr
type RtMetrics C.struct_rt_metrics
// Berkeley packet filter
const (
SizeofBpfVersion = C.sizeof_struct_bpf_version
SizeofBpfStat = C.sizeof_struct_bpf_stat
SizeofBpfProgram = C.sizeof_struct_bpf_program
SizeofBpfInsn = C.sizeof_struct_bpf_insn
SizeofBpfHdr = C.sizeof_struct_bpf_hdr
)
type BpfVersion C.struct_bpf_version
type BpfStat C.struct_bpf_stat
type BpfProgram C.struct_bpf_program
type BpfInsn C.struct_bpf_insn
type BpfHdr C.struct_bpf_hdr
// Terminal handling
type Termios C.struct_termios
type Winsize C.struct_winsize
// fchmodat-like syscalls.
const (
AT_FDCWD = C.AT_FDCWD
AT_REMOVEDIR = C.AT_REMOVEDIR
AT_SYMLINK_FOLLOW = C.AT_SYMLINK_FOLLOW
AT_SYMLINK_NOFOLLOW = C.AT_SYMLINK_NOFOLLOW
)
// poll
type PollFd C.struct_pollfd
const (
POLLERR = C.POLLERR
POLLHUP = C.POLLHUP
POLLIN = C.POLLIN
POLLNVAL = C.POLLNVAL
POLLOUT = C.POLLOUT
POLLPRI = C.POLLPRI
POLLRDBAND = C.POLLRDBAND
POLLRDNORM = C.POLLRDNORM
POLLWRBAND = C.POLLWRBAND
POLLWRNORM = C.POLLWRNORM
)
// uname
type Utsname C.struct_utsname
// Clockinfo
const SizeofClockinfo = C.sizeof_struct_clockinfo
type Clockinfo C.struct_clockinfo


@@ -1,263 +0,0 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
/*
Input to cgo -godefs. See README.md
*/
// +godefs map struct_in_addr [4]byte /* in_addr */
// +godefs map struct_in6_addr [16]byte /* in6_addr */
package unix
/*
#define KERNEL
#include <dirent.h>
#include <fcntl.h>
#include <poll.h>
#include <signal.h>
#include <termios.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/event.h>
#include <sys/mman.h>
#include <sys/mount.h>
#include <sys/param.h>
#include <sys/ptrace.h>
#include <sys/resource.h>
#include <sys/select.h>
#include <sys/signal.h>
#include <sys/socket.h>
#include <sys/stat.h>
#include <sys/time.h>
#include <sys/types.h>
#include <sys/un.h>
#include <sys/utsname.h>
#include <sys/wait.h>
#include <net/bpf.h>
#include <net/if.h>
#include <net/if_dl.h>
#include <net/route.h>
#include <netinet/in.h>
#include <netinet/icmp6.h>
#include <netinet/tcp.h>
enum {
sizeofPtr = sizeof(void*),
};
union sockaddr_all {
struct sockaddr s1; // this one gets used for fields
struct sockaddr_in s2; // these pad it out
struct sockaddr_in6 s3;
struct sockaddr_un s4;
struct sockaddr_dl s5;
};
struct sockaddr_any {
struct sockaddr addr;
char pad[sizeof(union sockaddr_all) - sizeof(struct sockaddr)];
};
*/
import "C"
// Machine characteristics
const (
SizeofPtr = C.sizeofPtr
SizeofShort = C.sizeof_short
SizeofInt = C.sizeof_int
SizeofLong = C.sizeof_long
SizeofLongLong = C.sizeof_longlong
)
// Basic types
type (
_C_short C.short
_C_int C.int
_C_long C.long
_C_long_long C.longlong
)
// Time
type Timespec C.struct_timespec
type Timeval C.struct_timeval
// Processes
type Rusage C.struct_rusage
type Rlimit C.struct_rlimit
type _Gid_t C.gid_t
// Files
type Stat_t C.struct_stat
type Statfs_t C.struct_statfs
type Flock_t C.struct_flock
type Dirent C.struct_dirent
type Fsid C.struct_fsid
// File system limits
const (
PathMax = C.PATH_MAX
)
// Sockets
type RawSockaddrInet4 C.struct_sockaddr_in
type RawSockaddrInet6 C.struct_sockaddr_in6
type RawSockaddrUnix C.struct_sockaddr_un
type RawSockaddrDatalink C.struct_sockaddr_dl
type RawSockaddr C.struct_sockaddr
type RawSockaddrAny C.struct_sockaddr_any
type _Socklen C.socklen_t
type Linger C.struct_linger
type Iovec C.struct_iovec
type IPMreq C.struct_ip_mreq
type IPv6Mreq C.struct_ipv6_mreq
type Msghdr C.struct_msghdr
type Cmsghdr C.struct_cmsghdr
type Inet6Pktinfo C.struct_in6_pktinfo
type IPv6MTUInfo C.struct_ip6_mtuinfo
type ICMPv6Filter C.struct_icmp6_filter
const (
SizeofSockaddrInet4 = C.sizeof_struct_sockaddr_in
SizeofSockaddrInet6 = C.sizeof_struct_sockaddr_in6
SizeofSockaddrAny = C.sizeof_struct_sockaddr_any
SizeofSockaddrUnix = C.sizeof_struct_sockaddr_un
SizeofSockaddrDatalink = C.sizeof_struct_sockaddr_dl
SizeofLinger = C.sizeof_struct_linger
SizeofIPMreq = C.sizeof_struct_ip_mreq
SizeofIPv6Mreq = C.sizeof_struct_ipv6_mreq
SizeofMsghdr = C.sizeof_struct_msghdr
SizeofCmsghdr = C.sizeof_struct_cmsghdr
SizeofInet6Pktinfo = C.sizeof_struct_in6_pktinfo
SizeofIPv6MTUInfo = C.sizeof_struct_ip6_mtuinfo
SizeofICMPv6Filter = C.sizeof_struct_icmp6_filter
)
// Ptrace requests
const (
PTRACE_TRACEME = C.PT_TRACE_ME
PTRACE_CONT = C.PT_CONTINUE
PTRACE_KILL = C.PT_KILL
)
// Events (kqueue, kevent)
type Kevent_t C.struct_kevent
// Select
type FdSet C.fd_set
// Routing and interface messages
const (
SizeofIfMsghdr = C.sizeof_struct_if_msghdr
SizeofIfData = C.sizeof_struct_if_data
SizeofIfaMsghdr = C.sizeof_struct_ifa_msghdr
SizeofIfmaMsghdr = C.sizeof_struct_ifma_msghdr
SizeofIfAnnounceMsghdr = C.sizeof_struct_if_announcemsghdr
SizeofRtMsghdr = C.sizeof_struct_rt_msghdr
SizeofRtMetrics = C.sizeof_struct_rt_metrics
)
type IfMsghdr C.struct_if_msghdr
type IfData C.struct_if_data
type IfaMsghdr C.struct_ifa_msghdr
type IfmaMsghdr C.struct_ifma_msghdr
type IfAnnounceMsghdr C.struct_if_announcemsghdr
type RtMsghdr C.struct_rt_msghdr
type RtMetrics C.struct_rt_metrics
// Berkeley packet filter
const (
SizeofBpfVersion = C.sizeof_struct_bpf_version
SizeofBpfStat = C.sizeof_struct_bpf_stat
SizeofBpfProgram = C.sizeof_struct_bpf_program
SizeofBpfInsn = C.sizeof_struct_bpf_insn
SizeofBpfHdr = C.sizeof_struct_bpf_hdr
)
type BpfVersion C.struct_bpf_version
type BpfStat C.struct_bpf_stat
type BpfProgram C.struct_bpf_program
type BpfInsn C.struct_bpf_insn
type BpfHdr C.struct_bpf_hdr
// Terminal handling
type Termios C.struct_termios
type Winsize C.struct_winsize
// fchmodat-like syscalls.
const (
AT_FDCWD = C.AT_FDCWD
AT_SYMLINK_NOFOLLOW = C.AT_SYMLINK_NOFOLLOW
)
// poll
type PollFd C.struct_pollfd
const (
POLLERR = C.POLLERR
POLLHUP = C.POLLHUP
POLLIN = C.POLLIN
POLLNVAL = C.POLLNVAL
POLLOUT = C.POLLOUT
POLLPRI = C.POLLPRI
POLLRDBAND = C.POLLRDBAND
POLLRDNORM = C.POLLRDNORM
POLLWRBAND = C.POLLWRBAND
POLLWRNORM = C.POLLWRNORM
)
// Uname
type Utsname C.struct_utsname


@@ -1,356 +0,0 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
/*
Input to cgo -godefs. See README.md
*/
// +godefs map struct_in_addr [4]byte /* in_addr */
// +godefs map struct_in6_addr [16]byte /* in6_addr */
package unix
/*
#define _WANT_FREEBSD11_STAT 1
#define _WANT_FREEBSD11_STATFS 1
#define _WANT_FREEBSD11_DIRENT 1
#define _WANT_FREEBSD11_KEVENT 1
#include <dirent.h>
#include <fcntl.h>
#include <poll.h>
#include <signal.h>
#include <termios.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/capsicum.h>
#include <sys/event.h>
#include <sys/mman.h>
#include <sys/mount.h>
#include <sys/param.h>
#include <sys/ptrace.h>
#include <sys/resource.h>
#include <sys/select.h>
#include <sys/signal.h>
#include <sys/socket.h>
#include <sys/stat.h>
#include <sys/time.h>
#include <sys/types.h>
#include <sys/un.h>
#include <sys/utsname.h>
#include <sys/wait.h>
#include <net/bpf.h>
#include <net/if.h>
#include <net/if_dl.h>
#include <net/route.h>
#include <netinet/in.h>
#include <netinet/icmp6.h>
#include <netinet/tcp.h>
enum {
sizeofPtr = sizeof(void*),
};
union sockaddr_all {
struct sockaddr s1; // this one gets used for fields
struct sockaddr_in s2; // these pad it out
struct sockaddr_in6 s3;
struct sockaddr_un s4;
struct sockaddr_dl s5;
};
struct sockaddr_any {
struct sockaddr addr;
char pad[sizeof(union sockaddr_all) - sizeof(struct sockaddr)];
};
// This structure is a duplicate of if_data on FreeBSD 8-STABLE.
// See /usr/include/net/if.h.
struct if_data8 {
u_char ifi_type;
u_char ifi_physical;
u_char ifi_addrlen;
u_char ifi_hdrlen;
u_char ifi_link_state;
u_char ifi_spare_char1;
u_char ifi_spare_char2;
u_char ifi_datalen;
u_long ifi_mtu;
u_long ifi_metric;
u_long ifi_baudrate;
u_long ifi_ipackets;
u_long ifi_ierrors;
u_long ifi_opackets;
u_long ifi_oerrors;
u_long ifi_collisions;
u_long ifi_ibytes;
u_long ifi_obytes;
u_long ifi_imcasts;
u_long ifi_omcasts;
u_long ifi_iqdrops;
u_long ifi_noproto;
u_long ifi_hwassist;
// FIXME: these are now unions, so maybe need to change definitions?
#undef ifi_epoch
time_t ifi_epoch;
#undef ifi_lastchange
struct timeval ifi_lastchange;
};
// This structure is a duplicate of if_msghdr on FreeBSD 8-STABLE.
// See /usr/include/net/if.h.
struct if_msghdr8 {
u_short ifm_msglen;
u_char ifm_version;
u_char ifm_type;
int ifm_addrs;
int ifm_flags;
u_short ifm_index;
struct if_data8 ifm_data;
};
*/
import "C"
// Machine characteristics
const (
SizeofPtr = C.sizeofPtr
SizeofShort = C.sizeof_short
SizeofInt = C.sizeof_int
SizeofLong = C.sizeof_long
SizeofLongLong = C.sizeof_longlong
)
// Basic types
type (
_C_short C.short
_C_int C.int
_C_long C.long
_C_long_long C.longlong
)
// Time
type Timespec C.struct_timespec
type Timeval C.struct_timeval
// Processes
type Rusage C.struct_rusage
type Rlimit C.struct_rlimit
type _Gid_t C.gid_t
// Files
const (
_statfsVersion = C.STATFS_VERSION
_dirblksiz = C.DIRBLKSIZ
)
type Stat_t C.struct_stat
type stat_freebsd11_t C.struct_freebsd11_stat
type Statfs_t C.struct_statfs
type statfs_freebsd11_t C.struct_freebsd11_statfs
type Flock_t C.struct_flock
type Dirent C.struct_dirent
type dirent_freebsd11 C.struct_freebsd11_dirent
type Fsid C.struct_fsid
// File system limits
const (
PathMax = C.PATH_MAX
)
// Advice to Fadvise
const (
FADV_NORMAL = C.POSIX_FADV_NORMAL
FADV_RANDOM = C.POSIX_FADV_RANDOM
FADV_SEQUENTIAL = C.POSIX_FADV_SEQUENTIAL
FADV_WILLNEED = C.POSIX_FADV_WILLNEED
FADV_DONTNEED = C.POSIX_FADV_DONTNEED
FADV_NOREUSE = C.POSIX_FADV_NOREUSE
)
// Sockets
type RawSockaddrInet4 C.struct_sockaddr_in
type RawSockaddrInet6 C.struct_sockaddr_in6
type RawSockaddrUnix C.struct_sockaddr_un
type RawSockaddrDatalink C.struct_sockaddr_dl
type RawSockaddr C.struct_sockaddr
type RawSockaddrAny C.struct_sockaddr_any
type _Socklen C.socklen_t
type Linger C.struct_linger
type Iovec C.struct_iovec
type IPMreq C.struct_ip_mreq
type IPMreqn C.struct_ip_mreqn
type IPv6Mreq C.struct_ipv6_mreq
type Msghdr C.struct_msghdr
type Cmsghdr C.struct_cmsghdr
type Inet6Pktinfo C.struct_in6_pktinfo
type IPv6MTUInfo C.struct_ip6_mtuinfo
type ICMPv6Filter C.struct_icmp6_filter
const (
SizeofSockaddrInet4 = C.sizeof_struct_sockaddr_in
SizeofSockaddrInet6 = C.sizeof_struct_sockaddr_in6
SizeofSockaddrAny = C.sizeof_struct_sockaddr_any
SizeofSockaddrUnix = C.sizeof_struct_sockaddr_un
SizeofSockaddrDatalink = C.sizeof_struct_sockaddr_dl
SizeofLinger = C.sizeof_struct_linger
SizeofIPMreq = C.sizeof_struct_ip_mreq
SizeofIPMreqn = C.sizeof_struct_ip_mreqn
SizeofIPv6Mreq = C.sizeof_struct_ipv6_mreq
SizeofMsghdr = C.sizeof_struct_msghdr
SizeofCmsghdr = C.sizeof_struct_cmsghdr
SizeofInet6Pktinfo = C.sizeof_struct_in6_pktinfo
SizeofIPv6MTUInfo = C.sizeof_struct_ip6_mtuinfo
SizeofICMPv6Filter = C.sizeof_struct_icmp6_filter
)
// Ptrace requests
const (
PTRACE_TRACEME = C.PT_TRACE_ME
PTRACE_CONT = C.PT_CONTINUE
PTRACE_KILL = C.PT_KILL
)
// Events (kqueue, kevent)
type Kevent_t C.struct_kevent_freebsd11
// Select
type FdSet C.fd_set
// Routing and interface messages
const (
sizeofIfMsghdr = C.sizeof_struct_if_msghdr
SizeofIfMsghdr = C.sizeof_struct_if_msghdr8
sizeofIfData = C.sizeof_struct_if_data
SizeofIfData = C.sizeof_struct_if_data8
SizeofIfaMsghdr = C.sizeof_struct_ifa_msghdr
SizeofIfmaMsghdr = C.sizeof_struct_ifma_msghdr
SizeofIfAnnounceMsghdr = C.sizeof_struct_if_announcemsghdr
SizeofRtMsghdr = C.sizeof_struct_rt_msghdr
SizeofRtMetrics = C.sizeof_struct_rt_metrics
)
type ifMsghdr C.struct_if_msghdr
type IfMsghdr C.struct_if_msghdr8
type ifData C.struct_if_data
type IfData C.struct_if_data8
type IfaMsghdr C.struct_ifa_msghdr
type IfmaMsghdr C.struct_ifma_msghdr
type IfAnnounceMsghdr C.struct_if_announcemsghdr
type RtMsghdr C.struct_rt_msghdr
type RtMetrics C.struct_rt_metrics
// Berkeley packet filter
const (
SizeofBpfVersion = C.sizeof_struct_bpf_version
SizeofBpfStat = C.sizeof_struct_bpf_stat
SizeofBpfZbuf = C.sizeof_struct_bpf_zbuf
SizeofBpfProgram = C.sizeof_struct_bpf_program
SizeofBpfInsn = C.sizeof_struct_bpf_insn
SizeofBpfHdr = C.sizeof_struct_bpf_hdr
SizeofBpfZbufHeader = C.sizeof_struct_bpf_zbuf_header
)
type BpfVersion C.struct_bpf_version
type BpfStat C.struct_bpf_stat
type BpfZbuf C.struct_bpf_zbuf
type BpfProgram C.struct_bpf_program
type BpfInsn C.struct_bpf_insn
type BpfHdr C.struct_bpf_hdr
type BpfZbufHeader C.struct_bpf_zbuf_header
// Terminal handling
type Termios C.struct_termios
type Winsize C.struct_winsize
// fchmodat-like syscalls.
const (
AT_FDCWD = C.AT_FDCWD
AT_REMOVEDIR = C.AT_REMOVEDIR
AT_SYMLINK_FOLLOW = C.AT_SYMLINK_FOLLOW
AT_SYMLINK_NOFOLLOW = C.AT_SYMLINK_NOFOLLOW
)
// poll
type PollFd C.struct_pollfd
const (
POLLERR = C.POLLERR
POLLHUP = C.POLLHUP
POLLIN = C.POLLIN
POLLINIGNEOF = C.POLLINIGNEOF
POLLNVAL = C.POLLNVAL
POLLOUT = C.POLLOUT
POLLPRI = C.POLLPRI
POLLRDBAND = C.POLLRDBAND
POLLRDNORM = C.POLLRDNORM
POLLWRBAND = C.POLLWRBAND
POLLWRNORM = C.POLLWRNORM
)
// Capabilities
type CapRights C.struct_cap_rights
// Uname
type Utsname C.struct_utsname


@@ -1,289 +0,0 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
/*
Input to cgo -godefs. See README.md
*/
// +godefs map struct_in_addr [4]byte /* in_addr */
// +godefs map struct_in6_addr [16]byte /* in6_addr */
package unix
/*
#define KERNEL
#include <dirent.h>
#include <fcntl.h>
#include <poll.h>
#include <signal.h>
#include <termios.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/param.h>
#include <sys/types.h>
#include <sys/event.h>
#include <sys/mman.h>
#include <sys/mount.h>
#include <sys/ptrace.h>
#include <sys/resource.h>
#include <sys/select.h>
#include <sys/signal.h>
#include <sys/socket.h>
#include <sys/stat.h>
#include <sys/sysctl.h>
#include <sys/time.h>
#include <sys/uio.h>
#include <sys/un.h>
#include <sys/utsname.h>
#include <sys/wait.h>
#include <net/bpf.h>
#include <net/if.h>
#include <net/if_dl.h>
#include <net/route.h>
#include <netinet/in.h>
#include <netinet/icmp6.h>
#include <netinet/tcp.h>
enum {
sizeofPtr = sizeof(void*),
};
union sockaddr_all {
struct sockaddr s1; // this one gets used for fields
struct sockaddr_in s2; // these pad it out
struct sockaddr_in6 s3;
struct sockaddr_un s4;
struct sockaddr_dl s5;
};
struct sockaddr_any {
struct sockaddr addr;
char pad[sizeof(union sockaddr_all) - sizeof(struct sockaddr)];
};
*/
import "C"
// Machine characteristics
const (
SizeofPtr = C.sizeofPtr
SizeofShort = C.sizeof_short
SizeofInt = C.sizeof_int
SizeofLong = C.sizeof_long
SizeofLongLong = C.sizeof_longlong
)
// Basic types
type (
_C_short C.short
_C_int C.int
_C_long C.long
_C_long_long C.longlong
)
// Time
type Timespec C.struct_timespec
type Timeval C.struct_timeval
// Processes
type Rusage C.struct_rusage
type Rlimit C.struct_rlimit
type _Gid_t C.gid_t
// Files
type Stat_t C.struct_stat
type Statfs_t C.struct_statfs
type Flock_t C.struct_flock
type Dirent C.struct_dirent
type Fsid C.fsid_t
// File system limits
const (
PathMax = C.PATH_MAX
)
// Advice to Fadvise
const (
FADV_NORMAL = C.POSIX_FADV_NORMAL
FADV_RANDOM = C.POSIX_FADV_RANDOM
FADV_SEQUENTIAL = C.POSIX_FADV_SEQUENTIAL
FADV_WILLNEED = C.POSIX_FADV_WILLNEED
FADV_DONTNEED = C.POSIX_FADV_DONTNEED
FADV_NOREUSE = C.POSIX_FADV_NOREUSE
)
// Sockets
type RawSockaddrInet4 C.struct_sockaddr_in
type RawSockaddrInet6 C.struct_sockaddr_in6
type RawSockaddrUnix C.struct_sockaddr_un
type RawSockaddrDatalink C.struct_sockaddr_dl
type RawSockaddr C.struct_sockaddr
type RawSockaddrAny C.struct_sockaddr_any
type _Socklen C.socklen_t
type Linger C.struct_linger
type Iovec C.struct_iovec
type IPMreq C.struct_ip_mreq
type IPv6Mreq C.struct_ipv6_mreq
type Msghdr C.struct_msghdr
type Cmsghdr C.struct_cmsghdr
type Inet6Pktinfo C.struct_in6_pktinfo
type IPv6MTUInfo C.struct_ip6_mtuinfo
type ICMPv6Filter C.struct_icmp6_filter
const (
SizeofSockaddrInet4 = C.sizeof_struct_sockaddr_in
SizeofSockaddrInet6 = C.sizeof_struct_sockaddr_in6
SizeofSockaddrAny = C.sizeof_struct_sockaddr_any
SizeofSockaddrUnix = C.sizeof_struct_sockaddr_un
SizeofSockaddrDatalink = C.sizeof_struct_sockaddr_dl
SizeofLinger = C.sizeof_struct_linger
SizeofIPMreq = C.sizeof_struct_ip_mreq
SizeofIPv6Mreq = C.sizeof_struct_ipv6_mreq
SizeofMsghdr = C.sizeof_struct_msghdr
SizeofCmsghdr = C.sizeof_struct_cmsghdr
SizeofInet6Pktinfo = C.sizeof_struct_in6_pktinfo
SizeofIPv6MTUInfo = C.sizeof_struct_ip6_mtuinfo
SizeofICMPv6Filter = C.sizeof_struct_icmp6_filter
)
// Ptrace requests
const (
PTRACE_TRACEME = C.PT_TRACE_ME
PTRACE_CONT = C.PT_CONTINUE
PTRACE_KILL = C.PT_KILL
)
// Events (kqueue, kevent)
type Kevent_t C.struct_kevent
// Select
type FdSet C.fd_set
// Routing and interface messages
const (
SizeofIfMsghdr = C.sizeof_struct_if_msghdr
SizeofIfData = C.sizeof_struct_if_data
SizeofIfaMsghdr = C.sizeof_struct_ifa_msghdr
SizeofIfAnnounceMsghdr = C.sizeof_struct_if_announcemsghdr
SizeofRtMsghdr = C.sizeof_struct_rt_msghdr
SizeofRtMetrics = C.sizeof_struct_rt_metrics
)
type IfMsghdr C.struct_if_msghdr
type IfData C.struct_if_data
type IfaMsghdr C.struct_ifa_msghdr
type IfAnnounceMsghdr C.struct_if_announcemsghdr
type RtMsghdr C.struct_rt_msghdr
type RtMetrics C.struct_rt_metrics
type Mclpool C.struct_mclpool
// Berkeley packet filter
const (
SizeofBpfVersion = C.sizeof_struct_bpf_version
SizeofBpfStat = C.sizeof_struct_bpf_stat
SizeofBpfProgram = C.sizeof_struct_bpf_program
SizeofBpfInsn = C.sizeof_struct_bpf_insn
SizeofBpfHdr = C.sizeof_struct_bpf_hdr
)
type BpfVersion C.struct_bpf_version
type BpfStat C.struct_bpf_stat
type BpfProgram C.struct_bpf_program
type BpfInsn C.struct_bpf_insn
type BpfHdr C.struct_bpf_hdr
type BpfTimeval C.struct_bpf_timeval
// Terminal handling
type Termios C.struct_termios
type Winsize C.struct_winsize
type Ptmget C.struct_ptmget
// fchmodat-like syscalls.
const (
AT_FDCWD = C.AT_FDCWD
AT_SYMLINK_NOFOLLOW = C.AT_SYMLINK_NOFOLLOW
)
// poll
type PollFd C.struct_pollfd
const (
POLLERR = C.POLLERR
POLLHUP = C.POLLHUP
POLLIN = C.POLLIN
POLLNVAL = C.POLLNVAL
POLLOUT = C.POLLOUT
POLLPRI = C.POLLPRI
POLLRDBAND = C.POLLRDBAND
POLLRDNORM = C.POLLRDNORM
POLLWRBAND = C.POLLWRBAND
POLLWRNORM = C.POLLWRNORM
)
// Sysctl
type Sysctlnode C.struct_sysctlnode
// Uname
type Utsname C.struct_utsname
// Clockinfo
const SizeofClockinfo = C.sizeof_struct_clockinfo
type Clockinfo C.struct_clockinfo


@@ -1,282 +0,0 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
/*
Input to cgo -godefs. See README.md
*/
// +godefs map struct_in_addr [4]byte /* in_addr */
// +godefs map struct_in6_addr [16]byte /* in6_addr */
package unix
/*
#define KERNEL
#include <dirent.h>
#include <fcntl.h>
#include <poll.h>
#include <signal.h>
#include <termios.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/param.h>
#include <sys/types.h>
#include <sys/event.h>
#include <sys/mman.h>
#include <sys/mount.h>
#include <sys/ptrace.h>
#include <sys/resource.h>
#include <sys/select.h>
#include <sys/signal.h>
#include <sys/socket.h>
#include <sys/stat.h>
#include <sys/time.h>
#include <sys/uio.h>
#include <sys/un.h>
#include <sys/utsname.h>
#include <sys/wait.h>
#include <uvm/uvmexp.h>
#include <net/bpf.h>
#include <net/if.h>
#include <net/if_dl.h>
#include <net/route.h>
#include <netinet/in.h>
#include <netinet/icmp6.h>
#include <netinet/tcp.h>
enum {
sizeofPtr = sizeof(void*),
};
union sockaddr_all {
struct sockaddr s1; // this one gets used for fields
struct sockaddr_in s2; // these pad it out
struct sockaddr_in6 s3;
struct sockaddr_un s4;
struct sockaddr_dl s5;
};
struct sockaddr_any {
struct sockaddr addr;
char pad[sizeof(union sockaddr_all) - sizeof(struct sockaddr)];
};
*/
import "C"
// Machine characteristics
const (
SizeofPtr = C.sizeofPtr
SizeofShort = C.sizeof_short
SizeofInt = C.sizeof_int
SizeofLong = C.sizeof_long
SizeofLongLong = C.sizeof_longlong
)
// Basic types
type (
_C_short C.short
_C_int C.int
_C_long C.long
_C_long_long C.longlong
)
// Time
type Timespec C.struct_timespec
type Timeval C.struct_timeval
// Processes
type Rusage C.struct_rusage
type Rlimit C.struct_rlimit
type _Gid_t C.gid_t
// Files
type Stat_t C.struct_stat
type Statfs_t C.struct_statfs
type Flock_t C.struct_flock
type Dirent C.struct_dirent
type Fsid C.fsid_t
// File system limits
const (
PathMax = C.PATH_MAX
)
// Sockets
type RawSockaddrInet4 C.struct_sockaddr_in
type RawSockaddrInet6 C.struct_sockaddr_in6
type RawSockaddrUnix C.struct_sockaddr_un
type RawSockaddrDatalink C.struct_sockaddr_dl
type RawSockaddr C.struct_sockaddr
type RawSockaddrAny C.struct_sockaddr_any
type _Socklen C.socklen_t
type Linger C.struct_linger
type Iovec C.struct_iovec
type IPMreq C.struct_ip_mreq
type IPv6Mreq C.struct_ipv6_mreq
type Msghdr C.struct_msghdr
type Cmsghdr C.struct_cmsghdr
type Inet6Pktinfo C.struct_in6_pktinfo
type IPv6MTUInfo C.struct_ip6_mtuinfo
type ICMPv6Filter C.struct_icmp6_filter
const (
SizeofSockaddrInet4 = C.sizeof_struct_sockaddr_in
SizeofSockaddrInet6 = C.sizeof_struct_sockaddr_in6
SizeofSockaddrAny = C.sizeof_struct_sockaddr_any
SizeofSockaddrUnix = C.sizeof_struct_sockaddr_un
SizeofSockaddrDatalink = C.sizeof_struct_sockaddr_dl
SizeofLinger = C.sizeof_struct_linger
SizeofIPMreq = C.sizeof_struct_ip_mreq
SizeofIPv6Mreq = C.sizeof_struct_ipv6_mreq
SizeofMsghdr = C.sizeof_struct_msghdr
SizeofCmsghdr = C.sizeof_struct_cmsghdr
SizeofInet6Pktinfo = C.sizeof_struct_in6_pktinfo
SizeofIPv6MTUInfo = C.sizeof_struct_ip6_mtuinfo
SizeofICMPv6Filter = C.sizeof_struct_icmp6_filter
)
// Ptrace requests
const (
PTRACE_TRACEME = C.PT_TRACE_ME
PTRACE_CONT = C.PT_CONTINUE
PTRACE_KILL = C.PT_KILL
)
// Events (kqueue, kevent)
type Kevent_t C.struct_kevent
// Select
type FdSet C.fd_set
// Routing and interface messages
const (
SizeofIfMsghdr = C.sizeof_struct_if_msghdr
SizeofIfData = C.sizeof_struct_if_data
SizeofIfaMsghdr = C.sizeof_struct_ifa_msghdr
SizeofIfAnnounceMsghdr = C.sizeof_struct_if_announcemsghdr
SizeofRtMsghdr = C.sizeof_struct_rt_msghdr
SizeofRtMetrics = C.sizeof_struct_rt_metrics
)
type IfMsghdr C.struct_if_msghdr
type IfData C.struct_if_data
type IfaMsghdr C.struct_ifa_msghdr
type IfAnnounceMsghdr C.struct_if_announcemsghdr
type RtMsghdr C.struct_rt_msghdr
type RtMetrics C.struct_rt_metrics
type Mclpool C.struct_mclpool
// Berkeley packet filter
const (
SizeofBpfVersion = C.sizeof_struct_bpf_version
SizeofBpfStat = C.sizeof_struct_bpf_stat
SizeofBpfProgram = C.sizeof_struct_bpf_program
SizeofBpfInsn = C.sizeof_struct_bpf_insn
SizeofBpfHdr = C.sizeof_struct_bpf_hdr
)
type BpfVersion C.struct_bpf_version
type BpfStat C.struct_bpf_stat
type BpfProgram C.struct_bpf_program
type BpfInsn C.struct_bpf_insn
type BpfHdr C.struct_bpf_hdr
type BpfTimeval C.struct_bpf_timeval
// Terminal handling
type Termios C.struct_termios
type Winsize C.struct_winsize
// fchmodat-like syscalls.
const (
AT_FDCWD = C.AT_FDCWD
AT_SYMLINK_NOFOLLOW = C.AT_SYMLINK_NOFOLLOW
)
// poll
type PollFd C.struct_pollfd
const (
POLLERR = C.POLLERR
POLLHUP = C.POLLHUP
POLLIN = C.POLLIN
POLLNVAL = C.POLLNVAL
POLLOUT = C.POLLOUT
POLLPRI = C.POLLPRI
POLLRDBAND = C.POLLRDBAND
POLLRDNORM = C.POLLRDNORM
POLLWRBAND = C.POLLWRBAND
POLLWRNORM = C.POLLWRNORM
)
// Signal Sets
type Sigset_t C.sigset_t
// Uname
type Utsname C.struct_utsname
// Uvmexp
const SizeofUvmexp = C.sizeof_struct_uvmexp
type Uvmexp C.struct_uvmexp
// Clockinfo
const SizeofClockinfo = C.sizeof_struct_clockinfo
type Clockinfo C.struct_clockinfo


@@ -1,266 +0,0 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
/*
Input to cgo -godefs. See README.md
*/
// +godefs map struct_in_addr [4]byte /* in_addr */
// +godefs map struct_in6_addr [16]byte /* in6_addr */
package unix
/*
#define KERNEL
// These defines ensure that builds done on newer versions of Solaris are
// backwards-compatible with older versions of Solaris and
// OpenSolaris-based derivatives.
#define __USE_SUNOS_SOCKETS__ // msghdr
#define __USE_LEGACY_PROTOTYPES__ // iovec
#include <dirent.h>
#include <fcntl.h>
#include <netdb.h>
#include <limits.h>
#include <poll.h>
#include <signal.h>
#include <termios.h>
#include <termio.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/mount.h>
#include <sys/param.h>
#include <sys/resource.h>
#include <sys/select.h>
#include <sys/signal.h>
#include <sys/socket.h>
#include <sys/stat.h>
#include <sys/statvfs.h>
#include <sys/time.h>
#include <sys/times.h>
#include <sys/types.h>
#include <sys/utsname.h>
#include <sys/un.h>
#include <sys/wait.h>
#include <net/bpf.h>
#include <net/if.h>
#include <net/if_dl.h>
#include <net/route.h>
#include <netinet/in.h>
#include <netinet/icmp6.h>
#include <netinet/tcp.h>
#include <ustat.h>
#include <utime.h>
enum {
sizeofPtr = sizeof(void*),
};
union sockaddr_all {
struct sockaddr s1; // this one gets used for fields
struct sockaddr_in s2; // these pad it out
struct sockaddr_in6 s3;
struct sockaddr_un s4;
struct sockaddr_dl s5;
};
struct sockaddr_any {
struct sockaddr addr;
char pad[sizeof(union sockaddr_all) - sizeof(struct sockaddr)];
};
*/
import "C"
// Machine characteristics
const (
SizeofPtr = C.sizeofPtr
SizeofShort = C.sizeof_short
SizeofInt = C.sizeof_int
SizeofLong = C.sizeof_long
SizeofLongLong = C.sizeof_longlong
PathMax = C.PATH_MAX
MaxHostNameLen = C.MAXHOSTNAMELEN
)
// Basic types
type (
_C_short C.short
_C_int C.int
_C_long C.long
_C_long_long C.longlong
)
// Time
type Timespec C.struct_timespec
type Timeval C.struct_timeval
type Timeval32 C.struct_timeval32
type Tms C.struct_tms
type Utimbuf C.struct_utimbuf
// Processes
type Rusage C.struct_rusage
type Rlimit C.struct_rlimit
type _Gid_t C.gid_t
// Files
type Stat_t C.struct_stat
type Flock_t C.struct_flock
type Dirent C.struct_dirent
// Filesystems
type _Fsblkcnt_t C.fsblkcnt_t
type Statvfs_t C.struct_statvfs
// Sockets
type RawSockaddrInet4 C.struct_sockaddr_in
type RawSockaddrInet6 C.struct_sockaddr_in6
type RawSockaddrUnix C.struct_sockaddr_un
type RawSockaddrDatalink C.struct_sockaddr_dl
type RawSockaddr C.struct_sockaddr
type RawSockaddrAny C.struct_sockaddr_any
type _Socklen C.socklen_t
type Linger C.struct_linger
type Iovec C.struct_iovec
type IPMreq C.struct_ip_mreq
type IPv6Mreq C.struct_ipv6_mreq
type Msghdr C.struct_msghdr
type Cmsghdr C.struct_cmsghdr
type Inet6Pktinfo C.struct_in6_pktinfo
type IPv6MTUInfo C.struct_ip6_mtuinfo
type ICMPv6Filter C.struct_icmp6_filter
const (
SizeofSockaddrInet4 = C.sizeof_struct_sockaddr_in
SizeofSockaddrInet6 = C.sizeof_struct_sockaddr_in6
SizeofSockaddrAny = C.sizeof_struct_sockaddr_any
SizeofSockaddrUnix = C.sizeof_struct_sockaddr_un
SizeofSockaddrDatalink = C.sizeof_struct_sockaddr_dl
SizeofLinger = C.sizeof_struct_linger
SizeofIPMreq = C.sizeof_struct_ip_mreq
SizeofIPv6Mreq = C.sizeof_struct_ipv6_mreq
SizeofMsghdr = C.sizeof_struct_msghdr
SizeofCmsghdr = C.sizeof_struct_cmsghdr
SizeofInet6Pktinfo = C.sizeof_struct_in6_pktinfo
SizeofIPv6MTUInfo = C.sizeof_struct_ip6_mtuinfo
SizeofICMPv6Filter = C.sizeof_struct_icmp6_filter
)
// Select
type FdSet C.fd_set
// Misc
type Utsname C.struct_utsname
type Ustat_t C.struct_ustat
const (
AT_FDCWD = C.AT_FDCWD
AT_SYMLINK_NOFOLLOW = C.AT_SYMLINK_NOFOLLOW
AT_SYMLINK_FOLLOW = C.AT_SYMLINK_FOLLOW
AT_REMOVEDIR = C.AT_REMOVEDIR
AT_EACCESS = C.AT_EACCESS
)
// Routing and interface messages
const (
SizeofIfMsghdr = C.sizeof_struct_if_msghdr
SizeofIfData = C.sizeof_struct_if_data
SizeofIfaMsghdr = C.sizeof_struct_ifa_msghdr
SizeofRtMsghdr = C.sizeof_struct_rt_msghdr
SizeofRtMetrics = C.sizeof_struct_rt_metrics
)
type IfMsghdr C.struct_if_msghdr
type IfData C.struct_if_data
type IfaMsghdr C.struct_ifa_msghdr
type RtMsghdr C.struct_rt_msghdr
type RtMetrics C.struct_rt_metrics
// Berkeley packet filter
const (
SizeofBpfVersion = C.sizeof_struct_bpf_version
SizeofBpfStat = C.sizeof_struct_bpf_stat
SizeofBpfProgram = C.sizeof_struct_bpf_program
SizeofBpfInsn = C.sizeof_struct_bpf_insn
SizeofBpfHdr = C.sizeof_struct_bpf_hdr
)
type BpfVersion C.struct_bpf_version
type BpfStat C.struct_bpf_stat
type BpfProgram C.struct_bpf_program
type BpfInsn C.struct_bpf_insn
type BpfTimeval C.struct_bpf_timeval
type BpfHdr C.struct_bpf_hdr
// Terminal handling
type Termios C.struct_termios
type Termio C.struct_termio
type Winsize C.struct_winsize
// poll
type PollFd C.struct_pollfd
const (
POLLERR = C.POLLERR
POLLHUP = C.POLLHUP
POLLIN = C.POLLIN
POLLNVAL = C.POLLNVAL
POLLOUT = C.POLLOUT
POLLPRI = C.POLLPRI
POLLRDBAND = C.POLLRDBAND
POLLRDNORM = C.POLLRDNORM
POLLWRBAND = C.POLLWRBAND
POLLWRNORM = C.POLLWRNORM
)


@@ -1,556 +0,0 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
package main
import (
"bufio"
"fmt"
"log"
"net/http"
"sort"
"strings"
"unicode/utf8"
"golang.org/x/text/encoding"
"golang.org/x/text/internal/gen"
)
const ascii = "\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f" +
"\x10\x11\x12\x13\x14\x15\x16\x17\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f" +
` !"#$%&'()*+,-./0123456789:;<=>?` +
`@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_` +
"`abcdefghijklmnopqrstuvwxyz{|}~\u007f"
var encodings = []struct {
name string
mib string
comment string
varName string
replacement byte
mapping string
}{
{
"IBM Code Page 037",
"IBM037",
"",
"CodePage037",
0x3f,
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/glibc-IBM037-2.1.2.ucm",
},
{
"IBM Code Page 437",
"PC8CodePage437",
"",
"CodePage437",
encoding.ASCIISub,
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/glibc-IBM437-2.1.2.ucm",
},
{
"IBM Code Page 850",
"PC850Multilingual",
"",
"CodePage850",
encoding.ASCIISub,
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/glibc-IBM850-2.1.2.ucm",
},
{
"IBM Code Page 852",
"PCp852",
"",
"CodePage852",
encoding.ASCIISub,
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/glibc-IBM852-2.1.2.ucm",
},
{
"IBM Code Page 855",
"IBM855",
"",
"CodePage855",
encoding.ASCIISub,
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/glibc-IBM855-2.1.2.ucm",
},
{
"Windows Code Page 858", // PC latin1 with Euro
"IBM00858",
"",
"CodePage858",
encoding.ASCIISub,
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/windows-858-2000.ucm",
},
{
"IBM Code Page 860",
"IBM860",
"",
"CodePage860",
encoding.ASCIISub,
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/glibc-IBM860-2.1.2.ucm",
},
{
"IBM Code Page 862",
"PC862LatinHebrew",
"",
"CodePage862",
encoding.ASCIISub,
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/glibc-IBM862-2.1.2.ucm",
},
{
"IBM Code Page 863",
"IBM863",
"",
"CodePage863",
encoding.ASCIISub,
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/glibc-IBM863-2.1.2.ucm",
},
{
"IBM Code Page 865",
"IBM865",
"",
"CodePage865",
encoding.ASCIISub,
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/glibc-IBM865-2.1.2.ucm",
},
{
"IBM Code Page 866",
"IBM866",
"",
"CodePage866",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-ibm866.txt",
},
{
"IBM Code Page 1047",
"IBM1047",
"",
"CodePage1047",
0x3f,
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/glibc-IBM1047-2.1.2.ucm",
},
{
"IBM Code Page 1140",
"IBM01140",
"",
"CodePage1140",
0x3f,
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/ibm-1140_P100-1997.ucm",
},
{
"ISO 8859-1",
"ISOLatin1",
"",
"ISO8859_1",
encoding.ASCIISub,
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/iso-8859_1-1998.ucm",
},
{
"ISO 8859-2",
"ISOLatin2",
"",
"ISO8859_2",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-iso-8859-2.txt",
},
{
"ISO 8859-3",
"ISOLatin3",
"",
"ISO8859_3",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-iso-8859-3.txt",
},
{
"ISO 8859-4",
"ISOLatin4",
"",
"ISO8859_4",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-iso-8859-4.txt",
},
{
"ISO 8859-5",
"ISOLatinCyrillic",
"",
"ISO8859_5",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-iso-8859-5.txt",
},
{
"ISO 8859-6",
"ISOLatinArabic",
"",
"ISO8859_6,ISO8859_6E,ISO8859_6I",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-iso-8859-6.txt",
},
{
"ISO 8859-7",
"ISOLatinGreek",
"",
"ISO8859_7",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-iso-8859-7.txt",
},
{
"ISO 8859-8",
"ISOLatinHebrew",
"",
"ISO8859_8,ISO8859_8E,ISO8859_8I",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-iso-8859-8.txt",
},
{
"ISO 8859-9",
"ISOLatin5",
"",
"ISO8859_9",
encoding.ASCIISub,
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/iso-8859_9-1999.ucm",
},
{
"ISO 8859-10",
"ISOLatin6",
"",
"ISO8859_10",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-iso-8859-10.txt",
},
{
"ISO 8859-13",
"ISO885913",
"",
"ISO8859_13",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-iso-8859-13.txt",
},
{
"ISO 8859-14",
"ISO885914",
"",
"ISO8859_14",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-iso-8859-14.txt",
},
{
"ISO 8859-15",
"ISO885915",
"",
"ISO8859_15",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-iso-8859-15.txt",
},
{
"ISO 8859-16",
"ISO885916",
"",
"ISO8859_16",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-iso-8859-16.txt",
},
{
"KOI8-R",
"KOI8R",
"",
"KOI8R",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-koi8-r.txt",
},
{
"KOI8-U",
"KOI8U",
"",
"KOI8U",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-koi8-u.txt",
},
{
"Macintosh",
"Macintosh",
"",
"Macintosh",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-macintosh.txt",
},
{
"Macintosh Cyrillic",
"MacintoshCyrillic",
"",
"MacintoshCyrillic",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-x-mac-cyrillic.txt",
},
{
"Windows 874",
"Windows874",
"",
"Windows874",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-windows-874.txt",
},
{
"Windows 1250",
"Windows1250",
"",
"Windows1250",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-windows-1250.txt",
},
{
"Windows 1251",
"Windows1251",
"",
"Windows1251",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-windows-1251.txt",
},
{
"Windows 1252",
"Windows1252",
"",
"Windows1252",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-windows-1252.txt",
},
{
"Windows 1253",
"Windows1253",
"",
"Windows1253",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-windows-1253.txt",
},
{
"Windows 1254",
"Windows1254",
"",
"Windows1254",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-windows-1254.txt",
},
{
"Windows 1255",
"Windows1255",
"",
"Windows1255",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-windows-1255.txt",
},
{
"Windows 1256",
"Windows1256",
"",
"Windows1256",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-windows-1256.txt",
},
{
"Windows 1257",
"Windows1257",
"",
"Windows1257",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-windows-1257.txt",
},
{
"Windows 1258",
"Windows1258",
"",
"Windows1258",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-windows-1258.txt",
},
{
"X-User-Defined",
"XUserDefined",
"It is defined at http://encoding.spec.whatwg.org/#x-user-defined",
"XUserDefined",
encoding.ASCIISub,
ascii +
"\uf780\uf781\uf782\uf783\uf784\uf785\uf786\uf787" +
"\uf788\uf789\uf78a\uf78b\uf78c\uf78d\uf78e\uf78f" +
"\uf790\uf791\uf792\uf793\uf794\uf795\uf796\uf797" +
"\uf798\uf799\uf79a\uf79b\uf79c\uf79d\uf79e\uf79f" +
"\uf7a0\uf7a1\uf7a2\uf7a3\uf7a4\uf7a5\uf7a6\uf7a7" +
"\uf7a8\uf7a9\uf7aa\uf7ab\uf7ac\uf7ad\uf7ae\uf7af" +
"\uf7b0\uf7b1\uf7b2\uf7b3\uf7b4\uf7b5\uf7b6\uf7b7" +
"\uf7b8\uf7b9\uf7ba\uf7bb\uf7bc\uf7bd\uf7be\uf7bf" +
"\uf7c0\uf7c1\uf7c2\uf7c3\uf7c4\uf7c5\uf7c6\uf7c7" +
"\uf7c8\uf7c9\uf7ca\uf7cb\uf7cc\uf7cd\uf7ce\uf7cf" +
"\uf7d0\uf7d1\uf7d2\uf7d3\uf7d4\uf7d5\uf7d6\uf7d7" +
"\uf7d8\uf7d9\uf7da\uf7db\uf7dc\uf7dd\uf7de\uf7df" +
"\uf7e0\uf7e1\uf7e2\uf7e3\uf7e4\uf7e5\uf7e6\uf7e7" +
"\uf7e8\uf7e9\uf7ea\uf7eb\uf7ec\uf7ed\uf7ee\uf7ef" +
"\uf7f0\uf7f1\uf7f2\uf7f3\uf7f4\uf7f5\uf7f6\uf7f7" +
"\uf7f8\uf7f9\uf7fa\uf7fb\uf7fc\uf7fd\uf7fe\uf7ff",
},
}
func getWHATWG(url string) string {
res, err := http.Get(url)
if err != nil {
log.Fatalf("%q: Get: %v", url, err)
}
defer res.Body.Close()
mapping := make([]rune, 128)
for i := range mapping {
mapping[i] = '\ufffd'
}
scanner := bufio.NewScanner(res.Body)
for scanner.Scan() {
s := strings.TrimSpace(scanner.Text())
if s == "" || s[0] == '#' {
continue
}
x, y := 0, 0
if _, err := fmt.Sscanf(s, "%d\t0x%x", &x, &y); err != nil {
log.Fatalf("could not parse %q", s)
}
if x < 0 || 128 <= x {
log.Fatalf("code %d is out of range", x)
}
if 0x80 <= y && y < 0xa0 {
// We diverge from the WHATWG spec by mapping control characters
// in the range [0x80, 0xa0) to U+FFFD.
continue
}
mapping[x] = rune(y)
}
return ascii + string(mapping)
}
func getUCM(url string) string {
res, err := http.Get(url)
if err != nil {
log.Fatalf("%q: Get: %v", url, err)
}
defer res.Body.Close()
mapping := make([]rune, 256)
for i := range mapping {
mapping[i] = '\ufffd'
}
charsFound := 0
scanner := bufio.NewScanner(res.Body)
for scanner.Scan() {
s := strings.TrimSpace(scanner.Text())
if s == "" || s[0] == '#' {
continue
}
var c byte
var r rune
if _, err := fmt.Sscanf(s, `<U%x> \x%x |0`, &r, &c); err != nil {
continue
}
mapping[c] = r
charsFound++
}
if charsFound < 200 {
log.Fatalf("%q: only %d characters found (wrong page format?)", url, charsFound)
}
return string(mapping)
}
func main() {
mibs := map[string]bool{}
all := []string{}
w := gen.NewCodeWriter()
defer w.WriteGoFile("tables.go", "charmap")
printf := func(s string, a ...interface{}) { fmt.Fprintf(w, s, a...) }
printf("import (\n")
printf("\t\"golang.org/x/text/encoding\"\n")
printf("\t\"golang.org/x/text/encoding/internal/identifier\"\n")
printf(")\n\n")
for _, e := range encodings {
varNames := strings.Split(e.varName, ",")
all = append(all, varNames...)
varName := varNames[0]
switch {
case strings.HasPrefix(e.mapping, "http://encoding.spec.whatwg.org/"):
e.mapping = getWHATWG(e.mapping)
case strings.HasPrefix(e.mapping, "http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/"):
e.mapping = getUCM(e.mapping)
}
asciiSuperset, low := strings.HasPrefix(e.mapping, ascii), 0x00
if asciiSuperset {
low = 0x80
}
lvn := 1
if strings.HasPrefix(varName, "ISO") || strings.HasPrefix(varName, "KOI") {
lvn = 3
}
lowerVarName := strings.ToLower(varName[:lvn]) + varName[lvn:]
printf("// %s is the %s encoding.\n", varName, e.name)
if e.comment != "" {
printf("//\n// %s\n", e.comment)
}
printf("var %s *Charmap = &%s\n\nvar %s = Charmap{\nname: %q,\n",
varName, lowerVarName, lowerVarName, e.name)
if mibs[e.mib] {
log.Fatalf("MIB type %q declared multiple times.", e.mib)
}
printf("mib: identifier.%s,\n", e.mib)
printf("asciiSuperset: %t,\n", asciiSuperset)
printf("low: 0x%02x,\n", low)
printf("replacement: 0x%02x,\n", e.replacement)
printf("decode: [256]utf8Enc{\n")
i, backMapping := 0, map[rune]byte{}
for _, c := range e.mapping {
if _, ok := backMapping[c]; !ok && c != utf8.RuneError {
backMapping[c] = byte(i)
}
var buf [8]byte
n := utf8.EncodeRune(buf[:], c)
if n > 3 {
panic(fmt.Sprintf("rune %q (%U) is too long", c, c))
}
printf("{%d,[3]byte{0x%02x,0x%02x,0x%02x}},", n, buf[0], buf[1], buf[2])
if i%2 == 1 {
printf("\n")
}
i++
}
printf("},\n")
printf("encode: [256]uint32{\n")
encode := make([]uint32, 0, 256)
for c, i := range backMapping {
encode = append(encode, uint32(i)<<24|uint32(c))
}
sort.Sort(byRune(encode))
for len(encode) < cap(encode) {
encode = append(encode, encode[len(encode)-1])
}
for i, enc := range encode {
printf("0x%08x,", enc)
if i%8 == 7 {
printf("\n")
}
}
printf("},\n}\n")
// Add an estimate of the size of a single Charmap{} struct value, which
// includes two 256 elem arrays of 4 bytes and some extra fields, which
// align to 3 uint64s on 64-bit architectures.
w.Size += 2*4*256 + 3*8
}
// TODO: add proper line breaking.
printf("var listAll = []encoding.Encoding{\n%s,\n}\n\n", strings.Join(all, ",\n"))
}
type byRune []uint32
func (b byRune) Len() int { return len(b) }
func (b byRune) Less(i, j int) bool { return b[i]&0xffffff < b[j]&0xffffff }
func (b byRune) Swap(i, j int) { b[i], b[j] = b[j], b[i] }
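For context on the data this generator walks, each WHATWG single-byte index file maps a pointer 0-127 (i.e. bytes 0x80-0xFF) to a code point. The sketch below traces one assumed sample line from index-windows-1251.txt (not part of this commit) through to the table entries the loops above would print:

// assumed index line: "0	0x0402" — byte 0x80 decodes to U+0402
// getWHATWG stores mapping[0] = '\u0402'
// decode entry printed by the loop above: {2,[3]byte{0xd0,0x82,0x00}},  // UTF-8 of U+0402
// encode entry printed by the loop above: 0x80000402,                   // byte<<24 | rune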


@@ -1,173 +0,0 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
package main
import (
"bytes"
"encoding/json"
"fmt"
"log"
"strings"
"golang.org/x/text/internal/gen"
)
type group struct {
Encodings []struct {
Labels []string
Name string
}
}
func main() {
gen.Init()
r := gen.Open("https://encoding.spec.whatwg.org", "whatwg", "encodings.json")
var groups []group
if err := json.NewDecoder(r).Decode(&groups); err != nil {
log.Fatalf("Error reading encodings.json: %v", err)
}
w := &bytes.Buffer{}
fmt.Fprintln(w, "type htmlEncoding byte")
fmt.Fprintln(w, "const (")
for i, g := range groups {
for _, e := range g.Encodings {
key := strings.ToLower(e.Name)
name := consts[key]
if name == "" {
log.Fatalf("No const defined for %s.", key)
}
if i == 0 {
fmt.Fprintf(w, "%s htmlEncoding = iota\n", name)
} else {
fmt.Fprintf(w, "%s\n", name)
}
}
}
fmt.Fprintln(w, "numEncodings")
fmt.Fprint(w, ")\n\n")
fmt.Fprintln(w, "var canonical = [numEncodings]string{")
for _, g := range groups {
for _, e := range g.Encodings {
fmt.Fprintf(w, "%q,\n", strings.ToLower(e.Name))
}
}
fmt.Fprint(w, "}\n\n")
fmt.Fprintln(w, "var nameMap = map[string]htmlEncoding{")
for _, g := range groups {
for _, e := range g.Encodings {
for _, l := range e.Labels {
key := strings.ToLower(e.Name)
name := consts[key]
fmt.Fprintf(w, "%q: %s,\n", l, name)
}
}
}
fmt.Fprint(w, "}\n\n")
var tags []string
fmt.Fprintln(w, "var localeMap = []htmlEncoding{")
for _, loc := range locales {
tags = append(tags, loc.tag)
fmt.Fprintf(w, "%s, // %s \n", consts[loc.name], loc.tag)
}
fmt.Fprint(w, "}\n\n")
fmt.Fprintf(w, "const locales = %q\n", strings.Join(tags, " "))
gen.WriteGoFile("tables.go", "htmlindex", w.Bytes())
}
// consts maps canonical encoding name to internal constant.
var consts = map[string]string{
"utf-8": "utf8",
"ibm866": "ibm866",
"iso-8859-2": "iso8859_2",
"iso-8859-3": "iso8859_3",
"iso-8859-4": "iso8859_4",
"iso-8859-5": "iso8859_5",
"iso-8859-6": "iso8859_6",
"iso-8859-7": "iso8859_7",
"iso-8859-8": "iso8859_8",
"iso-8859-8-i": "iso8859_8I",
"iso-8859-10": "iso8859_10",
"iso-8859-13": "iso8859_13",
"iso-8859-14": "iso8859_14",
"iso-8859-15": "iso8859_15",
"iso-8859-16": "iso8859_16",
"koi8-r": "koi8r",
"koi8-u": "koi8u",
"macintosh": "macintosh",
"windows-874": "windows874",
"windows-1250": "windows1250",
"windows-1251": "windows1251",
"windows-1252": "windows1252",
"windows-1253": "windows1253",
"windows-1254": "windows1254",
"windows-1255": "windows1255",
"windows-1256": "windows1256",
"windows-1257": "windows1257",
"windows-1258": "windows1258",
"x-mac-cyrillic": "macintoshCyrillic",
"gbk": "gbk",
"gb18030": "gb18030",
// "hz-gb-2312": "hzgb2312", // Was removed from WhatWG
"big5": "big5",
"euc-jp": "eucjp",
"iso-2022-jp": "iso2022jp",
"shift_jis": "shiftJIS",
"euc-kr": "euckr",
"replacement": "replacement",
"utf-16be": "utf16be",
"utf-16le": "utf16le",
"x-user-defined": "xUserDefined",
}
// locales is taken from
// https://html.spec.whatwg.org/multipage/syntax.html#encoding-sniffing-algorithm.
var locales = []struct{ tag, name string }{
// The default value. Explicitly state latin to benefit from the exact
// script option, while still making 1252 the default encoding for languages
// written in Latin script.
{"und_Latn", "windows-1252"},
{"ar", "windows-1256"},
{"ba", "windows-1251"},
{"be", "windows-1251"},
{"bg", "windows-1251"},
{"cs", "windows-1250"},
{"el", "iso-8859-7"},
{"et", "windows-1257"},
{"fa", "windows-1256"},
{"he", "windows-1255"},
{"hr", "windows-1250"},
{"hu", "iso-8859-2"},
{"ja", "shift_jis"},
{"kk", "windows-1251"},
{"ko", "euc-kr"},
{"ku", "windows-1254"},
{"ky", "windows-1251"},
{"lt", "windows-1257"},
{"lv", "windows-1257"},
{"mk", "windows-1251"},
{"pl", "iso-8859-2"},
{"ru", "windows-1251"},
{"sah", "windows-1251"},
{"sk", "windows-1250"},
{"sl", "iso-8859-2"},
{"sr", "windows-1251"},
{"tg", "windows-1251"},
{"th", "windows-874"},
{"tr", "windows-1254"},
{"tt", "windows-1251"},
{"uk", "windows-1251"},
{"vi", "windows-1258"},
{"zh-hans", "gb18030"},
{"zh-hant", "big5"},
}

View File

@ -1,137 +0,0 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
package main
import (
"bytes"
"encoding/xml"
"fmt"
"io"
"log"
"strings"
"golang.org/x/text/internal/gen"
)
type registry struct {
XMLName xml.Name `xml:"registry"`
Updated string `xml:"updated"`
Registry []struct {
ID string `xml:"id,attr"`
Record []struct {
Name string `xml:"name"`
Xref []struct {
Type string `xml:"type,attr"`
Data string `xml:"data,attr"`
} `xml:"xref"`
Desc struct {
Data string `xml:",innerxml"`
// Any []struct {
// Data string `xml:",chardata"`
// } `xml:",any"`
// Data string `xml:",chardata"`
} `xml:"description,"`
MIB string `xml:"value"`
Alias []string `xml:"alias"`
MIME string `xml:"preferred_alias"`
} `xml:"record"`
} `xml:"registry"`
}
func main() {
r := gen.OpenIANAFile("assignments/character-sets/character-sets.xml")
reg := &registry{}
if err := xml.NewDecoder(r).Decode(&reg); err != nil && err != io.EOF {
log.Fatalf("Error decoding charset registry: %v", err)
}
if len(reg.Registry) == 0 {
log.Fatalf("Empty charset registry")
}
if reg.Registry[0].ID != "character-sets-1" {
log.Fatalf("Unexpected ID %s", reg.Registry[0].ID)
}
w := &bytes.Buffer{}
fmt.Fprintf(w, "const (\n")
for _, rec := range reg.Registry[0].Record {
constName := ""
for _, a := range rec.Alias {
if strings.HasPrefix(a, "cs") && strings.IndexByte(a, '-') == -1 {
// Some of the constant definitions have comments in them. Strip those.
constName = strings.Title(strings.SplitN(a[2:], "\n", 2)[0])
}
}
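// For example (illustrative, assuming the registry lists that alias), the
// alias "csISOLatin1" yields the constant name ISOLatin1.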
if constName == "" {
switch rec.MIB {
case "2085":
constName = "HZGB2312" // Not listed as alias for some reason.
default:
log.Fatalf("No cs alias defined for %s.", rec.MIB)
}
}
if rec.MIME != "" {
rec.MIME = fmt.Sprintf(" (MIME: %s)", rec.MIME)
}
fmt.Fprintf(w, "// %s is the MIB identifier with IANA name %s%s.\n//\n", constName, rec.Name, rec.MIME)
if len(rec.Desc.Data) > 0 {
fmt.Fprint(w, "// ")
d := xml.NewDecoder(strings.NewReader(rec.Desc.Data))
inElem := true
attr := ""
for {
t, err := d.Token()
if err != nil {
if err != io.EOF {
log.Fatal(err)
}
break
}
switch x := t.(type) {
case xml.CharData:
attr = "" // Don't need attribute info.
a := bytes.Split([]byte(x), []byte("\n"))
for i, b := range a {
if b = bytes.TrimSpace(b); len(b) != 0 {
if !inElem && i > 0 {
fmt.Fprint(w, "\n// ")
}
inElem = false
fmt.Fprintf(w, "%s ", string(b))
}
}
case xml.StartElement:
if x.Name.Local == "xref" {
inElem = true
use := false
for _, a := range x.Attr {
if a.Name.Local == "type" {
use = use || a.Value != "person"
}
if a.Name.Local == "data" && use {
attr = a.Value + " "
}
}
}
case xml.EndElement:
inElem = false
fmt.Fprint(w, attr)
}
}
fmt.Fprint(w, "\n")
}
for _, x := range rec.Xref {
switch x.Type {
case "rfc":
fmt.Fprintf(w, "// Reference: %s\n", strings.ToUpper(x.Data))
case "uri":
fmt.Fprintf(w, "// Reference: %s\n", x.Data)
}
}
fmt.Fprintf(w, "%s MIB = %s\n", constName, rec.MIB)
fmt.Fprintln(w)
}
fmt.Fprintln(w, ")")
gen.WriteGoFile("mib.go", "identifier", w.Bytes())
}

View File

@ -1,161 +0,0 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
package main
// This program generates tables.go:
// go run maketables.go | gofmt > tables.go
// TODO: Emoji extensions?
// https://www.unicode.org/faq/emoji_dingbats.html
// https://www.unicode.org/Public/UNIDATA/EmojiSources.txt
import (
"bufio"
"fmt"
"log"
"net/http"
"sort"
"strings"
)
type entry struct {
jisCode, table int
}
func main() {
fmt.Printf("// generated by go run maketables.go; DO NOT EDIT\n\n")
fmt.Printf("// Package japanese provides Japanese encodings such as EUC-JP and Shift JIS.\n")
fmt.Printf(`package japanese // import "golang.org/x/text/encoding/japanese"` + "\n\n")
reverse := [65536]entry{}
for i := range reverse {
reverse[i].table = -1
}
tables := []struct {
url string
name string
}{
{"http://encoding.spec.whatwg.org/index-jis0208.txt", "0208"},
{"http://encoding.spec.whatwg.org/index-jis0212.txt", "0212"},
}
for i, table := range tables {
res, err := http.Get(table.url)
if err != nil {
log.Fatalf("%q: Get: %v", table.url, err)
}
defer res.Body.Close()
mapping := [65536]uint16{}
scanner := bufio.NewScanner(res.Body)
for scanner.Scan() {
s := strings.TrimSpace(scanner.Text())
if s == "" || s[0] == '#' {
continue
}
x, y := 0, uint16(0)
if _, err := fmt.Sscanf(s, "%d 0x%x", &x, &y); err != nil {
log.Fatalf("%q: could not parse %q", table.url, s)
}
if x < 0 || 120*94 <= x {
log.Fatalf("%q: JIS code %d is out of range", table.url, x)
}
mapping[x] = y
if reverse[y].table == -1 {
reverse[y] = entry{jisCode: x, table: i}
}
}
if err := scanner.Err(); err != nil {
log.Fatalf("%q: scanner error: %v", table.url, err)
}
fmt.Printf("// jis%sDecode is the decoding table from JIS %s code to Unicode.\n// It is defined at %s\n",
table.name, table.name, table.url)
fmt.Printf("var jis%sDecode = [...]uint16{\n", table.name)
for i, m := range mapping {
if m != 0 {
fmt.Printf("\t%d: 0x%04X,\n", i, m)
}
}
fmt.Printf("}\n\n")
}
// Any gap of at least separation consecutive zero entries in the reverse map
// splits the data into separate encode tables.
const separation = 1024
intervals := []interval(nil)
low, high := -1, -1
for i, v := range reverse {
if v.table == -1 {
continue
}
if low < 0 {
low = i
} else if i-high >= separation {
if high >= 0 {
intervals = append(intervals, interval{low, high})
}
low = i
}
high = i + 1
}
if high >= 0 {
intervals = append(intervals, interval{low, high})
}
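// Illustrative example of the splitting above: if the only non-zero reverse
// entries clustered near 0x3000 and near 0xFF00, the gap of more than
// separation zeros between the clusters would yield two encode tables.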
sort.Sort(byDecreasingLength(intervals))
fmt.Printf("const (\n")
fmt.Printf("\tjis0208 = 1\n")
fmt.Printf("\tjis0212 = 2\n")
fmt.Printf("\tcodeMask = 0x7f\n")
fmt.Printf("\tcodeShift = 7\n")
fmt.Printf("\ttableShift = 14\n")
fmt.Printf(")\n\n")
fmt.Printf("const numEncodeTables = %d\n\n", len(intervals))
fmt.Printf("// encodeX are the encoding tables from Unicode to JIS code,\n")
fmt.Printf("// sorted by decreasing length.\n")
for i, v := range intervals {
fmt.Printf("// encode%d: %5d entries for runes in [%5d, %5d).\n", i, v.len(), v.low, v.high)
}
fmt.Printf("//\n")
fmt.Printf("// The high two bits of the value record whether the JIS code comes from the\n")
fmt.Printf("// JIS0208 table (high bits == 1) or the JIS0212 table (high bits == 2).\n")
fmt.Printf("// The low 14 bits are two 7-bit unsigned integers j1 and j2 that form the\n")
fmt.Printf("// JIS code (94*j1 + j2) within that table.\n")
fmt.Printf("\n")
for i, v := range intervals {
fmt.Printf("const encode%dLow, encode%dHigh = %d, %d\n\n", i, i, v.low, v.high)
fmt.Printf("var encode%d = [...]uint16{\n", i)
for j := v.low; j < v.high; j++ {
x := reverse[j]
if x.table == -1 {
continue
}
fmt.Printf("\t%d - %d: jis%s<<14 | 0x%02X<<7 | 0x%02X,\n",
j, v.low, tables[x.table].name, x.jisCode/94, x.jisCode%94)
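// Illustrative arithmetic for the packing above: a hypothetical JIS0208
// code 1410 (= 94*15 + 0) would be emitted as jis0208<<14 | 0x0F<<7 | 0x00.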
}
fmt.Printf("}\n\n")
}
}
// interval is a half-open interval [low, high).
type interval struct {
low, high int
}
func (i interval) len() int { return i.high - i.low }
// byDecreasingLength sorts intervals by decreasing length.
type byDecreasingLength []interval
func (b byDecreasingLength) Len() int { return len(b) }
func (b byDecreasingLength) Less(i, j int) bool { return b[i].len() > b[j].len() }
func (b byDecreasingLength) Swap(i, j int) { b[i], b[j] = b[j], b[i] }

View File

@ -1,143 +0,0 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
package main
// This program generates tables.go:
// go run maketables.go | gofmt > tables.go
import (
"bufio"
"fmt"
"log"
"net/http"
"sort"
"strings"
)
func main() {
fmt.Printf("// generated by go run maketables.go; DO NOT EDIT\n\n")
fmt.Printf("// Package korean provides Korean encodings such as EUC-KR.\n")
fmt.Printf(`package korean // import "golang.org/x/text/encoding/korean"` + "\n\n")
res, err := http.Get("http://encoding.spec.whatwg.org/index-euc-kr.txt")
if err != nil {
log.Fatalf("Get: %v", err)
}
defer res.Body.Close()
mapping := [65536]uint16{}
reverse := [65536]uint16{}
scanner := bufio.NewScanner(res.Body)
for scanner.Scan() {
s := strings.TrimSpace(scanner.Text())
if s == "" || s[0] == '#' {
continue
}
x, y := uint16(0), uint16(0)
if _, err := fmt.Sscanf(s, "%d 0x%x", &x, &y); err != nil {
log.Fatalf("could not parse %q", s)
}
if x < 0 || 178*(0xc7-0x81)+(0xfe-0xc7)*94+(0xff-0xa1) <= x {
log.Fatalf("EUC-KR code %d is out of range", x)
}
mapping[x] = y
if reverse[y] == 0 {
c0, c1 := uint16(0), uint16(0)
if x < 178*(0xc7-0x81) {
c0 = uint16(x/178) + 0x81
c1 = uint16(x % 178)
switch {
case c1 < 1*26:
c1 += 0x41
case c1 < 2*26:
c1 += 0x47
default:
c1 += 0x4d
}
} else {
x -= 178 * (0xc7 - 0x81)
c0 = uint16(x/94) + 0xc7
c1 = uint16(x%94) + 0xa1
}
reverse[y] = c0<<8 | c1
}
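// Purely illustrative arithmetic for the branch above: a hypothetical index
// x = 1000 (< 178*0x46) gives c0 = 1000/178 + 0x81 = 0x86 and c1 = 1000%178
// = 110, which falls into the default case, so c1 += 0x4d = 0xBB and the
// packed value is 0x86BB.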
}
if err := scanner.Err(); err != nil {
log.Fatalf("scanner error: %v", err)
}
fmt.Printf("// decode is the decoding table from EUC-KR code to Unicode.\n")
fmt.Printf("// It is defined at http://encoding.spec.whatwg.org/index-euc-kr.txt\n")
fmt.Printf("var decode = [...]uint16{\n")
for i, v := range mapping {
if v != 0 {
fmt.Printf("\t%d: 0x%04X,\n", i, v)
}
}
fmt.Printf("}\n\n")
// Any gap of at least separation consecutive zero entries in the reverse map
// splits the data into separate encode tables.
const separation = 1024
intervals := []interval(nil)
low, high := -1, -1
for i, v := range reverse {
if v == 0 {
continue
}
if low < 0 {
low = i
} else if i-high >= separation {
if high >= 0 {
intervals = append(intervals, interval{low, high})
}
low = i
}
high = i + 1
}
if high >= 0 {
intervals = append(intervals, interval{low, high})
}
sort.Sort(byDecreasingLength(intervals))
fmt.Printf("const numEncodeTables = %d\n\n", len(intervals))
fmt.Printf("// encodeX are the encoding tables from Unicode to EUC-KR code,\n")
fmt.Printf("// sorted by decreasing length.\n")
for i, v := range intervals {
fmt.Printf("// encode%d: %5d entries for runes in [%5d, %5d).\n", i, v.len(), v.low, v.high)
}
fmt.Printf("\n")
for i, v := range intervals {
fmt.Printf("const encode%dLow, encode%dHigh = %d, %d\n\n", i, i, v.low, v.high)
fmt.Printf("var encode%d = [...]uint16{\n", i)
for j := v.low; j < v.high; j++ {
x := reverse[j]
if x == 0 {
continue
}
fmt.Printf("\t%d-%d: 0x%04X,\n", j, v.low, x)
}
fmt.Printf("}\n\n")
}
}
// interval is a half-open interval [low, high).
type interval struct {
low, high int
}
func (i interval) len() int { return i.high - i.low }
// byDecreasingLength sorts intervals by decreasing length.
type byDecreasingLength []interval
func (b byDecreasingLength) Len() int { return len(b) }
func (b byDecreasingLength) Less(i, j int) bool { return b[i].len() > b[j].len() }
func (b byDecreasingLength) Swap(i, j int) { b[i], b[j] = b[j], b[i] }

View File

@ -1,161 +0,0 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
package main
// This program generates tables.go:
// go run maketables.go | gofmt > tables.go
import (
"bufio"
"fmt"
"log"
"net/http"
"sort"
"strings"
)
func main() {
fmt.Printf("// generated by go run maketables.go; DO NOT EDIT\n\n")
fmt.Printf("// Package simplifiedchinese provides Simplified Chinese encodings such as GBK.\n")
fmt.Printf(`package simplifiedchinese // import "golang.org/x/text/encoding/simplifiedchinese"` + "\n\n")
printGB18030()
printGBK()
}
func printGB18030() {
res, err := http.Get("http://encoding.spec.whatwg.org/index-gb18030.txt")
if err != nil {
log.Fatalf("Get: %v", err)
}
defer res.Body.Close()
fmt.Printf("// gb18030 is the table from http://encoding.spec.whatwg.org/index-gb18030.txt\n")
fmt.Printf("var gb18030 = [...][2]uint16{\n")
scanner := bufio.NewScanner(res.Body)
for scanner.Scan() {
s := strings.TrimSpace(scanner.Text())
if s == "" || s[0] == '#' {
continue
}
x, y := uint32(0), uint32(0)
if _, err := fmt.Sscanf(s, "%d 0x%x", &x, &y); err != nil {
log.Fatalf("could not parse %q", s)
}
if x < 0x10000 && y < 0x10000 {
fmt.Printf("\t{0x%04x, 0x%04x},\n", x, y)
}
}
fmt.Printf("}\n\n")
}
func printGBK() {
res, err := http.Get("http://encoding.spec.whatwg.org/index-gbk.txt")
if err != nil {
log.Fatalf("Get: %v", err)
}
defer res.Body.Close()
mapping := [65536]uint16{}
reverse := [65536]uint16{}
scanner := bufio.NewScanner(res.Body)
for scanner.Scan() {
s := strings.TrimSpace(scanner.Text())
if s == "" || s[0] == '#' {
continue
}
x, y := uint16(0), uint16(0)
if _, err := fmt.Sscanf(s, "%d 0x%x", &x, &y); err != nil {
log.Fatalf("could not parse %q", s)
}
if x < 0 || 126*190 <= x {
log.Fatalf("GBK code %d is out of range", x)
}
mapping[x] = y
if reverse[y] == 0 {
c0, c1 := x/190, x%190
if c1 >= 0x3f {
c1++
}
reverse[y] = (0x81+c0)<<8 | (0x40 + c1)
}
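// Purely illustrative arithmetic for the mapping above: a hypothetical index
// x = 1000 gives c0 = 5, c1 = 50; since 50 < 0x3f no skip is applied, so the
// packed value is (0x81+5)<<8 | (0x40+50) = 0x8672.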
}
if err := scanner.Err(); err != nil {
log.Fatalf("scanner error: %v", err)
}
fmt.Printf("// decode is the decoding table from GBK code to Unicode.\n")
fmt.Printf("// It is defined at http://encoding.spec.whatwg.org/index-gbk.txt\n")
fmt.Printf("var decode = [...]uint16{\n")
for i, v := range mapping {
if v != 0 {
fmt.Printf("\t%d: 0x%04X,\n", i, v)
}
}
fmt.Printf("}\n\n")
// Any gap of at least separation consecutive zero entries in the reverse map
// splits the data into separate encode tables.
const separation = 1024
intervals := []interval(nil)
low, high := -1, -1
for i, v := range reverse {
if v == 0 {
continue
}
if low < 0 {
low = i
} else if i-high >= separation {
if high >= 0 {
intervals = append(intervals, interval{low, high})
}
low = i
}
high = i + 1
}
if high >= 0 {
intervals = append(intervals, interval{low, high})
}
sort.Sort(byDecreasingLength(intervals))
fmt.Printf("const numEncodeTables = %d\n\n", len(intervals))
fmt.Printf("// encodeX are the encoding tables from Unicode to GBK code,\n")
fmt.Printf("// sorted by decreasing length.\n")
for i, v := range intervals {
fmt.Printf("// encode%d: %5d entries for runes in [%5d, %5d).\n", i, v.len(), v.low, v.high)
}
fmt.Printf("\n")
for i, v := range intervals {
fmt.Printf("const encode%dLow, encode%dHigh = %d, %d\n\n", i, i, v.low, v.high)
fmt.Printf("var encode%d = [...]uint16{\n", i)
for j := v.low; j < v.high; j++ {
x := reverse[j]
if x == 0 {
continue
}
fmt.Printf("\t%d-%d: 0x%04X,\n", j, v.low, x)
}
fmt.Printf("}\n\n")
}
}
// interval is a half-open interval [low, high).
type interval struct {
low, high int
}
func (i interval) len() int { return i.high - i.low }
// byDecreasingLength sorts intervals by decreasing length.
type byDecreasingLength []interval
func (b byDecreasingLength) Len() int { return len(b) }
func (b byDecreasingLength) Less(i, j int) bool { return b[i].len() > b[j].len() }
func (b byDecreasingLength) Swap(i, j int) { b[i], b[j] = b[j], b[i] }

View File

@ -1,140 +0,0 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
package main
// This program generates tables.go:
// go run maketables.go | gofmt > tables.go
import (
"bufio"
"fmt"
"log"
"net/http"
"sort"
"strings"
)
func main() {
fmt.Printf("// generated by go run maketables.go; DO NOT EDIT\n\n")
fmt.Printf("// Package traditionalchinese provides Traditional Chinese encodings such as Big5.\n")
fmt.Printf(`package traditionalchinese // import "golang.org/x/text/encoding/traditionalchinese"` + "\n\n")
res, err := http.Get("http://encoding.spec.whatwg.org/index-big5.txt")
if err != nil {
log.Fatalf("Get: %v", err)
}
defer res.Body.Close()
mapping := [65536]uint32{}
reverse := [65536 * 4]uint16{}
scanner := bufio.NewScanner(res.Body)
for scanner.Scan() {
s := strings.TrimSpace(scanner.Text())
if s == "" || s[0] == '#' {
continue
}
x, y := uint16(0), uint32(0)
if _, err := fmt.Sscanf(s, "%d 0x%x", &x, &y); err != nil {
log.Fatalf("could not parse %q", s)
}
if x < 0 || 126*157 <= x {
log.Fatalf("Big5 code %d is out of range", x)
}
mapping[x] = y
// The WHATWG spec http://encoding.spec.whatwg.org/#indexes says that
// "The index pointer for code point in index is the first pointer
// corresponding to code point in index", which would normally mean
// that the code below should be guarded by "if reverse[y] == 0", but
// last instead of first seems to match the behavior of
// "iconv -f UTF-8 -t BIG5". For example, U+8005 者 occurs twice in
// http://encoding.spec.whatwg.org/index-big5.txt, as index 2148
// (encoded as "\x8e\xcd") and index 6543 (encoded as "\xaa\xcc")
// and "echo 者 | iconv -f UTF-8 -t BIG5 | xxd" gives "\xaa\xcc".
c0, c1 := x/157, x%157
if c1 < 0x3f {
c1 += 0x40
} else {
c1 += 0x62
}
reverse[y] = (0x81+c0)<<8 | c1
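// Worked example, using the U+8005 case above: x = 6543 gives
// c0 = 6543/157 = 41 and c1 = 6543%157 = 106; since 106 >= 0x3f,
// c1 += 0x62 = 0xCC, so reverse[0x8005] = (0x81+41)<<8 | 0xCC = 0xAACC,
// matching the "\xaa\xcc" produced by iconv.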
}
if err := scanner.Err(); err != nil {
log.Fatalf("scanner error: %v", err)
}
fmt.Printf("// decode is the decoding table from Big5 code to Unicode.\n")
fmt.Printf("// It is defined at http://encoding.spec.whatwg.org/index-big5.txt\n")
fmt.Printf("var decode = [...]uint32{\n")
for i, v := range mapping {
if v != 0 {
fmt.Printf("\t%d: 0x%08X,\n", i, v)
}
}
fmt.Printf("}\n\n")
// Any gap of at least separation consecutive zero entries in the reverse map
// splits the data into separate encode tables.
const separation = 1024
intervals := []interval(nil)
low, high := -1, -1
for i, v := range reverse {
if v == 0 {
continue
}
if low < 0 {
low = i
} else if i-high >= separation {
if high >= 0 {
intervals = append(intervals, interval{low, high})
}
low = i
}
high = i + 1
}
if high >= 0 {
intervals = append(intervals, interval{low, high})
}
sort.Sort(byDecreasingLength(intervals))
fmt.Printf("const numEncodeTables = %d\n\n", len(intervals))
fmt.Printf("// encodeX are the encoding tables from Unicode to Big5 code,\n")
fmt.Printf("// sorted by decreasing length.\n")
for i, v := range intervals {
fmt.Printf("// encode%d: %5d entries for runes in [%6d, %6d).\n", i, v.len(), v.low, v.high)
}
fmt.Printf("\n")
for i, v := range intervals {
fmt.Printf("const encode%dLow, encode%dHigh = %d, %d\n\n", i, i, v.low, v.high)
fmt.Printf("var encode%d = [...]uint16{\n", i)
for j := v.low; j < v.high; j++ {
x := reverse[j]
if x == 0 {
continue
}
fmt.Printf("\t%d-%d: 0x%04X,\n", j, v.low, x)
}
fmt.Printf("}\n\n")
}
}
// interval is a half-open interval [low, high).
type interval struct {
low, high int
}
func (i interval) len() int { return i.high - i.low }
// byDecreasingLength sorts intervals by decreasing length.
type byDecreasingLength []interval
func (b byDecreasingLength) Len() int { return len(b) }
func (b byDecreasingLength) Less(i, j int) bool { return b[i].len() > b[j].len() }
func (b byDecreasingLength) Swap(i, j int) { b[i], b[j] = b[j], b[i] }

View File

@ -1,64 +0,0 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
// Language tag table generator.
// Data read from the web.
package main
import (
"flag"
"fmt"
"log"
"golang.org/x/text/internal/gen"
"golang.org/x/text/unicode/cldr"
)
var (
test = flag.Bool("test",
false,
"test existing tables; can be used to compare web data with package data.")
outputFile = flag.String("output",
"tables.go",
"output file for generated tables")
)
func main() {
gen.Init()
w := gen.NewCodeWriter()
defer w.WriteGoFile("tables.go", "compact")
fmt.Fprintln(w, `import "golang.org/x/text/internal/language"`)
b := newBuilder(w)
gen.WriteCLDRVersion(w)
b.writeCompactIndex()
}
type builder struct {
w *gen.CodeWriter
data *cldr.CLDR
supp *cldr.SupplementalData
}
func newBuilder(w *gen.CodeWriter) *builder {
r := gen.OpenCLDRCoreZip()
defer r.Close()
d := &cldr.Decoder{}
data, err := d.DecodeZip(r)
if err != nil {
log.Fatal(err)
}
b := builder{
w: w,
data: data,
supp: data.Supplemental(),
}
return &b
}

View File

@ -1,113 +0,0 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
package main
// This file generates derivative tables based on the language package itself.
import (
"fmt"
"log"
"sort"
"strings"
"golang.org/x/text/internal/language"
)
// Compact indices:
// Note -va-X variants only apply to localization variants.
// BCP variants only ever apply to language.
// The only ambiguity between tags is with regions.
func (b *builder) writeCompactIndex() {
// Collect all language tags for which we have any data in CLDR.
m := map[language.Tag]bool{}
for _, lang := range b.data.Locales() {
// We include all locales unconditionally to be consistent with en_US.
// We want en_US, even though it has no data associated with it.
// TODO: put any of the languages for which no data exists at the end
// of the index. This allows all components based on ICU to use that
// as the cutoff point.
// if x := data.RawLDML(lang); false ||
// x.LocaleDisplayNames != nil ||
// x.Characters != nil ||
// x.Delimiters != nil ||
// x.Measurement != nil ||
// x.Dates != nil ||
// x.Numbers != nil ||
// x.Units != nil ||
// x.ListPatterns != nil ||
// x.Collations != nil ||
// x.Segmentations != nil ||
// x.Rbnf != nil ||
// x.Annotations != nil ||
// x.Metadata != nil {
// TODO: support POSIX natively, albeit non-standard.
tag := language.Make(strings.Replace(lang, "_POSIX", "-u-va-posix", 1))
m[tag] = true
// }
}
// TODO: plural rules are also defined for the deprecated tags:
// iw mo sh tl
// Consider removing these as compact tags.
// Include locales for plural rules, which uses a different structure.
for _, plurals := range b.supp.Plurals {
for _, rules := range plurals.PluralRules {
for _, lang := range strings.Split(rules.Locales, " ") {
m[language.Make(lang)] = true
}
}
}
var coreTags []language.CompactCoreInfo
var special []string
for t := range m {
if x := t.Extensions(); len(x) != 0 && fmt.Sprint(x) != "[u-va-posix]" {
log.Fatalf("Unexpected extension %v in %v", x, t)
}
if len(t.Variants()) == 0 && len(t.Extensions()) == 0 {
cci, ok := language.GetCompactCore(t)
if !ok {
log.Fatalf("Locale for non-basic language %q", t)
}
coreTags = append(coreTags, cci)
} else {
special = append(special, t.String())
}
}
w := b.w
sort.Slice(coreTags, func(i, j int) bool { return coreTags[i] < coreTags[j] })
sort.Strings(special)
w.WriteComment(`
NumCompactTags is the number of common tags. The maximum tag is
NumCompactTags-1.`)
w.WriteConst("NumCompactTags", len(m))
fmt.Fprintln(w, "const (")
for i, t := range coreTags {
fmt.Fprintf(w, "%s ID = %d\n", ident(t.Tag().String()), i)
}
for i, t := range special {
fmt.Fprintf(w, "%s ID = %d\n", ident(t), i+len(coreTags))
}
fmt.Fprintln(w, ")")
w.WriteVar("coreTags", coreTags)
w.WriteConst("specialTagsStr", strings.Join(special, " "))
}
func ident(s string) string {
return strings.Replace(s, "-", "", -1) + "Index"
}

View File

@ -1,54 +0,0 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
package main
import (
"log"
"golang.org/x/text/internal/gen"
"golang.org/x/text/internal/language"
"golang.org/x/text/internal/language/compact"
"golang.org/x/text/unicode/cldr"
)
func main() {
r := gen.OpenCLDRCoreZip()
defer r.Close()
d := &cldr.Decoder{}
data, err := d.DecodeZip(r)
if err != nil {
log.Fatalf("DecodeZip: %v", err)
}
w := gen.NewCodeWriter()
defer w.WriteGoFile("parents.go", "compact")
// Create parents table.
type ID uint16
parents := make([]ID, compact.NumCompactTags)
for _, loc := range data.Locales() {
tag := language.MustParse(loc)
index, ok := compact.FromTag(tag)
if !ok {
continue
}
parentIndex := compact.ID(0) // und
for p := tag.Parent(); p != language.Und; p = p.Parent() {
if x, ok := compact.FromTag(p); ok {
parentIndex = x
break
}
}
parents[index] = ID(parentIndex)
}
w.WriteComment(`
parents maps a compact index of a tag to the compact index of the parent of
this tag.`)
w.WriteVar("parents", parents)
}

File diff suppressed because it is too large

View File

@ -1,20 +0,0 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
package main
// This file contains code common to maketables.go and the package code.
// AliasType is the type of an alias in AliasMap.
type AliasType int8
const (
Deprecated AliasType = iota
Macro
Legacy
AliasTypeUnknown AliasType = -1
)

View File

@ -1,305 +0,0 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
// Language tag table generator.
// Data read from the web.
package main
import (
"flag"
"fmt"
"io"
"log"
"sort"
"strconv"
"strings"
"golang.org/x/text/internal/gen"
"golang.org/x/text/internal/language"
"golang.org/x/text/unicode/cldr"
)
var (
test = flag.Bool("test",
false,
"test existing tables; can be used to compare web data with package data.")
outputFile = flag.String("output",
"tables.go",
"output file for generated tables")
)
func main() {
gen.Init()
w := gen.NewCodeWriter()
defer w.WriteGoFile("tables.go", "language")
b := newBuilder(w)
gen.WriteCLDRVersion(w)
b.writeConstants()
b.writeMatchData()
}
type builder struct {
w *gen.CodeWriter
hw io.Writer // MultiWriter for w and w.Hash
data *cldr.CLDR
supp *cldr.SupplementalData
}
func (b *builder) langIndex(s string) uint16 {
return uint16(language.MustParseBase(s))
}
func (b *builder) regionIndex(s string) int {
return int(language.MustParseRegion(s))
}
func (b *builder) scriptIndex(s string) int {
return int(language.MustParseScript(s))
}
func newBuilder(w *gen.CodeWriter) *builder {
r := gen.OpenCLDRCoreZip()
defer r.Close()
d := &cldr.Decoder{}
data, err := d.DecodeZip(r)
if err != nil {
log.Fatal(err)
}
b := builder{
w: w,
hw: io.MultiWriter(w, w.Hash),
data: data,
supp: data.Supplemental(),
}
return &b
}
// writeConsts computes f(v) for all v in values and writes the results
// as constants named _v to a single constant block.
func (b *builder) writeConsts(f func(string) int, values ...string) {
fmt.Fprintln(b.w, "const (")
for _, v := range values {
fmt.Fprintf(b.w, "\t_%s = %v\n", v, f(v))
}
fmt.Fprintln(b.w, ")")
}
// TODO: region inclusion data will probably not be used in future matchers.
var langConsts = []string{
"de", "en", "fr", "it", "mo", "no", "nb", "pt", "sh", "mul", "und",
}
var scriptConsts = []string{
"Latn", "Hani", "Hans", "Hant", "Qaaa", "Qaai", "Qabx", "Zinh", "Zyyy",
"Zzzz",
}
var regionConsts = []string{
"001", "419", "BR", "CA", "ES", "GB", "MD", "PT", "UK", "US",
"ZZ", "XA", "XC", "XK", // Unofficial tag for Kosovo.
}
func (b *builder) writeConstants() {
b.writeConsts(func(s string) int { return int(b.langIndex(s)) }, langConsts...)
b.writeConsts(b.regionIndex, regionConsts...)
b.writeConsts(b.scriptIndex, scriptConsts...)
}
type mutualIntelligibility struct {
want, have uint16
distance uint8
oneway bool
}
type scriptIntelligibility struct {
wantLang, haveLang uint16
wantScript, haveScript uint8
distance uint8
// Always oneway
}
type regionIntelligibility struct {
lang uint16 // compact language id
script uint8 // 0 means any
group uint8 // 0 means any; if bit 7 is set it means inverse
distance uint8
// Always twoway.
}
// writeMatchData writes tables with languages and scripts for which there is
// mutual intelligibility. The data is based on CLDR's languageMatching data.
// Note that we use a different algorithm than the one defined by CLDR and that
// we slightly modify the data. For example, we convert scores to confidence levels.
// We also drop all region-related data as we use a different algorithm to
// determine region equivalence.
func (b *builder) writeMatchData() {
lm := b.supp.LanguageMatching.LanguageMatches
cldr.MakeSlice(&lm).SelectAnyOf("type", "written_new")
regionHierarchy := map[string][]string{}
for _, g := range b.supp.TerritoryContainment.Group {
regions := strings.Split(g.Contains, " ")
regionHierarchy[g.Type] = append(regionHierarchy[g.Type], regions...)
}
regionToGroups := make([]uint8, language.NumRegions)
idToIndex := map[string]uint8{}
for i, mv := range lm[0].MatchVariable {
if i > 6 {
log.Fatalf("Too many groups: %d", i)
}
idToIndex[mv.Id] = uint8(i + 1)
// TODO: also handle '-'
for _, r := range strings.Split(mv.Value, "+") {
todo := []string{r}
for k := 0; k < len(todo); k++ {
r := todo[k]
regionToGroups[b.regionIndex(r)] |= 1 << uint8(i)
todo = append(todo, regionHierarchy[r]...)
}
}
}
b.w.WriteVar("regionToGroups", regionToGroups)
// maps language id to in- and out-of-group region.
paradigmLocales := [][3]uint16{}
locales := strings.Split(lm[0].ParadigmLocales[0].Locales, " ")
for i := 0; i < len(locales); i += 2 {
x := [3]uint16{}
for j := 0; j < 2; j++ {
pc := strings.SplitN(locales[i+j], "-", 2)
x[0] = b.langIndex(pc[0])
if len(pc) == 2 {
x[1+j] = uint16(b.regionIndex(pc[1]))
}
}
paradigmLocales = append(paradigmLocales, x)
}
b.w.WriteVar("paradigmLocales", paradigmLocales)
b.w.WriteType(mutualIntelligibility{})
b.w.WriteType(scriptIntelligibility{})
b.w.WriteType(regionIntelligibility{})
matchLang := []mutualIntelligibility{}
matchScript := []scriptIntelligibility{}
matchRegion := []regionIntelligibility{}
// Convert the languageMatch entries in lists keyed by desired language.
for _, m := range lm[0].LanguageMatch {
// Different versions of CLDR use different separators.
desired := strings.Replace(m.Desired, "-", "_", -1)
supported := strings.Replace(m.Supported, "-", "_", -1)
d := strings.Split(desired, "_")
s := strings.Split(supported, "_")
if len(d) != len(s) {
log.Fatalf("not supported: desired=%q; supported=%q", desired, supported)
continue
}
distance, _ := strconv.ParseInt(m.Distance, 10, 8)
switch len(d) {
case 2:
if desired == supported && desired == "*_*" {
continue
}
// language-script pair.
matchScript = append(matchScript, scriptIntelligibility{
wantLang: uint16(b.langIndex(d[0])),
haveLang: uint16(b.langIndex(s[0])),
wantScript: uint8(b.scriptIndex(d[1])),
haveScript: uint8(b.scriptIndex(s[1])),
distance: uint8(distance),
})
if m.Oneway != "true" {
matchScript = append(matchScript, scriptIntelligibility{
wantLang: uint16(b.langIndex(s[0])),
haveLang: uint16(b.langIndex(d[0])),
wantScript: uint8(b.scriptIndex(s[1])),
haveScript: uint8(b.scriptIndex(d[1])),
distance: uint8(distance),
})
}
case 1:
if desired == supported && desired == "*" {
continue
}
if distance == 1 {
// nb == no is already handled by macro mapping. Check there
// really is only this case.
if d[0] != "no" || s[0] != "nb" {
log.Fatalf("unhandled equivalence %s == %s", s[0], d[0])
}
continue
}
// TODO: consider dropping oneway field and just doubling the entry.
matchLang = append(matchLang, mutualIntelligibility{
want: uint16(b.langIndex(d[0])),
have: uint16(b.langIndex(s[0])),
distance: uint8(distance),
oneway: m.Oneway == "true",
})
case 3:
if desired == supported && desired == "*_*_*" {
continue
}
if desired != supported {
// This is now supported by CLDR, but only one case, which
// should already be covered by paradigm locales. For instance,
// test case "und, en, en-GU, en-IN, en-GB ; en-ZA ; en-GB" in
// testdata/CLDRLocaleMatcherTest.txt tests this.
if supported != "en_*_GB" {
log.Fatalf("not supported: desired=%q; supported=%q", desired, supported)
}
continue
}
ri := regionIntelligibility{
lang: b.langIndex(d[0]),
distance: uint8(distance),
}
if d[1] != "*" {
ri.script = uint8(b.scriptIndex(d[1]))
}
switch {
case d[2] == "*":
ri.group = 0x80 // not contained in anything
case strings.HasPrefix(d[2], "$!"):
ri.group = 0x80
d[2] = "$" + d[2][len("$!"):]
fallthrough
case strings.HasPrefix(d[2], "$"):
ri.group |= idToIndex[d[2]]
}
matchRegion = append(matchRegion, ri)
default:
log.Fatalf("not supported: desired=%q; supported=%q", desired, supported)
}
}
sort.SliceStable(matchLang, func(i, j int) bool {
return matchLang[i].distance < matchLang[j].distance
})
b.w.WriteComment(`
matchLang holds pairs of langIDs of base languages that are typically
mutually intelligible. Each pair is associated with a confidence and
whether the intelligibility goes one or both ways.`)
b.w.WriteVar("matchLang", matchLang)
b.w.WriteComment(`
matchScript holds pairs of scriptIDs where readers of one script
can typically also read the other. Each is associated with a confidence.`)
sort.SliceStable(matchScript, func(i, j int) bool {
return matchScript[i].distance < matchScript[j].distance
})
b.w.WriteVar("matchScript", matchScript)
sort.SliceStable(matchRegion, func(i, j int) bool {
return matchRegion[i].distance < matchRegion[j].distance
})
b.w.WriteVar("matchRegion", matchRegion)
}

View File

@ -1,133 +0,0 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
package main
import (
"flag"
"log"
"golang.org/x/text/internal/gen"
"golang.org/x/text/internal/triegen"
"golang.org/x/text/internal/ucd"
)
var outputFile = flag.String("out", "tables.go", "output file")
func main() {
gen.Init()
gen.Repackage("gen_trieval.go", "trieval.go", "bidi")
gen.Repackage("gen_ranges.go", "ranges_test.go", "bidi")
genTables()
}
// bidiClass names and codes taken from class "bc" in
// https://www.unicode.org/Public/8.0.0/ucd/PropertyValueAliases.txt
var bidiClass = map[string]Class{
"AL": AL, // ArabicLetter
"AN": AN, // ArabicNumber
"B": B, // ParagraphSeparator
"BN": BN, // BoundaryNeutral
"CS": CS, // CommonSeparator
"EN": EN, // EuropeanNumber
"ES": ES, // EuropeanSeparator
"ET": ET, // EuropeanTerminator
"L": L, // LeftToRight
"NSM": NSM, // NonspacingMark
"ON": ON, // OtherNeutral
"R": R, // RightToLeft
"S": S, // SegmentSeparator
"WS": WS, // WhiteSpace
"FSI": Control,
"PDF": Control,
"PDI": Control,
"LRE": Control,
"LRI": Control,
"LRO": Control,
"RLE": Control,
"RLI": Control,
"RLO": Control,
}
func genTables() {
if numClass > 0x0F {
log.Fatalf("Too many Class constants (%#x > 0x0F).", numClass)
}
w := gen.NewCodeWriter()
defer w.WriteVersionedGoFile(*outputFile, "bidi")
gen.WriteUnicodeVersion(w)
t := triegen.NewTrie("bidi")
// Build data about bracket mapping. These bits need to be or-ed with
// any other bits.
orMask := map[rune]uint64{}
xorMap := map[rune]int{}
xorMasks := []rune{0} // First value is no-op.
ucd.Parse(gen.OpenUCDFile("BidiBrackets.txt"), func(p *ucd.Parser) {
r1 := p.Rune(0)
r2 := p.Rune(1)
xor := r1 ^ r2
if _, ok := xorMap[xor]; !ok {
xorMap[xor] = len(xorMasks)
xorMasks = append(xorMasks, xor)
}
entry := uint64(xorMap[xor]) << xorMaskShift
switch p.String(2) {
case "o":
entry |= openMask
case "c", "n":
default:
log.Fatalf("Unknown bracket class %q.", p.String(2))
}
orMask[r1] = entry
})
w.WriteComment(`
xorMasks contains masks to be xor-ed with brackets to get the reverse
version.`)
w.WriteVar("xorMasks", xorMasks)
done := map[rune]bool{}
insert := func(r rune, c Class) {
if !done[r] {
t.Insert(r, orMask[r]|uint64(c))
done[r] = true
}
}
// Insert the derived BiDi properties.
ucd.Parse(gen.OpenUCDFile("extracted/DerivedBidiClass.txt"), func(p *ucd.Parser) {
r := p.Rune(0)
class, ok := bidiClass[p.String(1)]
if !ok {
log.Fatalf("%U: Unknown BiDi class %q", r, p.String(1))
}
insert(r, class)
})
visitDefaults(insert)
// TODO: use sparse blocks. This would reduce table size considerably
// from the looks of it.
sz, err := t.Gen(w)
if err != nil {
log.Fatal(err)
}
w.Size += sz
}
// Dummy values to make the methods in gen_common compile. The real versions
// are generated by this file into tables.go.
var (
xorMasks []rune
)

View File

@ -1,57 +0,0 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
package main
import (
"unicode"
"golang.org/x/text/internal/gen"
"golang.org/x/text/internal/ucd"
"golang.org/x/text/unicode/rangetable"
)
// These tables are hand-extracted from:
// https://www.unicode.org/Public/8.0.0/ucd/extracted/DerivedBidiClass.txt
func visitDefaults(fn func(r rune, c Class)) {
// first write default values for ranges listed above.
visitRunes(fn, AL, []rune{
0x0600, 0x07BF, // Arabic
0x08A0, 0x08FF, // Arabic Extended-A
0xFB50, 0xFDCF, // Arabic Presentation Forms
0xFDF0, 0xFDFF,
0xFE70, 0xFEFF,
0x0001EE00, 0x0001EEFF, // Arabic Mathematical Alpha Symbols
})
visitRunes(fn, R, []rune{
0x0590, 0x05FF, // Hebrew
0x07C0, 0x089F, // Nko et al.
0xFB1D, 0xFB4F,
0x00010800, 0x00010FFF, // Cypriot Syllabary et. al.
0x0001E800, 0x0001EDFF,
0x0001EF00, 0x0001EFFF,
})
visitRunes(fn, ET, []rune{ // European Terminator
0x20A0, 0x20CF, // Currency symbols
})
rangetable.Visit(unicode.Noncharacter_Code_Point, func(r rune) {
fn(r, BN) // Boundary Neutral
})
ucd.Parse(gen.OpenUCDFile("DerivedCoreProperties.txt"), func(p *ucd.Parser) {
if p.String(1) == "Default_Ignorable_Code_Point" {
fn(p.Rune(0), BN) // Boundary Neutral
}
})
}
func visitRunes(fn func(r rune, c Class), c Class, runes []rune) {
for i := 0; i < len(runes); i += 2 {
lo, hi := runes[i], runes[i+1]
for j := lo; j <= hi; j++ {
fn(j, c)
}
}
}

View File

@ -1,64 +0,0 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
package main
// Class is the Unicode BiDi class. Each rune has a single class.
type Class uint
const (
L Class = iota // LeftToRight
R // RightToLeft
EN // EuropeanNumber
ES // EuropeanSeparator
ET // EuropeanTerminator
AN // ArabicNumber
CS // CommonSeparator
B // ParagraphSeparator
S // SegmentSeparator
WS // WhiteSpace
ON // OtherNeutral
BN // BoundaryNeutral
NSM // NonspacingMark
AL // ArabicLetter
Control // Control LRO - PDI
numClass
LRO // LeftToRightOverride
RLO // RightToLeftOverride
LRE // LeftToRightEmbedding
RLE // RightToLeftEmbedding
PDF // PopDirectionalFormat
LRI // LeftToRightIsolate
RLI // RightToLeftIsolate
FSI // FirstStrongIsolate
PDI // PopDirectionalIsolate
unknownClass = ^Class(0)
)
var controlToClass = map[rune]Class{
0x202D: LRO, // LeftToRightOverride,
0x202E: RLO, // RightToLeftOverride,
0x202A: LRE, // LeftToRightEmbedding,
0x202B: RLE, // RightToLeftEmbedding,
0x202C: PDF, // PopDirectionalFormat,
0x2066: LRI, // LeftToRightIsolate,
0x2067: RLI, // RightToLeftIsolate,
0x2068: FSI, // FirstStrongIsolate,
0x2069: PDI, // PopDirectionalIsolate,
}
// A trie entry has the following bits:
// 7..5 XOR mask for brackets
// 4 1: Bracket open, 0: Bracket close
// 3..0 Class type
const (
openMask = 0x10
xorMaskShift = 5
)
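// A minimal sketch (not part of the generator) of how a consumer of the
// generated trie might unpack an entry laid out as described above:
//
//	func unpackEntry(e uint8) (xorIndex int, open bool, c Class) {
//		xorIndex = int(e >> xorMaskShift) // bits 7..5: index into xorMasks
//		open = e&openMask != 0            // bit 4: bracket open flag
//		c = Class(e & 0x0F)               // bits 3..0: BiDi class
//		return xorIndex, open, c
//	}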

View File

@ -1,986 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
// Normalization table generator.
// Data read from the web.
// See forminfo.go for a description of the trie values associated with each rune.
package main
import (
"bytes"
"encoding/binary"
"flag"
"fmt"
"io"
"log"
"sort"
"strconv"
"strings"
"golang.org/x/text/internal/gen"
"golang.org/x/text/internal/triegen"
"golang.org/x/text/internal/ucd"
)
func main() {
gen.Init()
loadUnicodeData()
compactCCC()
loadCompositionExclusions()
completeCharFields(FCanonical)
completeCharFields(FCompatibility)
computeNonStarterCounts()
verifyComputed()
printChars()
testDerived()
printTestdata()
makeTables()
}
var (
tablelist = flag.String("tables",
"all",
"comma-separated list of which tables to generate; "+
"can be 'decomp', 'recomp', 'info' and 'all'")
test = flag.Bool("test",
false,
"test existing tables against DerivedNormalizationProps and generate test data for regression testing")
verbose = flag.Bool("verbose",
false,
"write data to stdout as it is parsed")
)
const MaxChar = 0x10FFFF // anything above this shouldn't exist
// Quick Check properties of runes allow us to quickly
// determine whether a rune may occur in a normal form.
// For a given normal form, a rune may be guaranteed to occur
// verbatim (QC=Yes), may or may not combine with another
// rune (QC=Maybe), or may not occur (QC=No).
type QCResult int
const (
QCUnknown QCResult = iota
QCYes
QCNo
QCMaybe
)
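// For example, U+00C0 (LATIN CAPITAL LETTER A WITH GRAVE) is QCYes for NFC
// but QCNo for NFD (it decomposes to A + U+0300), while U+0300 itself is
// QCMaybe for NFC because it may combine with a preceding starter.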
func (r QCResult) String() string {
switch r {
case QCYes:
return "Yes"
case QCNo:
return "No"
case QCMaybe:
return "Maybe"
}
return "***UNKNOWN***"
}
const (
FCanonical = iota // NFC or NFD
FCompatibility // NFKC or NFKD
FNumberOfFormTypes
)
const (
MComposed = iota // NFC or NFKC
MDecomposed // NFD or NFKD
MNumberOfModes
)
// This contains only the properties we're interested in.
type Char struct {
name string
codePoint rune // if zero, this index is not a valid code point.
ccc uint8 // canonical combining class
origCCC uint8
excludeInComp bool // from CompositionExclusions.txt
compatDecomp bool // it has a compatibility expansion
nTrailingNonStarters uint8
nLeadingNonStarters uint8 // must be equal to trailing if non-zero
forms [FNumberOfFormTypes]FormInfo // For FCanonical and FCompatibility
state State
}
var chars = make([]Char, MaxChar+1)
var cccMap = make(map[uint8]uint8)
func (c Char) String() string {
buf := new(bytes.Buffer)
fmt.Fprintf(buf, "%U [%s]:\n", c.codePoint, c.name)
fmt.Fprintf(buf, " ccc: %v\n", c.ccc)
fmt.Fprintf(buf, " excludeInComp: %v\n", c.excludeInComp)
fmt.Fprintf(buf, " compatDecomp: %v\n", c.compatDecomp)
fmt.Fprintf(buf, " state: %v\n", c.state)
fmt.Fprintf(buf, " NFC:\n")
fmt.Fprint(buf, c.forms[FCanonical])
fmt.Fprintf(buf, " NFKC:\n")
fmt.Fprint(buf, c.forms[FCompatibility])
return buf.String()
}
// In UnicodeData.txt, some ranges are marked like this:
// 3400;<CJK Ideograph Extension A, First>;Lo;0;L;;;;;N;;;;;
// 4DB5;<CJK Ideograph Extension A, Last>;Lo;0;L;;;;;N;;;;;
// parseCharacter keeps a state variable indicating the weirdness.
type State int
const (
SNormal State = iota // known to be zero for the type
SFirst
SLast
SMissing
)
var lastChar = rune('\u0000')
func (c Char) isValid() bool {
return c.codePoint != 0 && c.state != SMissing
}
type FormInfo struct {
quickCheck [MNumberOfModes]QCResult // index: MComposed or MDecomposed
verified [MNumberOfModes]bool // index: MComposed or MDecomposed
combinesForward bool // May combine with rune on the right
combinesBackward bool // May combine with rune on the left
isOneWay bool // Never appears in result
inDecomp bool // Some decompositions result in this char.
decomp Decomposition
expandedDecomp Decomposition
}
func (f FormInfo) String() string {
buf := bytes.NewBuffer(make([]byte, 0))
fmt.Fprintf(buf, " quickCheck[C]: %v\n", f.quickCheck[MComposed])
fmt.Fprintf(buf, " quickCheck[D]: %v\n", f.quickCheck[MDecomposed])
fmt.Fprintf(buf, " cmbForward: %v\n", f.combinesForward)
fmt.Fprintf(buf, " cmbBackward: %v\n", f.combinesBackward)
fmt.Fprintf(buf, " isOneWay: %v\n", f.isOneWay)
fmt.Fprintf(buf, " inDecomp: %v\n", f.inDecomp)
fmt.Fprintf(buf, " decomposition: %X\n", f.decomp)
fmt.Fprintf(buf, " expandedDecomp: %X\n", f.expandedDecomp)
return buf.String()
}
type Decomposition []rune
func parseDecomposition(s string, skipfirst bool) (a []rune, err error) {
decomp := strings.Split(s, " ")
if len(decomp) > 0 && skipfirst {
decomp = decomp[1:]
}
for _, d := range decomp {
point, err := strconv.ParseUint(d, 16, 64)
if err != nil {
return a, err
}
a = append(a, rune(point))
}
return a, nil
}
func loadUnicodeData() {
f := gen.OpenUCDFile("UnicodeData.txt")
defer f.Close()
p := ucd.New(f)
for p.Next() {
r := p.Rune(ucd.CodePoint)
char := &chars[r]
char.ccc = uint8(p.Uint(ucd.CanonicalCombiningClass))
decmap := p.String(ucd.DecompMapping)
exp, err := parseDecomposition(decmap, false)
isCompat := false
if err != nil {
if len(decmap) > 0 {
exp, err = parseDecomposition(decmap, true)
if err != nil {
log.Fatalf(`%U: bad decomp |%v|: "%s"`, r, decmap, err)
}
isCompat = true
}
}
char.name = p.String(ucd.Name)
char.codePoint = r
char.forms[FCompatibility].decomp = exp
if !isCompat {
char.forms[FCanonical].decomp = exp
} else {
char.compatDecomp = true
}
if len(decmap) > 0 {
char.forms[FCompatibility].decomp = exp
}
}
if err := p.Err(); err != nil {
log.Fatal(err)
}
}
// compactCCC converts the sparse set of CCC values to a contiguous one,
// reducing the number of bits needed from 8 to 6.
func compactCCC() {
m := make(map[uint8]uint8)
for i := range chars {
c := &chars[i]
m[c.ccc] = 0
}
cccs := []int{}
for v := range m {
cccs = append(cccs, int(v))
}
sort.Ints(cccs)
for i, c := range cccs {
cccMap[uint8(i)] = uint8(c)
m[uint8(c)] = uint8(i)
}
for i := range chars {
c := &chars[i]
c.origCCC = c.ccc
c.ccc = m[c.ccc]
}
if len(m) >= 1<<6 {
log.Fatalf("too many difference CCC values: %d >= 64", len(m))
}
}
// CompositionExclusions.txt has form:
// 0958 # ...
// See https://unicode.org/reports/tr44/ for full explanation
func loadCompositionExclusions() {
f := gen.OpenUCDFile("CompositionExclusions.txt")
defer f.Close()
p := ucd.New(f)
for p.Next() {
c := &chars[p.Rune(0)]
if c.excludeInComp {
log.Fatalf("%U: Duplicate entry in exclusions.", c.codePoint)
}
c.excludeInComp = true
}
if e := p.Err(); e != nil {
log.Fatal(e)
}
}
// hasCompatDecomp returns true if any of the recursive
// decompositions contains a compatibility expansion.
// In this case, the character may not occur in NFK*.
func hasCompatDecomp(r rune) bool {
c := &chars[r]
if c.compatDecomp {
return true
}
for _, d := range c.forms[FCompatibility].decomp {
if hasCompatDecomp(d) {
return true
}
}
return false
}
// Hangul related constants.
const (
HangulBase = 0xAC00
HangulEnd = 0xD7A4 // hangulBase + Jamo combinations (19 * 21 * 28)
JamoLBase = 0x1100
JamoLEnd = 0x1113
JamoVBase = 0x1161
JamoVEnd = 0x1176
JamoTBase = 0x11A8
JamoTEnd = 0x11C3
JamoLVTCount = 19 * 21 * 28
JamoTCount = 28
)
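// The constants above cover the 19*21*28 = 11172 precomposed syllables,
// i.e. the range U+AC00 through U+D7A3 (HangulEnd is exclusive).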
func isHangul(r rune) bool {
return HangulBase <= r && r < HangulEnd
}
func isHangulWithoutJamoT(r rune) bool {
if !isHangul(r) {
return false
}
r -= HangulBase
return r < JamoLVTCount && r%JamoTCount == 0
}
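// For example, U+AC00 (the first LV syllable, 가) has no trailing jamo and
// satisfies r%JamoTCount == 0, whereas U+AC01 (각) carries a trailing jamo.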
func ccc(r rune) uint8 {
return chars[r].ccc
}
// Insert a rune in a buffer, ordered by Canonical Combining Class.
func insertOrdered(b Decomposition, r rune) Decomposition {
n := len(b)
b = append(b, 0)
cc := ccc(r)
if cc > 0 {
// Use bubble sort.
for ; n > 0; n-- {
if ccc(b[n-1]) <= cc {
break
}
b[n] = b[n-1]
}
}
b[n] = r
return b
}
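// For example, appending U+0323 (ccc 220) to a buffer ending in U+0301
// (ccc 230) bubbles the new rune in front of the acute accent, keeping the
// buffer sorted by combining class.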
// Recursively decompose.
func decomposeRecursive(form int, r rune, d Decomposition) Decomposition {
dcomp := chars[r].forms[form].decomp
if len(dcomp) == 0 {
return insertOrdered(d, r)
}
for _, c := range dcomp {
d = decomposeRecursive(form, c, d)
}
return d
}
func completeCharFields(form int) {
// Phase 0: pre-expand decomposition.
for i := range chars {
f := &chars[i].forms[form]
if len(f.decomp) == 0 {
continue
}
exp := make(Decomposition, 0)
for _, c := range f.decomp {
exp = decomposeRecursive(form, c, exp)
}
f.expandedDecomp = exp
}
// Phase 1: composition exclusion, mark decomposition.
for i := range chars {
c := &chars[i]
f := &c.forms[form]
// Mark script-specific exclusions and version-restricted characters.
f.isOneWay = c.excludeInComp
// Singletons
f.isOneWay = f.isOneWay || len(f.decomp) == 1
// Non-starter decompositions
if len(f.decomp) > 1 {
chk := c.ccc != 0 || chars[f.decomp[0]].ccc != 0
f.isOneWay = f.isOneWay || chk
}
// Runes that decompose into more than two runes.
f.isOneWay = f.isOneWay || len(f.decomp) > 2
if form == FCompatibility {
f.isOneWay = f.isOneWay || hasCompatDecomp(c.codePoint)
}
for _, r := range f.decomp {
chars[r].forms[form].inDecomp = true
}
}
// Phase 2: forward and backward combining.
for i := range chars {
c := &chars[i]
f := &c.forms[form]
if !f.isOneWay && len(f.decomp) == 2 {
f0 := &chars[f.decomp[0]].forms[form]
f1 := &chars[f.decomp[1]].forms[form]
if !f0.isOneWay {
f0.combinesForward = true
}
if !f1.isOneWay {
f1.combinesBackward = true
}
}
if isHangulWithoutJamoT(rune(i)) {
f.combinesForward = true
}
}
// Phase 3: quick check values.
for i := range chars {
c := &chars[i]
f := &c.forms[form]
switch {
case len(f.decomp) > 0:
f.quickCheck[MDecomposed] = QCNo
case isHangul(rune(i)):
f.quickCheck[MDecomposed] = QCNo
default:
f.quickCheck[MDecomposed] = QCYes
}
switch {
case f.isOneWay:
f.quickCheck[MComposed] = QCNo
case (i & 0xffff00) == JamoLBase:
f.quickCheck[MComposed] = QCYes
if JamoLBase <= i && i < JamoLEnd {
f.combinesForward = true
}
if JamoVBase <= i && i < JamoVEnd {
f.quickCheck[MComposed] = QCMaybe
f.combinesBackward = true
f.combinesForward = true
}
if JamoTBase <= i && i < JamoTEnd {
f.quickCheck[MComposed] = QCMaybe
f.combinesBackward = true
}
case !f.combinesBackward:
f.quickCheck[MComposed] = QCYes
default:
f.quickCheck[MComposed] = QCMaybe
}
}
}
func computeNonStarterCounts() {
// Phase 4: leading and trailing non-starter count
for i := range chars {
c := &chars[i]
runes := []rune{rune(i)}
// We always use FCompatibility so that the CGJ insertion points do not
// change for repeated normalizations with different forms.
if exp := c.forms[FCompatibility].expandedDecomp; len(exp) > 0 {
runes = exp
}
// We consider runes that combine backwards to be non-starters for the
// purpose of Stream-Safe Text Processing.
for _, r := range runes {
if cr := &chars[r]; cr.ccc == 0 && !cr.forms[FCompatibility].combinesBackward {
break
}
c.nLeadingNonStarters++
}
for i := len(runes) - 1; i >= 0; i-- {
if cr := &chars[runes[i]]; cr.ccc == 0 && !cr.forms[FCompatibility].combinesBackward {
break
}
c.nTrailingNonStarters++
}
if c.nTrailingNonStarters > 3 {
log.Fatalf("%U: Decomposition with more than 3 (%d) trailing modifiers (%U)", i, c.nTrailingNonStarters, runes)
}
if isHangul(rune(i)) {
c.nTrailingNonStarters = 2
if isHangulWithoutJamoT(rune(i)) {
c.nTrailingNonStarters = 1
}
}
if l, t := c.nLeadingNonStarters, c.nTrailingNonStarters; l > 0 && l != t {
log.Fatalf("%U: number of leading and trailing non-starters should be equal (%d vs %d)", i, l, t)
}
if t := c.nTrailingNonStarters; t > 3 {
log.Fatalf("%U: number of trailing non-starters is %d > 3", t)
}
}
}
func printBytes(w io.Writer, b []byte, name string) {
fmt.Fprintf(w, "// %s: %d bytes\n", name, len(b))
fmt.Fprintf(w, "var %s = [...]byte {", name)
for i, c := range b {
switch {
case i%64 == 0:
fmt.Fprintf(w, "\n// Bytes %x - %x\n", i, i+63)
case i%8 == 0:
fmt.Fprintf(w, "\n")
}
fmt.Fprintf(w, "0x%.2X, ", c)
}
fmt.Fprint(w, "\n}\n\n")
}
// See forminfo.go for format.
func makeEntry(f *FormInfo, c *Char) uint16 {
e := uint16(0)
if r := c.codePoint; HangulBase <= r && r < HangulEnd {
e |= 0x40
}
if f.combinesForward {
e |= 0x20
}
if f.quickCheck[MDecomposed] == QCNo {
e |= 0x4
}
switch f.quickCheck[MComposed] {
case QCYes:
case QCNo:
e |= 0x10
case QCMaybe:
e |= 0x18
default:
log.Fatalf("Illegal quickcheck value %v.", f.quickCheck[MComposed])
}
e |= uint16(c.nTrailingNonStarters)
return e
}
// decompSet keeps track of unique decompositions, grouped by whether
// the decomposition is followed by a trailing and/or leading CCC.
type decompSet [7]map[string]bool
const (
normalDecomp = iota
firstMulti
firstCCC
endMulti
firstLeadingCCC
firstCCCZeroExcept
firstStarterWithNLead
lastDecomp
)
var cname = []string{"firstMulti", "firstCCC", "endMulti", "firstLeadingCCC", "firstCCCZeroExcept", "firstStarterWithNLead", "lastDecomp"}
func makeDecompSet() decompSet {
m := decompSet{}
for i := range m {
m[i] = make(map[string]bool)
}
return m
}
func (m *decompSet) insert(key int, s string) {
m[key][s] = true
}
func printCharInfoTables(w io.Writer) int {
mkstr := func(r rune, f *FormInfo) (int, string) {
d := f.expandedDecomp
s := string([]rune(d))
if max := 1 << 6; len(s) >= max {
const msg = "%U: too many bytes in decomposition: %d >= %d"
log.Fatalf(msg, r, len(s), max)
}
head := uint8(len(s))
if f.quickCheck[MComposed] != QCYes {
head |= 0x40
}
if f.combinesForward {
head |= 0x80
}
s = string([]byte{head}) + s
lccc := ccc(d[0])
tccc := ccc(d[len(d)-1])
cc := ccc(r)
if cc != 0 && lccc == 0 && tccc == 0 {
log.Fatalf("%U: trailing and leading ccc are 0 for non-zero ccc %d", r, cc)
}
if tccc < lccc && lccc != 0 {
const msg = "%U: lccc (%d) must be <= tcc (%d)"
log.Fatalf(msg, r, lccc, tccc)
}
index := normalDecomp
nTrail := chars[r].nTrailingNonStarters
nLead := chars[r].nLeadingNonStarters
if tccc > 0 || lccc > 0 || nTrail > 0 {
tccc <<= 2
tccc |= nTrail
s += string([]byte{tccc})
index = endMulti
for _, r := range d[1:] {
if ccc(r) == 0 {
index = firstCCC
}
}
if lccc > 0 || nLead > 0 {
s += string([]byte{lccc})
if index == firstCCC {
log.Fatalf("%U: multi-segment decomposition not supported for decompositions with leading CCC != 0", r)
}
index = firstLeadingCCC
}
if cc != lccc {
if cc != 0 {
log.Fatalf("%U: for lccc != ccc, expected ccc to be 0; was %d", r, cc)
}
index = firstCCCZeroExcept
}
} else if len(d) > 1 {
index = firstMulti
}
return index, s
}
decompSet := makeDecompSet()
const nLeadStr = "\x00\x01" // 0-byte length and tccc with nTrail.
decompSet.insert(firstStarterWithNLead, nLeadStr)
// Store the uniqued decompositions in a byte buffer,
// preceded by their byte length.
for _, c := range chars {
for _, f := range c.forms {
if len(f.expandedDecomp) == 0 {
continue
}
if f.combinesBackward {
log.Fatalf("%U: combinesBackward and decompose", c.codePoint)
}
index, s := mkstr(c.codePoint, &f)
decompSet.insert(index, s)
}
}
decompositions := bytes.NewBuffer(make([]byte, 0, 10000))
size := 0
positionMap := make(map[string]uint16)
decompositions.WriteString("\000")
fmt.Fprintln(w, "const (")
for i, m := range decompSet {
sa := []string{}
for s := range m {
sa = append(sa, s)
}
sort.Strings(sa)
for _, s := range sa {
p := decompositions.Len()
decompositions.WriteString(s)
positionMap[s] = uint16(p)
}
if cname[i] != "" {
fmt.Fprintf(w, "%s = 0x%X\n", cname[i], decompositions.Len())
}
}
fmt.Fprintln(w, "maxDecomp = 0x8000")
fmt.Fprintln(w, ")")
b := decompositions.Bytes()
printBytes(w, b, "decomps")
size += len(b)
varnames := []string{"nfc", "nfkc"}
for i := 0; i < FNumberOfFormTypes; i++ {
trie := triegen.NewTrie(varnames[i])
for r, c := range chars {
f := c.forms[i]
d := f.expandedDecomp
if len(d) != 0 {
_, key := mkstr(c.codePoint, &f)
trie.Insert(rune(r), uint64(positionMap[key]))
if c.ccc != ccc(d[0]) {
// We assume the lead ccc of a decomposition !=0 in this case.
if ccc(d[0]) == 0 {
log.Fatalf("Expected leading CCC to be non-zero; ccc is %d", c.ccc)
}
}
} else if c.nLeadingNonStarters > 0 && len(f.expandedDecomp) == 0 && c.ccc == 0 && !f.combinesBackward {
// Handle cases where it can't be detected that the nLead should be equal
// to nTrail.
trie.Insert(c.codePoint, uint64(positionMap[nLeadStr]))
} else if v := makeEntry(&f, &c)<<8 | uint16(c.ccc); v != 0 {
trie.Insert(c.codePoint, uint64(0x8000|v))
}
}
sz, err := trie.Gen(w, triegen.Compact(&normCompacter{name: varnames[i]}))
if err != nil {
log.Fatal(err)
}
size += sz
}
return size
}
func contains(sa []string, s string) bool {
for _, a := range sa {
if a == s {
return true
}
}
return false
}
func makeTables() {
w := &bytes.Buffer{}
size := 0
if *tablelist == "" {
return
}
list := strings.Split(*tablelist, ",")
if *tablelist == "all" {
list = []string{"recomp", "info"}
}
// Compute maximum decomposition size.
max := 0
for _, c := range chars {
if n := len(string(c.forms[FCompatibility].expandedDecomp)); n > max {
max = n
}
}
fmt.Fprintln(w, `import "sync"`)
fmt.Fprintln(w)
fmt.Fprintln(w, "const (")
fmt.Fprintln(w, "\t// Version is the Unicode edition from which the tables are derived.")
fmt.Fprintf(w, "\tVersion = %q\n", gen.UnicodeVersion())
fmt.Fprintln(w)
fmt.Fprintln(w, "\t// MaxTransformChunkSize indicates the maximum number of bytes that Transform")
fmt.Fprintln(w, "\t// may need to write atomically for any Form. Making a destination buffer at")
fmt.Fprintln(w, "\t// least this size ensures that Transform can always make progress and that")
fmt.Fprintln(w, "\t// the user does not need to grow the buffer on an ErrShortDst.")
fmt.Fprintf(w, "\tMaxTransformChunkSize = %d+maxNonStarters*4\n", len(string(0x034F))+max)
fmt.Fprintln(w, ")\n")
// Print the CCC remap table.
size += len(cccMap)
fmt.Fprintf(w, "var ccc = [%d]uint8{", len(cccMap))
for i := 0; i < len(cccMap); i++ {
if i%8 == 0 {
fmt.Fprintln(w)
}
fmt.Fprintf(w, "%3d, ", cccMap[uint8(i)])
}
fmt.Fprintln(w, "\n}\n")
if contains(list, "info") {
size += printCharInfoTables(w)
}
if contains(list, "recomp") {
// Note that we use 32 bit keys, instead of 64 bit.
// This clips the bits of three entries, but we know
// this won't cause a collision. The compiler will catch
// any changes made to UnicodeData.txt that introduces
// a collision.
// Note that the recomposition map for NFC and NFKC
// are identical.
// Recomposition map
nrentries := 0
for _, c := range chars {
f := c.forms[FCanonical]
if !f.isOneWay && len(f.decomp) > 0 {
nrentries++
}
}
sz := nrentries * 8
size += sz
fmt.Fprintf(w, "// recompMap: %d bytes (entries only)\n", sz)
fmt.Fprintln(w, "var recompMap map[uint32]rune")
fmt.Fprintln(w, "var recompMapOnce sync.Once\n")
fmt.Fprintln(w, `const recompMapPacked = "" +`)
var buf [8]byte
for i, c := range chars {
f := c.forms[FCanonical]
d := f.decomp
if !f.isOneWay && len(d) > 0 {
key := uint32(uint16(d[0]))<<16 + uint32(uint16(d[1]))
binary.BigEndian.PutUint32(buf[:4], key)
binary.BigEndian.PutUint32(buf[4:], uint32(i))
fmt.Fprintf(w, "\t\t%q + // 0x%.8X: 0x%.8X\n", string(buf[:]), key, uint32(i))
}
}
// hack so we don't have to special case the trailing plus sign
fmt.Fprintf(w, ` ""`)
fmt.Fprintln(w)
}
fmt.Fprintf(w, "// Total size of tables: %dKB (%d bytes)\n", (size+512)/1024, size)
gen.WriteVersionedGoFile("tables.go", "norm", w.Bytes())
}
func printChars() {
if *verbose {
for _, c := range chars {
if !c.isValid() || c.state == SMissing {
continue
}
fmt.Println(c)
}
}
}
// verifyComputed does various consistency tests.
func verifyComputed() {
for i, c := range chars {
for _, f := range c.forms {
isNo := (f.quickCheck[MDecomposed] == QCNo)
if (len(f.decomp) > 0) != isNo && !isHangul(rune(i)) {
log.Fatalf("%U: NF*D QC must be No if rune decomposes", i)
}
isMaybe := f.quickCheck[MComposed] == QCMaybe
if f.combinesBackward != isMaybe {
log.Fatalf("%U: NF*C QC must be Maybe if combinesBackward", i)
}
if len(f.decomp) > 0 && f.combinesForward && isMaybe {
log.Fatalf("%U: NF*C QC must be Yes or No if combinesForward and decomposes", i)
}
if len(f.expandedDecomp) != 0 {
continue
}
if a, b := c.nLeadingNonStarters > 0, (c.ccc > 0 || f.combinesBackward); a != b {
// We accept these runes to be treated differently (it only affects
// segment breaking in iteration, most likely on improper use), but
// reconsider if more characters are added.
// U+FF9E HALFWIDTH KATAKANA VOICED SOUND MARK;Lm;0;L;<narrow> 3099;;;;N;;;;;
// U+FF9F HALFWIDTH KATAKANA SEMI-VOICED SOUND MARK;Lm;0;L;<narrow> 309A;;;;N;;;;;
// U+3133 HANGUL LETTER KIYEOK-SIOS;Lo;0;L;<compat> 11AA;;;;N;HANGUL LETTER GIYEOG SIOS;;;;
// U+318E HANGUL LETTER ARAEAE;Lo;0;L;<compat> 11A1;;;;N;HANGUL LETTER ALAE AE;;;;
// U+FFA3 HALFWIDTH HANGUL LETTER KIYEOK-SIOS;Lo;0;L;<narrow> 3133;;;;N;HALFWIDTH HANGUL LETTER GIYEOG SIOS;;;;
// U+FFDC HALFWIDTH HANGUL LETTER I;Lo;0;L;<narrow> 3163;;;;N;;;;;
if i != 0xFF9E && i != 0xFF9F && !(0x3133 <= i && i <= 0x318E) && !(0xFFA3 <= i && i <= 0xFFDC) {
log.Fatalf("%U: nLead was %v; want %v", i, a, b)
}
}
}
nfc := c.forms[FCanonical]
nfkc := c.forms[FCompatibility]
if nfc.combinesBackward != nfkc.combinesBackward {
log.Fatalf("%U: Cannot combine combinesBackward\n", c.codePoint)
}
}
}
// Use values in DerivedNormalizationProps.txt to compare against the
// values we computed.
// DerivedNormalizationProps.txt has form:
// 00C0..00C5 ; NFD_QC; N # ...
// 0374 ; NFD_QC; N # ...
// See https://unicode.org/reports/tr44/ for full explanation
func testDerived() {
f := gen.OpenUCDFile("DerivedNormalizationProps.txt")
defer f.Close()
p := ucd.New(f)
for p.Next() {
r := p.Rune(0)
c := &chars[r]
var ftype, mode int
qt := p.String(1)
switch qt {
case "NFC_QC":
ftype, mode = FCanonical, MComposed
case "NFD_QC":
ftype, mode = FCanonical, MDecomposed
case "NFKC_QC":
ftype, mode = FCompatibility, MComposed
case "NFKD_QC":
ftype, mode = FCompatibility, MDecomposed
default:
continue
}
var qr QCResult
switch p.String(2) {
case "Y":
qr = QCYes
case "N":
qr = QCNo
case "M":
qr = QCMaybe
default:
log.Fatalf(`Unexpected quick check value "%s"`, p.String(2))
}
if got := c.forms[ftype].quickCheck[mode]; got != qr {
log.Printf("%U: FAILED %s (was %v need %v)\n", r, qt, got, qr)
}
c.forms[ftype].verified[mode] = true
}
if err := p.Err(); err != nil {
log.Fatal(err)
}
// Any unspecified value must be QCYes. Verify this.
for i, c := range chars {
for j, fd := range c.forms {
for k, qr := range fd.quickCheck {
if !fd.verified[k] && qr != QCYes {
m := "%U: FAIL F:%d M:%d (was %v need Yes) %s\n"
log.Printf(m, i, j, k, qr, c.name)
}
}
}
}
}
var testHeader = `const (
Yes = iota
No
Maybe
)
type formData struct {
qc uint8
combinesForward bool
decomposition string
}
type runeData struct {
r rune
ccc uint8
nLead uint8
nTrail uint8
f [2]formData // 0: canonical; 1: compatibility
}
func f(qc uint8, cf bool, dec string) [2]formData {
return [2]formData{{qc, cf, dec}, {qc, cf, dec}}
}
func g(qc, qck uint8, cf, cfk bool, d, dk string) [2]formData {
return [2]formData{{qc, cf, d}, {qck, cfk, dk}}
}
var testData = []runeData{
`
func printTestdata() {
type lastInfo struct {
ccc uint8
nLead uint8
nTrail uint8
f string
}
last := lastInfo{}
w := &bytes.Buffer{}
fmt.Fprintf(w, testHeader)
for r, c := range chars {
f := c.forms[FCanonical]
qc, cf, d := f.quickCheck[MComposed], f.combinesForward, string(f.expandedDecomp)
f = c.forms[FCompatibility]
qck, cfk, dk := f.quickCheck[MComposed], f.combinesForward, string(f.expandedDecomp)
s := ""
if d == dk && qc == qck && cf == cfk {
s = fmt.Sprintf("f(%s, %v, %q)", qc, cf, d)
} else {
s = fmt.Sprintf("g(%s, %s, %v, %v, %q, %q)", qc, qck, cf, cfk, d, dk)
}
current := lastInfo{c.ccc, c.nLeadingNonStarters, c.nTrailingNonStarters, s}
if last != current {
fmt.Fprintf(w, "\t{0x%x, %d, %d, %d, %s},\n", r, c.origCCC, c.nLeadingNonStarters, c.nTrailingNonStarters, s)
last = current
}
}
fmt.Fprintln(w, "}")
gen.WriteVersionedGoFile("data_test.go", "norm", w.Bytes())
}

@ -1,117 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
// Trie table generator.
// Used by make*tables tools to generate a go file with trie data structures
// for mapping UTF-8 to a 16-bit value. All but the last byte in a UTF-8 byte
// sequence are used to lookup offsets in the index table to be used for the
// next byte. The last byte is used to index into a table with 16-bit values.
package main
import (
"fmt"
"io"
)
const maxSparseEntries = 16
type normCompacter struct {
sparseBlocks [][]uint64
sparseOffset []uint16
sparseCount int
name string
}
func mostFrequentStride(a []uint64) int {
counts := make(map[int]int)
var v int
for _, x := range a {
if stride := int(x) - v; v != 0 && stride >= 0 {
counts[stride]++
}
v = int(x)
}
var maxs, maxc int
for stride, cnt := range counts {
if cnt > maxc || (cnt == maxc && stride < maxs) {
maxs, maxc = stride, cnt
}
}
return maxs
}
func countSparseEntries(a []uint64) int {
stride := mostFrequentStride(a)
var v, count int
for _, tv := range a {
if int(tv)-v != stride {
if tv != 0 {
count++
}
}
v = int(tv)
}
return count
}
func (c *normCompacter) Size(v []uint64) (sz int, ok bool) {
if n := countSparseEntries(v); n <= maxSparseEntries {
return (n+1)*4 + 2, true
}
return 0, false
}
func (c *normCompacter) Store(v []uint64) uint32 {
h := uint32(len(c.sparseOffset))
c.sparseBlocks = append(c.sparseBlocks, v)
c.sparseOffset = append(c.sparseOffset, uint16(c.sparseCount))
c.sparseCount += countSparseEntries(v) + 1
return h
}
func (c *normCompacter) Handler() string {
return c.name + "Sparse.lookup"
}
func (c *normCompacter) Print(w io.Writer) (retErr error) {
p := func(f string, x ...interface{}) {
if _, err := fmt.Fprintf(w, f, x...); retErr == nil && err != nil {
retErr = err
}
}
ls := len(c.sparseBlocks)
p("// %sSparseOffset: %d entries, %d bytes\n", c.name, ls, ls*2)
p("var %sSparseOffset = %#v\n\n", c.name, c.sparseOffset)
ns := c.sparseCount
p("// %sSparseValues: %d entries, %d bytes\n", c.name, ns, ns*4)
p("var %sSparseValues = [%d]valueRange {", c.name, ns)
for i, b := range c.sparseBlocks {
p("\n// Block %#x, offset %#x", i, c.sparseOffset[i])
var v int
stride := mostFrequentStride(b)
n := countSparseEntries(b)
p("\n{value:%#04x,lo:%#02x},", stride, uint8(n))
for i, nv := range b {
if int(nv)-v != stride {
if v != 0 {
p(",hi:%#02x},", 0x80+i-1)
}
if nv != 0 {
p("\n{value:%#04x,lo:%#02x", nv, 0x80+i)
}
}
v = int(nv)
}
if v != 0 {
p(",hi:%#02x},", 0x80+len(b)-1)
}
}
p("\n}\n\n")
return
}

vendor/modules.txt
@ -1,11 +1,11 @@
# cloud.google.com/go v0.36.0
cloud.google.com/go/compute/metadata
cloud.google.com/go/storage
cloud.google.com/go/iam
cloud.google.com/go/internal
cloud.google.com/go/internal/optional
cloud.google.com/go/internal/trace
cloud.google.com/go/internal/version
cloud.google.com/go/storage
# contrib.go.opencensus.io/exporter/ocagent v0.5.0
contrib.go.opencensus.io/exporter/ocagent
# github.com/1and1/oneandone-cloudserver-sdk-go v1.0.1
@ -14,59 +14,59 @@ github.com/1and1/oneandone-cloudserver-sdk-go
github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute
github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2019-03-01/compute
github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network
github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-06-01/subscriptions
github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources
github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2017-10-01/storage
github.com/Azure/azure-sdk-for-go/storage
github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-06-01/subscriptions
github.com/Azure/azure-sdk-for-go/version
# github.com/Azure/go-autorest v12.0.0+incompatible
github.com/Azure/go-autorest/autorest
github.com/Azure/go-autorest/autorest/adal
github.com/Azure/go-autorest/autorest/azure
github.com/Azure/go-autorest/autorest/to
github.com/Azure/go-autorest/autorest/date
github.com/Azure/go-autorest/autorest/to
github.com/Azure/go-autorest/autorest/validation
github.com/Azure/go-autorest/tracing
github.com/Azure/go-autorest/logger
github.com/Azure/go-autorest/tracing
# github.com/Azure/go-ntlmssp v0.0.0-20180810175552-4a21cbd618b4
github.com/Azure/go-ntlmssp
# github.com/ChrisTrenkamp/goxpath v0.0.0-20170625215350-4fe035839290
github.com/ChrisTrenkamp/goxpath
github.com/ChrisTrenkamp/goxpath/tree/xmltree
github.com/ChrisTrenkamp/goxpath/tree
github.com/ChrisTrenkamp/goxpath/internal/execxp
github.com/ChrisTrenkamp/goxpath/internal/parser
github.com/ChrisTrenkamp/goxpath/tree/xmltree/xmlbuilder
github.com/ChrisTrenkamp/goxpath/tree/xmltree/xmlele
github.com/ChrisTrenkamp/goxpath/internal/lexer
github.com/ChrisTrenkamp/goxpath/internal/parser
github.com/ChrisTrenkamp/goxpath/internal/parser/findutil
github.com/ChrisTrenkamp/goxpath/internal/parser/intfns
github.com/ChrisTrenkamp/goxpath/internal/parser/pathexpr
github.com/ChrisTrenkamp/goxpath/internal/xconst
github.com/ChrisTrenkamp/goxpath/internal/xsort
github.com/ChrisTrenkamp/goxpath/tree
github.com/ChrisTrenkamp/goxpath/tree/xmltree
github.com/ChrisTrenkamp/goxpath/tree/xmltree/xmlbuilder
github.com/ChrisTrenkamp/goxpath/tree/xmltree/xmlele
github.com/ChrisTrenkamp/goxpath/tree/xmltree/xmlnode
# github.com/NaverCloudPlatform/ncloud-sdk-go v0.0.0-20180110055012-c2e73f942591
github.com/NaverCloudPlatform/ncloud-sdk-go/sdk
github.com/NaverCloudPlatform/ncloud-sdk-go/common
github.com/NaverCloudPlatform/ncloud-sdk-go/request
github.com/NaverCloudPlatform/ncloud-sdk-go/oauth
github.com/NaverCloudPlatform/ncloud-sdk-go/request
github.com/NaverCloudPlatform/ncloud-sdk-go/sdk
# github.com/PuerkitoBio/goquery v1.5.0
github.com/PuerkitoBio/goquery
# github.com/Telmate/proxmox-api-go v0.0.0-20190614181158-26cd147831a4
# github.com/Telmate/proxmox-api-go v0.0.0-20190815172943-ef9222844e60
github.com/Telmate/proxmox-api-go/proxmox
# github.com/aliyun/alibaba-cloud-sdk-go v0.0.0-20190418113227-25233c783f4e
github.com/aliyun/alibaba-cloud-sdk-go/sdk/errors
github.com/aliyun/alibaba-cloud-sdk-go/sdk/requests
github.com/aliyun/alibaba-cloud-sdk-go/sdk/responses
github.com/aliyun/alibaba-cloud-sdk-go/services/ecs
github.com/aliyun/alibaba-cloud-sdk-go/services/ram
github.com/aliyun/alibaba-cloud-sdk-go/sdk/utils
github.com/aliyun/alibaba-cloud-sdk-go/sdk
github.com/aliyun/alibaba-cloud-sdk-go/sdk/auth
github.com/aliyun/alibaba-cloud-sdk-go/sdk/auth/credentials
github.com/aliyun/alibaba-cloud-sdk-go/sdk/auth/credentials/provider
github.com/aliyun/alibaba-cloud-sdk-go/sdk/endpoints
github.com/aliyun/alibaba-cloud-sdk-go/sdk/auth/signers
github.com/aliyun/alibaba-cloud-sdk-go/sdk/endpoints
github.com/aliyun/alibaba-cloud-sdk-go/sdk/errors
github.com/aliyun/alibaba-cloud-sdk-go/sdk/requests
github.com/aliyun/alibaba-cloud-sdk-go/sdk/responses
github.com/aliyun/alibaba-cloud-sdk-go/sdk/utils
github.com/aliyun/alibaba-cloud-sdk-go/services/ecs
github.com/aliyun/alibaba-cloud-sdk-go/services/ram
# github.com/aliyun/aliyun-oss-go-sdk v0.0.0-20170113022742-e6dbea820a9f
github.com/aliyun/aliyun-oss-go-sdk/oss
# github.com/andybalholm/cascadia v1.0.0
@ -91,48 +91,48 @@ github.com/armon/go-metrics
github.com/armon/go-radix
# github.com/aws/aws-sdk-go v1.22.2
github.com/aws/aws-sdk-go/aws
github.com/aws/aws-sdk-go/aws/ec2metadata
github.com/aws/aws-sdk-go/aws/session
github.com/aws/aws-sdk-go/service/ec2
github.com/aws/aws-sdk-go/aws/awserr
github.com/aws/aws-sdk-go/aws/credentials
github.com/aws/aws-sdk-go/aws/request
github.com/aws/aws-sdk-go/service/ec2/ec2iface
github.com/aws/aws-sdk-go/service/sts
github.com/aws/aws-sdk-go/service/ecr
github.com/aws/aws-sdk-go/service/s3
github.com/aws/aws-sdk-go/service/s3/s3manager
github.com/aws/aws-sdk-go/aws/endpoints
github.com/aws/aws-sdk-go/internal/sdkio
github.com/aws/aws-sdk-go/aws/awsutil
github.com/aws/aws-sdk-go/aws/client
github.com/aws/aws-sdk-go/aws/client/metadata
github.com/aws/aws-sdk-go/aws/corehandlers
github.com/aws/aws-sdk-go/internal/sdkuri
github.com/aws/aws-sdk-go/aws/credentials
github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds
github.com/aws/aws-sdk-go/aws/credentials/endpointcreds
github.com/aws/aws-sdk-go/aws/credentials/processcreds
github.com/aws/aws-sdk-go/aws/credentials/stscreds
github.com/aws/aws-sdk-go/aws/csm
github.com/aws/aws-sdk-go/aws/defaults
github.com/aws/aws-sdk-go/internal/ini
github.com/aws/aws-sdk-go/internal/shareddefaults
github.com/aws/aws-sdk-go/aws/awsutil
github.com/aws/aws-sdk-go/aws/ec2metadata
github.com/aws/aws-sdk-go/aws/endpoints
github.com/aws/aws-sdk-go/aws/request
github.com/aws/aws-sdk-go/aws/session
github.com/aws/aws-sdk-go/aws/signer/v4
github.com/aws/aws-sdk-go/internal/ini
github.com/aws/aws-sdk-go/internal/s3err
github.com/aws/aws-sdk-go/internal/sdkio
github.com/aws/aws-sdk-go/internal/sdkrand
github.com/aws/aws-sdk-go/internal/sdkuri
github.com/aws/aws-sdk-go/internal/shareddefaults
github.com/aws/aws-sdk-go/private/protocol
github.com/aws/aws-sdk-go/private/protocol/ec2query
github.com/aws/aws-sdk-go/private/protocol/query
github.com/aws/aws-sdk-go/private/protocol/jsonrpc
github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds
github.com/aws/aws-sdk-go/internal/s3err
github.com/aws/aws-sdk-go/private/protocol/eventstream
github.com/aws/aws-sdk-go/private/protocol/eventstream/eventstreamapi
github.com/aws/aws-sdk-go/private/protocol/json/jsonutil
github.com/aws/aws-sdk-go/private/protocol/jsonrpc
github.com/aws/aws-sdk-go/private/protocol/query
github.com/aws/aws-sdk-go/private/protocol/query/queryutil
github.com/aws/aws-sdk-go/private/protocol/rest
github.com/aws/aws-sdk-go/private/protocol/restxml
github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil
github.com/aws/aws-sdk-go/service/ec2
github.com/aws/aws-sdk-go/service/ec2/ec2iface
github.com/aws/aws-sdk-go/service/ecr
github.com/aws/aws-sdk-go/service/s3
github.com/aws/aws-sdk-go/service/s3/s3iface
github.com/aws/aws-sdk-go/service/s3/s3manager
github.com/aws/aws-sdk-go/service/sts
github.com/aws/aws-sdk-go/service/sts/stsiface
github.com/aws/aws-sdk-go/aws/credentials/endpointcreds
github.com/aws/aws-sdk-go/private/protocol/query/queryutil
github.com/aws/aws-sdk-go/private/protocol/json/jsonutil
# github.com/bgentry/go-netrc v0.0.0-20140422174119-9fd32a8b3d3d
github.com/bgentry/go-netrc/netrc
# github.com/bgentry/speakeasy v0.1.0
@ -189,11 +189,11 @@ github.com/go-ini/ini
# github.com/gobwas/glob v0.2.3
github.com/gobwas/glob
github.com/gobwas/glob/compiler
github.com/gobwas/glob/syntax
github.com/gobwas/glob/match
github.com/gobwas/glob/syntax
github.com/gobwas/glob/syntax/ast
github.com/gobwas/glob/util/runes
github.com/gobwas/glob/syntax/lexer
github.com/gobwas/glob/util/runes
github.com/gobwas/glob/util/strings
# github.com/gocolly/colly v1.2.0
github.com/gocolly/colly
@ -206,19 +206,19 @@ github.com/gofrs/uuid
# github.com/golang-collections/collections v0.0.0-20130729185459-604e922904d3
github.com/golang-collections/collections/stack
# github.com/golang/protobuf v1.3.1
github.com/golang/protobuf/proto
github.com/golang/protobuf/ptypes/timestamp
github.com/golang/protobuf/jsonpb
github.com/golang/protobuf/proto
github.com/golang/protobuf/protoc-gen-go/descriptor
github.com/golang/protobuf/ptypes/any
github.com/golang/protobuf/ptypes/empty
github.com/golang/protobuf/ptypes
github.com/golang/protobuf/ptypes/duration
github.com/golang/protobuf/ptypes/struct
github.com/golang/protobuf/ptypes/wrappers
github.com/golang/protobuf/protoc-gen-go/generator
github.com/golang/protobuf/protoc-gen-go/generator/internal/remap
github.com/golang/protobuf/protoc-gen-go/plugin
github.com/golang/protobuf/ptypes
github.com/golang/protobuf/ptypes/any
github.com/golang/protobuf/ptypes/duration
github.com/golang/protobuf/ptypes/empty
github.com/golang/protobuf/ptypes/struct
github.com/golang/protobuf/ptypes/timestamp
github.com/golang/protobuf/ptypes/wrappers
# github.com/golang/snappy v0.0.0-20180518054509-2e65f85255db
github.com/golang/snappy
# github.com/google/go-cmp v0.2.0
@ -236,6 +236,7 @@ github.com/google/uuid
github.com/googleapis/gax-go/v2
# github.com/gophercloud/gophercloud v0.2.0
github.com/gophercloud/gophercloud
github.com/gophercloud/gophercloud/internal
github.com/gophercloud/gophercloud/openstack
github.com/gophercloud/gophercloud/openstack/blockstorage/extensions/volumeactions
github.com/gophercloud/gophercloud/openstack/blockstorage/v3/volumes
@ -244,27 +245,26 @@ github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/bootfromvolum
github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/keypairs
github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/startstop
github.com/gophercloud/gophercloud/openstack/compute/v2/flavors
github.com/gophercloud/gophercloud/openstack/compute/v2/images
github.com/gophercloud/gophercloud/openstack/compute/v2/servers
github.com/gophercloud/gophercloud/openstack/identity/v2/tenants
github.com/gophercloud/gophercloud/openstack/identity/v2/tokens
github.com/gophercloud/gophercloud/openstack/identity/v3/tokens
github.com/gophercloud/gophercloud/openstack/imageservice/v2/images
github.com/gophercloud/gophercloud/openstack/imageservice/v2/members
github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/external
github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/layer3/floatingips
github.com/gophercloud/gophercloud/openstack/networking/v2/networks
github.com/gophercloud/gophercloud/pagination
github.com/gophercloud/gophercloud/openstack/identity/v2/tokens
github.com/gophercloud/gophercloud/openstack/identity/v3/tokens
github.com/gophercloud/gophercloud/openstack/utils
github.com/gophercloud/gophercloud/openstack/compute/v2/images
github.com/gophercloud/gophercloud/internal
github.com/gophercloud/gophercloud/openstack/identity/v2/tenants
github.com/gophercloud/gophercloud/pagination
# github.com/gophercloud/utils v0.0.0-20190124192022-a5c25e7a53a6
github.com/gophercloud/utils/openstack/clientconfig
# github.com/gorilla/websocket v0.0.0-20170319172727-a91eba7f9777
github.com/gorilla/websocket
# github.com/grpc-ecosystem/grpc-gateway v1.8.5
github.com/grpc-ecosystem/grpc-gateway/internal
github.com/grpc-ecosystem/grpc-gateway/runtime
github.com/grpc-ecosystem/grpc-gateway/utilities
github.com/grpc-ecosystem/grpc-gateway/internal
# github.com/hashicorp/consul v1.4.0
github.com/hashicorp/consul/api
# github.com/hashicorp/errwrap v1.0.0
@ -281,9 +281,9 @@ github.com/hashicorp/go-immutable-radix
# github.com/hashicorp/go-multierror v1.0.0
github.com/hashicorp/go-multierror
# github.com/hashicorp/go-oracle-terraform v0.0.0-20181016190316-007121241b79
github.com/hashicorp/go-oracle-terraform/client
github.com/hashicorp/go-oracle-terraform/compute
github.com/hashicorp/go-oracle-terraform/opc
github.com/hashicorp/go-oracle-terraform/client
# github.com/hashicorp/go-retryablehttp v0.5.2
github.com/hashicorp/go-retryablehttp
# github.com/hashicorp/go-rootcerts v0.0.0-20160503143440-6bb64b370b90
@ -302,21 +302,21 @@ github.com/hashicorp/golang-lru/simplelru
github.com/hashicorp/hcl
github.com/hashicorp/hcl/hcl/ast
github.com/hashicorp/hcl/hcl/parser
github.com/hashicorp/hcl/hcl/token
github.com/hashicorp/hcl/json/parser
github.com/hashicorp/hcl/hcl/scanner
github.com/hashicorp/hcl/hcl/strconv
github.com/hashicorp/hcl/hcl/token
github.com/hashicorp/hcl/json/parser
github.com/hashicorp/hcl/json/scanner
github.com/hashicorp/hcl/json/token
# github.com/hashicorp/serf v0.8.2
github.com/hashicorp/serf/coordinate
# github.com/hashicorp/vault v1.1.0
github.com/hashicorp/vault/api
github.com/hashicorp/vault/helper/compressutil
github.com/hashicorp/vault/helper/consts
github.com/hashicorp/vault/helper/hclutil
github.com/hashicorp/vault/helper/jsonutil
github.com/hashicorp/vault/helper/parseutil
github.com/hashicorp/vault/helper/compressutil
github.com/hashicorp/vault/helper/strutil
# github.com/hashicorp/yamux v0.0.0-20181012175058-2f1d1f20f75d
github.com/hashicorp/yamux
@ -327,24 +327,24 @@ github.com/hetznercloud/hcloud-go/hcloud/schema
github.com/hyperonecom/h1-client-go
# github.com/jdcloud-api/jdcloud-sdk-go v1.9.1-0.20190605102154-3d81a50ca961
github.com/jdcloud-api/jdcloud-sdk-go/core
github.com/jdcloud-api/jdcloud-sdk-go/services/charge/models
github.com/jdcloud-api/jdcloud-sdk-go/services/common/models
github.com/jdcloud-api/jdcloud-sdk-go/services/disk/models
github.com/jdcloud-api/jdcloud-sdk-go/services/vm/apis
github.com/jdcloud-api/jdcloud-sdk-go/services/vm/client
github.com/jdcloud-api/jdcloud-sdk-go/services/vm/models
github.com/jdcloud-api/jdcloud-sdk-go/services/vpc/apis
github.com/jdcloud-api/jdcloud-sdk-go/services/vpc/client
github.com/jdcloud-api/jdcloud-sdk-go/services/vpc/models
github.com/jdcloud-api/jdcloud-sdk-go/services/common/models
github.com/jdcloud-api/jdcloud-sdk-go/services/charge/models
github.com/jdcloud-api/jdcloud-sdk-go/services/disk/models
# github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af
github.com/jmespath/go-jmespath
# github.com/joyent/triton-go v0.0.0-20180116165742-545edbe0d564
github.com/joyent/triton-go
github.com/joyent/triton-go/authentication
github.com/joyent/triton-go/client
github.com/joyent/triton-go/compute
github.com/joyent/triton-go/errors
github.com/joyent/triton-go/network
github.com/joyent/triton-go/client
# github.com/json-iterator/go v1.1.6
github.com/json-iterator/go
# github.com/kardianos/osext v0.0.0-20170510131534-ae77be60afb1
@ -433,8 +433,8 @@ github.com/pkg/sftp
github.com/pmezard/go-difflib/difflib
# github.com/posener/complete v1.1.1
github.com/posener/complete
github.com/posener/complete/cmd/install
github.com/posener/complete/cmd
github.com/posener/complete/cmd/install
github.com/posener/complete/match
# github.com/profitbricks/profitbricks-sdk-go v4.0.2+incompatible
github.com/profitbricks/profitbricks-sdk-go
@ -450,8 +450,8 @@ github.com/saintfish/chardet
github.com/satori/go.uuid
# github.com/scaleway/scaleway-cli v0.0.0-20180921094345-7b12c9699d70
github.com/scaleway/scaleway-cli/pkg/api
github.com/scaleway/scaleway-cli/pkg/utils
github.com/scaleway/scaleway-cli/pkg/sshcommand
github.com/scaleway/scaleway-cli/pkg/utils
# github.com/sirupsen/logrus v1.3.0
github.com/sirupsen/logrus
# github.com/stretchr/testify v1.3.0
@ -462,11 +462,13 @@ github.com/temoto/robotstxt
# github.com/tencentcloud/tencentcloud-sdk-go v3.0.71+incompatible
github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common
github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/errors
github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/http
github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/profile
github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/cvm/v20170312
github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/vpc/v20170312
github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/http
# github.com/ucloud/ucloud-sdk-go v0.8.7
github.com/ucloud/ucloud-sdk-go/private/protocol/http
github.com/ucloud/ucloud-sdk-go/private/utils
github.com/ucloud/ucloud-sdk-go/services/uaccount
github.com/ucloud/ucloud-sdk-go/services/uhost
github.com/ucloud/ucloud-sdk-go/services/unet
@ -474,67 +476,64 @@ github.com/ucloud/ucloud-sdk-go/services/vpc
github.com/ucloud/ucloud-sdk-go/ucloud
github.com/ucloud/ucloud-sdk-go/ucloud/auth
github.com/ucloud/ucloud-sdk-go/ucloud/error
github.com/ucloud/ucloud-sdk-go/ucloud/log
github.com/ucloud/ucloud-sdk-go/ucloud/request
github.com/ucloud/ucloud-sdk-go/ucloud/response
github.com/ucloud/ucloud-sdk-go/private/utils
github.com/ucloud/ucloud-sdk-go/ucloud/log
github.com/ucloud/ucloud-sdk-go/private/protocol/http
github.com/ucloud/ucloud-sdk-go/ucloud/version
# github.com/ugorji/go v0.0.0-20151218193438-646ae4a518c1
github.com/ugorji/go/codec
# github.com/ulikunitz/xz v0.5.5
github.com/ulikunitz/xz
github.com/ulikunitz/xz/internal/hash
github.com/ulikunitz/xz/internal/xlog
github.com/ulikunitz/xz/lzma
github.com/ulikunitz/xz/internal/hash
# github.com/vmware/govmomi v0.0.0-20170707011325-c2105a174311
github.com/vmware/govmomi
github.com/vmware/govmomi/find
github.com/vmware/govmomi/list
github.com/vmware/govmomi/object
github.com/vmware/govmomi/vim25/types
github.com/vmware/govmomi/property
github.com/vmware/govmomi/session
github.com/vmware/govmomi/vim25
github.com/vmware/govmomi/vim25/soap
github.com/vmware/govmomi/list
github.com/vmware/govmomi/vim25/mo
github.com/vmware/govmomi/task
github.com/vmware/govmomi/vim25/methods
github.com/vmware/govmomi/vim25/progress
github.com/vmware/govmomi/vim25
github.com/vmware/govmomi/vim25/debug
github.com/vmware/govmomi/vim25/methods
github.com/vmware/govmomi/vim25/mo
github.com/vmware/govmomi/vim25/progress
github.com/vmware/govmomi/vim25/soap
github.com/vmware/govmomi/vim25/types
github.com/vmware/govmomi/vim25/xml
# github.com/xanzy/go-cloudstack v0.0.0-20190526095453-42f262b63ed0
github.com/xanzy/go-cloudstack/cloudstack
# github.com/yandex-cloud/go-genproto v0.0.0-20190401174212-1db0ef3dce9b
github.com/yandex-cloud/go-genproto/yandex/cloud/compute/v1
github.com/yandex-cloud/go-genproto/yandex/cloud/endpoint
github.com/yandex-cloud/go-genproto/yandex/cloud/vpc/v1
github.com/yandex-cloud/go-genproto/yandex/api
github.com/yandex-cloud/go-genproto/yandex/cloud/operation
github.com/yandex-cloud/go-genproto/yandex/cloud/validation
github.com/yandex-cloud/go-genproto/yandex/cloud/iam/v1
github.com/yandex-cloud/go-genproto/yandex/cloud/access
github.com/yandex-cloud/go-genproto/yandex/cloud/compute/v1
github.com/yandex-cloud/go-genproto/yandex/cloud/containerregistry/v1
github.com/yandex-cloud/go-genproto/yandex/cloud/endpoint
github.com/yandex-cloud/go-genproto/yandex/cloud/iam/v1
github.com/yandex-cloud/go-genproto/yandex/cloud/iam/v1/awscompatibility
github.com/yandex-cloud/go-genproto/yandex/cloud/loadbalancer/v1
github.com/yandex-cloud/go-genproto/yandex/cloud/mdb/clickhouse/v1
github.com/yandex-cloud/go-genproto/yandex/cloud/mdb/mongodb/v1
github.com/yandex-cloud/go-genproto/yandex/cloud/mdb/postgresql/v1
github.com/yandex-cloud/go-genproto/yandex/cloud/mdb/redis/v1
github.com/yandex-cloud/go-genproto/yandex/cloud/resourcemanager/v1
github.com/yandex-cloud/go-genproto/yandex/cloud/iam/v1/awscompatibility
github.com/yandex-cloud/go-genproto/yandex/cloud/mdb/clickhouse/v1/config
github.com/yandex-cloud/go-genproto/yandex/cloud/mdb/mongodb/v1
github.com/yandex-cloud/go-genproto/yandex/cloud/mdb/mongodb/v1/config
github.com/yandex-cloud/go-genproto/yandex/cloud/mdb/postgresql/v1
github.com/yandex-cloud/go-genproto/yandex/cloud/mdb/postgresql/v1/config
github.com/yandex-cloud/go-genproto/yandex/cloud/mdb/redis/v1
github.com/yandex-cloud/go-genproto/yandex/cloud/mdb/redis/v1/config
github.com/yandex-cloud/go-genproto/yandex/cloud/operation
github.com/yandex-cloud/go-genproto/yandex/cloud/resourcemanager/v1
github.com/yandex-cloud/go-genproto/yandex/cloud/validation
github.com/yandex-cloud/go-genproto/yandex/cloud/vpc/v1
# github.com/yandex-cloud/go-sdk v0.0.0-20190402114215-3fc1d6947035
github.com/yandex-cloud/go-sdk
github.com/yandex-cloud/go-sdk/iamkey
github.com/yandex-cloud/go-sdk/pkg/requestid
github.com/yandex-cloud/go-sdk/dial
github.com/yandex-cloud/go-sdk/gen/apiendpoint
github.com/yandex-cloud/go-sdk/gen/compute
github.com/yandex-cloud/go-sdk/gen/containerregistry
github.com/yandex-cloud/go-sdk/gen/iam
github.com/yandex-cloud/go-sdk/gen/iam/awscompatibility
github.com/yandex-cloud/go-sdk/gen/loadbalancer
github.com/yandex-cloud/go-sdk/gen/mdb/clickhouse
github.com/yandex-cloud/go-sdk/gen/mdb/mongodb
@ -543,80 +542,76 @@ github.com/yandex-cloud/go-sdk/gen/mdb/redis
github.com/yandex-cloud/go-sdk/gen/operation
github.com/yandex-cloud/go-sdk/gen/resourcemanager
github.com/yandex-cloud/go-sdk/gen/vpc
github.com/yandex-cloud/go-sdk/iamkey
github.com/yandex-cloud/go-sdk/operation
github.com/yandex-cloud/go-sdk/pkg/grpcclient
github.com/yandex-cloud/go-sdk/pkg/requestid
github.com/yandex-cloud/go-sdk/pkg/sdkerrors
github.com/yandex-cloud/go-sdk/pkg/singleflight
github.com/yandex-cloud/go-sdk/gen/iam/awscompatibility
# go.opencensus.io v0.21.0
go.opencensus.io/plugin/ochttp
go.opencensus.io/plugin/ochttp/propagation/tracecontext
go.opencensus.io/stats/view
go.opencensus.io/trace
go.opencensus.io
go.opencensus.io/plugin/ocgrpc
go.opencensus.io/resource
go.opencensus.io/stats
go.opencensus.io/tag
go.opencensus.io/trace/tracestate
go.opencensus.io/plugin/ochttp/propagation/b3
go.opencensus.io/trace/propagation
go.opencensus.io/internal
go.opencensus.io/internal/tagencoding
go.opencensus.io/metric/metricdata
go.opencensus.io/metric/metricproducer
go.opencensus.io/plugin/ocgrpc
go.opencensus.io/plugin/ochttp
go.opencensus.io/plugin/ochttp/propagation/b3
go.opencensus.io/plugin/ochttp/propagation/tracecontext
go.opencensus.io/resource
go.opencensus.io/stats
go.opencensus.io/stats/internal
go.opencensus.io/internal
go.opencensus.io/stats/view
go.opencensus.io/tag
go.opencensus.io/trace
go.opencensus.io/trace/internal
go.opencensus.io/trace/propagation
go.opencensus.io/trace/tracestate
# golang.org/x/crypto v0.0.0-20190424203555-c05e17bb3b2d
golang.org/x/crypto/ssh
golang.org/x/crypto/ssh/agent
golang.org/x/crypto/curve25519
golang.org/x/crypto/ed25519
golang.org/x/crypto/internal/chacha20
golang.org/x/crypto/poly1305
golang.org/x/crypto/md4
golang.org/x/crypto/ed25519/internal/edwards25519
golang.org/x/crypto/internal/chacha20
golang.org/x/crypto/internal/subtle
golang.org/x/crypto/md4
golang.org/x/crypto/poly1305
golang.org/x/crypto/ssh
golang.org/x/crypto/ssh/agent
golang.org/x/crypto/ssh/terminal
# golang.org/x/net v0.0.0-20190424112056-4829fb13d2c6
golang.org/x/net/proxy
golang.org/x/net/http2
golang.org/x/net/html/charset
golang.org/x/net/context
golang.org/x/net/trace
golang.org/x/net/context/ctxhttp
golang.org/x/net/html
golang.org/x/net/internal/socks
golang.org/x/net/html/atom
golang.org/x/net/html/charset
golang.org/x/net/http/httpguts
golang.org/x/net/http2
golang.org/x/net/http2/hpack
golang.org/x/net/idna
golang.org/x/net/context/ctxhttp
golang.org/x/net/publicsuffix
golang.org/x/net/internal/socks
golang.org/x/net/internal/timeseries
golang.org/x/net/html/atom
golang.org/x/net/proxy
golang.org/x/net/publicsuffix
golang.org/x/net/trace
# golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421
golang.org/x/oauth2
golang.org/x/oauth2/google
golang.org/x/oauth2/jwt
golang.org/x/oauth2/internal
golang.org/x/oauth2/jws
golang.org/x/oauth2/jwt
# golang.org/x/sync v0.0.0-20190423024810-112230192c58
golang.org/x/sync/semaphore
golang.org/x/sync/errgroup
golang.org/x/sync/semaphore
# golang.org/x/sys v0.0.0-20190425145619-16072639606e
golang.org/x/sys/unix
golang.org/x/sys/cpu
golang.org/x/sys/unix
golang.org/x/sys/windows
# golang.org/x/text v0.3.1
golang.org/x/text/encoding
golang.org/x/text/encoding/charmap
golang.org/x/text/encoding/htmlindex
golang.org/x/text/transform
golang.org/x/text/secure/bidirule
golang.org/x/text/unicode/bidi
golang.org/x/text/unicode/norm
golang.org/x/text/language
golang.org/x/text/encoding/internal/identifier
golang.org/x/text/encoding/internal
golang.org/x/text/encoding/internal/identifier
golang.org/x/text/encoding/japanese
golang.org/x/text/encoding/korean
golang.org/x/text/encoding/simplifiedchinese
@ -624,53 +619,59 @@ golang.org/x/text/encoding/traditionalchinese
golang.org/x/text/encoding/unicode
golang.org/x/text/internal/language
golang.org/x/text/internal/language/compact
golang.org/x/text/internal/utf8internal
golang.org/x/text/runes
golang.org/x/text/internal/tag
golang.org/x/text/internal/utf8internal
golang.org/x/text/language
golang.org/x/text/runes
golang.org/x/text/secure/bidirule
golang.org/x/text/transform
golang.org/x/text/unicode/bidi
golang.org/x/text/unicode/norm
# golang.org/x/time v0.0.0-20181108054448-85acf8d2951c
golang.org/x/time/rate
# google.golang.org/api v0.4.0
google.golang.org/api/compute/v1
google.golang.org/api/storage/v1
google.golang.org/api/gensupport
google.golang.org/api/googleapi
google.golang.org/api/option
google.golang.org/api/transport/http
google.golang.org/api/iterator
google.golang.org/api/googleapi/internal/uritemplates
google.golang.org/api/internal
google.golang.org/api/googleapi/transport
google.golang.org/api/transport/http/internal/propagation
google.golang.org/api/internal
google.golang.org/api/iterator
google.golang.org/api/option
google.golang.org/api/storage/v1
google.golang.org/api/support/bundler
google.golang.org/api/transport/http
google.golang.org/api/transport/http/internal/propagation
# google.golang.org/appengine v1.4.0
google.golang.org/appengine
google.golang.org/appengine/urlfetch
google.golang.org/appengine/internal
google.golang.org/appengine/internal/app_identity
google.golang.org/appengine/internal/modules
google.golang.org/appengine/internal/urlfetch
google.golang.org/appengine/internal/base
google.golang.org/appengine/internal/datastore
google.golang.org/appengine/internal/log
google.golang.org/appengine/internal/modules
google.golang.org/appengine/internal/remote_api
google.golang.org/appengine/internal/urlfetch
google.golang.org/appengine/urlfetch
# google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19
google.golang.org/genproto/googleapis/api/annotations
google.golang.org/genproto/protobuf/field_mask
google.golang.org/genproto/googleapis/api/httpbody
google.golang.org/genproto/googleapis/iam/v1
google.golang.org/genproto/googleapis/rpc/code
google.golang.org/genproto/googleapis/rpc/errdetails
google.golang.org/genproto/googleapis/rpc/status
google.golang.org/genproto/googleapis/iam/v1
google.golang.org/genproto/googleapis/type/timeofday
google.golang.org/genproto/googleapis/rpc/code
google.golang.org/genproto/googleapis/api/httpbody
google.golang.org/genproto/protobuf/field_mask
# google.golang.org/grpc v1.20.1
google.golang.org/grpc
google.golang.org/grpc/metadata
google.golang.org/grpc/credentials
google.golang.org/grpc/status
google.golang.org/grpc/balancer
google.golang.org/grpc/balancer/base
google.golang.org/grpc/balancer/roundrobin
google.golang.org/grpc/binarylog/grpc_binarylog_v1
google.golang.org/grpc/codes
google.golang.org/grpc/connectivity
google.golang.org/grpc/credentials
google.golang.org/grpc/credentials/internal
google.golang.org/grpc/encoding
google.golang.org/grpc/encoding/proto
google.golang.org/grpc/grpclog
@ -682,19 +683,18 @@ google.golang.org/grpc/internal/channelz
google.golang.org/grpc/internal/envconfig
google.golang.org/grpc/internal/grpcrand
google.golang.org/grpc/internal/grpcsync
google.golang.org/grpc/internal/syscall
google.golang.org/grpc/internal/transport
google.golang.org/grpc/keepalive
google.golang.org/grpc/metadata
google.golang.org/grpc/naming
google.golang.org/grpc/peer
google.golang.org/grpc/resolver
google.golang.org/grpc/resolver/dns
google.golang.org/grpc/resolver/passthrough
google.golang.org/grpc/stats
google.golang.org/grpc/status
google.golang.org/grpc/tap
google.golang.org/grpc/credentials/internal
google.golang.org/grpc/balancer/base
google.golang.org/grpc/binarylog/grpc_binarylog_v1
google.golang.org/grpc/internal/syscall
# gopkg.in/ini.v1 v1.42.0
gopkg.in/ini.v1
# gopkg.in/resty.v1 v1.12.0

@ -14,6 +14,42 @@ Packer has a vibrant community of contributors who have built a number of great
tools on top of Packer. There are also quite a few projects demonstrating the
power of Packer templates.
## Third-Party plugins
The plugins listed below have been built by the community of Packer users and
vendors. These plugins are neither officially tested nor officially maintained by
HashiCorp, and are listed here to help users find them easily.
To learn more about how to use community plugins, or how to build your own,
check out the docs on [extending Packer](/docs/extending/plugins.html).
If you have built a plugin and would like to add it to this community list,
please make a pull request to the website so that we can document your
contribution here!
### Community Builders
- [ARM builder](https://github.com/solo-io/packer-builder-arm-image) - A builder
for creating ARM images
- [vSphere builder](https://github.com/jetbrains-infra/packer-builder-vsphere) -
A builder that interacts with the vSphere API rather than with the ESX
host directly.
- [Vultr builder](https://github.com/vultr/packer-builder-vultr) - A builder
for creating [Vultr](https://www.vultr.com/) snapshots.
### Community Provisioners
- [Comment Provisioner](https://github.com/SwampDragons/packer-provisioner-comment) -
Example provisioner that allows you to annotate your build with bubble-text
comments.
- [Windows Update provisioner](https://github.com/rgl/packer-provisioner-windows-update) -
A provisioner for gracefully handling Windows updates and the reboots they
cause.
## Templates
- [bento](https://github.com/chef/bento) - Packer templates for building minimal
@ -35,7 +71,7 @@ power of Packer templates.
- [cbednarski/packer-ubuntu](https://github.com/cbednarski/packer-ubuntu) -
Ubuntu LTS Virtual Machines for Vagrant
* [geerlingguy/packer-ubuntu-1604](https://github.com/geerlingguy/packer-ubuntu-1604)
\- Ubuntu 16.04 minimal Vagrant Box using Ansible provisioner
* [jakobadam/packer-qemu-templates](https://github.com/jakobadam/packer-qemu-templates)

@ -22,6 +22,10 @@ description: |-
<strong>Discussion list:</strong>
<a href="https://groups.google.com/group/packer-tool">Packer Google Group</a>
</p>
<p>
<strong>Community Forum:</strong>
<a href="https://discuss.hashicorp.com/c/packer">Packer Community Forum</a>
</p>
<p>
<strong>Bug Tracker:</strong>
<a href="https://github.com/hashicorp/packer/issues">Issue tracker

@ -166,4 +166,4 @@ volumes created by this builder, any volumes in the source AMI which are not
marked for deletion on termination will remain in your account.
<%= partial "partials/builders/aws-ssh-differentiation-table" %>
@ -1,7 +1,7 @@
---
description: |
The amazon-ebsvolume Packer builder is like the EBS builder, but is intended to
create EBS volumes rather than a machine image.
The amazon-ebsvolume Packer builder is like the EBS builder, but is
intended to create EBS volumes rather than a machine image.
layout: docs
page_title: 'Amazon EBS Volume - Builders'
sidebar_current: 'docs-builders-amazon-ebsvolume'
@ -14,9 +14,12 @@ Type: `amazon-ebsvolume`
The `amazon-ebsvolume` Packer builder is able to create Amazon Elastic Block
Store volumes which are prepopulated with filesystems or data.
This builder builds EBS volumes by launching an EC2 instance from a source AMI,
provisioning that running machine, and then destroying the source machine,
keeping the volumes intact.
This builder creates EBS volumes by launching an EC2 instance from a source
AMI. One or more EBS volumes are attached to the running instance, allowing
them to be provisioned into from the running machine. Once provisioning is
complete the source machine is destroyed. The provisioned volumes are kept
intact.
This is all done in your own AWS account. The builder will create temporary key
pairs, security group rules, etc. that provide it temporary access to the
@ -183,4 +186,4 @@ volumes created by this builder, any volumes in the source AMI which are not
marked for deletion on termination will remain in your account.
<%= partial "partials/builders/aws-ssh-differentiation-table" %>
@ -14,6 +14,8 @@ interpolations. You may access variables in the Packer config you called the
console with, or provide variables when you call console using the -var or
-var-file command line options.
~> **Note:** `console` is available from version 1.4.2 and above.
Type in the interpolation to test and hit \<enter\> to see the result.
To exit the console, type "exit" and hit \<enter\>, or use Control-C.

@ -21,10 +21,11 @@ This section will cover how to install and use plugins. If you're interested in
developing plugins, the documentation for that is available below, in the
[developing plugins](#developing-plugins) section.
Because Packer is so young, there is no official listing of available Packer
plugins. Plugins are best found via Google. Typically, searching "packer plugin
*x*" will find what you're looking for if it exists. As Packer gets older, an
official plugin directory is planned.
The current official listing of available Packer plugins can be found
[here](/community-tools.html#third-party-plugins). This is an incomplete list,
and more plugins can be found by searching. Typically, searching "packer plugin
*x*" will find what you're looking for if it exists. We hope to create an
official registry for third-party plugins in the future.
## How Plugins Work
@ -149,7 +150,7 @@ func main() {
panic(err)
}
server.RegisterBuilder(new(Builder))
server.Serve()
}
```

@ -71,6 +71,15 @@ to force the log to always go to a specific file when logging is enabled. Note
that even when `PACKER_LOG_PATH` is set, `PACKER_LOG` must be set in order for
any logging to be enabled.
### Debugging Plugins
Each Packer plugin runs in a separate process and communicates with Packer over
RPC on a socket, so attaching a debugger will not work (or will at least be
complicated). However, most of the Packer code is simple and easy to follow with
PACKER_LOG turned on. If that doesn't work, adding some extra debug prints once
you have homed in on the problem is usually enough.
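
For example, here is a minimal sketch of such a throwaway debug print (the
variable names are hypothetical, not taken from Packer's source). Plugins log
with the standard `log` package, and that output is forwarded into Packer's
log stream when the build is run with `PACKER_LOG=1` (and optionally
`PACKER_LOG_PATH`) set:

```go
package main

import "log"

func main() {
	// Hypothetical values you might want to inspect while chasing a bug.
	instanceType := "t2.micro"
	subnetID := "subnet-0abc123"

	// Emitted from a plugin process, this line shows up in Packer's log
	// output when PACKER_LOG=1 is set.
	log.Printf("[DEBUG] launching instance: type=%q subnet=%q", instanceType, subnetID)
}
```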
### Debugging Packer in Powershell/Windows
In Windows you can set the detailed logs environmental variable `PACKER_LOG` or

@ -1,7 +1,6 @@
---
description: |
The Packer vSphere post-processor takes an artifact from the VMware builder and
uploads it to a vSphere endpoint.
The Packer vSphere post-processor takes an artifact and uploads it to a vSphere endpoint.
layout: docs
page_title: 'vSphere - Post-Processors'
sidebar_current: 'docs-post-processors-vsphere'
@ -11,8 +10,8 @@ sidebar_current: 'docs-post-processors-vsphere'
Type: `vsphere`
The Packer vSphere post-processor takes an artifact from the VMware builder and
uploads it to a vSphere endpoint.
The Packer vSphere post-processor takes an artifact and uploads it to a vSphere endpoint.
The artifact must be a vmx, ova, or ovf image.
## Configuration

@ -126,9 +126,4 @@ To fix this, you can create a symlink to packer that uses a different name like
`packer.io`, or invoke the `packer` binary you want using its absolute path,
e.g. `/usr/local/packer`.
On *Arch* Linux there is a package named `packer` in the main
repository and in the AUR. The package `packer` in the AUR is an old
name for a package management tool for Arch, it's not Hashicorp
Packer.
[Continue to building an image](./build-image.html)

@ -1,5 +1,18 @@
<!-- Code generated from the comments of the Config struct in builder/amazon/ebsvolume/builder.go; DO NOT EDIT MANUALLY -->
- `ena_support` (\*bool) - Enable enhanced networking (ENA but not SriovNetSupport) on
HVM-compatible AMIs. If set, add `ec2:ModifyInstanceAttribute` to your
AWS IAM policy. Note: you must make sure enhanced networking is enabled
on your instance. See [Amazon's documentation on enabling enhanced
networking](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking).
- `sriov_support` (bool) - Enable enhanced networking (SriovNetSupport but not ENA) on
HVM-compatible AMIs. If true, add `ec2:ModifyInstanceAttribute` to your
AWS IAM policy. Note: you must make sure enhanced networking is enabled
on your instance. See [Amazon's documentation on enabling enhanced
networking](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking).
Default `false`.
- `ebs_volumes` (BlockDevices) - Add the block device mappings to the AMI. If you add instance store
volumes or EBS volumes in addition to the root device volume, the
created AMI will contain block device mapping information for those
@ -10,17 +23,16 @@
source instance. See the [BlockDevices](#block-devices-configuration)
documentation for fields.
- `ena_support` (\*bool) - Enable enhanced networking (ENA but not SriovNetSupport) on
HVM-compatible AMIs. If set, add ec2:ModifyInstanceAttribute to your AWS
IAM policy. If false, this will disable enhanced networking in the final
AMI as opposed to passing the setting through unchanged from the source.
Note: you must make sure enhanced networking is enabled on your
instance. See Amazon's documentation on enabling enhanced networking.
- `run_volume_tags` (awscommon.TagMap) - Tags to apply to the volumes of the instance that is *launched* to
create EBS Volumes. These tags will *not* appear in the tags of the
resulting EBS volumes unless they're duplicated under `tags` in the
`ebs_volumes` setting. This is a [template
engine](/docs/templates/engine.html), see [Build template
data](#build-template-data) for more information.
- `sriov_support` (bool) - Enable enhanced networking (SriovNetSupport but not ENA) on
HVM-compatible AMIs. If true, add `ec2:ModifyInstanceAttribute` to your
AWS IAM policy. Note: you must make sure enhanced networking is enabled
on your instance. See [Amazon's documentation on enabling enhanced
networking](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking).
Default `false`.
Note: The tags specified here will be *temporarily* applied to volumes
specified in `ebs_volumes` - but only while the instance is being
created. Packer will replace all tags on the volume with the tags
configured in the `ebs_volumes` section as soon as the instance is
reported as 'ready'.