Merge remote-tracking branch 'upstream/main' into pep508-grammar-fix

Hugo van Kemenade 2023-09-20 21:35:50 +03:00
commit 1fb32c8046
790 changed files with 51468 additions and 21019 deletions


@@ -0,0 +1,17 @@
of scoped seem allright. I still think there is not enough need
da de dum, hmm, hmm, dum de dum.
output=`dmesg | grep hda`
p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE)
Error-de_DE=Wenn ist das Nunstück git und Slotermeyer?
Ja! Beiherhund das Oder die Virtualenvironment gersput!
<https://devguide.python.org/pullrequest/#licensing>`__
class ClassE[T: [str, int]]: ... # Type checker error: illegal expression form
class ClassE[T: t1]: ... # Type checker error: literal tuple expression required
explicitly declared using ``in``, ``out`` and ``inout`` keywords.
| | | | | | | inout |


@@ -0,0 +1,21 @@
adaptee
ancilliary
ans
arithmetics
asend
ba
clos
complies
crate
dedented
extraversion
falsy
fo
iif
nd
ned
recuse
reenable
referencable
therefor
warmup

5
.codespellrc Normal file

@@ -0,0 +1,5 @@
[codespell]
skip = ./.git
ignore-words = .codespell/ignore-words.txt
exclude-file = .codespell/exclude-file.txt
uri-ignore-words-list = daa,ist,searchin,theses
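The gibberish lines in the first hunk above are the contents of the new
``.codespell/exclude-file.txt`` referenced here: codespell's ``--exclude-file``
option skips lines in checked files that exactly match a line in that file, so
intentional misspellings (Monty Python quotes, sample code) are listed verbatim
and deliberately left unfixed. A minimal sketch of that exact-line-match
semantic (illustrative only, not codespell's actual implementation):

.. code-block:: python

    from pathlib import Path

    def lines_to_spellcheck(source: Path, exclude_file: Path) -> list[str]:
        # Mimic codespell's --exclude-file: whole lines that exactly match
        # an entry in the exclude file are skipped before spellchecking.
        excluded = set(exclude_file.read_text(encoding="utf-8").splitlines())
        return [
            line
            for line in source.read_text(encoding="utf-8").splitlines()
            if line not in excluded
        ]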

4
.gitattributes vendored

@@ -3,3 +3,7 @@
*.png binary
*.pptx binary
*.odp binary
# Instruct linguist not to ignore the PEPs
# https://github.com/github-linguist/linguist/blob/master/docs/overrides.md
peps/*.rst text linguist-detectable

1274
.github/CODEOWNERS vendored

File diff suppressed because it is too large.


@@ -1,11 +0,0 @@
<!--
Please include the PEP number in the pull request title, example:
PEP NNN: Summary of the changes made
In addition, please sign the CLA.
For more information, please read our Contributing Guidelines (CONTRIBUTING.rst)
-->


@@ -0,0 +1,42 @@
<!--
You can use the following checklist when double-checking your PEP,
and you can help complete some of it yourself if you like
by ticking any boxes you're sure about, like this: [x]
If you're unsure about anything, just leave it blank and we'll take a look.
If your PEP is not Standards Track, remove the corresponding section.
-->
## Basic requirements (all PEP Types)
* [ ] Read and followed [PEP 1](https://peps.python.org/1) & [PEP 12](https://peps.python.org/12)
* [ ] File created from the [latest PEP template](https://github.com/python/peps/blob/main/peps/pep-0012/pep-NNNN.rst?plain=1)
* [ ] PEP has next available number, & set in filename (``pep-NNNN.rst``), PR title (``PEP 123: <Title of PEP>``) and ``PEP`` header
* [ ] Title clearly, accurately and concisely describes the content in 79 characters or less
* [ ] Core dev/PEP editor listed as ``Author`` or ``Sponsor``, and formally confirmed their approval
* [ ] ``Author``, ``Status`` (``Draft``), ``Type`` and ``Created`` headers filled out correctly
* [ ] ``PEP-Delegate``, ``Topic``, ``Requires`` and ``Replaces`` headers completed if appropriate
* [ ] Required sections included
* [ ] Abstract (first section)
* [ ] Copyright (last section; exact wording from template required)
* [ ] Code is well-formatted (PEP 7/PEP 8) and is in [code blocks, with the right lexer names](https://peps.python.org/pep-0012/#literal-blocks) if non-Python
* [ ] PEP builds with no warnings, pre-commit checks pass and content displays as intended in the rendered HTML
* [ ] Authors/sponsor added to ``.github/CODEOWNERS`` for the PEP
## Standards Track requirements
* [ ] PEP topic [discussed in a suitable venue](https://peps.python.org/pep-0001/#start-with-an-idea-for-python) with general agreement that a PEP is appropriate
* [ ] [Suggested sections](https://peps.python.org/pep-0012/#suggested-sections) included (unless not applicable)
* [ ] Motivation
* [ ] Rationale
* [ ] Specification
* [ ] Backwards Compatibility
* [ ] Security Implications
* [ ] How to Teach This
* [ ] Reference Implementation
* [ ] Rejected Ideas
* [ ] Open Issues
* [ ] ``Python-Version`` set to valid (pre-beta) future Python version, if relevant
* [ ] Any project stated in the PEP as supporting/endorsing/benefiting from the PEP formally confirmed such
* [ ] Right before or after initial merging, [PEP discussion thread](https://peps.python.org/pep-0001/#discussing-a-pep) created and linked to in ``Discussions-To`` and ``Post-History``


@@ -0,0 +1,10 @@
<!--
**Please** read our Contributing Guidelines (CONTRIBUTING.rst)
to make sure this repo is the right place for your proposed change. Thanks!
-->
* Change is either:
* [ ] To a Draft PEP
* [ ] To an Accepted or Final PEP, with Steering Council approval
* [ ] To fix an editorial issue (markup, typo, link, header, etc.)
* [ ] PR title prefixed with PEP number (e.g. ``PEP 123: Summary of changes``)


@@ -0,0 +1,13 @@
<!--
You can help complete the following checklist yourself if you like
by ticking any boxes you're sure about, like this: [x]
If you're unsure about anything, just leave it blank and we'll take a look.
-->
* [ ] SC/PEP Delegate has formally accepted/rejected the PEP and posted to the ``Discussions-To`` thread
* [ ] Pull request title in appropriate format (``PEP 123: Mark as Accepted``)
* [ ] ``Status`` changed to ``Accepted``/``Rejected``
* [ ] ``Resolution`` link points directly to the SC/PEP Delegate's official acceptance/rejection post
* [ ] Acceptance/rejection notice added, if the SC/PEP delegate had major conditions or comments
* [ ] ``Discussions-To``, ``Post-History`` and ``Python-Version`` up to date


@@ -0,0 +1,12 @@
<!--
You can help complete the following checklist yourself if you like
by ticking any boxes you're sure about, like this: [x]
If you're unsure about something, just leave it blank and we'll take a look.
-->
* [ ] Final implementation has been merged (including tests and docs)
* [ ] PEP matches the final implementation
* [ ] Any substantial changes since the accepted version approved by the SC/PEP delegate
* [ ] Pull request title in appropriate format (``PEP 123: Mark Final``)
* [ ] ``Status`` changed to ``Final`` (and ``Python-Version`` is correct)
* [ ] Canonical docs/spec linked with a ``canonical-doc`` directive (or ``canonical-pypa-spec``, for packaging PEPs)


@@ -0,0 +1,5 @@
<!--
This template is for an infra or meta change not belonging to another category.
**Please** read our Contributing Guidelines (CONTRIBUTING.rst)
to make sure this repo is the right place for your proposed change. Thanks!
-->


@@ -1,37 +0,0 @@
name: Build
on: [push, pull_request]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: 3.8
- name: Install dependencies
run: |
python -m pip install -U pip
python -m pip install -U docutils
- name: Build
run: |
make rss
make -j$(nproc)
- name: Deploy
if: >
(
github.repository == 'python/peps' &&
github.ref == 'refs/heads/master'
)
run: |
bash deploy.bash
env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}


@@ -1,50 +0,0 @@
name: Deploy to GitHub Pages
on:
push:
branches: [master]
jobs:
deploy-to-pages:
runs-on: ubuntu-latest
steps:
- name: 🛎️ Checkout
uses: actions/checkout@v2
with:
fetch-depth: 0 # fetch all history so that last modified date-times are accurate
- name: 🐍 Set up Python 3.9
uses: actions/setup-python@v2
with:
python-version: 3.9
- name: 🧳 Cache pip
uses: actions/cache@v2
with:
# This path is specific to Ubuntu
path: ~/.cache/pip
# Look to see if there is a cache hit for the corresponding requirements file
key: ${{ runner.os }}-pip-${{ hashFiles('requirements.txt') }}
restore-keys: |
${{ runner.os }}-pip-
${{ runner.os }}-
- name: 👷‍ Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
- name: 🔧 Build PEPs
run: make pages -j$(nproc)
# remove the .doctrees folder when building for deployment as it takes two thirds of disk space
- name: 🔥 Clean up files
run: rm -r build/.doctrees/
- name: 🚀 Deploy to GitHub pages
uses: JamesIves/github-pages-deploy-action@4.1.1
with:
branch: gh-pages # The branch to deploy to.
folder: build # Synchronise with build.py -> build_directory
single-commit: true # Delete existing files


@@ -0,0 +1,23 @@
name: Read the Docs PR preview
on:
pull_request_target:
types:
- opened
permissions:
contents: read
pull-requests: write
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
documentation-links:
runs-on: ubuntu-latest
steps:
- uses: readthedocs/actions/preview@v1
with:
project-slug: "pep-previews"
single-version: "true"


@@ -1,11 +1,52 @@
name: Lint
name: Lint PEPs
on: [push, pull_request]
on:
push:
pull_request:
workflow_dispatch:
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
env:
FORCE_COLOR: 1
RUFF_FORMAT: github
jobs:
pre-commit:
name: Run pre-commit
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/setup-python@v2
- uses: pre-commit/action@v2.0.0
- uses: actions/checkout@v4
- name: Set up Python 3
uses: actions/setup-python@v4
with:
python-version: "3.x"
cache: pip
- name: Run pre-commit hooks
uses: pre-commit/action@v3.0.0
- name: Check spelling
uses: pre-commit/action@v3.0.0
with:
extra_args: --all-files --hook-stage manual codespell || true
check-peps:
name: Run check-peps
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set up Python 3
uses: actions/setup-python@v4
with:
python-version: "3"
- name: Run check-peps
run: python check-peps.py --detailed

68
.github/workflows/render.yml vendored Normal file

@@ -0,0 +1,68 @@
name: Render PEPs
on:
push:
pull_request:
workflow_dispatch:
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
env:
FORCE_COLOR: 1
jobs:
render-peps:
name: Render PEPs
runs-on: ubuntu-latest
permissions:
contents: write
strategy:
fail-fast: false
matrix:
python-version:
- "3.x"
- "3.12-dev"
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0 # fetch all history so that last modified date-times are accurate
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
cache: pip
- name: Update pip
run: |
python -m pip install --upgrade pip
- name: Render PEPs
run: make dirhtml JOBS=$(nproc)
# remove the .doctrees folder when building for deployment as it takes two thirds of disk space
- name: Clean up files
run: rm -r build/.doctrees/
- name: Deploy to GitHub pages
# This allows CI to build branches for testing
if: (github.ref == 'refs/heads/main') && (matrix.python-version == '3.x')
uses: JamesIves/github-pages-deploy-action@v4
with:
folder: build # Synchronise with Makefile -> OUTPUT_DIR
single-commit: true # Delete existing files
- name: Purge CDN cache
if: github.ref == 'refs/heads/main'
run: |
curl -H "Accept: application/json" -H "Fastly-Key: $FASTLY_TOKEN" -X POST "https://api.fastly.com/service/$FASTLY_SERVICE_ID/purge_all"
env:
FASTLY_TOKEN: ${{ secrets.FASTLY_TOKEN }}
FASTLY_SERVICE_ID: ${{ secrets.FASTLY_SERVICE_ID }}
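The purge step is a single REST call; for reference, a stdlib-only Python
sketch of the same request (endpoint and ``Fastly-Key`` header taken from the
curl command above; the credentials are assumed to be in the environment):

.. code-block:: python

    import os
    import urllib.request

    def purge_fastly_cache() -> None:
        # Mirror the curl step: POST to Fastly's purge_all endpoint.
        service_id = os.environ["FASTLY_SERVICE_ID"]
        request = urllib.request.Request(
            f"https://api.fastly.com/service/{service_id}/purge_all",
            method="POST",
            headers={
                "Accept": "application/json",
                "Fastly-Key": os.environ["FASTLY_TOKEN"],
            },
        )
        with urllib.request.urlopen(request) as response:
            print(response.read().decode("utf-8"))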

64
.github/workflows/test.yml vendored Normal file

@@ -0,0 +1,64 @@
name: Test Sphinx Extensions
on:
push:
paths:
- ".github/workflows/test.yml"
- "pep_sphinx_extensions/**"
- "tox.ini"
pull_request:
paths:
- ".github/workflows/test.yml"
- "pep_sphinx_extensions/**"
- "tox.ini"
workflow_dispatch:
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
env:
FORCE_COLOR: 1
jobs:
test:
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
python-version:
- "3.9"
- "3.10"
- "3.11"
- "3.12-dev"
os:
- "windows-latest"
- "macos-latest"
- "ubuntu-latest"
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
cache: pip
- name: Install dependencies
run: |
python -m pip install -U pip
python -m pip install -U wheel
python -m pip install -U tox
- name: Run tests
run: |
tox -e py -- -v --cov-report term
- name: Upload coverage
uses: codecov/codecov-action@v3
with:
flags: ${{ matrix.os }}
name: ${{ matrix.os }} Python ${{ matrix.python-version }}

24
.gitignore vendored

@@ -1,14 +1,24 @@
pep-0000.txt
# PEPs
pep-0000.rst
pep-????.html
peps.rss
topic
/build
# Bytecode
__pycache__
*.pyc
*.pyo
*.py[co]
# Editors
*~
*env
.idea
.vscode
*.swp
/build
/package
# Tests
coverage.xml
.coverage
.tox
# Virtual environments
*env
/venv


@@ -1,78 +1,231 @@
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v3.4.0
hooks:
- id: mixed-line-ending
name: Normalize mixed line endings
args: [--fix=lf]
# See https://pre-commit.com for more information
# See https://pre-commit.com/hooks.html for more hooks
minimum_pre_commit_version: '2.8.2'
default_language_version:
python: python3
default_stages: [commit]
repos:
# General file checks and fixers
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.4.0
hooks:
- id: mixed-line-ending
name: "Normalize mixed line endings"
args: [--fix=lf]
- id: file-contents-sorter
name: "Sort codespell ignore list"
files: '.codespell/ignore-words.txt'
- id: check-case-conflict
name: "Check for case conflicts"
- id: check-merge-conflict
name: "Check for merge conflict markers"
- id: check-executables-have-shebangs
name: "Check that executables have shebangs"
- id: check-shebang-scripts-are-executable
name: "Check that shebangs are executable"
- id: check-vcs-permalinks
name: "Check that VCS links are permalinks"
# - id: check-ast
# name: "Check Python AST"
- id: check-json
name: "Check JSON"
- id: check-toml
name: "Check TOML"
- id: check-yaml
name: "Check YAML"
- repo: https://github.com/psf/black
rev: 23.7.0
hooks:
- id: black
name: "Format with Black"
args:
- '--target-version=py39'
- '--target-version=py310'
files: 'pep_sphinx_extensions/tests/.*'
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.0.287
hooks:
- id: ruff
name: "Lint with Ruff"
args:
- '--exit-non-zero-on-fix'
files: '^pep_sphinx_extensions/tests/'
- repo: https://github.com/tox-dev/tox-ini-fmt
rev: 1.3.1
hooks:
- id: tox-ini-fmt
name: "Format tox.ini"
- repo: https://github.com/sphinx-contrib/sphinx-lint
rev: v0.6.8
hooks:
- id: sphinx-lint
name: "Sphinx lint"
args: ["--disable=trailing-whitespace"]
# RST checks
- repo: https://github.com/pre-commit/pygrep-hooks
rev: v1.8.0
rev: v1.10.0
hooks:
- id: rst-backticks
- id: rst-inline-touching-normal
files: '^pep-\d+\.txt|\.rst$'
types: [text]
- id: rst-directive-colons
files: '^pep-\d+\.txt|\.rst$'
types: [text]
name: "Check RST: No single backticks"
- id: rst-inline-touching-normal
name: "Check RST: No backticks touching text"
- id: rst-directive-colons
name: "Check RST: 2 colons after directives"
# Manual codespell check
- repo: https://github.com/codespell-project/codespell
rev: v2.2.5
hooks:
- id: codespell
name: "Check for common misspellings in text files"
stages: [manual]
# Local checks for PEP headers and more
- repo: local
hooks:
- id: check-required-fields
name: "Check all PEPs have required fields"
# # Hook to run "check-peps.py"
# - id: "check-peps"
# name: "Check PEPs for metadata and content enforcement"
# entry: "python check-peps.py"
# language: "system"
# files: "^pep-\d{4}\.(rst|txt)$"
# require_serial: true
- id: check-required-headers
name: "PEPs must have all required headers"
language: pygrep
entry: '(?-m:^PEP:(?=[\s\S]*\nTitle:)(?=[\s\S]*\nAuthor:)(?=[\s\S]*\nStatus:)(?=[\s\S]*\nType:)(?=[\s\S]*\nContent-Type:)(?=[\s\S]*\nCreated:))'
args: ['--negate', '--multiline']
files: '^pep-\d+\.(rst|txt)$'
types: [text]
files: '^peps/pep-\d+\.rst$'
- id: check-header-order
name: "PEP header order must follow PEP 12"
language: pygrep
entry: '^PEP:[^\n]+\nTitle:[^\n]+\n(Version:[^\n]+\n)?(Last-Modified:[^\n]+\n)?Author:[^\n]+\n( +\S[^\n]+\n)*(Sponsor:[^\n]+\n)?((PEP|BDFL)-Delegate:[^\n]*\n)?(Discussions-To:[^\n]*\n)?Status:[^\n]+\nType:[^\n]+\n(Topic:[^\n]+\n)?Content-Type:[^\n]+\n(Requires:[^\n]+\n)?Created:[^\n]+\n(Python-Version:[^\n]*\n)?(Post-History:[^\n]*\n( +\S[^\n]*\n)*)?(Replaces:[^\n]+\n)?(Superseded-By:[^\n]+\n)?(Resolution:[^\n]*\n)?\n'
args: ['--negate', '--multiline']
files: '^peps/pep-\d+\.rst$'
- id: validate-pep-number
name: "Validate PEP number field"
name: "'PEP' header must be a number 1-9999"
language: pygrep
entry: '(?-m:^PEP:(?:(?! +(0|[1-9][0-9]{0,3})\n)))'
args: ['--multiline']
files: '^pep-\d+\.(rst|txt)$'
types: [text]
files: '^peps/pep-\d+\.rst$'
- id: validate-title
name: "'Title' must be 1-79 characters"
language: pygrep
entry: '(?<=\n)Title:(?:(?! +\S.{1,78}\n(?=[A-Z])))'
args: ['--multiline']
files: '^peps/pep-\d+\.rst$'
exclude: '^peps/pep-(0499)\.rst$'
- id: validate-author
name: "'Author' must be list of 'Name <email@example.com>, ...'"
language: pygrep
entry: '(?<=\n)Author:(?:(?!((( +|\n {1,8})[^!#$%&()*+,/:;<=>?@\[\\\]\^_`{|}~]+( <[\w!#$%&''*+\-/=?^_{|}~.]+(@| at )[\w\-.]+\.[A-Za-z0-9]+>)?)(,|(?=\n[^ ])))+\n(?=[A-Z])))'
args: ["--multiline"]
files: '^peps/pep-\d+\.rst$'
- id: validate-sponsor
name: "'Sponsor' must have format 'Name <email@example.com>'"
language: pygrep
entry: '^Sponsor:(?: (?! *[^!#$%&()*+,/:;<=>?@\[\\\]\^_`{|}~]+( <[\w!#$%&''*+\-/=?^_{|}~.]+(@| at )[\w\-.]+\.[A-Za-z0-9]+>)?$))'
files: '^peps/pep-\d+\.rst$'
- id: validate-delegate
name: "'Delegate' must have format 'Name <email@example.com>'"
language: pygrep
entry: '^(PEP|BDFL)-Delegate: (?:(?! *[^!#$%&()*+,/:;<=>?@\[\\\]\^_`{|}~]+( <[\w!#$%&''*+\-/=?^_{|}~.]+(@| at )[\w\-.]+\.[A-Za-z0-9]+>)?$))'
files: '^peps/pep-\d+\.rst$'
exclude: '^peps/pep-(0451)\.rst$'
- id: validate-discussions-to
name: "'Discussions-To' must be a thread URL"
language: pygrep
entry: '^Discussions-To: (?:(?!([\w\-]+@(python\.org|googlegroups\.com))|https://((discuss\.python\.org/t/([\w\-]+/)?\d+/?)|(mail\.python\.org/pipermail/[\w\-]+/\d{4}-[A-Za-z]+/[A-Za-z0-9]+\.html)|(mail\.python\.org/archives/list/[\w\-]+@python\.org/thread/[A-Za-z0-9]+/?))$))'
files: '^peps/pep-\d+\.rst$'
- id: validate-status
name: "Validate PEP Status field"
name: "'Status' must be a valid PEP status"
language: pygrep
entry: '^Status:(?:(?! +(Draft|Withdrawn|Rejected|Accepted|Final|Active|Provisional|Deferred|Superseded|April Fool!)$))'
files: '^pep-\d+\.(rst|txt)$'
types: [text]
files: '^peps/pep-\d+\.rst$'
- id: validate-type
name: "Validate PEP Type field"
name: "'Type' must be a valid PEP type"
language: pygrep
entry: '^Type:(?:(?! +(Standards Track|Informational|Process)$))'
files: '^pep-\d+\.(rst|txt)$'
types: [text]
files: '^peps/pep-\d+\.rst$'
- id: validate-topic
name: "'Topic' must be for a valid sub-index"
language: pygrep
entry: '^Topic:(?:(?! +(Governance|Packaging|Typing|Release)(, (Governance|Packaging|Typing|Release))*$))'
files: '^peps/pep-\d+\.rst$'
- id: validate-content-type
name: "Validate PEP Content-Type field"
name: "'Content-Type' must be 'text/x-rst'"
language: pygrep
entry: '^Content-Type:(?:(?! +text\/x-rst$))'
files: '^pep-\d+\.(rst|txt)$'
types: [text]
entry: '^Content-Type:(?:(?! +text/x-rst$))'
files: '^peps/pep-\d+\.rst$'
- id: validate-pep-references
name: "Validate PEP reference fields"
name: "`Requires`/`Replaces`/`Superseded-By` must be 'NNN' PEP IDs"
language: pygrep
entry: '^(Requires|Replaces|Superseded-By):(?:(?! +( ?(0|[1-9][0-9]{0,3}),?)+$))'
files: '^pep-\d+\.(rst|txt)$'
types: [text]
entry: '^(Requires|Replaces|Superseded-By):(?:(?! *( (0|[1-9][0-9]{0,3})(,|$))+$))'
files: '^peps/pep-\d+\.rst$'
- id: validate-created
name: "Validate created dates"
name: "'Created' must be a 'DD-mmm-YYYY' date"
language: pygrep
entry: '^Created:(?:(?! +([0-2][0-9]|(3[01]))-(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)-(199[0-9]|20[0-9][0-9])( \([^()]+\))?$))'
files: '^pep-\d+\.(rst|txt)$'
types: [text]
entry: '^Created:(?:(?! +([0-2][0-9]|(3[01]))-(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)-(199[0-9]|20[0-9][0-9])$))'
files: '^peps/pep-\d+\.rst$'
- id: validate-python-version
name: "Validate PEP Python-Version field"
name: "'Python-Version' must be a 'X.Y[.Z]` version"
language: pygrep
entry: '^Python-Version:(?:(?! +( ?[1-9]\.([0-9][0-9]?|x)(\.[1-9][0-9]?)?\??,?)+( \([^()]+\))?$))'
files: '^pep-\d+\.(rst|txt)$'
types: [text]
entry: '^Python-Version:(?:(?! *( [1-9]\.([0-9][0-9]?|x)(\.[1-9][0-9]?)?(,|$))+$))'
files: '^peps/pep-\d+\.rst$'
- id: validate-post-history
name: "'Post-History' must be '`DD-mmm-YYYY <Thread URL>`__, ...'"
language: pygrep
entry: '(?<=\n)Post-History:(?:(?! ?\n|((( +|\n {1,14})(([0-2][0-9]|(3[01]))-(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)-(199[0-9]|20[0-9][0-9])|`([0-2][0-9]|(3[01]))-(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)-(199[0-9]|20[0-9][0-9]) <https://((discuss\.python\.org/t/([\w\-]+/)?\d+(?:/\d+/|/?))|(mail\.python\.org/pipermail/[\w\-]+/\d{4}-[A-Za-z]+/[A-Za-z0-9]+\.html)|(mail\.python\.org/archives/list/[\w\-]+@python\.org/thread/[A-Za-z0-9]+/?(#[A-Za-z0-9]+)?))>`__)(,|(?=\n[^ ])))+\n(?=[A-Z\n]))))'
args: [--multiline]
files: '^peps/pep-\d+\.rst$'
- id: validate-resolution
name: "Validate PEP Resolution field"
name: "'Resolution' must be a direct thread/message URL"
language: pygrep
entry: '(?<!\n\n)^Resolution: (?:(?!https:\/\/\S*\n))'
entry: '(?<!\n\n)(?<=\n)Resolution: (?:(?!https://((discuss\.python\.org/t/([\w\-]+/)?\d+(/\d+)?/?)|(mail\.python\.org/pipermail/[\w\-]+/\d{4}-[A-Za-z]+/[A-Za-z0-9]+\.html)|(mail\.python\.org/archives/list/[\w\-]+@python\.org/(message|thread)/[A-Za-z0-9]+/?(#[A-Za-z0-9]+)?))\n))'
args: ['--multiline']
files: '^pep-\d+\.(rst|txt)$'
types: [text]
files: '^peps/pep-\d+\.rst$'
- id: check-direct-pep-links
name: "Check that PEPs aren't linked directly"
language: pygrep
entry: '(dev/peps|peps\.python\.org)/pep-\d+'
files: '^peps/pep-\d+\.rst$'
exclude: '^peps/pep-(0009|0287|0676|0684|8001)\.rst$'
- id: check-direct-rfc-links
name: "Check that RFCs aren't linked directly"
language: pygrep
entry: '(rfc-editor\.org|ietf\.org)/[\.\-_\?\&\#\w/]*[Rr][Ff][Cc][\-_]?\d+'
types: ['rst']
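All of the local header checks above rely on the same pygrep idiom: the
``entry`` regex matches only *invalid* lines, using a negative lookahead over
the allowed values, and any match fails the hook. A quick demonstration with
the ``validate-status`` pattern copied verbatim (the sample header values are
made up):

.. code-block:: python

    import re

    # Same entry as the validate-status hook: matches a Status header
    # only when its value is NOT one of the allowed statuses.
    STATUS = re.compile(
        r'^Status:(?:(?! +(Draft|Withdrawn|Rejected|Accepted|Final'
        r'|Active|Provisional|Deferred|Superseded|April Fool!)$))',
        re.MULTILINE,
    )

    assert STATUS.search("Status: Draught")    # invalid value -> hook fails
    assert not STATUS.search("Status: Draft")  # valid value -> no match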

15
.ruff.toml Normal file

@@ -0,0 +1,15 @@
ignore = [
"E501", # Line too long
]
select = [
"E", # pycodestyle errors
"F", # pyflakes
"I", # isort
"PT", # flake8-pytest-style
"W", # pycodestyle warnings
]
show-source = true
target-version = "py39"


@@ -1,11 +0,0 @@
Overridden Name,Surname First,Name Reference
The Python core team and community,The Python core team and community,python-dev
Ernest W. Durbin III,"Durbin, Ernest W., III",Durbin
Greg Ewing,"Ewing, Gregory",Ewing
Guido van Rossum,"van Rossum, Guido (GvR)",GvR
Inada Naoki,"Inada, Naoki",Inada
Jim Jewett,"Jewett, Jim J.",Jewett
Just van Rossum,"van Rossum, Just (JvR)",JvR
Martin v. Löwis,"von Löwis, Martin",von Löwis
Nathaniel Smith,"Smith, Nathaniel J.",Smith
P.J. Eby,"Eby, Phillip J.",Eby


@@ -1,13 +0,0 @@
# Code of Conduct
Please note that all interactions on
[Python Software Foundation](https://www.python.org/psf-landing/)-supported
infrastructure are
[covered](https://www.python.org/psf/records/board/minutes/2014-01-06/#management-of-the-psfs-web-properties)
by the [PSF Code of Conduct](https://www.python.org/psf/codeofconduct/),
which includes all infrastructure used in the development of Python itself
(e.g. mailing lists, issue trackers, GitHub, etc.).
In general this means everyone is expected to be open, considerate, and
respectful of others no matter what their position is within the project.


@@ -1,47 +1,71 @@
Contributing Guidelines
=======================
To learn more about the purpose of PEPs and how to go about writing a PEP, please
start reading at PEP 1 (`pep-0001.txt <./pep-0001.txt>`_ in this repo). Note that
PEP 0, the index PEP, is now automatically generated, and not committed to the repo.
To learn more about the purpose of PEPs and how to go about writing one, please
start reading at `PEP 1 <https://peps.python.org/pep-0001/>`_.
Also, make sure to check the `README <./README.rst>`_ for information
on how to render the PEPs in this repository.
Thanks again for your contributions, and we look forward to reviewing them!
Before writing a new PEP
------------------------
Has this idea been proposed on `python-ideas <https://mail.python.org/mailman/listinfo/python-ideas>`_
and received general acceptance as being an idea worth pursuing? (If not,
please start a discussion there before submitting a pull request.)
More details are in `PEP 1 <https://www.python.org/dev/peps/pep-0001/#start-with-an-idea-for-python>`_.
Do you have an implementation of your idea? (this is important for when you
propose this PEP to `python-dev <https://mail.python.org/mailman/listinfo/python-dev>`_
as code maintenance is a critical aspect of all PEP proposals prior to a
final decision; in special circumstances an implementation can be deferred)
Prior to submitting a pull request here with your draft PEP, see `PEP 1
<https://peps.python.org/pep-0001/#start-with-an-idea-for-python>`_
for some important steps to consider, including proposing and discussing it
first in an appropriate venue, drafting a PEP and gathering feedback, and
developing at least a prototype reference implementation of your idea.
Commit messages
---------------
Contributing changes to existing PEPs
-------------------------------------
When committing to a PEP, please always include the PEP number in the subject
title. For example, ``PEP NNN: <summary of changes>``.
In general, most non-Draft/Active PEPs are considered to be historical
documents rather than living specifications or documentation. Major changes to
their core content usually require a new PEP, while smaller modifications may
or may not be appropriate, depending on the PEP's status. See `PEP Maintenance
<https://peps.python.org/pep-0001/#pep-maintenance>`_
and `Changing Existing PEPs
<https://peps.python.org/pep-0001/#changing-existing-peps>`_ in PEP 1 for more.
Copyediting and proofreading Draft and Active PEPs is welcome (subject to
review by the PEP author), and can be done via pull request to this repo.
Substantive content changes should first be proposed on PEP discussion threads.
We do advise against PRs that simply mass-correct minor typos on older PEPs
which don't significantly impair meaning and understanding.
If you're still unsure, we encourage you to reach out first before opening a
PR here. For example, you could contact the PEP author(s), propose your idea in
a discussion venue appropriate to the PEP (such as `Typing-SIG
<https://mail.python.org/archives/list/typing-sig@python.org/>`__ for static
typing, or `Packaging Discourse <https://discuss.python.org/c/packaging/>`__
for packaging), or `open an issue <https://github.com/python/peps/issues>`__.
Sign the CLA
------------
Commit messages and PR titles
-----------------------------
Before you hit "Create pull request", please take a moment to ensure that this
project can legally accept your contribution by verifying you have signed the
PSF Contributor Agreement:
When adding or modifying a PEP, please include the PEP number in the commit
summary and pull request title. For example, ``PEP NNN: <summary of changes>``.
Likewise, prefix rendering infrastructure changes with ``Infra:``, linting
alterations with ``Lint:`` and other non-PEP meta changes, such as updates to
the Readme/Contributing Guide, issue/PR template, etc., with ``Meta:``.
https://www.python.org/psf/contrib/contrib-form/
If you haven't signed the CLA before, please follow the steps outlined in the
CPython devguide to do so:
Sign the Contributor License Agreement
--------------------------------------
https://devguide.python.org/pullrequest/#licensing
All contributors need to sign the
`PSF Contributor Agreement <https://www.python.org/psf/contrib/contrib-form/>`_
to ensure we can legally accept your work.
Thanks again for your contribution, and we look forward to reviewing it!
You don't need to do anything beforehand;
go ahead and create your pull request,
and our bot will ping you to sign the CLA if needed.
`See the CPython devguide
<https://devguide.python.org/pullrequest/#licensing>`__
for more information.
Code of Conduct
@@ -49,5 +73,82 @@ Code of Conduct
All interactions for this project are covered by the
`PSF Code of Conduct <https://www.python.org/psf/codeofconduct/>`_. Everyone is
expected to be open, considerate, and respectful of others no matter their
expected to be open, considerate, and respectful of others, no matter their
position within the project.
Run pre-commit linting locally
------------------------------
You can run this repo's basic linting suite locally,
either on-demand, or automatically against modified files
whenever you commit your changes.
They are also run in CI, so you don't have to run them locally, though doing
so will help you catch and potentially fix common mistakes before pushing
your changes and opening a pull request.
This repository uses the `pre-commit <https://pre-commit.com/>`_ tool to
install, configure and update a suite of hooks that check for
common problems and issues, and fix many of them automatically.
If your system has ``make`` installed, you can run the pre-commit checkers
on the full repo by running ``make lint``. This will
install pre-commit in the current virtual environment if it isn't already,
so make sure you've activated the environment you want it to use
before running this command.
Otherwise, you can install pre-commit with
.. code-block:: bash
python -m pip install pre-commit
(or your choice of installer), and then run the hooks on all the files
in the repo with
.. code-block:: bash
pre-commit run --all-files
or only on any files that have been modified but not yet committed with
.. code-block:: bash
pre-commit run
If you would like pre-commit to run automatically against any modified files
every time you commit, install the hooks with
.. code-block:: bash
pre-commit install
Then, whenever you ``git commit``, pre-commit will run and report any issues
it finds or changes it makes, and abort the commit so you can check and,
if necessary, correct them before committing again.
Check and fix PEP spelling
--------------------------
To check for common spelling mistakes in your PEP and automatically suggest
corrections, you can run the codespell tool through pre-commit as well.
Like the linters, on a system with ``make`` available, it can be installed
(in the currently-activated environment) and run on all files in the
repository with a single command, ``make spellcheck``.
For finer control or on other systems, after installing pre-commit as in
the previous section, you can run it against only the files
you've modified and not yet committed with
.. code-block:: bash
pre-commit run --hook-stage manual codespell
or against all files with
.. code-block:: bash
pre-commit run --all-files --hook-stage manual codespell

135
Makefile

@@ -1,81 +1,82 @@
# Builds PEP files to HTML using docutils or sphinx
# Also contains testing targets
# Builds PEP files to HTML using sphinx
PEP2HTML=pep2html.py
# You can set these variables from the command line.
PYTHON = python3
VENVDIR = .venv
SPHINXBUILD = PATH=$(VENVDIR)/bin:$$PATH sphinx-build
BUILDER = html
JOBS = 8
SOURCES =
# synchronise with render.yml -> deploy step
OUTPUT_DIR = build
SPHINXERRORHANDLING = -W --keep-going -w sphinx-warnings.txt
PYTHON=python3
ALLSPHINXOPTS = -b $(BUILDER) -j $(JOBS) \
$(SPHINXOPTS) $(SPHINXERRORHANDLING) peps $(OUTPUT_DIR) $(SOURCES)
VENV_DIR=venv
## html to render PEPs to "pep-NNNN.html" files
.PHONY: html
html: venv
$(SPHINXBUILD) $(ALLSPHINXOPTS)
.SUFFIXES: .txt .html .rst
## htmlview to open the index page built by the html target in your browser
.PHONY: htmlview
htmlview: html
$(PYTHON) -c "import os, webbrowser; webbrowser.open('file://' + os.path.realpath('build/index.html'))"
.txt.html:
@$(PYTHON) $(PEP2HTML) $<
## dirhtml to render PEPs to "index.html" files within "pep-NNNN" directories
.PHONY: dirhtml
dirhtml: BUILDER = dirhtml
dirhtml: venv
$(SPHINXBUILD) $(ALLSPHINXOPTS)
.rst.html:
@$(PYTHON) $(PEP2HTML) $<
## check-links to check validity of links within PEP sources
.PHONY: check-links
check-links: BUILDER = linkcheck
check-links: venv
$(SPHINXBUILD) $(ALLSPHINXOPTS)
TARGETS= $(patsubst %.rst,%.html,$(wildcard pep-????.rst)) $(patsubst %.txt,%.html,$(wildcard pep-????.txt)) pep-0000.html
## clean to remove the venv and build files
.PHONY: clean
clean: clean-venv
-rm -rf build topic
all: pep-0000.rst $(TARGETS)
$(TARGETS): pep2html.py
pep-0000.rst: $(wildcard pep-????.txt) $(wildcard pep-????.rst) $(wildcard pep0/*.py) genpepindex.py
$(PYTHON) genpepindex.py .
rss:
$(PYTHON) pep2rss.py .
install:
echo "Installing is not necessary anymore. It will be done in post-commit."
clean:
-rm pep-0000.rst
-rm *.html
-rm -rf build
update:
git pull https://github.com/python/peps.git
## clean-venv to remove the venv
.PHONY: clean-venv
clean-venv:
rm -rf $(VENVDIR)
## venv to create a venv with necessary tools
.PHONY: venv
venv:
$(PYTHON) -m venv $(VENV_DIR)
./$(VENV_DIR)/bin/python -m pip install -r requirements.txt
@if [ -d $(VENVDIR) ] ; then \
echo "venv already exists."; \
echo "To recreate it, remove it first with \`make clean-venv'."; \
else \
$(PYTHON) -m venv $(VENVDIR); \
$(VENVDIR)/bin/python3 -m pip install -U pip wheel; \
$(VENVDIR)/bin/python3 -m pip install -r requirements.txt; \
echo "The venv has been created in the $(VENVDIR) directory"; \
fi
package: all rss
mkdir -p build/peps
cp pep-*.txt build/peps/
cp pep-*.rst build/peps/
cp *.html build/peps/
cp *.png build/peps/
cp *.rss build/peps/
tar -C build -czf build/peps.tar.gz peps
## lint to lint all the files
.PHONY: lint
lint: venv
$(VENVDIR)/bin/python3 -m pre_commit --version > /dev/null || $(VENVDIR)/bin/python3 -m pip install pre-commit
$(VENVDIR)/bin/python3 -m pre_commit run --all-files
lint:
pre-commit --version > /dev/null || python3 -m pip install pre-commit
pre-commit run --all-files
## test to test the Sphinx extensions for PEPs
.PHONY: test
test: venv
$(VENVDIR)/bin/python3 -bb -X dev -W error -m pytest
# New Sphinx targets:
## spellcheck to check spelling
.PHONY: spellcheck
spellcheck: venv
$(VENVDIR)/bin/python3 -m pre_commit --version > /dev/null || $(VENVDIR)/bin/python3 -m pip install pre-commit
$(VENVDIR)/bin/python3 -m pre_commit run --all-files --hook-stage manual codespell
SPHINX_JOBS=8
SPHINX_BUILD=$(PYTHON) build.py -j $(SPHINX_JOBS)
# TODO replace `rss:` with this when merged & tested
pep_rss:
$(PYTHON) pep_rss_gen.py
pages: pep_rss
$(SPHINX_BUILD) --index-file
sphinx:
$(SPHINX_BUILD)
# for building Sphinx without a web-server
sphinx-local:
$(SPHINX_BUILD) --build-files
fail-warning:
$(SPHINX_BUILD) --fail-on-warning
check-links:
$(SPHINX_BUILD) --check-links
.PHONY: help
help : Makefile
@echo "Please use \`make <target>' where <target> is one of"
@sed -n 's/^##//p' $<


@@ -1,456 +0,0 @@
"""PyRSS2Gen - A Python library for generating RSS 2.0 feeds."""
__name__ = "PyRSS2Gen"
__version__ = (1, 1, 0)
__author__ = "Andrew Dalke <dalke@dalkescientific.com>"
_generator_name = __name__ + "-" + ".".join(map(str, __version__))
import datetime
import sys
if sys.version_info[0] == 3:
# Python 3
basestring = str
from io import StringIO
else:
# Python 2
try:
from cStringIO import StringIO
except ImportError:
# Very old (or memory constrained) systems might
# have left out the compiled C version. Fall back
# to the pure Python one. Haven't seen this sort
# of system since the early 2000s.
from StringIO import StringIO
# Could make this the base class; will need to add 'publish'
class WriteXmlMixin:
def write_xml(self, outfile, encoding = "iso-8859-1"):
from xml.sax import saxutils
handler = saxutils.XMLGenerator(outfile, encoding)
handler.startDocument()
self.publish(handler)
handler.endDocument()
def to_xml(self, encoding = "iso-8859-1"):
f = StringIO()
self.write_xml(f, encoding)
return f.getvalue()
def _element(handler, name, obj, d = {}):
if isinstance(obj, basestring) or obj is None:
# special-case handling to make the API easier
# to use for the common case.
handler.startElement(name, d)
if obj is not None:
handler.characters(obj)
handler.endElement(name)
else:
# It better know how to emit the correct XML.
obj.publish(handler)
def _opt_element(handler, name, obj):
if obj is None:
return
_element(handler, name, obj)
def _format_date(dt):
"""convert a datetime into an RFC 822 formatted date
Input date must be in GMT.
"""
# Looks like:
# Sat, 07 Sep 2002 00:00:01 GMT
# Can't use strftime because that's locale dependent
#
# Isn't there a standard way to do this for Python? The
# rfc822 and email.Utils modules assume a timestamp. The
# following is based on the rfc822 module.
return "%s, %02d %s %04d %02d:%02d:%02d GMT" % (
["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"][dt.weekday()],
dt.day,
["Jan", "Feb", "Mar", "Apr", "May", "Jun",
"Jul", "Aug", "Sep", "Oct", "Nov", "Dec"][dt.month-1],
dt.year, dt.hour, dt.minute, dt.second)
##
# A couple simple wrapper objects for the fields which
# take a simple value other than a string.
class IntElement:
"""implements the 'publish' API for integers
Takes the tag name and the integer value to publish.
(Could be used for anything which uses str() to be published
to text for XML.)
"""
element_attrs = {}
def __init__(self, name, val):
self.name = name
self.val = val
def publish(self, handler):
handler.startElement(self.name, self.element_attrs)
handler.characters(str(self.val))
handler.endElement(self.name)
class DateElement:
"""implements the 'publish' API for a datetime.datetime
Takes the tag name and the datetime to publish.
Converts the datetime to RFC 2822 timestamp (4-digit year).
"""
def __init__(self, name, dt):
self.name = name
self.dt = dt
def publish(self, handler):
_element(handler, self.name, _format_date(self.dt))
####
class Category:
"""Publish a category element"""
def __init__(self, category, domain = None):
self.category = category
self.domain = domain
def publish(self, handler):
d = {}
if self.domain is not None:
d["domain"] = self.domain
_element(handler, "category", self.category, d)
class Cloud:
"""Publish a cloud"""
def __init__(self, domain, port, path,
registerProcedure, protocol):
self.domain = domain
self.port = port
self.path = path
self.registerProcedure = registerProcedure
self.protocol = protocol
def publish(self, handler):
_element(handler, "cloud", None, {
"domain": self.domain,
"port": str(self.port),
"path": self.path,
"registerProcedure": self.registerProcedure,
"protocol": self.protocol})
class Image:
"""Publish a channel Image"""
element_attrs = {}
def __init__(self, url, title, link,
width = None, height = None, description = None):
self.url = url
self.title = title
self.link = link
self.width = width
self.height = height
self.description = description
def publish(self, handler):
handler.startElement("image", self.element_attrs)
_element(handler, "url", self.url)
_element(handler, "title", self.title)
_element(handler, "link", self.link)
width = self.width
if isinstance(width, int):
width = IntElement("width", width)
_opt_element(handler, "width", width)
height = self.height
if isinstance(height, int):
height = IntElement("height", height)
_opt_element(handler, "height", height)
_opt_element(handler, "description", self.description)
handler.endElement("image")
class Guid:
"""Publish a guid
Defaults to being a permalink, which is the assumption if it's
omitted. Hence strings are always permalinks.
"""
def __init__(self, guid, isPermaLink = 1):
self.guid = guid
self.isPermaLink = isPermaLink
def publish(self, handler):
d = {}
if self.isPermaLink:
d["isPermaLink"] = "true"
else:
d["isPermaLink"] = "false"
_element(handler, "guid", self.guid, d)
class TextInput:
"""Publish a textInput
Apparently this is rarely used.
"""
element_attrs = {}
def __init__(self, title, description, name, link):
self.title = title
self.description = description
self.name = name
self.link = link
def publish(self, handler):
handler.startElement("textInput", self.element_attrs)
_element(handler, "title", self.title)
_element(handler, "description", self.description)
_element(handler, "name", self.name)
_element(handler, "link", self.link)
handler.endElement("textInput")
class Enclosure:
"""Publish an enclosure"""
def __init__(self, url, length, type):
self.url = url
self.length = length
self.type = type
def publish(self, handler):
_element(handler, "enclosure", None,
{"url": self.url,
"length": str(self.length),
"type": self.type,
})
class Source:
"""Publish the item's original source, used by aggregators"""
def __init__(self, name, url):
self.name = name
self.url = url
def publish(self, handler):
_element(handler, "source", self.name, {"url": self.url})
class SkipHours:
"""Publish the skipHours
This takes a list of hours, as integers.
"""
element_attrs = {}
def __init__(self, hours):
self.hours = hours
def publish(self, handler):
if self.hours:
handler.startElement("skipHours", self.element_attrs)
for hour in self.hours:
_element(handler, "hour", str(hour))
handler.endElement("skipHours")
class SkipDays:
"""Publish the skipDays
This takes a list of days as strings.
"""
element_attrs = {}
def __init__(self, days):
self.days = days
def publish(self, handler):
if self.days:
handler.startElement("skipDays", self.element_attrs)
for day in self.days:
_element(handler, "day", day)
handler.endElement("skipDays")
class RSS2(WriteXmlMixin):
"""The main RSS class.
Stores the channel attributes, with the "category" elements under
".categories" and the RSS items under ".items".
"""
rss_attrs = {"version": "2.0"}
element_attrs = {}
def __init__(self,
title,
link,
description,
language = None,
copyright = None,
managingEditor = None,
webMaster = None,
pubDate = None, # a datetime, *in* *GMT*
lastBuildDate = None, # a datetime
categories = None, # list of strings or Category
generator = _generator_name,
docs = "http://blogs.law.harvard.edu/tech/rss",
cloud = None, # a Cloud
ttl = None, # integer number of minutes
image = None, # an Image
rating = None, # a string; I don't know how it's used
textInput = None, # a TextInput
skipHours = None, # a SkipHours with a list of integers
skipDays = None, # a SkipDays with a list of strings
items = None, # list of RSSItems
):
self.title = title
self.link = link
self.description = description
self.language = language
self.copyright = copyright
self.managingEditor = managingEditor
self.webMaster = webMaster
self.pubDate = pubDate
self.lastBuildDate = lastBuildDate
if categories is None:
categories = []
self.categories = categories
self.generator = generator
self.docs = docs
self.cloud = cloud
self.ttl = ttl
self.image = image
self.rating = rating
self.textInput = textInput
self.skipHours = skipHours
self.skipDays = skipDays
if items is None:
items = []
self.items = items
def publish(self, handler):
handler.startElement("rss", self.rss_attrs)
handler.startElement("channel", self.element_attrs)
_element(handler, "title", self.title)
_element(handler, "link", self.link)
_element(handler, "description", self.description)
self.publish_extensions(handler)
_opt_element(handler, "language", self.language)
_opt_element(handler, "copyright", self.copyright)
_opt_element(handler, "managingEditor", self.managingEditor)
_opt_element(handler, "webMaster", self.webMaster)
pubDate = self.pubDate
if isinstance(pubDate, datetime.datetime):
pubDate = DateElement("pubDate", pubDate)
_opt_element(handler, "pubDate", pubDate)
lastBuildDate = self.lastBuildDate
if isinstance(lastBuildDate, datetime.datetime):
lastBuildDate = DateElement("lastBuildDate", lastBuildDate)
_opt_element(handler, "lastBuildDate", lastBuildDate)
for category in self.categories:
if isinstance(category, basestring):
category = Category(category)
category.publish(handler)
_opt_element(handler, "generator", self.generator)
_opt_element(handler, "docs", self.docs)
if self.cloud is not None:
self.cloud.publish(handler)
ttl = self.ttl
if isinstance(self.ttl, int):
ttl = IntElement("ttl", ttl)
_opt_element(handler, "ttl", ttl)
if self.image is not None:
self.image.publish(handler)
_opt_element(handler, "rating", self.rating)
if self.textInput is not None:
self.textInput.publish(handler)
if self.skipHours is not None:
self.skipHours.publish(handler)
if self.skipDays is not None:
self.skipDays.publish(handler)
for item in self.items:
item.publish(handler)
handler.endElement("channel")
handler.endElement("rss")
def publish_extensions(self, handler):
# Derived classes can hook into this to insert
# output after the three required fields.
pass
class RSSItem(WriteXmlMixin):
"""Publish an RSS Item"""
element_attrs = {}
def __init__(self,
title = None, # string
link = None, # url as string
description = None, # string
author = None, # email address as string
categories = None, # list of string or Category
comments = None, # url as string
enclosure = None, # an Enclosure
guid = None, # a unique string
pubDate = None, # a datetime
source = None, # a Source
):
if title is None and description is None:
raise TypeError(
"must define at least one of 'title' or 'description'")
self.title = title
self.link = link
self.description = description
self.author = author
if categories is None:
categories = []
self.categories = categories
self.comments = comments
self.enclosure = enclosure
self.guid = guid
self.pubDate = pubDate
self.source = source
# It sure does get tedious typing these names three times...
def publish(self, handler):
handler.startElement("item", self.element_attrs)
_opt_element(handler, "title", self.title)
_opt_element(handler, "link", self.link)
self.publish_extensions(handler)
_opt_element(handler, "description", self.description)
_opt_element(handler, "author", self.author)
for category in self.categories:
if isinstance(category, basestring):
category = Category(category)
category.publish(handler)
_opt_element(handler, "comments", self.comments)
if self.enclosure is not None:
self.enclosure.publish(handler)
_opt_element(handler, "guid", self.guid)
pubDate = self.pubDate
if isinstance(pubDate, datetime.datetime):
pubDate = DateElement("pubDate", pubDate)
_opt_element(handler, "pubDate", pubDate)
if self.source is not None:
self.source.publish(handler)
handler.endElement("item")
def publish_extensions(self, handler):
# Derived classes can hook into this to insert
# output after the title and link elements
pass


@@ -1,14 +1,23 @@
Python Enhancement Proposals
============================
.. image:: https://travis-ci.org/python/peps.svg?branch=master
:target: https://travis-ci.org/python/peps
.. image:: https://github.com/python/peps/actions/workflows/render.yml/badge.svg
:target: https://github.com/python/peps/actions
The PEPs in this repo are published automatically on the web at
https://www.python.org/dev/peps/. To learn more about the purpose of
PEPs and how to go about writing a PEP, please start reading at PEP 1
(``pep-0001.txt`` in this repo). Note that PEP 0, the index PEP, is
now automatically generated, and not committed to the repo.
https://peps.python.org/. To learn more about the purpose of PEPs and how to go
about writing one, please start reading at :pep:`1`. Note that the PEP Index
(:pep:`0`) is automatically generated based on the metadata headers in other PEPs.
Canonical links
===============
The canonical form of PEP links is zero-padded, such as
``https://peps.python.org/pep-0008/``
Shortcut redirects are also available.
For example, ``https://peps.python.org/8`` redirects to the canonical link.
Contributing to PEPs
@@ -17,113 +26,50 @@ Contributing to PEPs
See the `Contributing Guidelines <./CONTRIBUTING.rst>`_.
reStructuredText for PEPs
=========================
Original PEP source should be written in reStructuredText format,
which is a constrained version of plaintext, and is described in
PEP 12. Older PEPs were often written in a more mildly restricted
plaintext format, as described in PEP 9. The ``pep2html.py``
processing and installation script knows how to produce the HTML
for either PEP format.
For processing reStructuredText format PEPs, you need the docutils
package, which is available from `PyPI <https://pypi.org/>`_.
If you have pip, ``pip install docutils`` should install it.
Generating the PEP Index
========================
PEP 0 is automatically generated based on the metadata headers in other
PEPs. The script handling this is ``genpepindex.py``, with supporting
libraries in the ``pep0`` directory.
Checking PEP formatting and rendering
=====================================
Do not commit changes with bad formatting. To check the formatting of
a PEP, use the Makefile. In particular, to generate HTML for PEP 9999,
your source code should be in ``pep-9999.rst`` and the HTML will be
generated to ``pep-9999.html`` by the command ``make pep-9999.html``.
The default Make target generates HTML for all PEPs.
If you don't have Make, use the ``pep2html.py`` script directly.
Please don't commit changes with reStructuredText syntax errors that cause PEP
generation to fail, or result in major rendering defects relative to what you
intend.
Generating HTML for python.org
==============================
Browse the ReadTheDocs preview
------------------------------
python.org includes its own helper modules to render PEPs as HTML, with
suitable links back to the source pages in the version control repository.
For every PR, we automatically create a preview of the rendered PEPs using
`ReadTheDocs <https://readthedocs.org/>`_.
You can find it in the merge box at the bottom of the PR page:
These can be found at https://github.com/python/pythondotorg/tree/master/peps
When making changes to the PEP management process that may impact python.org's
rendering pipeline:
* Clone the python.org repository from https://github.com/python/pythondotorg/
* Get set up for local python.org development as per
https://pythondotorg.readthedocs.io/install.html#manual-setup
* Adjust ``PEP_REPO_PATH`` in ``pydotorg/settings/local.py`` to refer to your
local clone of the PEP repository
* Run ``./manage.py generate_pep_pages`` as described in
https://pythondotorg.readthedocs.io/pep_generation.html
1. Click "Show all checks" to expand the checks section
2. Find the line for ``docs/readthedocs.org:pep-previews``
3. Click on "Details" to the right
Rendering PEPs with Sphinx
==========================
Render PEPs locally
-------------------
There is a Sphinx-rendered version of the PEPs at https://python.github.io/peps/
(updated on every push to ``master``)
See the `build documentation <./docs/build.rst>`__ for full
instructions on how to render PEPs locally.
In summary, run the following in a fresh, activated virtual environment:
**Warning:** This version is not, and should not be taken to be, a canonical
source for PEPs whilst it remains in preview (`please report any rendering bugs
<https://github.com/python/peps/issues/new>`_). The canonical source for PEPs remains
https://www.python.org/dev/peps/
.. code-block:: bash
Build PEPs with Sphinx locally:
-------------------------------
# Install requirements
python -m pip install -U -r requirements.txt
1. Ensure you have Python >=3.9 and Sphinx installed
2. If you have access to ``make``, follow (i), otherwise (ii)
# Build the PEPs
make html
i. Run ``make sphinx-local``
ii. Run ``python build.py -j 8 --build-files``. Note that the jobs argument
only takes effect on Unix (non-macOS) systems.
3. Wait for Sphinx to render the PEPs. There may be a series of warnings about
unreferenced citations or labels -- whilst these are valid warnings they do
not impact the build process.
4. Navigate to the ``build`` directory of your PEPs repo to find the HTML pages.
PEP 0 provides a formatted index, and may be a useful reference.
# Or, if you don't have 'make':
python build.py
Arguments to ``build.py``:
--------------------------
The output HTML is found under the ``build`` directory.
Renderers:
``-f`` or ``--build-files``
Renders PEPs to ``pep-XXXX.html`` files
Check and lint PEPs
-------------------
``-d`` or ``--build-dirs``
Renders PEPs to ``index.html`` files within ``pep-XXXX`` directories
Options:
``-i`` or ``--index-file``
Copies PEP 0 to a base index file
``-j`` or ``--jobs``
How many parallel jobs to run (if supported). Integer, default 1
``-n`` or ``--nitpicky``
Runs Sphinx in `nitpicky` mode
``-w`` or ``--fail-on-warning``
Fails Sphinx on warnings
Tools:
``-l`` or ``--check-links``
Checks validity of links within PEP sources
You can check for and fix common linting and spelling issues,
either on-demand or automatically as you commit, with our pre-commit suite.
See the `Contributing Guide <./CONTRIBUTING.rst>`_ for details.

74
build.py Normal file → Executable file

@@ -1,6 +1,11 @@
#!/usr/bin/env python3
# This file is placed in the public domain or under the
# CC0-1.0-Universal license, whichever is more permissive.
"""Build script for Sphinx documentation"""
import argparse
import os
from pathlib import Path
from sphinx.application import Sphinx
@@ -9,17 +14,26 @@ from sphinx.application import Sphinx
def create_parser():
parser = argparse.ArgumentParser(description="Build PEP documents")
# alternative builders:
parser.add_argument("-l", "--check-links", action="store_true")
parser.add_argument("-f", "--build-files", action="store_true")
parser.add_argument("-d", "--build-dirs", action="store_true")
builders = parser.add_mutually_exclusive_group()
builders.add_argument("-l", "--check-links", action="store_const",
dest="builder", const="linkcheck",
help='Check validity of links within PEP sources. '
'Cannot be used with "-f" or "-d".')
builders.add_argument("-f", "--build-files", action="store_const",
dest="builder", const="html",
help='Render PEPs to "pep-NNNN.html" files (default). '
'Cannot be used with "-d" or "-l".')
builders.add_argument("-d", "--build-dirs", action="store_const",
dest="builder", const="dirhtml",
help='Render PEPs to "index.html" files within "pep-NNNN" directories. '
'Cannot be used with "-f" or "-l".')
# flags / options
parser.add_argument("-w", "--fail-on-warning", action="store_true")
parser.add_argument("-n", "--nitpicky", action="store_true")
parser.add_argument("-j", "--jobs", type=int, default=1)
# extra build steps
parser.add_argument("-i", "--index-file", action="store_true") # for PEP 0
parser.add_argument(
"-o",
"--output-dir",
default="build",
help="Output directory, relative to root. Default 'build'.",
)
return parser.parse_args()
@@ -39,40 +53,24 @@ def create_index_file(html_root: Path, builder: str) -> None:
if __name__ == "__main__":
args = create_parser()
root_directory = Path(".").absolute()
source_directory = root_directory
build_directory = root_directory / "build" # synchronise with deploy-gh-pages.yaml -> deploy step
doctree_directory = build_directory / ".doctrees"
root_directory = Path(__file__).resolve().parent
source_directory = root_directory / "peps"
build_directory = root_directory / args.output_dir
# builder configuration
if args.build_files:
sphinx_builder = "html"
elif args.build_dirs:
sphinx_builder = "dirhtml"
elif args.check_links:
sphinx_builder = "linkcheck"
else:
# default builder
sphinx_builder = "dirhtml"
# other configuration
config_overrides = {}
if args.nitpicky:
config_overrides["nitpicky"] = True
sphinx_builder = args.builder or "html"
app = Sphinx(
source_directory,
confdir=source_directory,
outdir=build_directory,
doctreedir=doctree_directory,
outdir=build_directory / sphinx_builder,
doctreedir=build_directory / "doctrees",
buildername=sphinx_builder,
confoverrides=config_overrides,
warningiserror=args.fail_on_warning,
parallel=args.jobs,
warningiserror=True,
parallel=os.cpu_count() or 1,
tags=["internal_builder"],
keep_going=True,
)
app.builder.copysource = False # Prevent unneeded source copying - we link direct to GitHub
app.builder.search = False # Disable search
app.build()
if args.index_file:
create_index_file(build_directory, sphinx_builder)
create_index_file(build_directory, sphinx_builder)

605
check-peps.py Executable file

@@ -0,0 +1,605 @@
#!/usr/bin/env python3
# This file is placed in the public domain or under the
# CC0-1.0-Universal license, whichever is more permissive.
"""check-peps: Check PEPs for common mistakes.
Usage: check-peps [-d | --detailed] <PEP files...>
Only the PEPs specified are checked.
If none are specified, all PEPs are checked.
Use "--detailed" to show the contents of lines where errors were found.
"""
from __future__ import annotations
import datetime as dt
import re
import sys
from pathlib import Path
TYPE_CHECKING = False
if TYPE_CHECKING:
from collections.abc import Iterable, Iterator, KeysView, Sequence
from typing import TypeAlias
# (line number, warning message)
Message: TypeAlias = tuple[int, str]
MessageIterator: TypeAlias = Iterator[Message]
# get the directory with the PEP sources
ROOT_DIR = Path(__file__).resolve().parent
PEP_ROOT = ROOT_DIR / "peps"
# See PEP 12 for the order
# Note we retain "BDFL-Delegate"
ALL_HEADERS = (
"PEP",
"Title",
"Version",
"Last-Modified",
"Author",
"Sponsor",
"BDFL-Delegate", "PEP-Delegate",
"Discussions-To",
"Status",
"Type",
"Topic",
"Content-Type",
"Requires",
"Created",
"Python-Version",
"Post-History",
"Replaces",
"Superseded-By",
"Resolution",
)
REQUIRED_HEADERS = frozenset({"PEP", "Title", "Author", "Status", "Type", "Created"})
# See PEP 1 for the full list
ALL_STATUSES = frozenset({
"Accepted",
"Active",
"April Fool!",
"Deferred",
"Draft",
"Final",
"Provisional",
"Rejected",
"Superseded",
"Withdrawn",
})
# PEPs that are allowed to link directly to PEPs
SKIP_DIRECT_PEP_LINK_CHECK = frozenset({"0009", "0287", "0676", "0684", "8001"})
DEFAULT_FLAGS = re.ASCII | re.IGNORECASE # Insensitive latin
# any sequence of letters or '-', followed by a single ':' and a space or end of line
HEADER_PATTERN = re.compile(r"^([a-z\-]+):(?: |$)", DEFAULT_FLAGS)
# any sequence of unicode letters or legal special characters
NAME_PATTERN = re.compile(r"(?:[^\W\d_]|[ ',\-.])+(?: |$)")
# any sequence of ASCII letters, digits, or legal special characters
EMAIL_LOCAL_PART_PATTERN = re.compile(r"[\w!#$%&'*+\-/=?^{|}~.]+", DEFAULT_FLAGS)
DISCOURSE_THREAD_PATTERN = re.compile(r"([\w\-]+/)?\d+", DEFAULT_FLAGS)
DISCOURSE_POST_PATTERN = re.compile(r"([\w\-]+/)?\d+(/\d+)?", DEFAULT_FLAGS)
MAILMAN_2_PATTERN = re.compile(r"[\w\-]+/\d{4}-[a-z]+/\d+\.html", DEFAULT_FLAGS)
MAILMAN_3_THREAD_PATTERN = re.compile(r"[\w\-]+@python\.org/thread/[a-z0-9]+/?", DEFAULT_FLAGS)
MAILMAN_3_MESSAGE_PATTERN = re.compile(r"[\w\-]+@python\.org/message/[a-z0-9]+/?(#[a-z0-9]+)?", DEFAULT_FLAGS)
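# Illustrative examples (not from the source): the thread pattern matches
# "python-dev@python.org/thread/abcd1234/"; the message pattern also accepts
# "python-dev@python.org/message/abcd1234/#efgh5678".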
# Controlled by the "--detailed" flag
DETAILED_ERRORS = False
def check(filenames: Sequence[str] = (), /) -> int:
"""The main entry-point."""
if filenames:
filenames = map(Path, filenames)
else:
filenames = PEP_ROOT.glob("pep-????.rst")
if (count := sum(map(check_file, filenames))) > 0:
s = "s" * (count != 1)
print(f"check-peps failed: {count} error{s}", file=sys.stderr)
return 1
return 0
def check_file(filename: Path, /) -> int:
filename = filename.resolve()
try:
content = filename.read_text(encoding="utf-8")
except FileNotFoundError:
return _output_error(filename, [""], [(0, "Could not read PEP!")])
else:
lines = content.splitlines()
return _output_error(filename, lines, check_peps(filename, lines))
def check_peps(filename: Path, lines: Sequence[str], /) -> MessageIterator:
yield from check_headers(lines)
for line_num, line in enumerate(lines, start=1):
if filename.stem.removeprefix("pep-") in SKIP_DIRECT_PEP_LINK_CHECK:
continue
yield from check_direct_links(line_num, line.lstrip())
def check_headers(lines: Sequence[str], /) -> MessageIterator:
yield from _validate_pep_number(next(iter(lines), ""))
found_headers = {}
line_num = 0
for line_num, line in enumerate(lines, start=1):
if line.strip() == "":
headers_end_line_num = line_num
break
if match := HEADER_PATTERN.match(line):
header = match[1]
if header in ALL_HEADERS:
if header not in found_headers:
found_headers[match[1]] = line_num
else:
yield line_num, f"Must not have duplicate header: {header} "
else:
yield line_num, f"Must not have invalid header: {header}"
else:
headers_end_line_num = line_num
yield from _validate_required_headers(found_headers.keys())
shifted_line_nums = list(found_headers.values())[1:]
for i, (header, line_num) in enumerate(found_headers.items()):
start = line_num - 1
end = headers_end_line_num - 1
if i < len(found_headers) - 1:
end = shifted_line_nums[i] - 1
remainder = "\n".join(lines[start:end]).removeprefix(f"{header}:")
if remainder != "":
if remainder[0] not in {" ", "\n"}:
yield line_num, f"Headers must have a space after the colon: {header}"
remainder = remainder.lstrip()
yield from _validate_header(header, line_num, remainder)
def _validate_header(header: str, line_num: int, content: str) -> MessageIterator:
if header == "Title":
yield from _validate_title(line_num, content)
elif header == "Author":
yield from _validate_author(line_num, content)
elif header == "Sponsor":
yield from _validate_sponsor(line_num, content)
elif header in {"BDFL-Delegate", "PEP-Delegate"}:
yield from _validate_delegate(line_num, content)
elif header == "Discussions-To":
yield from _validate_discussions_to(line_num, content)
elif header == "Status":
yield from _validate_status(line_num, content)
elif header == "Type":
yield from _validate_type(line_num, content)
elif header == "Topic":
yield from _validate_topic(line_num, content)
elif header == "Content-Type":
yield from _validate_content_type(line_num, content)
elif header in {"Requires", "Replaces", "Superseded-By"}:
yield from _validate_pep_references(line_num, content)
elif header == "Created":
yield from _validate_created(line_num, content)
elif header == "Python-Version":
yield from _validate_python_version(line_num, content)
elif header == "Post-History":
yield from _validate_post_history(line_num, content)
elif header == "Resolution":
yield from _validate_resolution(line_num, content)
def check_direct_links(line_num: int, line: str) -> MessageIterator:
"""Check that PEPs and RFCs aren't linked directly"""
line = line.lower()
if "dev/peps/pep-" in line or "peps.python.org/pep-" in line:
yield line_num, "Use the :pep:`NNN` role to refer to PEPs"
if "rfc-editor.org/rfc/" in line or "ietf.org/doc/html/rfc" in line:
yield line_num, "Use the :rfc:`NNN` role to refer to RFCs"
def _output_error(filename: Path, lines: Sequence[str], errors: Iterable[Message]) -> int:
relative_filename = filename.relative_to(ROOT_DIR)
err_count = 0
for line_num, msg in errors:
err_count += 1
print(f"{relative_filename}:{line_num}: {msg}")
if not DETAILED_ERRORS:
continue
line = lines[line_num - 1]
print(" |")
print(f"{line_num: >4} | '{line}'")
print(" |")
return err_count
###########################
# PEP Header Validators #
###########################
def _validate_required_headers(found_headers: KeysView[str]) -> MessageIterator:
"""PEPs must have all required headers, in the PEP 12 order"""
if missing := REQUIRED_HEADERS.difference(found_headers):
for missing_header in sorted(missing, key=ALL_HEADERS.index):
yield 1, f"Must have required header: {missing_header}"
ordered_headers = sorted(found_headers, key=ALL_HEADERS.index)
if list(found_headers) != ordered_headers:
order_str = ", ".join(ordered_headers)
yield 1, "Headers must be in PEP 12 order. Correct order: " + order_str
def _validate_pep_number(line: str) -> MessageIterator:
"""'PEP' header must be a number 1-9999"""
if not line.startswith("PEP: "):
yield 1, "PEP must begin with the 'PEP:' header"
return
pep_number = line.removeprefix("PEP: ").lstrip()
yield from _pep_num(1, pep_number, "'PEP:' header")
def _validate_title(line_num: int, line: str) -> MessageIterator:
"""'Title' must be 1-79 characters"""
if len(line) == 0:
yield line_num, "PEP must have a title"
elif len(line) > 79:
yield line_num, "PEP title must be less than 80 characters"
def _validate_author(line_num: int, body: str) -> MessageIterator:
"""'Author' must be list of 'Name <email@example.com>, …'"""
lines = body.split("\n")
for offset, line in enumerate(lines):
if offset >= 1 and line[:9].isspace():
# Checks for:
# Author: Alice
# Bob
# ^^^^
# Note that len("Author: ") == 8
yield line_num + offset, "Author line must not be over-indented"
if offset < len(lines) - 1:
if not line.endswith(","):
yield line_num + offset, "Author continuation lines must end with a comma"
for part in line.removesuffix(",").split(", "):
yield from _email(line_num + offset, part, "Author")
def _validate_sponsor(line_num: int, line: str) -> MessageIterator:
"""'Sponsor' must have format 'Name <email@example.com>'"""
yield from _email(line_num, line, "Sponsor")
def _validate_delegate(line_num: int, line: str) -> MessageIterator:
"""'Delegate' must have format 'Name <email@example.com>'"""
if line == "":
return
# PEP 451
if ", " in line:
for part in line.removesuffix(",").split(", "):
yield from _email(line_num, part, "Delegate")
return
yield from _email(line_num, line, "Delegate")
def _validate_discussions_to(line_num: int, line: str) -> MessageIterator:
"""'Discussions-To' must be a thread URL"""
yield from _thread(line_num, line, "Discussions-To", discussions_to=True)
if line.startswith("https://"):
return
for suffix in "@python.org", "@googlegroups.com":
if line.endswith(suffix):
remainder = line.removesuffix(suffix)
if re.fullmatch(r"[\w\-]+", remainder) is None:
yield line_num, "Discussions-To must be a valid mailing list"
return
yield line_num, "Discussions-To must be a valid thread URL or mailing list"
def _validate_status(line_num: int, line: str) -> MessageIterator:
"""'Status' must be a valid PEP status"""
if line not in ALL_STATUSES:
yield line_num, "Status must be a valid PEP status"
def _validate_type(line_num: int, line: str) -> MessageIterator:
"""'Type' must be a valid PEP type"""
if line not in {"Standards Track", "Informational", "Process"}:
yield line_num, "Type must be a valid PEP type"
def _validate_topic(line_num: int, line: str) -> MessageIterator:
"""'Topic' must be for a valid sub-index"""
topics = line.split(", ")
unique_topics = set(topics)
if len(topics) > len(unique_topics):
yield line_num, "Topic must not contain duplicates"
if unique_topics - {"Governance", "Packaging", "Typing", "Release"}:
if not all(map(str.istitle, unique_topics)):
yield line_num, "Topic must be properly capitalised (Title Case)"
if unique_topics - {"governance", "packaging", "typing", "release"}:
yield line_num, "Topic must be for a valid sub-index"
if sorted(topics) != topics:
yield line_num, "Topic must be sorted lexicographically"
def _validate_content_type(line_num: int, line: str) -> MessageIterator:
"""'Content-Type' must be 'text/x-rst'"""
if line != "text/x-rst":
yield line_num, "Content-Type must be 'text/x-rst'"
def _validate_pep_references(line_num: int, line: str) -> MessageIterator:
"""`Requires`/`Replaces`/`Superseded-By` must be 'NNN' PEP IDs"""
line = line.removesuffix(",").rstrip()
if line.count(", ") != line.count(","):
yield line_num, "PEP references must be separated by comma-spaces (', ')"
return
references = line.split(", ")
for reference in references:
yield from _pep_num(line_num, reference, "PEP reference")
def _validate_created(line_num: int, line: str) -> MessageIterator:
"""'Created' must be a 'DD-mmm-YYYY' date"""
yield from _date(line_num, line, "Created")
def _validate_python_version(line_num: int, line: str) -> MessageIterator:
"""'Python-Version' must be an ``X.Y[.Z]`` version"""
versions = line.split(", ")
for version in versions:
if version.count(".") not in {1, 2}:
yield line_num, f"Python-Version must have two or three segments: {version}"
continue
try:
major, minor, micro = version.split(".", 2)
except ValueError:
major, minor = version.split(".", 1)
micro = ""
if major not in "123":
yield line_num, f"Python-Version major part must be 1, 2, or 3: {version}"
if not _is_digits(minor) and minor != "x":
yield line_num, f"Python-Version minor part must be numeric: {version}"
elif minor != "0" and minor[0] == "0":
yield line_num, f"Python-Version minor part must not have leading zeros: {version}"
if micro == "":
return
if minor == "x":
yield line_num, f"Python-Version micro part must be empty if minor part is 'x': {version}"
elif micro[0] == "0":
yield line_num, f"Python-Version micro part must not have leading zeros: {version}"
elif not _is_digits(micro):
yield line_num, f"Python-Version micro part must be numeric: {version}"
def _validate_post_history(line_num: int, body: str) -> MessageIterator:
"""'Post-History' must be '`DD-mmm-YYYY <Thread URL>`__, …'"""
if body == "":
return
for offset, line in enumerate(body.removesuffix(",").split("\n"), start=line_num):
for post in line.removesuffix(",").strip().split(", "):
if not post.startswith("`") and not post.endswith(">`__"):
yield from _date(offset, post, "Post-History")
else:
post_date, post_url = post[1:-4].split(" <")
yield from _date(offset, post_date, "Post-History")
yield from _thread(offset, post_url, "Post-History")
def _validate_resolution(line_num: int, line: str) -> MessageIterator:
"""'Resolution' must be a direct thread/message URL"""
yield from _thread(line_num, line, "Resolution", allow_message=True)
########################
# Validation Helpers #
########################
def _pep_num(line_num: int, pep_number: str, prefix: str) -> MessageIterator:
if pep_number == "":
yield line_num, f"{prefix} must not be blank: {pep_number!r}"
return
if pep_number.startswith("0") and pep_number != "0":
yield line_num, f"{prefix} must not contain leading zeros: {pep_number!r}"
if not _is_digits(pep_number):
yield line_num, f"{prefix} must be numeric: {pep_number!r}"
elif not 0 <= int(pep_number) <= 9999:
yield line_num, f"{prefix} must be between 0 and 9999: {pep_number!r}"
def _is_digits(string: str) -> bool:
"""Match a string of ASCII digits ([0-9]+)."""
return string.isascii() and string.isdigit()
def _email(line_num: int, author_email: str, prefix: str) -> MessageIterator:
author_email = author_email.strip()
if author_email.count("<") > 1:
msg = f"{prefix} entries must not contain multiple '<': {author_email!r}"
yield line_num, msg
if author_email.count(">") > 1:
msg = f"{prefix} entries must not contain multiple '>': {author_email!r}"
yield line_num, msg
if author_email.count("@") > 1:
msg = f"{prefix} entries must not contain multiple '@': {author_email!r}"
yield line_num, msg
author = author_email.split("<", 1)[0].rstrip()
if NAME_PATTERN.fullmatch(author) is None:
msg = f"{prefix} entries must begin with a valid 'Name': {author_email!r}"
yield line_num, msg
return
email_text = author_email.removeprefix(author)
if not email_text:
# Does not have the optional email part
return
if not email_text.startswith(" <") or not email_text.endswith(">"):
msg = f"{prefix} entries must be formatted as 'Name <email@example.com>': {author_email!r}"
yield line_num, msg
email_text = email_text.removeprefix(" <").removesuffix(">")
if "@" in email_text:
local, domain = email_text.rsplit("@", 1)
elif " at " in email_text:
local, domain = email_text.rsplit(" at ", 1)
else:
yield line_num, f"{prefix} entries must contain a valid email address: {author_email!r}"
return
if EMAIL_LOCAL_PART_PATTERN.fullmatch(local) is None or _invalid_domain(domain):
yield line_num, f"{prefix} entries must contain a valid email address: {author_email!r}"
def _invalid_domain(domain_part: str) -> bool:
*labels, root = domain_part.split(".")
for label in labels:
if not label.replace("-", "").isalnum():
return True
return not root.isalnum() or not root.isascii()
def _thread(line_num: int, url: str, prefix: str, *, allow_message: bool = False, discussions_to: bool = False) -> MessageIterator:
if allow_message and discussions_to:
msg = "allow_message and discussions_to cannot both be True"
raise ValueError(msg)
msg = f"{prefix} must be a valid thread URL"
if not url.startswith("https://"):
if not discussions_to:
yield line_num, msg
return
if url.startswith("https://discuss.python.org/t/"):
remainder = url.removeprefix("https://discuss.python.org/t/").removesuffix("/")
# Discussions-To links must be the thread itself, not a post
if discussions_to:
# The equivalent pattern is similar to '([\w\-]+/)?\d+',
# but the topic name must contain a non-numeric character
# We use ``str.rpartition`` as the topic name is optional
topic_name, _, topic_id = remainder.rpartition("/")
if topic_name == '' and _is_digits(topic_id):
return
topic_name = topic_name.replace("-", "0").replace("_", "0")
# the topic name must not be entirely numeric
valid_topic_name = not _is_digits(topic_name) and topic_name.isalnum()
if valid_topic_name and _is_digits(topic_id):
return
else:
# The equivalent pattern is similar to '([\w\-]+/)?\d+(/\d+)?',
# but the topic name must contain a non-numeric character
if remainder.count("/") == 2:
# When there are three parts, the URL must be "topic-name/topic-id/post-id".
topic_name, topic_id, post_id = remainder.rsplit("/", 2)
topic_name = topic_name.replace("-", "0").replace("_", "0")
valid_topic_name = not _is_digits(topic_name) and topic_name.isalnum()
if valid_topic_name and _is_digits(topic_id) and _is_digits(post_id):
# the topic name must not be entirely numeric
return
elif remainder.count("/") == 1:
# When there are only two parts, there's an ambiguity between
# "topic-name/topic-id" and "topic-id/post-id".
# We disambiguate by checking if the LHS is a valid name and
# the RHS is a valid topic ID (for the former),
# and then if both the LHS and RHS are valid IDs (for the latter).
left, right = remainder.rsplit("/")
left = left.replace("-", "0").replace("_", "0")
# the topic name must not be entirely numeric
left_is_name = not _is_digits(left) and left.isalnum()
if left_is_name and _is_digits(right):
return
elif _is_digits(left) and _is_digits(right):
return
else:
# When there's only one part, it must be a valid topic ID.
if _is_digits(remainder):
return
if url.startswith("https://mail.python.org/pipermail/"):
remainder = url.removeprefix("https://mail.python.org/pipermail/")
if MAILMAN_2_PATTERN.fullmatch(remainder) is not None:
return
if url.startswith("https://mail.python.org/archives/list/"):
remainder = url.removeprefix("https://mail.python.org/archives/list/")
if allow_message and MAILMAN_3_MESSAGE_PATTERN.fullmatch(remainder) is not None:
return
if MAILMAN_3_THREAD_PATTERN.fullmatch(remainder) is not None:
return
yield line_num, msg
def _date(line_num: int, date_str: str, prefix: str) -> MessageIterator:
try:
parsed_date = dt.datetime.strptime(date_str, "%d-%b-%Y")
except ValueError:
yield line_num, f"{prefix} must be a 'DD-mmm-YYYY' date: {date_str!r}"
return
else:
if date_str[1] == "-": # Date must be zero-padded
yield line_num, f"{prefix} must be a 'DD-mmm-YYYY' date: {date_str!r}"
return
if parsed_date.year < 1990:
yield line_num, f"{prefix} must not be before Python was invented: {date_str!r}"
if parsed_date > (dt.datetime.now() + dt.timedelta(days=14)):
yield line_num, f"{prefix} must not be in the future: {date_str!r}"
if __name__ == "__main__":
if {"-h", "--help", "-?"}.intersection(sys.argv[1:]):
print(__doc__, file=sys.stderr)
raise SystemExit(0)
files = {}
for arg in sys.argv[1:]:
if not arg.startswith("-"):
files[arg] = None
elif arg in {"-d", "--detailed"}:
DETAILED_ERRORS = True
else:
print(f"Unknown option: {arg!r}", file=sys.stderr)
raise SystemExit(1)
raise SystemExit(check(files))

conf.py

@@ -1,56 +0,0 @@
"""Configuration for building PEPs using Sphinx."""
from pathlib import Path
import sys
sys.path.append(str(Path("pep_sphinx_extensions").absolute()))
# -- Project information -----------------------------------------------------
project = "PEPs"
master_doc = "contents"
# -- General configuration ---------------------------------------------------
# Add any Sphinx extension module names here, as strings.
extensions = ["pep_sphinx_extensions", "sphinx.ext.githubpages"]
# The file extensions of source files. Sphinx uses these suffixes as sources.
source_suffix = {
".rst": "pep",
".txt": "pep",
}
# List of patterns (relative to source dir) to ignore when looking for source files.
exclude_patterns = [
# Windows:
"Thumbs.db",
".DS_Store",
# Python:
"venv",
"requirements.txt",
# Sphinx:
"build",
"output.txt", # Link-check output
# PEPs:
"README.rst",
"CONTRIBUTING.rst",
]
# -- Options for HTML output -------------------------------------------------
# HTML output settings
html_math_renderer = "maths_to_html" # Maths rendering
html_show_copyright = False # Turn off miscellany
html_show_sphinx = False
html_title = "peps.python.org" # Set <title/>
# Theme settings
html_theme_path = ["pep_sphinx_extensions"]
html_theme = "pep_theme" # The actual theme directory (child of html_theme_path)
html_use_index = False # Disable index (we use PEP 0)
html_sourcelink_suffix = "" # Fix links to GitHub (don't append .txt)
html_style = "" # must be defined here or in theme.conf, but is unused
html_permalinks = False # handled in the PEPContents transform
templates_path = ['pep_sphinx_extensions/pep_theme/templates'] # Theme template relative paths from `confdir`

contents.rst

@@ -1,16 +0,0 @@
Python Enhancement Proposals (PEPs)
***********************************
This is an internal Sphinx page, please go to the :doc:`PEP Index<pep-0000>`.
.. toctree::
:maxdepth: 3
:titlesonly:
:hidden:
:glob:
:caption: PEP Table of Contents (needed for Sphinx):
pep-*


@@ -1,6 +0,0 @@
#!/bin/bash
set -ex
make package
pip install awscli
aws s3 cp --acl public-read build/peps.tar.gz s3://pythondotorg-assets-staging/peps.tar.gz
aws s3 cp --acl public-read build/peps.tar.gz s3://pythondotorg-assets/peps.tar.gz

docs/build.rst Normal file

@@ -0,0 +1,95 @@
:author: Adam Turner
Building PEPs Locally
=====================
Whilst editing a PEP, it is useful to review the rendered output locally.
This can also be used to check that the PEP is valid reStructuredText before
submission to the PEP editors.
The rest of this document assumes you are working from a local clone of the
`PEPs repository <https://github.com/python/peps>`__,
with **Python 3.9 or later** installed.
Render PEPs locally
-------------------
1. Create a virtual environment and install requirements:
.. code-block:: shell
make venv
If you don't have access to ``make``, run:
.. code-block:: ps1con
PS> python -m venv .venv
PS> .\.venv\Scripts\activate
(venv) PS> python -m pip install --upgrade pip
(venv) PS> python -m pip install -r requirements.txt
2. **(Optional)** Delete prior build files.
Generally only needed when making changes to the rendering system itself.
.. code-block:: shell
rm -rf build
3. Run the build script:
.. code-block:: shell
make html
If you don't have access to ``make``, run:
.. code-block:: ps1con
(venv) PS> python build.py
4. Navigate to the ``build`` directory of your PEPs repo to find the HTML pages.
PEP 0 provides a formatted index, and may be a useful reference.
``build.py`` tools
------------------
Several additional tools can be run through ``build.py``, or the Makefile.
Note that before using ``build.py`` you must activate the virtual environment
created earlier:
.. code-block:: shell
source .venv/bin/activate
Or on Windows:
.. code-block:: ps1con
PS> .\.venv\Scripts\activate
Check links
'''''''''''
Check the validity of links within PEP sources (runs the `Sphinx linkchecker
<https://www.sphinx-doc.org/en/master/usage/builders/index.html#sphinx.builders.linkcheck.CheckExternalLinksBuilder>`__).
.. code-block:: shell
python build.py --check-links
make check-links
``build.py`` usage
------------------
For details on the command-line options to the ``build.py`` script, run:
.. code-block:: shell
python build.py --help

docs/rendering_system.rst Normal file

@@ -0,0 +1,240 @@
:author: Adam Turner
..
We can't use :pep:`N` references in this document, as they use links relative
to the current file, which doesn't work in a subdirectory like this one.
An Overview of the PEP Rendering System
=======================================
This document provides an overview of the PEP rendering system, as a companion
to `PEP 676 <https://peps.python.org/pep-0676/>`__.
1. Configuration
----------------
Configuration is stored in three files:
- ``peps/conf.py`` contains the majority of the Sphinx configuration
- ``peps/contents.rst`` contains the compulsory table of contents directive
- ``pep_sphinx_extensions/pep_theme/theme.conf`` sets the Pygments themes
The configuration:
- registers the custom Sphinx extension
- sets the ``.rst`` suffix to be parsed as PEPs
- tells Sphinx which source files to use
- registers the PEP theme, maths renderer, and template
- disables some default settings that are covered in the extension
- sets the default and "dark mode" code formatter styles
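
For a sense of what these settings look like, here is an illustrative
fragment in the style of the previous root ``conf.py`` (the current
``peps/conf.py`` may differ in detail):

.. code-block:: python

   # Illustrative only; based on the old root conf.py shown in this diff.
   extensions = ["pep_sphinx_extensions", "sphinx.ext.githubpages"]
   source_suffix = {
       ".rst": "pep",
       ".txt": "pep",
   }
   html_theme_path = ["pep_sphinx_extensions"]
   html_theme = "pep_theme"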
2. Orchestration
----------------
``build.py`` manages the rendering process.
Usage is covered in `Building PEPs Locally <./build.rst>`_.
3. Extension
------------
The Sphinx extension and theme are contained in the ``pep_sphinx_extensions``
directory.
The following is a brief overview of the stages of the PEP rendering process,
and how the extension functions at each point.
3.1 Extension setup
'''''''''''''''''''
The extension registers several objects:
- ``FileBuilder`` and ``DirectoryBuilder`` run the build process for file- and
directory-based building, respectively.
- ``PEPParser`` registers the custom document transforms and parses PEPs to
a Docutils document.
- ``PEPTranslator`` converts a Docutils document into HTML.
- ``PEPRole`` handles ``:pep:`` roles in the reStructuredText source.
The extension also patches default behaviour:
- updating the default settings
- updating the Docutils inliner
- using HTML maths display over MathJax
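
As a rough sketch of how these registrations and patches might be wired up
(hedged; the exact ``setup()`` body in ``pep_sphinx_extensions/__init__.py``
may differ):

.. code-block:: python

   # Sketch of the extension entry point, not the verbatim source.
   # FileBuilder, DirectoryBuilder, PEPParser and PEPRole are the objects
   # described above.
   from sphinx.application import Sphinx

   def setup(app: Sphinx) -> dict:
       app.add_builder(FileBuilder, override=True)       # -f / --build-files
       app.add_builder(DirectoryBuilder, override=True)  # -d / --build-dirs
       app.add_source_parser(PEPParser)                  # parse PEP sources
       app.add_role("pep", PEPRole(), override=True)     # :pep:`NNN`
       return {"parallel_read_safe": True}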
3.2 Builder initialised
'''''''''''''''''''''''
After the Sphinx builder object is created and initialised, we ensure the
configuration is correct for the builder chosen.
Currently this involves updating the relative link template.
See ``_update_config_for_builder`` in ``pep_sphinx_extensions/__init__.py``.
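
A minimal sketch of the idea (the setting name and template below are
assumed, not taken from the source):

.. code-block:: python

   # Sketch: choose a relative-link template to suit the active builder.
   def _update_config_for_builder(app):
       if app.builder.name == "dirhtml":
           # dirhtml serves each PEP from pep-NNNN/index.html, so links
           # need to step up one directory level (assumed setting name).
           app.env.settings["pep_url"] = "../pep-{:0>4}/"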
3.3 Before documents are read
'''''''''''''''''''''''''''''
The ``create_pep_zero`` hook is called. See `5. PEP 0`_.
3.4 Read document
'''''''''''''''''
Parsing the document is handled by ``PEPParser``
(``pep_sphinx_extensions.pep_processor.parsing.pep_parser.PEPParser``), a
lightweight wrapper over ``sphinx.parsers.RSTParser``.
``PEPParser`` reads the document with leading :rfc:`2822` headers and registers
the transforms we want to apply.
These are:
- ``PEPHeaders``
- ``PEPTitle``
- ``PEPContents``
- ``PEPFooter``
Transforms are then applied in priority order.
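
Schematically (a sketch, not the verbatim parser):

.. code-block:: python

   # Sketch: the parser appends the PEP transforms to the standard set.
   from sphinx import parsers

   class PEPParser(parsers.RSTParser):
       def get_transforms(self):
           return super().get_transforms() + [
               PEPHeaders, PEPTitle, PEPContents, PEPFooter,
           ]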
3.4.1 ``PEPRole`` role
**********************
This overrides the built-in ``:pep:`` role to return the correct URL.
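
For instance (a sketch; the URL template here is assumed):

.. code-block:: python

   # Sketch: point :pep:`NNN` at the locally rendered corpus rather
   # than at python.org.
   from sphinx import roles

   class PEPRole(roles.PEP):
       def build_uri(self) -> str:
           return f"pep-{int(self.target):0>4}/"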
3.4.2 ``PEPHeaders`` transform
******************************
PEPs start with a set of :rfc:`2822` headers,
per `PEP 1 <https://peps.python.org/pep-0001/>`__.
This transform validates that the required headers are present and of the
correct data type, and removes headers not for display.
It must run before the ``PEPTitle`` transform.
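
Ordering between transforms is expressed through Docutils priorities:

.. code-block:: python

   # Sketch: lower default_priority values run earlier (values here are
   # illustrative, not the real ones).
   from docutils import transforms

   class PEPHeaders(transforms.Transform):
       default_priority = 330

   class PEPTitle(transforms.Transform):
       default_priority = 340  # runs after PEPHeaders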
3.4.3 ``PEPTitle`` transform
****************************
We generate the title node from the parsed title in the PEP headers, and make
all nodes in the document children of the new title node.
This transform must also handle parsing reStructuredText markup within PEP
titles, such as `PEP 604 <https://peps.python.org/pep-0604/>`__.
3.4.4 ``PEPContents`` transform
*******************************
The automatic table of contents (TOC) is inserted in this transform in a
two-part process.
First, the transform inserts a placeholder for the TOC and a horizontal rule
after the document title and PEP headers.
A callback transform then recursively walks the document to create the TOC,
starting from after the placeholder node.
Whilst walking the document, all reference nodes in the titles are removed, and
titles are given a self-link.
3.4.5 ``PEPFooter`` transform
*****************************
This first builds a map of file modification times from a single git call, as
a speed-up. This will return incorrect results on a shallow checkout of the
repository, as is the default on continuous integration systems.
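
One way to build such a map (a sketch, not the exact implementation):

.. code-block:: python

   # Sketch: "git log" lists commits newest-first, so the first time a
   # file is named gives its last-modified timestamp.
   import subprocess

   def file_mtimes() -> dict[str, int]:
       out = subprocess.check_output(
           ["git", "log", "--format=%at", "--name-only"], text=True)
       mtimes: dict[str, int] = {}
       timestamp = 0
       for line in out.splitlines():
           if line.isdigit():
               timestamp = int(line)      # a commit's author timestamp
           elif line and line not in mtimes:
               mtimes[line] = timestamp   # first sighting is the newest
       return mtimes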
We then attempt to remove any empty references sections, and append metadata in
the footer (source link and last modified timestamp).
3.5 Prepare for writing
''''''''''''''''''''''''
``pep_html_builder.FileBuilder.prepare_writing`` initialises the bare minimum
of the Docutils writer and the settings for writing documents.
This provides a significant speed-up over the base Sphinx implementation, as
most of the data automatically initialised was unused.
3.6 Translate Docutils to HTML
'''''''''''''''''''''''''''''''
``PEPTranslator`` overrides paragraph and reference logic to replicate
processing from the previous ``docutils.writers.pep``-based system.
Paragraphs are made compact where possible by omitting ``<p>`` tags, and
footnote references are enclosed in square brackets.
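
A simplified sketch of the paragraph handling (``should_be_compact_paragraph``
is an assumed helper; the real logic covers more cases):

.. code-block:: python

   # Sketch: omit <p></p> for paragraphs that can be "compact".
   from sphinx.writers.html5 import HTML5Translator

   class PEPTranslator(HTML5Translator):
       def visit_paragraph(self, node) -> None:
           if self.should_be_compact_paragraph(node):
               self.context.append("")
           else:
               self.body.append(self.starttag(node, "p", ""))
               self.context.append("</p>\n")

       def depart_paragraph(self, node) -> None:
           self.body.append(self.context.pop())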
3.7 Prepare for export to Jinja
'''''''''''''''''''''''''''''''
Finally in ``pep_html_builder``, we gather all the parts to be passed to the
Jinja template.
This is also where we create the sidebar table of contents.
The HTML files are then written out to the build directory.
4. Theme
--------
The theme consists of the HTML template in
``pep_sphinx_extensions/pep_theme/templates/page.html`` and the stylesheets in
``pep_sphinx_extensions/pep_theme/static``.
The template is entirely self-contained, not relying on any default behaviour
from Sphinx.
It specifies the CSS files to include, the favicon, and basic semantic
information for the document structure.
The styles are defined in two parts:
- ``style.css`` handles the meat of the layout
- ``mq.css`` adds media queries for a responsive design
5. \PEP 0
---------
The generation of the index, PEP 0, happens in three phases.
The reStructuredText source file is generated, then added to Sphinx, and
finally the data is post-processed.
5.1 File creation
'''''''''''''''''
``pep-0000.rst`` is created during a callback, before documents are loaded by
Sphinx.
We first parse the individual PEP files to get the :rfc:`2822` header, and then
parse and validate that metadata.
After collecting and validating all the PEP data, the index itself is created in
three steps:
1. Output the header text
2. Output the category and numerical indices
3. Output the author index
We then add the newly created PEP 0 file to two Sphinx variables so that it will
be processed as a normal source document.
5.2 Post processing
'''''''''''''''''''
The ``PEPHeaders`` transform schedules the \PEP 0 post-processing code.
This serves two functions: masking email addresses and linking numeric
PEP references to the actual documents.
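
The masking step amounts to something like this (an illustrative helper,
not the source):

.. code-block:: python

   # Sketch: display "user@example.com" as "user at example.com".
   def mask_email(address: str) -> str:
       return address.replace("@", " at ")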
6. RSS Feed
-----------
The RSS feed is created by extracting the header metadata and abstract from the
ten most recent PEPs.
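
Schematically (a sketch; the helper names are hypothetical):

.. code-block:: python

   # Sketch: the feed covers the ten most recently created PEPs.
   recent = sorted(peps, key=lambda pep: pep.created, reverse=True)[:10]
   items = [make_rss_item(pep.headers, pep.abstract) for pep in recent]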


@@ -1,21 +0,0 @@
# Configuration file for Docutils.
# See http://docutils.sf.net/docs/tools.html
[general]
# These entries are for the page footer:
source-link: 1
datestamp: %Y-%m-%d %H:%M UTC
generator: 1
# use the local stylesheet
stylesheet: pep.css
template: pyramid-pep-template
# link to the stylesheet; don't embed it
embed-stylesheet: 0
# path to PEPs, for template:
pep-home: /dev/peps/
# base URL for PEP references (no host so mirrors work):
pep-base-url: /dev/peps/


@@ -1,68 +0,0 @@
#!/usr/bin/env python
"""Auto-generate PEP 0 (PEP index).
Generating the PEP index is a multi-step process. To begin, you must first
parse the PEP files themselves, which in and of itself takes a couple of steps:
1. Parse metadata.
2. Validate metadata.
With the PEP information collected, to create the index itself you must:
1. Output static text.
2. Format an entry for the PEP.
3. Output the PEP (both by category and numerical index).
"""
from __future__ import absolute_import, with_statement
from __future__ import print_function
import sys
import os
import codecs
from operator import attrgetter
from pep0.output import write_pep0
from pep0.pep import PEP, PEPError
def main(argv):
if not argv[1:]:
path = '.'
else:
path = argv[1]
peps = []
if os.path.isdir(path):
for file_path in os.listdir(path):
if file_path.startswith('pep-0000.'):
continue
abs_file_path = os.path.join(path, file_path)
if not os.path.isfile(abs_file_path):
continue
if file_path.startswith("pep-") and file_path.endswith((".txt", "rst")):
with codecs.open(abs_file_path, 'r', encoding='UTF-8') as pep_file:
try:
pep = PEP(pep_file)
if pep.number != int(file_path[4:-4]):
raise PEPError('PEP number does not match file name',
file_path, pep.number)
peps.append(pep)
except PEPError as e:
errmsg = "Error processing PEP %s (%s), excluding:" % \
(e.number, e.filename)
print(errmsg, e, file=sys.stderr)
sys.exit(1)
peps.sort(key=attrgetter('number'))
elif os.path.isfile(path):
with open(path, 'r') as pep_file:
peps.append(PEP(pep_file))
else:
raise ValueError("argument must be a directory or file path")
with codecs.open('pep-0000.rst', 'w', encoding='UTF-8') as pep0_file:
write_pep0(peps, pep0_file)
if __name__ == "__main__":
main(sys.argv)

infra/.gitignore vendored Normal file

@@ -0,0 +1,2 @@
.terraform*
terraform.tfstate*

infra/config.tf Normal file

@@ -0,0 +1,22 @@
terraform {
required_providers {
fastly = {
source = "fastly/fastly"
version = "1.1.2"
}
}
required_version = ">= 1.1.8"
cloud {
organization = "psf"
workspaces {
name = "peps"
}
}
}
variable "fastly_token" {
type = string
sensitive = true
}
provider "fastly" {
api_key = var.fastly_token
}

infra/main.tf Normal file

@@ -0,0 +1,84 @@
resource "fastly_service_vcl" "peps" {
name = "peps.python.org"
activate = true
domain { name = "peps.python.org" }
backend {
name = "GitHub Pages"
address = "python.github.io"
port = 443
override_host = "peps.python.org"
use_ssl = true
ssl_check_cert = true
ssl_cert_hostname = "python.github.io"
ssl_sni_hostname = "python.github.io"
}
header {
name = "HSTS"
type = "response"
action = "set"
destination = "http.Strict-Transport-Security"
ignore_if_set = false
source = "\"max-age=31536000; includeSubDomains; preload\""
}
request_setting {
name = "Force TLS"
force_ssl = true
}
snippet {
name = "serve-rss"
type = "recv"
content = <<-EOT
if (req.url == "/peps.rss/") {
set req.url = "/peps.rss";
}
EOT
}
snippet {
name = "topics"
type = "recv"
content = <<-EOT
if (req.url ~ "^/topics($|/)") {
set req.http.Location = regsub(req.url, "^/topics/?", "/topic/");
error 618;
}
EOT
}
snippet {
name = "redirect"
type = "error"
content = <<-EOT
if (obj.status == 618) {
set obj.status = 302;
set obj.http.Location = "https://" + req.http.host + req.http.Location;
return(deliver);
}
EOT
}
snippet {
name = "redirect-numbers"
type = "recv"
content = <<-EOT
if (req.url ~ "^/(\d|\d\d|\d\d\d|\d\d\d\d)/?$") {
set req.http.Location = "/pep-" + std.strpad(re.group.1, 4, "0") + "/";
error 618;
}
EOT
}
snippet {
name = "left-pad-pep-numbers"
type = "recv"
content = <<-EOT
if (req.url ~ "^/pep-(\d|\d\d|\d\d\d)/?$") {
set req.http.Location = "/pep-" + std.strpad(re.group.1, 4, "0") + "/";
error 618;
}
EOT
}
}

(deleted binary image file, 13 KiB; contents not shown)

@@ -1,213 +0,0 @@
PEP: 2
Title: Procedure for Adding New Modules
Version: $Revision$
Last-Modified: $Date$
Author: Martijn Faassen <faassen@infrae.com>
Status: Superseded
Type: Process
Content-Type: text/x-rst
Created: 07-Jul-2001
Post-History: 07-Jul-2001, 09-Mar-2002
PEP Replacement
===============
This PEP has been superseded by the updated material in the Python
Developer's Guide [1]_.
Introduction
============
The Python Standard Library contributes significantly to Python's
success. The language comes with "batteries included", so it is easy
for people to become productive with just the standard library alone.
It is therefore important that this library grows with the language,
and that such growth is supported and encouraged.
Many contributions to the library are not created by core developers
but by people from the Python community who are experts in their
particular field. Furthermore, community members are also the users of
the standard library, applying it in a great diversity of settings.
This makes the community well equipped to detect and report gaps in
the library; things that are missing but should be added.
New functionality is commonly added to the library in the form of new
modules. This PEP will describe the procedure for the *addition* of
new modules. PEP 4 deals with procedures for deprecation of modules;
the *removal* of old and unused modules from the standard library.
Finally there is also the issue of *changing* existing modules to make
the picture of library evolution complete. PEP 3 and PEP 5 give some
guidelines on this. The continued maintenance of existing modules is
an integral part of the decision on whether to add a new module to the
standard library. Therefore, this PEP also introduces concepts
(integrators, maintainers) relevant to the maintenance issue.
Integrators
===========
The integrators are a group of people with the following
responsibilities:
* They determine if a proposed contribution should become part of the
standard library.
* They integrate accepted contributions into the standard library.
* They produce standard library releases.
This group of people shall be PythonLabs, led by Guido.
Maintainer(s)
=============
All contributions to the standard library need one or more
maintainers. This can be an individual, but it is frequently a group
of people such as the XML-SIG. Groups may subdivide maintenance
tasks among themselves. One or more maintainers shall be the *head
maintainer* (usually this is also the main developer). Head
maintainers are convenient people the integrators can address if they
want to resolve specific issues, such as the ones detailed later in
this document.
Developers(s)
=============
Contributions to the standard library have been developed by one or
more developers. The initial maintainers are the original developers
unless there are special circumstances (which should be detailed in
the PEP proposing the contribution).
Acceptance Procedure
====================
When developers wish to have a contribution accepted into the standard
library, they will first form a group of maintainers (normally
initially consisting of themselves).
Then, this group shall produce a PEP called a library PEP. A library
PEP is a special form of standards track PEP. The library PEP gives
an overview of the proposed contribution, along with the proposed
contribution as the reference implementation. This PEP should also
contain a motivation on why this contribution should be part of the
standard library.
One or more maintainers shall step forward as PEP champion (the people
listed in the Author field are the champions). The PEP champion(s)
shall be the initial head maintainer(s).
As described in PEP 1, a standards track PEP should consist of a
design document and a reference implementation. The library PEP
differs from a normal standard track PEP in that the reference
implementation should in this case always already have been written
before the PEP is to be reviewed for inclusion by the integrators and
to be commented upon by the community; the reference implementation
*is* the proposed contribution.
This different requirement exists for the following reasons:
* The integrators can only properly evaluate a contribution to the
standard library when there is source code and documentation to look
at; i.e. the reference implementation is always necessary to aid
people in studying the PEP.
* Even rejected contributions will be useful outside the standard
library, so there will be a lower risk of waste of effort by the
developers.
It will impress upon the integrators the seriousness of the contribution
and will help guard them against having to evaluate too many
frivolous proposals.
Once the library PEP has been submitted for review, the integrators
will then evaluate it. The PEP will follow the normal PEP work flow
as described in PEP 1. If the PEP is accepted, they will work through
the head maintainers to make the contribution ready for integration.
Maintenance Procedure
=====================
After a contribution has been accepted, the job is not over for both
integrators and maintainers. The integrators will forward any bug
reports in the standard library to the appropriate head maintainers.
Before the feature freeze preparing for a release of the standard
library, the integrators will check with the head maintainers for all
contributions, to see if there are any updates to be included in the
next release. The integrators will evaluate any such updates for
issues like backwards compatibility and may require PEPs if the
changes are deemed to be large.
The head maintainers should take an active role in keeping up to date
with the Python development process. If a head maintainer is unable
to function in this way, he or she should announce the intention to
step down to the integrators and the rest of the maintainers, so that
a replacement can step forward. The integrators should at all times
be capable of reaching the head maintainers by email.
In the case where no head maintainer can be found (possibly because
there are no maintainers left), the integrators will issue a call to
the community at large asking for new maintainers to step forward. If
no one does, the integrators can decide to declare the contribution
deprecated as described in PEP 4.
Open issues
===========
There needs to be some procedure so that the integrators can always
reach the maintainers (or at least the head maintainers). This could
be accomplished by a mailing list to which all head maintainers should
be subscribed (this could be python-dev). Another possibility, which
may be useful in any case, is the maintenance of a list similar to
that of the list of PEPs which lists all the contributions and their
head maintainers with contact info. This could in fact be part of the
list of the PEPs, as a new contribution requires a PEP. But since the
authors/owners of a PEP introducing a new module may eventually be
different from those who maintain it, this wouldn't resolve all issues
yet.
Should there be a list of what criteria integrators use for evaluating
contributions? (Source code but also things like documentation and a
test suite, as well as such vague things like 'dependability of the
maintainers'.)
This relates to all the technical issues; check-in privileges, coding
style requirements, documentation requirements, test suite
requirements. These are preferably part of another PEP.
Should the current standard library be subdivided among maintainers?
Many parts already have (informal) maintainers; it may be good to make
this more explicit.
Perhaps there is a better word for 'contribution'; the word
'contribution' may not imply enough that the process (of development
and maintenance) does not stop after the contribution is accepted and
integrated into the library.
Relationship to the mythical Catalog?
References
==========
.. [1] Adding to the Stdlib
(http://docs.python.org/devguide/stdlibchanges.html)
Copyright
=========
This document has been placed in the public domain.
..
Local Variables:
mode: indented-text
indent-tabs-mode: nil
fill-column: 70
coding: utf-8
End:


@@ -1,321 +0,0 @@
PEP: 4
Title: Deprecation of Standard Modules
Version: $Revision$
Last-Modified: $Date$
Author: Brett Cannon <brett@python.org>, Martin von Löwis <martin@v.loewis.de>
Status: Active
Type: Process
Content-Type: text/x-rst
Created: 01-Oct-2000
Post-History:
Introduction
============
When new modules were added to the standard Python library in the
past, it was not possible to foresee whether they would still be
useful in the future. Even though Python "Comes With Batteries
Included", batteries may discharge over time. Carrying old modules
around is a burden on the maintainer, especially when there is no
interest in the module anymore.
At the same time, removing a module from the distribution is
difficult, as it is not known in general whether anybody is still
using it. This PEP defines a procedure for removing modules from the
standard Python library. Usage of a module may be 'deprecated', which
means that it may be removed from a future Python release. The
rationale for deprecating a module is also collected in this PEP. If
the rationale turns out faulty, the module may become 'undeprecated'.
Procedure for declaring a module deprecated
===========================================
Since the status of module deprecation is recorded in this PEP,
proposals for deprecating modules MUST be made by providing a change
to the text of this PEP.
A proposal for deprecation of the module MUST include the date of the
proposed deprecation and a rationale for deprecating it. In addition,
the proposal MUST include a change to the documentation of the module;
deprecation is indicated by saying that the module is "obsolete" or
"deprecated". The proposal SHOULD include a patch for the module's
source code to indicate deprecation there as well, by raising a
DeprecationWarning. The proposal MUST include patches to remove any
use of the deprecated module from the standard library.
It is expected that deprecated modules are included in the Python
release that immediately follows the deprecation; later releases may
ship without the deprecated modules.
For modules existing in both Python 2.7 and Python 3.5
------------------------------------------------------
In order to facilitate writing code that works in both Python 2 & 3
simultaneously, any module that exists in both Python 3.5 and
Python 2.7 will not be removed from the standard library until
Python 2.7 is no longer supported as specified by PEP 373. Exempted
from this rule is any module in the idlelib package as well as any
exceptions granted by the Python development team.
Procedure for declaring a module undeprecated
=============================================
When a module becomes deprecated, a rationale is given for its
deprecation. In some cases, an alternative interface for the same
functionality is provided, so the old interface is deprecated. In
other cases, the need for having the functionality of the module may
not exist anymore.
If the rationale is faulty, again a change to this PEP's text MUST be
submitted. This change MUST include the date of undeprecation and a
rationale for undeprecation. Modules that are undeprecated under this
procedure MUST be listed in this PEP for at least one major release of
Python.
Obsolete modules
================
A number of modules are already listed as obsolete in the library
documentation. These are listed here for completeness.
cl, sv, timing
All these modules have been declared as obsolete in Python 2.0, some
even earlier.
The following obsolete modules were removed in Python 2.5:
addpack, cmp, cmpcache, codehack, dircmp, dump, find, fmt,
grep, lockfile, newdir, ni, packmail, Para, poly,
rand, reconvert, regex, regsub, statcache, tb, tzparse,
util, whatsound, whrandom, zmod
The following modules were removed in Python 2.6:
gopherlib, rgbimg, macfs
The following modules currently lack a DeprecationWarning:
rfc822, mimetools, multifile
Deprecated modules
==================
::
Module name: posixfile
Rationale: Locking is better done by fcntl.lockf().
Date: Before 1-Oct-2000.
Documentation: Already documented as obsolete. Deprecation
warning added in Python 2.6.
Module name: gopherlib
Rationale: The gopher protocol is not in active use anymore.
Date: 1-Oct-2000.
Documentation: Documented as deprecated since Python 2.5. Removed
in Python 2.6.
Module name: rgbimgmodule
Rationale: In a 2001-04-24 c.l.py post, Jason Petrone mentions
that he occasionally uses it; no other references to
its use can be found as of 2003-11-19.
Date: 1-Oct-2000
Documentation: Documented as deprecated since Python 2.5. Removed
in Python 2.6.
Module name: pre
Rationale: The underlying PCRE engine doesn't support Unicode, and
has been unmaintained since Python 1.5.2.
Date: 10-Apr-2002
Documentation: It was only mentioned as an implementation detail,
and never had a section of its own. This mention
has now been removed.
Module name: whrandom
Rationale: The module's default seed computation was
inherently insecure; the random module should be
used instead.
Date: 11-Apr-2002
Documentation: This module has been documented as obsolete since
Python 2.1, but listing in this PEP was neglected.
The deprecation warning will be added to the module
one year after Python 2.3 is released, and the
module will be removed one year after that.
Module name: rfc822
Rationale: Supplanted by Python 2.2's email package.
Date: 18-Mar-2002
Documentation: Documented as "deprecated since release 2.3" since
Python 2.2.2.
Module name: mimetools
Rationale: Supplanted by Python 2.2's email package.
Date: 18-Mar-2002
Documentation: Documented as "deprecated since release 2.3" since
Python 2.2.2.
Module name: MimeWriter
Rationale: Supplanted by Python 2.2's email package.
Date: 18-Mar-2002
Documentation: Documented as "deprecated since release 2.3" since
Python 2.2.2. Raises a DeprecationWarning as of
Python 2.6.
Module name: mimify
Rationale: Supplanted by Python 2.2's email package.
Date: 18-Mar-2002
Documentation: Documented as "deprecated since release 2.3" since
Python 2.2.2. Raises a DeprecationWarning as of
Python 2.6.
Module name: rotor
Rationale: Uses insecure algorithm.
Date: 24-Apr-2003
Documentation: The documentation has been removed from the library
reference in Python 2.4.
Module name: TERMIOS.py
Rationale: The constants in this file are now in the 'termios' module.
Date: 10-Aug-2004
Documentation: This module has been documented as obsolete since
Python 2.1, but listing in this PEP was neglected.
Removed from the library reference in Python 2.4.
Module name: statcache
Rationale: Using the cache can be fragile and error-prone;
applications should just use os.stat() directly.
Date: 10-Aug-2004
Documentation: This module has been documented as obsolete since
Python 2.2, but listing in this PEP was neglected.
Removed from the library reference in Python 2.5.
Module name: mpz
Rationale: Third-party packages provide similar features
and wrap more of GMP's API.
Date: 10-Aug-2004
Documentation: This module has been documented as obsolete since
Python 2.2, but listing in this PEP was neglected.
Removed from the library reference in Python 2.4.
Module name: xreadlines
Rationale: Using 'for line in file', introduced in 2.3, is preferable.
Date: 10-Aug-2004
Documentation: This module has been documented as obsolete since
Python 2.3, but listing in this PEP was neglected.
Removed from the library reference in Python 2.4.
Module name: multifile
Rationale: Supplanted by the email package.
Date: 21-Feb-2006
Documentation: Documented as deprecated as of Python 2.5.
Module name: sets
Rationale: The built-in set/frozenset types, introduced in
Python 2.4, supplant the module.
Date: 12-Jan-2007
Documentation: Documented as deprecated as of Python 2.6.
Module name: buildtools
Rationale: Unknown.
Date: 15-May-2007
Documentation: Documented as deprecated as of Python 2.3, but
listing in this PEP was neglected. Raised a
DeprecationWarning as of Python 2.6.
Module name: cfmfile
Rationale: Unknown.
Date: 15-May-2007
Documentation: Documented as deprecated as of Python 2.4, but
listing in this PEP was neglected. A
DeprecationWarning was added in Python 2.6.
Module name: macfs
Rationale: Unknown.
Date: 15-May-2007
Documentation: Documented as deprecated as of Python 2.3, but
listing in this PEP was neglected. Removed in
Python 2.6.
Module name: md5
Rationale: Replaced by the 'hashlib' module.
Date: 15-May-2007
Documentation: Documented as deprecated as of Python 2.5, but
listing in this PEP was neglected.
DeprecationWarning raised as of Python 2.6.
Module name: sha
Rationale: Replaced by the 'hashlib' module.
Date: 15-May-2007
Documentation: Documented as deprecated as of Python 2.5, but
listing in this PEP was neglected.
DeprecationWarning added in Python 2.6.
Module name: plat-freebsd2/IN and plat-freebsd3/IN
Rationale: Platforms are obsolete (last released in 2000)
Removed from 2.6
Date: 15-May-2007
Documentation: None
Module name: plat-freebsd4/IN and possibly plat-freebsd5/IN
Rationale: Platforms are obsolete/unsupported
Date: 15-May-2007
Remove from 2.7
Documentation: None
Module name: imp
Rationale: Replaced by the importlib module.
Date: 2013-02-10
Documentation: Deprecated as of Python 3.4.
Module name: formatter
Rationale: Lack of use in the community, no tests to keep
code working.
Date: 2013-08-12
Documentation: Deprecated as of Python 3.4.
Module name: macpath
Rationale: Obsolete macpath module dangerously broken
and should be removed.
Date: 2017-05-15
Documentation: Platform is obsolete/unsupported.
Module name: xml.etree.cElementTree
Rationale: Obsolete, use xml.etree.ElementTree
Date: 2019-04-06
Documentation: Documented as deprecated since 3.3
Deprecation of modules removed in Python 3.0
============================================
PEP 3108 lists all modules that have been removed from Python 3.0.
They all are documented as deprecated in Python 2.6, and raise a
DeprecationWarning if the -3 flag is activated.
Undeprecated modules
====================
None.
Copyright
=========
This document has been placed in the public domain.
..
Local Variables:
mode: indented-text
indent-tabs-mode: nil
sentence-end-double-space: t
fill-column: 70
coding: utf-8
End:


@@ -1,297 +0,0 @@
PEP: 11
Title: Removing support for little used platforms
Version: $Revision$
Last-Modified: $Date$
Author: Martin von Löwis <martin@v.loewis.de>,
Brett Cannon <brett@python.org>
Status: Active
Type: Process
Content-Type: text/x-rst
Created: 07-Jul-2002
Post-History: 18-Aug-2007
16-May-2014
20-Feb-2015
Abstract
--------
This PEP documents how an operating system (platform) becomes
supported in CPython and documents past support.
Rationale
---------
Over time, the CPython source code has collected various pieces of
platform-specific code, which, at some point in time, was
considered necessary to use Python on a specific platform.
Without access to this platform, it is not possible to determine
whether this code is still needed. As a result, this code may
either break during Python's evolution, or it may become
unnecessary as the platforms evolve as well.
The growing amount of these fragments poses the risk of
unmaintainability: without having experts for a large number of
platforms, it is not possible to determine whether a certain
change to the CPython source code will work on all supported
platforms.
To reduce this risk, this PEP specifies what is required for a
platform to be considered supported by Python as well as providing a
procedure to remove code for platforms with few or no Python
users.
Supporting platforms
--------------------
Gaining official platform support requires two things. First, a core
developer needs to volunteer to maintain platform-specific code. This
core developer can either already be a member of the Python
development team or be given contributor rights on the basis of
maintaining platform support (it is at the discretion of the Python
development team to decide if a person is ready to have such rights
even if it is just for supporting a specific platform).
Second, a stable buildbot must be provided [2]_. This guarantees that
platform support will not be accidentally broken by a Python core
developer who does not have personal access to the platform. For a
buildbot to be considered stable it requires that the machine be
reliably up and functioning (but it is up to the Python core
developers to decide whether to promote a buildbot to being
considered stable).
This policy does not disqualify supporting other platforms
indirectly. Patches which are not platform-specific but still done to
add platform support will be considered for inclusion. For example,
if platform-independent changes were necessary in the configure
script which were motivated to support a specific platform that could
be accepted. Patches which add platform-specific code such as the
name of a specific platform to the configure script will generally
not be accepted without the platform having official support.
CPU architecture and compiler support are viewed in a similar manner
as platforms. For example, to consider the ARM architecture supported
a buildbot running on ARM would be required along with support from
the Python development team. In general it is not required to have
a CPU architecture run under every possible platform in order to be
considered supported.
Unsupporting platforms
----------------------
If a certain platform that currently has special code in CPython is
deemed to be without enough Python users or lacks proper support from
the Python development team and/or a buildbot, a note must be posted
in this PEP that this platform is no longer actively supported. This
note must include:
- the name of the system
- the first release number that does not support this platform
anymore, and
- the first release where the historical support code is actively
removed
In some cases, it is not possible to identify the specific list of
systems for which some code is used (e.g. when autoconf tests for
absence of some feature which is considered present on all
supported systems). In this case, the name will give the precise
condition (usually a preprocessor symbol) that will become
unsupported.
At the same time, the CPython source code must be changed to
produce a build-time error if somebody tries to install Python on
this platform. On platforms using autoconf, configure must fail.
This gives potential users of the platform a chance to step
forward and offer maintenance.
Re-supporting platforms
-----------------------
If a user of a platform wants to see this platform supported
again, they may volunteer to maintain the platform support. Such an
offer must be recorded in the PEP, and the user can submit patches
to remove the build-time errors, and perform any other maintenance
work for the platform.
Microsoft Windows
-----------------
Microsoft has established a policy called product support lifecycle
[1]_. Each product's lifecycle has a mainstream support phase, where
the product is generally commercially available, and an extended
support phase, where paid support is still available, and certain bug
fixes are released (in particular security fixes).
CPython's Windows support now follows this lifecycle. A new feature
release X.Y.0 will support all Windows releases whose extended support
phase is not yet expired. Subsequent bug fix releases will support
the same Windows releases as the original feature release (even if
the extended support phase has ended).
Because of this policy, no further Windows releases need to be listed
in this PEP.
Each feature release is built by a specific version of Microsoft
Visual Studio. That version should have mainstream support when the
release is made. Developers of extension modules will generally need
to use the same Visual Studio release; they are concerned both with
the availability of the versions they need to use, and with keeping
the zoo of versions small. The CPython source tree will keep
unmaintained build files for older Visual Studio releases, for which
patches will be accepted. Such build files will be removed from the
source tree 3 years after the extended support for the compiler has
ended (but continue to remain available in revision control).
Legacy C Locale
---------------
Starting with CPython 3.7.0, \*nix platforms are expected to provide
at least one of ``C.UTF-8`` (full locale), ``C.utf8`` (full locale) or
``UTF-8`` (``LC_CTYPE``-only locale) as an alternative to the legacy ``C``
locale.
Any Unicode-related integration problems that occur only in the legacy ``C``
locale and cannot be reproduced in an appropriately configured non-ASCII
locale will be closed as "won't fix".
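As an illustration (not part of the policy itself), a script can probe
for these locales using only the standard library; the helper name
below is hypothetical, and note that ``setlocale`` mutates the
process-wide locale::

    import locale

    # Locale names listed above; availability varies between platforms.
    CANDIDATES = ("C.UTF-8", "C.utf8", "UTF-8")

    def find_utf8_capable_locale():
        """Return the first candidate locale this system provides, or None."""
        for name in CANDIDATES:
            try:
                locale.setlocale(locale.LC_CTYPE, name)
                return name
            except locale.Error:
                continue
        return None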
No-longer-supported platforms
-----------------------------
* | Name: MS-DOS, MS-Windows 3.x
| Unsupported in: Python 2.0
| Code removed in: Python 2.1
* | Name: SunOS 4
| Unsupported in: Python 2.3
| Code removed in: Python 2.4
* | Name: DYNIX
| Unsupported in: Python 2.3
| Code removed in: Python 2.4
* | Name: dgux
| Unsupported in: Python 2.3
| Code removed in: Python 2.4
* | Name: Minix
| Unsupported in: Python 2.3
| Code removed in: Python 2.4
* | Name: Irix 4 and --with-sgi-dl
| Unsupported in: Python 2.3
| Code removed in: Python 2.4
* | Name: Linux 1
| Unsupported in: Python 2.3
| Code removed in: Python 2.4
* | Name: Systems defining __d6_pthread_create (configure.in)
| Unsupported in: Python 2.3
| Code removed in: Python 2.4
* | Name: Systems defining PY_PTHREAD_D4, PY_PTHREAD_D6,
or PY_PTHREAD_D7 in thread_pthread.h
| Unsupported in: Python 2.3
| Code removed in: Python 2.4
* | Name: Systems using --with-dl-dld
| Unsupported in: Python 2.3
| Code removed in: Python 2.4
* | Name: Systems using --without-universal-newlines
| Unsupported in: Python 2.3
| Code removed in: Python 2.4
* | Name: MacOS 9
| Unsupported in: Python 2.4
| Code removed in: Python 2.4
* | Name: Systems using --with-wctype-functions
| Unsupported in: Python 2.6
| Code removed in: Python 2.6
* | Name: Win9x, WinME, NT4
| Unsupported in: Python 2.6 (warning in 2.5 installer)
| Code removed in: Python 2.6
* | Name: AtheOS
| Unsupported in: Python 2.6 (with "AtheOS" changed to "Syllable")
| Build broken in: Python 2.7 (edit configure to reenable)
| Code removed in: Python 3.0
| Details: http://www.syllable.org/discussion.php?id=2320
* | Name: BeOS
| Unsupported in: Python 2.6 (warning in configure)
| Build broken in: Python 2.7 (edit configure to reenable)
| Code removed in: Python 3.0
* | Name: Systems using Mach C Threads
| Unsupported in: Python 3.2
| Code removed in: Python 3.3
* | Name: SunOS lightweight processes (LWP)
| Unsupported in: Python 3.2
| Code removed in: Python 3.3
* | Name: Systems using --with-pth (GNU pth threads)
| Unsupported in: Python 3.2
| Code removed in: Python 3.3
* | Name: Systems using Irix threads
| Unsupported in: Python 3.2
| Code removed in: Python 3.3
* | Name: OSF* systems (issue 8606)
| Unsupported in: Python 3.2
| Code removed in: Python 3.3
* | Name: OS/2 (issue 16135)
| Unsupported in: Python 3.3
| Code removed in: Python 3.4
* | Name: VMS (issue 16136)
| Unsupported in: Python 3.3
| Code removed in: Python 3.4
* | Name: Windows 2000
| Unsupported in: Python 3.3
| Code removed in: Python 3.4
* | Name: Windows systems where COMSPEC points to command.com
| Unsupported in: Python 3.3
| Code removed in: Python 3.4
* | Name: RISC OS
| Unsupported in: Python 3.0 (some code actually removed)
| Code removed in: Python 3.4
* | Name: IRIX
| Unsupported in: Python 3.7
| Code removed in: Python 3.7
* | Name: Systems without multithreading support
| Unsupported in: Python 3.7
| Code removed in: Python 3.7
References
----------
.. [1] http://support.microsoft.com/lifecycle/
.. [2] http://buildbot.python.org/3.x.stable/
Copyright
---------
This document has been placed in the public domain.
..
Local Variables:
mode: indented-text
indent-tabs-mode: nil
sentence-end-double-space: t
fill-column: 70
coding: utf-8
End:


View File

@ -1,437 +0,0 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:cc="http://creativecommons.org/ns#"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns="http://www.w3.org/2000/svg"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
width="150mm"
height="140mm"
viewBox="0 0 531.49606 496.06299"
id="svg14800"
version="1.1"
inkscape:version="0.91 r13725"
sodipodi:docname="pep-0495-gap.svg"
inkscape:export-filename="/Users/a/Work/peps/pep-0495-fold.png"
inkscape:export-xdpi="90"
inkscape:export-ydpi="90">
<defs
id="defs14802">
<marker
inkscape:stockid="DotM"
orient="auto"
refY="0"
refX="0"
id="DotM"
style="overflow:visible"
inkscape:isstock="true">
<path
id="path6980"
d="m -2.5,-1 c 0,2.76 -2.24,5 -5,5 -2.76,0 -5,-2.24 -5,-5 0,-2.76 2.24,-5 5,-5 2.76,0 5,2.24 5,5 z"
style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1"
transform="matrix(0.4,0,0,0.4,2.96,0.4)"
inkscape:connector-curvature="0" />
</marker>
<marker
inkscape:stockid="DiamondSstart"
orient="auto"
refY="0"
refX="0"
id="DiamondSstart"
style="overflow:visible"
inkscape:isstock="true">
<path
id="path7010"
d="M 0,-7.0710768 -7.0710894,0 0,7.0710589 7.0710462,0 0,-7.0710768 Z"
style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1"
transform="matrix(0.2,0,0,0.2,1.2,0)"
inkscape:connector-curvature="0" />
</marker>
<marker
inkscape:stockid="Arrow2Mend"
orient="auto"
refY="0"
refX="0"
id="Arrow2Mend"
style="overflow:visible"
inkscape:isstock="true">
<path
id="path6943"
style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:0.625;stroke-linejoin:round;stroke-opacity:1"
d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
transform="scale(-0.6,-0.6)"
inkscape:connector-curvature="0" />
</marker>
<pattern
inkscape:collect="always"
xlink:href="#pattern15623"
id="pattern15646"
patternTransform="translate(0,2.8515625e-5)" />
<pattern
inkscape:collect="always"
xlink:href="#Strips1_1"
id="pattern15599"
patternTransform="matrix(10,0,0,10,424.80508,-468.3217)" />
<pattern
inkscape:isstock="true"
inkscape:stockid="Stripes 1:1"
id="Strips1_1"
patternTransform="translate(0,0) scale(10,10)"
height="1"
width="2"
patternUnits="userSpaceOnUse"
inkscape:collect="always">
<rect
id="rect6108"
height="2"
width="1"
y="-0.5"
x="0"
style="fill:black;stroke:none" />
</pattern>
<marker
inkscape:stockid="Arrow1Lstart"
orient="auto"
refY="0"
refX="0"
id="Arrow1Lstart"
style="overflow:visible"
inkscape:isstock="true">
<path
id="path6916"
d="M 0,0 5,-5 -12.5,0 5,5 0,0 Z"
style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1"
transform="matrix(0.8,0,0,0.8,10,0)"
inkscape:connector-curvature="0" />
</marker>
<marker
inkscape:stockid="Arrow1Lend"
orient="auto"
refY="0"
refX="0"
id="Arrow1Lend"
style="overflow:visible"
inkscape:isstock="true">
<path
id="path6919"
d="M 0,0 5,-5 -12.5,0 5,5 0,0 Z"
style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1"
transform="matrix(-0.8,0,0,-0.8,-10,0)"
inkscape:connector-curvature="0" />
</marker>
<marker
inkscape:stockid="Arrow1Mend"
orient="auto"
refY="0"
refX="0"
id="Arrow1Mend"
style="overflow:visible"
inkscape:isstock="true">
<path
id="path6925"
d="M 0,0 5,-5 -12.5,0 5,5 0,0 Z"
style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1"
transform="matrix(-0.4,0,0,-0.4,-4,0)"
inkscape:connector-curvature="0" />
</marker>
<pattern
patternUnits="userSpaceOnUse"
width="265.19116"
height="51.983494"
patternTransform="translate(-424.80508,468.3217)"
id="pattern15596">
<path
inkscape:connector-curvature="0"
id="path15588"
d="m 0.376692,25.991752 0,-25.61506 132.218888,0 132.21889,0 0,25.61506 0,25.61505 -132.21889,0 -132.218888,0 0,-25.61505 z"
style="opacity:0.5;fill:url(#pattern15599);fill-opacity:1;stroke:#ffd640;stroke-width:0.75338399;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:0.75338398, 0.75338398;stroke-dashoffset:0;stroke-opacity:1" />
</pattern>
<pattern
patternUnits="userSpaceOnUse"
width="213.59843"
height="36.4331"
patternTransform="translate(-0.5,1122.7283)"
id="pattern15623">
<path
inkscape:connector-curvature="0"
id="path15613"
d="m 0.5,0.5 212.59843,0 0,17.7166 -212.59843,0 z"
style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:10;stroke-opacity:1" />
<path
inkscape:connector-curvature="0"
id="path15615"
d="m 0.5,18.2166 0,17.7165 212.59843,0 0,-17.7165"
style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:10;stroke-opacity:1" />
<path
inkscape:connector-curvature="0"
id="path15617"
d="m 0.98017929,9.3247 0,-7.9105 105.47376071,0 105.47375,0 0,7.9105 0,7.9105 -105.47375,0 -105.47376071,0 0,-7.9105 z"
style="opacity:0.5;fill:#ffd744;fill-opacity:1;stroke:#ffd744;stroke-width:0.75338399;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:7.5338397;stroke-opacity:0.50196078" />
<path
inkscape:connector-curvature="0"
id="path15621"
d="m 0.98017929,27.0292 0,-8.2872 105.47376071,0 105.47375,0 0,8.2872 0,8.2872 -105.47375,0 -105.47376071,0 0,-8.2872 z"
style="opacity:0.5;fill:#326c9c;fill-opacity:1;stroke:#326c9b;stroke-width:0.75338399;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:7.5338397;stroke-opacity:0.50196078" />
</pattern>
<pattern
patternUnits="userSpaceOnUse"
width="213.59843"
height="36.433102"
patternTransform="translate(-0.5,1122.7283)"
id="pattern15643">
<rect
id="rect15629"
y="0"
x="0"
height="36.433102"
width="213.59843"
style="fill:url(#pattern15646);stroke:none" />
</pattern>
</defs>
<sodipodi:namedview
id="base"
pagecolor="#ffffff"
bordercolor="#666666"
borderopacity="1.0"
inkscape:pageopacity="0.0"
inkscape:pageshadow="2"
inkscape:zoom="2.8284272"
inkscape:cx="215.26543"
inkscape:cy="232.89973"
inkscape:document-units="mm"
inkscape:current-layer="layer2"
showgrid="true"
inkscape:window-width="2556"
inkscape:window-height="1555"
inkscape:window-x="1"
inkscape:window-y="0"
inkscape:window-maximized="0"
objecttolerance="10000"
showborder="false"
fit-margin-top="0"
fit-margin-left="0"
fit-margin-right="0"
fit-margin-bottom="0">
<inkscape:grid
type="xygrid"
id="grid14808"
originx="37.568003"
spacingx="17.716536"
spacingy="17.716536"
empspacing="3"
originy="-71.39131" />
</sodipodi:namedview>
<metadata
id="metadata14805">
<rdf:RDF>
<cc:Work
rdf:about="">
<dc:format>image/svg+xml</dc:format>
<dc:type
rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
<dc:title></dc:title>
</cc:Work>
</rdf:RDF>
</metadata>
<g
inkscape:label="Layer 1"
inkscape:groupmode="layer"
id="layer1"
transform="translate(37.568003,-484.90789)">
<path
style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:0.99921262;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-start:url(#Arrow1Lstart);marker-end:url(#Arrow1Lend)"
d="M 476.5503,945.88825 0,946.42873 0,521.76422"
id="path14810"
inkscape:connector-curvature="0"
sodipodi:nodetypes="ccc" />
<flowRoot
xml:space="preserve"
id="flowRoot15458"
style="font-style:normal;font-weight:normal;font-size:40px;line-height:125%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"><flowRegion
id="flowRegion15460"><rect
id="rect15462"
width="159.44882"
height="106.29922"
x="-425.19687"
y="946.06299" /></flowRegion><flowPara
id="flowPara15464" /></flowRoot> <flowRoot
xml:space="preserve"
id="flowRoot15466"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:40px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"><flowRegion
id="flowRegion15468"><rect
id="rect15470"
width="159.44882"
height="88.58268"
x="212.59843"
y="1070.0787"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:40px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:center;writing-mode:lr-tb;text-anchor:middle" /></flowRegion><flowPara
id="flowPara15474" /></flowRoot> <flowRoot
xml:space="preserve"
id="flowRoot15480"
style="font-style:normal;font-weight:normal;font-size:40px;line-height:125%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"><flowRegion
id="flowRegion15482"><rect
id="rect15484"
width="70.866142"
height="53.149609"
x="212.59843"
y="1105.5118" /></flowRegion><flowPara
id="flowPara15486" /></flowRoot> <flowRoot
xml:space="preserve"
id="flowRoot15488"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:22.5px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
transform="translate(270.90867,-112.71393)"><flowRegion
id="flowRegion15490"><rect
id="rect15492"
width="265.74805"
height="88.58268"
x="159.44882"
y="1070.0787"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:22.5px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;writing-mode:lr-tb;text-anchor:start" /></flowRegion><flowPara
id="flowPara15496">UTC</flowPara></flowRoot> <text
xml:space="preserve"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:22.5px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="-570.61304"
y="-20.473276"
id="text15498"
sodipodi:linespacing="125%"
transform="matrix(0,-1,1,0,0,0)"><tspan
sodipodi:role="line"
id="tspan15500"
x="-570.61304"
y="-20.473276">local</tspan></text>
<path
style="fill:none;fill-rule:evenodd;stroke:#336d9c;stroke-width:2.12598419;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
d="M 52.152923,893.91006 266.74473,679.31828"
id="path15502"
inkscape:connector-curvature="0" />
<path
style="fill:none;fill-rule:evenodd;stroke:#336d9c;stroke-width:2.12598419;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
d="M 265.74804,733.46456 425.19686,574.01574"
id="path15504"
inkscape:connector-curvature="0" />
<path
style="fill:none;fill-rule:evenodd;stroke:#336d9c;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:1, 12;stroke-dashoffset:0;stroke-opacity:1"
d="m 265.74804,680.31496 0,53.1496 z"
id="path15678"
inkscape:connector-curvature="0"
sodipodi:nodetypes="ccc" />
<text
xml:space="preserve"
style="font-style:italic;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:20px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Italic';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="-17.04035"
y="703.841"
id="text16422"
sodipodi:linespacing="125%"><tspan
sodipodi:role="line"
id="tspan16424"
x="-17.04035"
y="703.841">t</tspan></text>
<text
xml:space="preserve"
style="font-style:italic;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:20px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Italic';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="240.81497"
y="962.27954"
id="text16438"
sodipodi:linespacing="125%"><tspan
sodipodi:role="line"
id="tspan16440"
x="240.81497"
y="962.27954">u<tspan
style="font-size:64.99999762%;baseline-shift:sub"
id="tspan16442">0</tspan></tspan></text>
<text
xml:space="preserve"
style="font-style:italic;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:20px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Italic';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="294.96457"
y="963.77954"
id="text16444"
sodipodi:linespacing="125%"><tspan
sodipodi:role="line"
id="tspan16446"
x="294.96457"
y="963.77954">u<tspan
style="font-size:64.99999762%;baseline-shift:sub"
id="tspan16448">1</tspan></tspan></text>
<path
style="fill:none;fill-rule:evenodd;stroke:#336d9c;stroke-width:7.08661413;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
d="m 212.59843,941.81299 53.14961,0"
id="path16450"
inkscape:connector-curvature="0" />
<path
style="fill:none;fill-rule:evenodd;stroke:#336d9c;stroke-width:7.08661413;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
d="m 4.2499999,733.46456 0,-53.1496"
id="path16452"
inkscape:connector-curvature="0" />
<path
style="fill:none;fill-rule:evenodd;stroke:#ffd847;stroke-width:7.08661413;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
d="m 265.74804,941.81299 53.14961,0"
id="path16454"
inkscape:connector-curvature="0" />
<text
xml:space="preserve"
style="font-style:italic;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:20px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Bold Italic';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="343.96481"
y="712.6087"
id="text16458"
sodipodi:linespacing="125%"><tspan
sodipodi:role="line"
id="tspan16460"
x="343.96481"
y="712.6087">Fold</tspan></text>
<path
style="fill:none;fill-rule:evenodd;stroke:#336d9c;stroke-width:2.12598425;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:none"
d="m 265.74804,680.31492 0,53.14961"
id="path16481"
inkscape:connector-curvature="0" />
<path
style="fill:none;fill-rule:evenodd;stroke:#ffd847;stroke-width:7.08661413;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
d="m 11.716536,733.46456 0,-53.1496"
id="path16456"
inkscape:connector-curvature="0" />
</g>
<g
inkscape:groupmode="layer"
id="layer2"
inkscape:label="Layer 2">
<path
transform="translate(37.568003,-484.90789)"
style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:6, 2;stroke-dashoffset:0;stroke-opacity:1"
d="m 0,698.03149 248.0315,0 0,247.85676"
id="path15680"
inkscape:connector-curvature="0"
sodipodi:nodetypes="ccc" />
<path
transform="translate(37.568003,-484.90789)"
style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:4, 2;stroke-dashoffset:0;stroke-opacity:1"
d="m 248.0315,698.03149 53.14961,0 0,247.85676"
id="path15682"
inkscape:connector-curvature="0"
sodipodi:nodetypes="ccc" />
<path
transform="translate(37.568003,-484.90789)"
style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:0.99921262;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:0.9992126, 11.99055118000000064;stroke-dashoffset:0;stroke-opacity:1"
d="m 0,680.31496 318.89765,0 0,265.57329"
id="path15566"
inkscape:connector-curvature="0"
sodipodi:nodetypes="ccc" />
<path
transform="translate(37.568003,-484.90789)"
style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:0.99921262;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:0.99921262, 7.99370097999999984;stroke-dashoffset:0;stroke-opacity:1"
d="m 212.59843,733.46456 0,212.42369"
id="path15676"
inkscape:connector-curvature="0"
sodipodi:nodetypes="cc" />
<path
transform="translate(37.568003,-484.90789)"
style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:1, 12;stroke-dashoffset:0;stroke-opacity:1"
d="m 0,733.46456 265.74804,0 0,211.88321"
id="path15552"
inkscape:connector-curvature="0"
sodipodi:nodetypes="ccc" />
</g>
</svg>


File diff suppressed because it is too large

View File

@ -1,205 +0,0 @@
PEP: 582
Title: Python local packages directory
Version: $Revision$
Last-Modified: $Date$
Author: Kushal Das <mail@kushaldas.in>, Steve Dower <steve.dower@python.org>,
Donald Stufft <donald@stufft.io>, Nick Coghlan <ncoghlan@gmail.com>
Discussions-To: https://discuss.python.org/t/pep-582-python-local-packages-directory/963/
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 16-May-2018
Python-Version: 3.8
Abstract
========
This PEP proposes to add to Python a mechanism to automatically recognize a
``__pypackages__`` directory and prefer importing packages installed in this
location over user or global site-packages. This will avoid the steps to create,
activate or deactivate "virtual environments". Python will use the
``__pypackages__`` from the base directory of the script when present.
Motivation
==========
Python virtual environments have become an essential part of development and
teaching workflow in the community, but at the same time, they create a barrier
to entry for many. The following are a few of the issues people run into while
being introduced to Python (or programming for the first time).
- How virtual environments work is a lot of information for anyone new. It takes
a lot of extra time and effort to explain them.
- Different platforms and shell environments require different sets of commands
to activate the virtual environments. Any workshop or teaching environment with
people coming with different operating systems installed on their laptops create a
lot of confusion among the participants.
- Virtual environments need to be activated on each opened terminal. If someone
creates/opens a new terminal, that by default does not get the same environment
as in a previous terminal with virtual environment activated.
Specification
=============
When the Python binary is executed, it attempts to determine its prefix (as
stored in ``sys.prefix``), which is then used to find the standard library and
other key files, and by the ``site`` module to determine the location of the
``site-packages`` directories. Currently the prefix is found -- assuming
``PYTHONHOME`` is not set -- by first walking up the filesystem tree looking for
a marker file (``os.py``) that signifies the presence of the standard library,
and if none is found, falling back to the build-time prefix hard coded in the
binary. The result of this process is the contents of ``sys.path`` - a list of
locations that the Python import system will search for modules.
This PEP proposes to add a new step in this process. If a ``__pypackages__``
directory is found in the current working directory, then it will be included in
``sys.path`` after the current working directory and just before the system
site-packages. This way, if the Python executable starts in the given project
directory, it will automatically find all the dependencies inside of
``__pypackages__``.
In the case of Python scripts, Python will try to find ``__pypackages__`` in the
same directory as the script. If found (along with the current Python version
directory inside), then it will be used, otherwise Python will behave as it does
currently.
If any package management tool finds the same ``__pypackages__`` directory in
the current working directory, it will install any packages there, and will
also create it (including the Python version subdirectory) if required.
Projects that use a source management system can include a ``__pypackages__``
directory (empty or with e.g. a file like ``.gitignore``). After doing a fresh
checkout of the source code, a tool like ``pip`` can be used to install the
required dependencies directly into this directory.
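As an illustrative sketch of the lookup described above (the helper
name is hypothetical), the directory this PEP adds to ``sys.path`` for
a given script can be computed as::

    import sys
    from pathlib import Path

    def pypackages_lib(script_path):
        # Version-specific subdirectory, e.g. "3.8" under Python 3.8.
        version = "{}.{}".format(*sys.version_info[:2])
        base = Path(script_path).resolve().parent
        return base / "__pypackages__" / version / "lib"

    # For foo/myscript.py under Python 3.8 this yields
    # foo/__pypackages__/3.8/lib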
Example
-------
The following shows an example project directory structure, and different ways
the Python executable and any script will behave.
::
foo
__pypackages__
3.8
lib
bottle
myscript.py
/> python foo/myscript.py
sys.path[0] == 'foo'
sys.path[1] == 'foo/__pypackages__/3.8/lib'
cd foo
foo> /usr/bin/ansible
#! /usr/bin/env python3
foo> python /usr/bin/ansible
foo> python myscript.py
foo> python
sys.path[0] == '.'
sys.path[1] == './__pypackages__/3.8/lib'
foo> python -m bottle
We have a project directory called ``foo`` and it has a ``__pypackages__``
inside of it. We have ``bottle`` installed in that
``__pypackages__/3.8/lib``, and have a ``myscript.py`` file inside of the
project directory. We have used whatever tool we generally use to install ``bottle``
in that location.
When invoking a script, Python will try to find a ``__pypackages__`` inside
the directory where the script resides [1]_, here ``/usr/bin``. The same
happens in the last example, where we execute ``/usr/bin/ansible`` from
inside the ``foo`` directory. In both cases, it will **not** use the
``__pypackages__`` in the current working directory.
Similarly, if we invoke ``myscript.py`` from the first example, it will use the
``__pypackages__`` directory that was in the ``foo`` directory.
If we go inside of the ``foo`` directory and start the Python executable (the
interpreter), it will find the ``__pypackages__`` directory inside of the
current working directory and use it in ``sys.path``. The same happens if we
use the ``-m`` switch with a module. In our example, the ``bottle`` module will
be found inside of the ``__pypackages__`` directory.
The above two examples are the only cases where the ``__pypackages__``
directory from the current working directory is used.
In another example scenario, a trainer of a Python class can say "Today we are
going to learn how to use Twisted! To start, please checkout our example
project, go to that directory, and then run ``python3 -m pip install twisted``."
That will install Twisted into a directory separate from ``python3``. There's no
need to discuss virtual environments, global versus user installs, etc. as the
install will be local by default. The trainer can then just keep telling them to
use ``python3`` without any activation step, etc.
.. [1] In the case of symlinks, it is the directory where the actual script
   resides, not the symlink pointing to the script.
Security Considerations
=======================
When executing a Python script, Python will not consider the ``__pypackages__``
in the current directory; instead, if there is a ``__pypackages__`` directory in
the same directory as the script, that will be used.
For example, if we execute ``python /usr/share/myproject/fancy.py`` from the
``/tmp`` directory and if there is a ``__pypackages__`` directory inside of
``/usr/share/myproject/`` directory, it will be used. Any potential
``__pypackages__`` directory in ``/tmp`` will be ignored.
Backwards Compatibility
=======================
This does not affect any older versions of Python.
Impact on other Python implementations
--------------------------------------
Other Python implementations will need to replicate the new behavior of the
interpreter bootstrap, including locating the ``__pypackages__`` directory and
adding it to ``sys.path`` just before site packages, if it is present.
Reference Implementation
========================
`Here <https://github.com/kushaldas/cpython/tree/pypackages>`_ is a PoC
implementation (in the ``pypackages`` branch).
Rejected Ideas
==============
``__pylocal__`` and ``python_modules``.
Copyright
=========
This document has been placed in the public domain.
..
Local Variables:
mode: indented-text
indent-tabs-mode: nil
sentence-end-double-space: t
fill-column: 80
coding: utf-8
End:


View File

@ -1,168 +0,0 @@
PEP: 625
Title: File name of a Source Distribution
Author: Tzu-ping Chung <uranusjr@gmail.com>,
Paul Moore <p.f.moore@gmail.com>
Discussions-To: https://discuss.python.org/t/draft-pep-file-name-of-a-source-distribution/4686
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 08-Jul-2020
Post-History: 08-Jul-2020
Abstract
========
This PEP describes a standard naming scheme for a Source Distribution, also
known as an *sdist*. This scheme distinguishes an sdist from an arbitrary
archive file containing source code of Python packages, and can be used to
communicate information about the distribution to packaging tools.
A standard sdist specified here is a gzipped tar file with a specially
formatted file stem and a ``.sdist`` suffix. This PEP does not specify the
contents of the tarball.
Motivation
==========
An sdist is a Python package distribution that contains "source code" of the
Python package, and requires a build step to be turned into a wheel on
installation. This format is often considered as an unbuilt counterpart of a
:pep:`427` wheel, and given special treatments in various parts of the
packaging ecosystem.
Compared to wheel, however, the sdist is entirely unspecified, and currently
works by convention. The widely accepted format of an sdist is defined by the
implementation of distutils and setuptools, which creates a source code
archive in a predictable format and file name scheme. Installers exploit this
predictability to assign this format certain contextual information that helps
the installation process. pip, for example, parses the file name of an sdist
from a :pep:`503` index, to obtain the distribution's project name and version
for dependency resolution purposes. But due to the lack of specification,
the installer does not have any guarantee as to the correctness of the inferred
information, and must verify it at some point by locally building the distribution
metadata.
This build step is awkward for a certain class of operations, when the user
does not expect the build process to occur. `pypa/pip#8387`_ describes an
example. The command ``pip download --no-deps --no-binary=numpy numpy`` is
expected to only download an sdist for numpy, since we do not need to check
for dependencies, and both the name and version are available by introspecting
the downloaded file name. pip, however, cannot assume the downloaded archive
follows the convention, and must build and check the metadata. For a :pep:`518`
project, this means running the ``prepare_metadata_for_build_wheel`` hook
specified in :pep:`517`, which incurs significant overhead.
Rationale
=========
By creating a special file name scheme for the sdist format, this PEP frees up
tools from the time-consuming metadata verification step when they only need
the metadata available in the file name.
This PEP also serves as the formal specification of the long-standing
file name convention used by the current sdist implementations. The file name
contains the distribution name and version, to aid tools in identifying a
distribution without needing to download, unarchive the file, and perform
costly metadata generation for introspection, if all the information they need
is available in the file name.
Specification
=============
The name of an sdist should be ``{distribution}-{version}.sdist``.
* ``distribution`` is the name of the distribution as defined in :pep:`345`,
and normalised according to :pep:`503`, e.g. ``'pip'``, ``'flit-core'``.
* ``version`` is the version of the distribution as defined in :pep:`440`,
e.g. ``20.2``.
Each component is escaped according to the same rules as :pep:`427`.
An sdist must be a gzipped tar archive that is able to be extracted by the
standard library ``tarfile`` module with the open flag ``'r:gz'``.
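As an illustration, a tool could take apart a file name following this
scheme with a few lines of Python (a sketch; it assumes the
:pep:`427`-style escaping above leaves no ``-`` in the distribution
part)::

    def parse_sdist_name(filename):
        # Hypothetical helper: '{distribution}-{version}.sdist' -> components.
        stem, sep, suffix = filename.rpartition(".")
        if not sep or suffix != "sdist":
            raise ValueError("not an sdist file name")
        distribution, _, version = stem.partition("-")
        return distribution, version

    parse_sdist_name("pip-20.2.sdist")  # ('pip', '20.2')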
Backwards Compatibility
=======================
The new file name scheme should not incur backwards incompatibility in
existing tools. Installers are likely to have already implemented logic to
exclude extensions they do not understand, since they already need to deal
with legacy formats on PyPI such as ``.rpm`` and ``.egg``. They should be able
to correctly ignore files with extension ``.sdist``.
pip, for example, skips this extension with the following debug message::
Skipping link: unsupported archive format: sdist: <URL to file>
setuptools, on the other hand, ignores it silently.
Rejected Ideas
==============
Create specification for sdist metadata
---------------------------------------
The topic of creating a trustworthy, standard sdist metadata format as a means
to distinguish sdists from arbitrary archive files has been raised and
discussed multiple times, but has yet to make significant progress due to
the complexity of potential metadata inconsistency between an sdist and a
wheel built from it.
This PEP does not exclude the possibility of creating a metadata specification
for sdists in the future. But by specifying only the file name of an sdist, a
tool can reliably identify an sdist, and perform useful introspection on its
identity, without going into the details required for metadata specification.
Use a currently common sdist naming scheme
------------------------------------------
There is a currently established practice to name an sdist in the format of
``{distribution}-{version}.[tar.gz|zip]``.
Popular source code management services use a similar scheme to name the
downloaded source archive. GitHub, for example, uses ``distribution-1.0.zip``
as the archive name containing source code of repository ``distribution`` on
branch ``1.0``. Giving this scheme a special meaning would cause confusion
since a source archive may not be a valid sdist.
Augment a currently common sdist naming scheme
----------------------------------------------
A scheme ``{distribution}-{version}.sdist.tar.gz`` was raised during the
initial discussion. This was abandoned due to backwards compatibility issues
with currently available installation tools. pip 20.1, for example, would
parse ``distribution-1.0.sdist.tar.gz`` as project ``distribution`` with
version ``1.0.sdist``. This would cause the sdist to be downloaded, but fail to
install due to inconsistent metadata.
The same problem exists for all common archive suffixes. To avoid confusing
old installers, the sdist scheme must use a suffix that they do not identify
as an archive.
References
==========
.. _`pypa/pip#8387`: https://github.com/pypa/pip/issues/8387
Copyright
=========
This document is placed in the public domain or under the CC0-1.0-Universal
license, whichever is more permissive.
..
Local Variables:
mode: indented-text
indent-tabs-mode: nil
sentence-end-double-space: t
fill-column: 70
coding: utf-8
End:

View File

@ -1,904 +0,0 @@
PEP: 639
Title: Metadata for Python Software Packages 2.2
Version: $Revision$
Last-Modified: $Date$
Author: Philippe Ombredanne <pombredanne at nexb.com>
Sponsor: Paul Moore <p.f.moore at gmail.com>
BDFL-Delegate: Paul Moore <p.f.moore at gmail.com>
Discussions-To: https://discuss.python.org/t/2154
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 15-Aug-2019
Python-Version: 3.x
Post-History:
Replaces: 566
Resolution:
Abstract
========
This PEP describes the changes between versions 2.1 and 2.2 of the `Core
Metadata Specification` [#cms]_ for Python packages. Version 2.1 is specified in
PEP 566.
The primary change introduced in this PEP updates how licenses are documented in
core metadata via the ``License`` field with license expression strings using
SPDX license identifiers [#spdxlist]_ such that license documentation is simpler
and less ambiguous:
- for package authors to create,
- for package users to read and understand, and,
- for tools to process package license information mechanically.
The other changes include:
- specifying a ``License-File`` field which is already used by ``wheel`` and
``setuptools`` to include license files in built distributions.
- defining how tools can validate license expressions and report warnings to
users for invalid expressions (but still accept any string as ``License``).
Goals
=====
This PEP's scope is limited strictly to how we document the license of a
distribution:
- with an improved and structured way to document a license expression, and,
- by including license texts in a built package.
The core metadata specification updates that are part of this PEP have been
designed to have minimal impact and to be backward compatible with v2.1. These
changes utilize emerging new ways to document licenses that are already in use
in some tools (e.g. by adding the ``License-File`` field already used in
``wheel`` and ``setuptools``) or by some package authors (e.g. storing an SPDX
license expression in the existing ``License`` field).
In addition to an update to the metadata specification, this PEP contains:
- recommendations for publishing tools on how to validate the ``License`` and
``Classifier`` fields and report informational warnings when a package uses an
older, non-structured style of license documentation conventions.
- informational appendices that contain surveys of how we document licenses
today in Python packages and elsewhere, and a reference Python library to
parse, validate and build correct license expressions.
It is the intent of the PEP authors to work closely with tool authors to
implement the recommendations for validation and warnings specified in this PEP.
Non-Goals
=========
This PEP is neutral regarding the choice of license by any package author.
In particular, the SPDX license expression syntax proposed in this PEP provides
simpler and more expressive conventions to document accurately any kind of
license that applies to a Python package, whether it is an open source license,
a free or libre software license, a proprietary license, or a combination of
such licenses.
This PEP makes no recommendation for specific licenses and does not require the
use of specific license documentation conventions. This PEP also does not impose
any restrictions when uploading to PyPI.
Instead, this PEP is intended to document common practices already in use, and
recommends that publishing tools should encourage users via informational
warnings when they do not follow this PEP's recommendations.
This PEP is not about license documentation in files inside packages, even
though this is a surveyed topic in the appendix.
Possible future PEPs
--------------------
It is the intention of the authors of this PEP to consider the submission of
related but separate PEPs in the future such as:
- make ``License`` and new ``License-File`` fields mandatory including
stricter enforcement in tools and PyPI publishing.
- require uploads to PyPI to use only FOSS (Free and Open Source software)
licenses.
Motivation
==========
Software is licensed, and providing accurate licensing information to Python
packages users is an important matter. Today, there are multiple places where
licenses are documented in package metadata and there are limitations to what
can be documented. This often leads to confusion or a lack of clarity both
for package authors and package users.
Several package authors have expressed difficulty and/or frustrations due to the
limited capabilities to express licensing in package metadata. This also applies
to Linux and BSD* distribution packagers. This has triggered several
license-related discussions and issues, in particular:
- https://github.com/pypa/trove-classifiers/issues/17
- https://github.com/pypa/interoperability-peps/issues/46
- https://github.com/pypa/packaging-problems/issues/41
- https://github.com/pypa/wheel/issues/138
- https://github.com/pombredanne/spdx-pypi-pep/issues/1
On average, Python packages tend to have more ambiguous, or missing, license
information than other common application package formats (such as npm, Maven or
Gem) as can be seen in the statistics [#cdstats]_ page of the ClearlyDefined
[#cd]_ project that cover all packages from PyPI, Maven, npm and Rubygems.
ClearlyDefined is an open source project to help improve clarity of other open
source projects that is incubating at the OSI (Open Source Initiative) [#osi]_.
Rationale
=========
A mini-survey of existing license metadata definitions in use in the Python
ecosystem today and documented in several other system/distro and application
package formats is provided in Appendix 2 of this PEP.
There are a few takeaways from the survey:
- Most package formats use a single ``License`` field.
- Many modern package formats use some form of license expression syntax to
optionally combine more than one license identifier together. SPDX and
SPDX-like syntaxes are the most popular in use.
- SPDX license identifiers are becoming a de facto way to reference common
licenses everywhere, whether or not a license expression syntax is used.
- Several package formats support documenting both a license expression and the
paths of the corresponding files that contain the license text. Most free and
open source software licenses require package authors to include their full
text in a distribution.
These considerations have guided the design and recommendations of this PEP.
The reuse of the ``License`` field with license expressions will provide an
intuitive and more structured way to express the license of a distribution using
a well-defined syntax and well-known license identifiers.
Over time, recommending the usage of these expressions will help Python package
publishers improve the clarity of their license documentation to the benefit of
package authors, consumers and redistributors.
Core Metadata Specification updates
===================================
The canonical source for the names and semantics of each of the supported
metadata fields is the Core Metadata Specification [#cms]_ document.
The details of the updates considered to the Core Metadata Specification [#cms]_
document as part of this PEP are described here and will be added to the
canonical source once this PEP is approved.
Added in Version 2.2
--------------------
License-File (multiple use)
:::::::::::::::::::::::::::
The ``License-File`` field is a string that is a path, relative to
``.dist-info``, to a license file. The license file content MUST be UTF-8
encoded text.
Build tools SHOULD honor this field and include the corresponding license
file(s) in the built package.
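As an illustration, a tool could resolve the license files of an
installed distribution roughly as follows (a sketch; the ``METADATA``
parsing is deliberately simplified and the helper name is
hypothetical)::

    from pathlib import Path

    def iter_license_files(dist_info):
        # License-File values are paths relative to the .dist-info directory.
        metadata = (Path(dist_info) / "METADATA").read_text(encoding="utf-8")
        for line in metadata.splitlines():
            if line.startswith("License-File:"):
                yield Path(dist_info) / line.split(":", 1)[1].strip()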
Changed in Version 2.2
----------------------
License (optional)
::::::::::::::::::
Text indicating the license covering the distribution. This text can be either a
valid license expression as defined here or any free text.
Publishing tools SHOULD issue an informational warning if this field is empty,
missing, or is not a valid license expression as defined here. Build tools MAY
issue a similar warning.
License Expression syntax
'''''''''''''''''''''''''
A license expression is a string using the SPDX license expression syntax as
documented in the SPDX specification [#spdx]_ using either Version 2.2
[#spdx22]_ or a later compatible version. SPDX is a working group at the Linux
Foundation that defines a standard way to exchange package information.
When used in the ``License`` field and as a specialization of the SPDX license
expression definition, a license expression can use the following license
identifiers:
- any SPDX-listed license short-form identifiers that are published in the SPDX
License List [#spdxlist]_ using either Version 3.10 or any later compatible
version. Note that the SPDX working group never removes any license
identifiers: instead they may choose to mark an identifier as "deprecated".
- the ``LicenseRef-Public-Domain`` and ``LicenseRef-Proprietary`` strings to
identify licenses that are not included in the SPDX license list.
When processing the ``License`` field to determine if it contains a valid
license expression, tools:
- SHOULD report an informational warning if one or more of the following
applies:
- the field does not contain a license expression,
- the license expression syntax is invalid,
- the license expression syntax is valid but some license identifiers are
unknown as defined here or the license identifiers have been marked as
deprecated in the SPDX License List [#spdxlist]_
- SHOULD store a case-normalized version of the ``License`` field using the
reference case for each SPDX license identifier and uppercase for the AND, OR
and WITH keywords.
- SHOULD report an informational warning if the normalization process results in
changes to the ``License`` field contents.
License expression examples::
License: MIT
License: BSD-3-Clause
License: MIT OR GPL-2.0-or-later OR (FSFUL AND BSD-2-Clause)
License: GPL-3.0-only WITH Classpath-Exception-2.0 OR BSD-3-Clause
License: This software may only be obtained by sending the
author a postcard, and then the user promises not
to redistribute it.
License: LicenseRef-Proprietary AND LicenseRef-Public-Domain
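As a rough illustration of the normalization rules above, a toy
normalizer could look like the following (a sketch with a deliberately
tiny identifier table; real tools should use the full SPDX list and a
proper expression parser)::

    KEYWORDS = {"and": "AND", "or": "OR", "with": "WITH"}
    # Tiny sample of SPDX reference spellings; the real list is much larger.
    KNOWN_IDS = {"mit": "MIT", "bsd-3-clause": "BSD-3-Clause",
                 "gpl-2.0-or-later": "GPL-2.0-or-later"}

    def normalize(expression):
        tokens = expression.replace("(", " ( ").replace(")", " ) ").split()
        return " ".join(KEYWORDS.get(t.lower(), KNOWN_IDS.get(t.lower(), t))
                        for t in tokens)

    normalize("mit or gpl-2.0-or-later")  # 'MIT OR GPL-2.0-or-later'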
Classifier (multiple use)
:::::::::::::::::::::::::
Each entry is a string giving a single classification value for the
distribution. Classifiers are described in PEP 301.
Examples::
Classifier: Development Status :: 4 - Beta
Classifier: Environment :: Console (Text Based)
Tools SHOULD issue an informational warning if this field contains a
licensing-related classifier string starting with the ``License ::`` prefix and
SHOULD suggest the use of a license expression in the ``License`` field instead.
If the ``License`` field is present and contains a valid license expression,
publishing tools MUST NOT also provide any licensing-related classifier entries
[#classif]_.
However, for compatibility with existing publishing and installation processes,
licensing-related classifier entries SHOULD continue to be accepted if the
``License`` field is absent or does not contain a valid license expression.
Publishing tools MAY infer a license expression from the provided classifier
entries if they are able to do so unambiguously.
However, no new licensing-related classifiers will be added; anyone
requesting them will be directed to use a license expression in the ``License``
field instead. Note that the licensing-related classifiers may be deprecated in
a future PEP.
Mapping Legacy Classifiers to New License Expressions
'''''''''''''''''''''''''''''''''''''''''''''''''''''
Publishing tools MAY infer or suggest an equivalent license expression from the
provided ``License`` or ``Classifier`` information if they are able to do so
unambiguously. For instance, if a package only has this license classifier::
Classifier: License :: OSI Approved :: MIT License
Then the corresponding value for a ``License`` field using a valid license
expression to suggest would be::
License: MIT
Here are mapping guidelines for the legacy classifiers:
- Classifier ``License :: Other/Proprietary License`` becomes License:
``LicenseRef-Proprietary`` expression.
- Classifier ``License :: Public Domain`` becomes License: ``LicenseRef-Public-Domain``
expression, though tools should encourage the use of more explicit and legally
portable license identifiers such as ``CC0-1.0`` [#cc0]_, the ``Unlicense``
[#unlic]_ since the meaning associated with the term "public domain" is thoroughly
dependent on the specific legal jurisdiction involved and some jurisdictions
have no concept of Public Domain as it exists in the USA.
- The generic and ambiguous classifiers ``License :: OSI Approved`` and
``License :: DFSG approved`` do not have an equivalent license expression.
- The generic and sometimes ambiguous classifiers
``License :: Free For Educational Use``, ``License :: Free For Home Use``,
``License :: Free for non-commercial use``, ``License :: Freely Distributable``,
``License :: Free To Use But Restricted``, and ``License :: Freeware`` are mapped
to the generic License: ``LicenseRef-Proprietary`` expression.
- Classifiers ``License :: GUST*`` have no mapping to SPDX license identifiers
for now, and no package on PyPI uses them as of the writing of this PEP.
The remainder of the classifiers using a ``License ::`` prefix map to a simple
single-identifier license expression using the corresponding SPDX license identifiers.
When multiple license-related classifiers are used, their relation is ambiguous
and it is typically not possible to determine if all the licenses apply or if
there is a choice that is possible among the licenses. In this case, tools
cannot reliably infer a license expression and should suggest that the package
author construct a license expression which expresses their intent.
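As an example of these guidelines, a publishing tool could encode the
unambiguous part of the mapping in a small table and only make a
suggestion when exactly one license-related classifier is present (a
sketch; entries and helper name are illustrative)::

    # Partial mapping following the guidelines above.
    CLASSIFIER_TO_EXPRESSION = {
        "License :: OSI Approved :: MIT License": "MIT",
        "License :: Other/Proprietary License": "LicenseRef-Proprietary",
        "License :: Public Domain": "LicenseRef-Public-Domain",
    }

    def suggest_license(classifiers):
        license_classifiers = [c for c in classifiers
                               if c.startswith("License ::")]
        if len(license_classifiers) != 1:
            return None  # ambiguous: let the package author decide
        return CLASSIFIER_TO_EXPRESSION.get(license_classifiers[0])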
Summary of Differences From PEP 566
===================================
* Metadata-Version is now 2.2.
* Added one new field: ``License-File``
* Updated the documentation of two fields: ``License`` and ``Classifier``
Backwards Compatibility
=======================
The reuse of the ``License`` field means that we keep backward
compatibility. The specification of the ``License-File`` field is only writing
down the practices of the ``wheel`` and ``setuptools`` tools and is backward
compatible with their support for that field.
The "soft" validation of the ``License`` field when it does not contain a valid
license expression and when the ``Classifier`` field is used with legacy
license-related classifiers means that we can gently prepare users for possible
strict and incompatible validation of these fields in the future.
Security Implications
=====================
This PEP has no foreseen security implications: the License field is a plain
string and the License-File(s) are file paths. None of them introduces any new
security concern.
How to Teach Users to Use License Expressions
=============================================
The simple cases are simple: a single license identifier is a valid license
expression and a large majority of packages use a single license.
The plan to teach users of packaging tools how to express their package's
license with a valid license expression is to have tools issue informative
messages when they detect invalid license expressions or when a license-related
classifier is used in the ``Classifier`` field.
With a warning message that does not terminate processing, publishing tools will
gently teach users how to provide correct license expressions over time.
Tools may also help with the conversion and suggest a license expression in some
cases:
1. The section `Mapping Legacy Classifiers to New License expressions`_ provides
tool authors with guidelines on how to suggest a license expression produced
from legacy classifiers.
2. Tools may also be able to infer and suggest how to update an existing
incorrect ``License`` value and convert that to a correct license expression.
For instance a tool may suggest to correct a ``License`` field from
``Apache2`` (which is not a valid license expression as defined in this PEP)
to ``Apache-2.0`` (which is a valid license expression using an SPDX license
identifier as defined in this PEP).
Reference Implementation
========================
Tools will need to support parsing and validating license expressions in the
``License`` field.
The ``license-expression`` library [#licexp]_ is a reference Python
implementation of a library that handles license expressions including parsing,
validating and formatting license expressions using flexible lists of license
symbols (including SPDX license identifiers and any extra identifiers referenced
here). It is licensed under the Apache-2.0 license and is used in a few projects
such as the SPDX Python tools [#spdxpy]_, the ScanCode toolkit [#scancodetk]_
and the Free Software Foundation Europe (FSFE) Reuse project [#reuse]_.
Rejected ideas
==============
1. Use a new ``License-Expression`` field and deprecate the ``License`` field.
Adding a new field would introduce backward incompatible changes when the
``License`` field would be retired later and require having more complex
validation. The use of such a field would further introduce a new concept that
is not seen anywhere else in any other package metadata (e.g. a new field only
for license expression) and possibly be a source of confusion. Also, users are
less likely to start using a new field than make small adjustments to their use
of existing fields.
2. Mapping licenses used in the license expression to specific files in the
license files (or vice versa).
This would require using a mapping (two parallel lists would be too prone to
alignment errors) and a mapping would bring extra complication to how licenses
are documented by adding an additional nesting level.
A mapping would be needed because you cannot guarantee a one-to-one relation
between expressions and license files: a single file may cover a whole
expression (e.g. a GPL license with its exception), while a single license may
come with more than one file (e.g. an Apache license ``LICENSE`` and its
``NOTICE`` file are two distinct files). Yet in most cases, there is a simpler "one license", "one or
more license files". In the rarer and more complex cases where there are many
licenses involved you can still use the proposed conventions at the cost of a
slight loss of clarity by not specifying which text file is for which license
identifier, but you are not forcing the more complex data model (e.g. a mapping)
on everyone that may not need it.
We could of course have a data field with multiple possible value types (its a
string, its a list, its a mapping!) but this could be a source of confusion.
This is what has been done for instance in npm (historically) and in Rubygems
(still today) and as result you need to test the type of the metadata field
before using it in code and users are confused about when to use a list or a
string.
3. Mapping licenses to specific source files and/or directories of source files
(or vice versa).
File-level notices are not considered as part of the scope of this PEP and the
existing ``SPDX-License-Identifier`` [#spdxids]_ convention can be used and
may not need further specification as a PEP.
Appendix 1. License Expression example
======================================
The current version of ``setuptools`` metadata [#setuptools5030]_ does not use
the ``License`` field. Instead, it uses this license-related information in
``setup.cfg``::
license_file = LICENSE
classifiers =
License :: OSI Approved :: MIT License
The simplest migration to this PEP would consist of using this instead::
license = MIT
license_files =
LICENSE
Another possibility would be to include the licenses of the third-party packages
that are vendored in the ``setuptools/_vendor/`` and ``pkg_resources/_vendor``
directories::
appdirs==1.4.3
packaging==20.4
pyparsing==2.2.1
ordered-set==3.1.1
The license expressions for these packages are::
appdirs: MIT
packaging: Apache-2.0 OR BSD-2-Clause
pyparsing: MIT
ordered-set: MIT
Therefore, comprehensive metadata covering both ``setuptools`` proper and its
vendored packages could combine all the license expressions into one::
license = MIT AND (Apache-2.0 OR BSD-2-Clause)
license_files =
LICENSE.MIT
LICENSE.packaging
Here we would assume that the ``LICENSE.MIT`` file contains the text of the MIT
license and the copyrights used by ``setuptools``, ``appdirs``, ``pyparsing`` and
``ordered-set``, and that the ``LICENSE.packaging`` file contains the texts of the
Apache-2.0 and BSD-2-Clause licenses, their copyrights, and the license choice
notice [#packlic]_.
Appendix 2. Surveying how we document licenses today in Python
==============================================================
There are multiple ways used or recommended to document Python package
licenses today:
In Core metadata
----------------
There are two overlapping core metadata fields to document a license: the
license-related ``Classifier`` strings [#classif]_ prefixed with ``License ::`` and
the ``License`` field as free text [#licfield]_.
The core metadata documentation for the ``License`` field is currently::
License (optional)
::::::::::::::::::
Text indicating the license covering the distribution where the license
is not a selection from the "License" Trove classifiers. See
"Classifier" below. This field may also be used to specify a
particular version of a license which is named via the ``Classifier``
field, or to indicate a variation or exception to such a license.
Examples::
License: This software may only be obtained by sending the
author a postcard, and then the user promises not
to redistribute it.
License: GPL version 3, excluding DRM provisions
Even though there are two fields, it is at times difficult to convey anything
but simple licensing. For instance, some classifiers lack accuracy (GPL
without a version), and when multiple license-related classifiers are listed
it is not clear whether all of them apply or whether they represent a choice,
and if so which ones. Furthermore, the list of available license-related
classifiers is often out-of-date.
In the PyPA ``sampleproject``
-----------------------------
The latest PyPA ``sampleproject`` recommends using only classifiers in
``setup.py`` and does not list the ``license`` field in its example
``setup.py`` [#samplesetup]_.
The License Files in wheels and setuptools
------------------------------------------
Beyond a license code or qualifier, license text files are documented and
included in a built package either implicitly or explicitly, and this is
another possible source of confusion:
- In wheels [#wheels]_, license files are automatically added to the ``.dist-info``
  directory if they match one of a few common license file name patterns (such
  as LICENSE*, COPYING*). Alternatively, a package author can specify a list of
  license file paths to include in the built wheel using the
  ``license_files`` field in the ``[metadata]`` section of the project's
  ``setup.cfg``. Previously this was a (singular) ``license_file`` attribute,
  which is now deprecated but still in common use. See [#pipsetup]_ for
  instance.
- In ``setuptools`` [#setuptoolssdist]_, a ``license_file`` attribute is used to add
  a single license file to a source distribution. This singular version is
  still honored by ``wheel`` for backward compatibility.
- Using a LICENSE.txt file is encouraged in the packaging guide [#packaging]_
paired with a ``MANIFEST.in`` entry to ensure that the license file is included
in a built source distribution (sdist).
Note: the ``License-File`` field proposed in this PEP already exists in ``wheel``
and ``setuptools`` with the same behaviour as explained above. This PEP only
recognizes and documents the existing practice as used in ``wheel`` (with the
``license_file`` and ``license_files`` ``setup.cfg`` ``[metadata]`` entries) and in
the ``setuptools`` ``license_file`` ``setup()`` argument.
In Python code files
--------------------
(Note: Documenting licenses in source code is not in the scope of this PEP)
Besides comments and/or the ``SPDX-License-Identifier`` convention, the license
is sometimes documented in Python code files using "dunder" variables, typically
named after one of the lower-cased core metadata fields, such as ``__license__``
[#pycode]_.
This convention (dunder global variables) is recognized by the built-in ``help()``
function and the standard ``pydoc`` module. The dunder variable(s) will show up in
the ``help()`` DATA section for a module.
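For example (illustrative only; this convention is not part of the core
metadata)::

    # mymodule.py
    """An example module."""

    __license__ = "MIT"

    # ``help(mymodule)`` at the REPL, or ``python -m pydoc mymodule``,
    # lists __license__ under the module's DATA section.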
In some other Python packaging tools
------------------------------------
- Conda package manifest [#conda]_ has support for ``license`` and ``license_file``
fields as well as a ``license_family`` license grouping field.
- ``flit`` [#flit]_ recommends using classifiers instead of the ``License``
  field (as per the current metadata spec).
- ``pbr`` [#pbr]_ uses similar data as ``setuptools``, but always stored in
  ``setup.cfg``.
- ``poetry`` [#poetry]_ specifies the use of the ``license`` field in
``pyproject.toml`` with SPDX license identifiers.
Appendix 3. Surveying how other package formats document licenses
=================================================================
Here is a survey of how things are done elsewhere.
License in Linux distribution packages
---------------------------------------
Note: in most cases the license texts of the most common licenses are included
globally once in a shared documentation directory (e.g. /usr/share/doc).
- Debian documents package licenses with machine-readable copyright files
  [#dep5]_. This specification defines its own license expression syntax, which
  is very similar to the SPDX syntax, and uses its own list of license
  identifiers for common licenses (also closely related to SPDX identifiers).
- Fedora packages [#fedora]_ specify how to include license texts
  [#fedoratext]_ and how to use a ``License`` field [#fedoralic]_ that must be
  filled with appropriate short license identifier(s) from an extensive list
  of "Good Licenses" identifiers [#fedoralist]_. Fedora also defines its own
  license expression syntax, very similar to the SPDX syntax.
- openSUSE packages [#opensuse]_ use SPDX license expressions with
SPDX license identifiers and a list of extra license identifiers
[#opensuselist]_.
- Gentoo ebuilds use a ``LICENSE`` variable [#gentoo]_. This field is specified
  in GLEP-0023 [#glep23]_ and in the Gentoo development manual [#gentoodev]_.
  Gentoo also defines a license expression syntax and a list of allowed
  licenses. The expression syntax is rather different from SPDX.
- FreeBSD package Makefiles [#freebsd]_ provide ``LICENSE`` and
  ``LICENSE_FILE`` fields with a list of custom license symbols. For
  non-standard licenses, FreeBSD recommends using ``LICENSE=UNKNOWN`` and adding
  ``LICENSE_NAME`` and ``LICENSE_TEXT`` fields, as well as a sophisticated
  ``LICENSE_PERMS`` field to qualify the license permissions and
  ``LICENSE_GROUPS`` to document a license grouping. ``LICENSE_COMB`` allows
  documenting more than one license and how they apply together, forming a
  custom license expression syntax. FreeBSD also recommends the use of
  ``SPDX-License-Identifier`` in source code files.
- Arch Linux PKGBUILDs [#archinux]_ define their own license identifiers
  [#archlinuxlist]_. The value ``'unknown'`` can be used if the license is not
  defined.
- OpenWRT ipk packages [#openwrt]_ use the ``PKG_LICENSE`` and
``PKG_LICENSE_FILES`` variables and recommend the use of SPDX License
identifiers.
- NixOS uses SPDX identifiers [#nixos]_ and some extra license identifiers in
its license field.
- GNU Guix (based on NixOS) has a single license field, uses its own list of
  license symbols [#guix]_, and specifies using one license or a list of
  licenses [#guixlic]_.
- Alpine Linux packages [#alpine]_ recommend using SPDX identifiers in the
license field.
License in Language and Application packages
--------------------------------------------
- In Java, Maven POMs [#maven]_ define a ``licenses`` XML tag containing a list
  of license items, each with a name, URL, comments and "distribution" type.
  This is not mandatory, and the content of each field is not specified.
- JavaScript npm ``package.json`` [#npm]_ uses a single ``license`` field with an
  SPDX license expression, or the ``UNLICENSED`` id if no license is specified.
  Alternatively, a license file can be referenced using "SEE LICENSE IN
  <filename>" in that single ``license`` field.
- Rubygems gemspec [#gem]_ specifies either a singular license string or a list
of license strings. The relationship between multiple licenses in a list is
not specified. They recommend using SPDX license identifiers.
- CPAN Perl modules [#perl]_ use a single license field which is either a single
string or a list of strings. The relationship between the licenses in a list
is not specified. There is a list of custom license identifiers plus
these generic identifiers: ``open_source``, ``restricted``, ``unrestricted``,
``unknown``.
- Rust Cargo [#cargo]_ specifies the use of an SPDX license expression (v2.1) in
the ``license`` field. It also supports an alternative expression syntax using
slash-separated SPDX license identifiers. There is also a ``license_file``
field. The crates.io package registry [#cratesio]_ requires that either
``license`` or ``license_file`` fields are set when you upload a package.
- PHP Composer ``composer.json`` [#composer]_ uses a ``license`` field with an
  SPDX license id or "proprietary". The field is either a single string, which
  may use something resembling the SPDX license expression syntax with "and"
  and "or" keywords, or a list of strings if there is a choice of licenses
  (a "disjunctive" choice of license).
- NuGet packages [#nuget]_ previously used only a simple license URL; they now
  specify using an SPDX license expression and/or the path to a license file
  within the package. The NuGet.org repository states that it accepts only
  license expressions that are "approved by the Open Source Initiative or the
  Free Software Foundation."
- Go language modules ``go.mod`` have no provision for any metadata beyond
dependencies. Licensing information is left for code authors and other
community package managers to document.
- The Dart/Flutter spec [#flutter]_ recommends using a single ``LICENSE`` file
  that should contain all the license texts, each separated by a line of 80
  hyphens.
- The JavaScript Bower [#bower]_ ``license`` field is either a single string or
  a list of strings, using either SPDX license identifiers, or a path or a URL
  to a license file.
- The Cocoapods podspec [#cocoapod]_ ``license`` field is either a single string
  or a mapping with ``type``, ``file`` and ``text`` keys. This is mandatory unless
  a LICENSE or LICENCE file is provided.
- Haskell Cabal [#cabal]_ accepts an SPDX license expression since version 2.2.
  The version of the SPDX license list used is a function of the ``cabal``
  version. The specification also provides a mapping between legacy (pre-SPDX)
  license identifiers and SPDX identifiers. Cabal also specifies a
  ``license-file(s)`` field that lists license files to be installed with the
  package.
- Erlang/Elixir mix/hex packages [#mix]_ specify a ``licenses`` field as a
  required list of license strings, and recommend using SPDX license
  identifiers.
- The D language dub package format [#dub]_ defines its own list of license
  identifiers and its own license expression syntax, both similar to the SPDX
  conventions.
- The R package DESCRIPTION file [#cran]_ defines its own sophisticated license
  expression syntax and list of license identifiers. R has a unique way to
  support specifiers for license versions, such as ``LGPL (>= 2.0, < 3)``, in
  its license expression syntax.
Conventions used by other ecosystems
------------------------------------
- ``SPDX-License-Identifier`` [#spdxids]_ is a simple convention to document the
license inside a file.
- The Free Software Foundation (FSF) promotes the use of SPDX license identifiers
for clarity in the GPL and other versioned free software licenses [#gnu]_
[#fsf]_.
- The Free Software Foundation Europe (FSFE) REUSE project [#reuse]_ promotes
using ``SPDX-License-Identifier``.
- The Linux kernel uses ``SPDX-License-Identifier`` and parts of the FSFE REUSE
conventions to document its licenses [#linux]_.
- U-Boot spearheaded using ``SPDX-License-Identifier`` in code and now follows the
Linux ways [#uboot]_.
- The Apache Software Foundation projects use RDF DOAP [#apache]_ with a single
license field pointing to SPDX license identifiers.
- The Eclipse Foundation promotes using ``SPDX-License-Identifier`` [#eclipse]_.
- The ClearlyDefined project [#cd]_ promotes using SPDX license identifiers and
expressions to improve license clarity.
- The Android Open Source Project [#android]_ uses empty ``MODULE_LICENSE_XXX``
  tag files, where ``XXX`` is a license code such as BSD, APACHE, GPL, etc.
  Side by side with this ``MODULE_LICENSE`` file, a ``NOTICE`` file contains
  the license and notice texts.
References
==========
This document specifies version 2.2 of the metadata format.
- Version 1.0 is specified in PEP 241.
- Version 1.1 is specified in PEP 314.
- Version 1.2 is specified in PEP 345.
- Version 2.0, while not formally accepted, was specified in PEP 426.
- Version 2.1 is specified in PEP 566.
.. [#cms] https://packaging.python.org/specifications/core-metadata
.. [#cdstats] https://clearlydefined.io/stats
.. [#cd] https://clearlydefined.io
.. [#osi] http://opensource.org
.. [#classif] https://pypi.org/classifiers
.. [#spdxlist] https://spdx.org/licenses
.. [#spdx] https://spdx.org
.. [#spdx22] https://spdx.github.io/spdx-spec/appendix-IV-SPDX-license-expressions/
.. [#wheels] https://github.com/pypa/wheel/blob/b8b21a5720df98703716d3cd981d8886393228fa/docs/user_guide.rst#including-license-files-in-the-generated-wheel-file
.. [#reuse] https://reuse.software/
.. [#licexp] https://github.com/nexB/license-expression/
.. [#spdxpy] https://github.com/spdx/tools-python/
.. [#scancodetk] https://github.com/nexB/scancode-toolkit
.. [#licfield] https://packaging.python.org/guides/distributing-packages-using-setuptools/?highlight=MANIFEST.in#license
.. [#samplesetup] https://github.com/pypa/sampleproject/blob/52966defd6a61e97295b0bb82cd3474ac3e11c7a/setup.py#L98
.. [#pipsetup] https://github.com/pypa/pip/blob/476606425a08c66b9c9d326994ff5cf3f770926a/setup.cfg#L40
.. [#setuptoolssdist] https://github.com/pypa/setuptools/blob/97e8ad4f5ff7793729e9c8be38e0901e3ad8d09e/setuptools/command/sdist.py#L202
.. [#packaging] https://packaging.python.org/guides/distributing-packages-using-setuptools/?highlight=MANIFEST.in#license-txt
.. [#pycode] https://github.com/search?l=Python&q=%22__license__%22&type=Code
.. [#setuptools5030] https://github.com/pypa/setuptools/blob/v50.3.0/setup.cfg#L17
.. [#packlic] https://github.com/pypa/packaging/blob/19.1/LICENSE
.. [#conda] https://docs.conda.io/projects/conda-build/en/latest/resources/define-metadata.html#about-section
.. [#flit] https://github.com/takluyver/flit
.. [#poetry] https://poetry.eustace.io/docs/pyproject/#license
.. [#pbr] https://docs.openstack.org/pbr/latest/user/features.html
.. [#dep5] https://dep-team.pages.debian.net/deps/dep5/
.. [#fedora] https://docs.fedoraproject.org/en-US/packaging-guidelines/LicensingGuidelines/
.. [#fedoratext] https://docs.fedoraproject.org/en-US/packaging-guidelines/LicensingGuidelines/#_license_text
.. [#fedoralic] https://docs.fedoraproject.org/en-US/packaging-guidelines/LicensingGuidelines/#_valid_license_short_names
.. [#fedoralist] https://fedoraproject.org/wiki/Licensing:Main?rd=Licensing#Good_Licenses
.. [#opensuse] https://en.opensuse.org/openSUSE:Packaging_guidelines#Licensing
.. [#opensuselist] https://docs.google.com/spreadsheets/d/14AdaJ6cmU0kvQ4ulq9pWpjdZL5tkR03exRSYJmPGdfs/pub
.. [#gentoo] https://devmanual.gentoo.org/ebuild-writing/variables/index.html#license
.. [#glep23] https://www.gentoo.org/glep/glep-0023.html
.. [#gentoodev] https://devmanual.gentoo.org/general-concepts/licenses/index.html
.. [#freebsd] https://www.freebsd.org/doc/en_US.ISO8859-1/books/porters-handbook/licenses.html
.. [#archinux] https://wiki.archlinux.org/index.php/PKGBUILD#license
.. [#archlinuxlist] https://wiki.archlinux.org/index.php/PKGBUILD#license
.. [#openwrt] https://openwrt.org/docs/guide-developer/packages#buildpackage_variables
.. [#nixos] https://github.com/NixOS/nixpkgs/blob/master/lib/licenses.nix
.. [#guix] http://git.savannah.gnu.org/cgit/guix.git/tree/guix/licenses.scm
.. [#guixlic] https://guix.gnu.org/manual/en/html_node/package-Reference.html#index-license_002c-of-packages
.. [#alpine] https://wiki.alpinelinux.org/wiki/Creating_an_Alpine_package#license
.. [#maven] https://maven.apache.org/pom.html#Licenses
.. [#npm] https://docs.npmjs.com/files/package.json#license
.. [#gem] https://guides.rubygems.org/specification-reference/#license=
.. [#perl] https://metacpan.org/pod/CPAN::Meta::Spec#license
.. [#cargo] https://doc.rust-lang.org/cargo/reference/manifest.html#package-metadata
.. [#cratesio] https://doc.rust-lang.org/cargo/reference/registries.html#publish
.. [#composer] https://getcomposer.org/doc/04-schema.md#license
.. [#nuget] https://docs.microsoft.com/en-us/nuget/reference/nuspec#licenseurl
.. [#flutter] https://flutter.dev/docs/development/packages-and-plugins/developing-packages#adding-licenses-to-the-license-file
.. [#bower] https://github.com/bower/spec/blob/master/json.md#license
.. [#cocoapod] https://guides.cocoapods.org/syntax/podspec.html#license
.. [#cabal] https://cabal.readthedocs.io/en/latest/developing-packages.html#pkg-field-license
.. [#mix] https://hex.pm/docs/publish
.. [#dub] https://dub.pm/package-format-json.html#licenses
.. [#cran] https://cran.r-project.org/doc/manuals/r-release/R-exts.html#Licensing
.. [#spdxids] https://spdx.org/using-spdx-license-identifier
.. [#gnu] https://www.gnu.org/licenses/identify-licenses-clearly.html
.. [#fsf] https://www.fsf.org/blogs/rms/rms-article-for-claritys-sake-please-dont-say-licensed-under-gnu-gpl-2
.. [#linux] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/process/license-rules.rst
.. [#uboot] https://www.denx.de/wiki/U-Boot/Licensing
.. [#apache] https://svn.apache.org/repos/asf/allura/doap_Allura.rdf
.. [#eclipse] https://www.eclipse.org/legal/epl-2.0/faq.php
.. [#android] https://github.com/aosp-mirror/platform_external_tcpdump/blob/master/MODULE_LICENSE_BSD
.. [#cc0] https://creativecommons.org/publicdomain/zero/1.0/
.. [#unlic] https://unlicense.org/
Copyright
=========
This document is placed in the public domain or under the CC0-1.0-Universal
license [#cc0]_, whichever is more permissive.
Acknowledgements
================
- Nick Coghlan
- Kevin P. Fleming
- Pradyun Gedam
- Oleg Grenrus
- Dustin Ingram
- Chris Jerdonek
- Cyril Roelandt
- Luis Villa
..
Local Variables:
mode: indented-text
indent-tabs-mode: nil
sentence-end-double-space: t
fill-column: 80
End:


@ -1,993 +0,0 @@
PEP: 649
Title: Deferred Evaluation Of Annotations Using Descriptors
Author: Larry Hastings <larry@hastings.org>
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 11-Jan-2021
Post-History: 11-Jan-2021, 11-Apr-2021
Abstract
========
As of Python 3.9, Python supports two different behaviors
for annotations:
* original or "stock" Python semantics, in which annotations
are evaluated at the time they are bound, and
* PEP 563 semantics, currently enabled per-module by
``from __future__ import annotations``, in which annotations
are converted back into strings and must be reparsed and
executed by ``eval()`` to be used.
Original Python semantics created a circular reference problem
for static typing analysis. PEP 563 solved that problem--but
its novel semantics introduced new problems, including its
restriction that annotations can only reference names at
module-level scope.
This PEP proposes a third way that embodies the best of both
previous approaches. It solves the same circular reference
problems solved by PEP 563, while otherwise preserving Python's
original annotation semantics, including allowing annotations
to refer to local and class variables.
In this new approach, the code to generate the annotations
dict is written to its own function which computes and returns
the annotations dict. Then, ``__annotations__`` is a "data
descriptor" which calls this annotation function once and
retains the result. This delays the evaluation of annotations
expressions until the annotations are examined, at which point
all circular references have likely been resolved. And if
the annotations are never examined, the function is never
called and the annotations are never computed.
Annotations defined using this PEP's semantics have the same
visibility into the symbol table as annotations under "stock"
semantics--any name visible to an annotation in Python 3.9
is visible to an annotation under this PEP. In addition,
annotations under this PEP can refer to names defined *after*
the annotation is defined, as long as the name is defined in
a scope visible to the annotation. Specifically, when this PEP
is active:
* An annotation can refer to a local variable defined in the
current function scope.
* An annotation can refer to a local variable defined in an
enclosing function scope.
* An annotation can refer to a class variable defined in the
current class scope.
* An annotation can refer to a global variable.
And in all four of these cases, the variable referenced by
the annotation needn't be defined at the time the annotation
is defined--it can be defined afterwards. The only restriction
is that the name or variable be defined before the annotation
is *evaluated.*
If accepted, these new semantics for annotations would initially
be gated behind ``from __future__ import co_annotations``.
However, these semantics would eventually be promoted to be
Python's default behavior. Thus this PEP would *supersede*
PEP 563, and PEP 563's behavior would be deprecated and
eventually removed.
Overview
========
.. note:: The code presented in this section is simplified
for clarity. The intention is to communicate the high-level
concepts involved without getting lost in the details.
The actual details are often quite different. See the
Implementation_ section later in this PEP for a much more
accurate description of how this PEP works.
Consider this example code:
.. code-block::
def foo(x: int = 3, y: MyType = None) -> float:
...
class MyType:
...
foo_y_type = foo.__annotations__['y']
As we see here, annotations are available at runtime through an
``__annotations__`` attribute on functions, classes, and modules.
When annotations are specified on one of these objects,
``__annotations__`` is a dictionary mapping the names of the
fields to the value specified as that field's annotation.
The default behavior in Python 3.9 is to evaluate the expressions
for the annotations, and build the annotations dict, at the time
the function, class, or module is bound. At runtime the above
code actually works something like this:
.. code-block::
annotations = {'x': int, 'y': MyType, 'return': float}
def foo(x = 3, y = None):
...
foo.__annotations__ = annotations
class MyType:
...
foo_y_type = foo.__annotations__['y']
The crucial detail here is that the values ``int``, ``MyType``,
and ``float`` are looked up at the time the function object is
bound, and these values are stored in the annotations dict.
But this code doesn't run—it throws a ``NameError`` on the first
line, because ``MyType`` hasn't been defined yet.
PEP 563's solution is to decompile the expressions back
into strings, and store those *strings* in the annotations dict.
The equivalent runtime code would look something like this:
.. code-block::
annotations = {'x': 'int', 'y': 'MyType', 'return': 'float'}
def foo(x = 3, y = None):
...
foo.__annotations__ = annotations
class MyType:
...
foo_y_type = foo.__annotations__['y']
This code now runs successfully. However, ``foo_y_type``
is no longer a reference to ``MyType``, it is the *string*
``'MyType'``. The code would have to be further modified to
call ``eval()`` or ``typing.get_type_hints()`` to convert
the string into a useful reference to the actual ``MyType``
object.
This PEP proposes a third approach, delaying the evaluation of
the annotations by computing them in their own function. If
this PEP was active, the generated code would work something
like this:
.. code-block::
class function:
# __annotations__ on a function object is already a
# "data descriptor" in Python, we're just changing what it does
@property
def __annotations__(self):
return self.__co_annotations__()
# ...
def foo_annotations_fn():
return {'x': int, 'y': MyType, 'return': float}
def foo(x = 3, y = None):
...
foo.__co_annotations__ = foo_annotations_fn
class MyType:
...
foo_y_type = foo.__annotations__['y']
The important change is that the code constructing the
annotations dict now lives in a function—here, called
``foo_annotations_fn()``. But this function isn't called
until we ask for the value of ``foo.__annotations__``,
and we don't do that until *after* the definition of ``MyType``.
So this code also runs successfully, and ``foo_y_type`` now
has the correct value--the class ``MyType``--even though
``MyType`` wasn't defined until *after* the annotation was
defined.
Motivation
==========
Python's original semantics for annotations made its use for
static type analysis painful due to forward reference problems.
This was the main justification for PEP 563, and we need not
revisit those arguments here.
However, PEP 563's solution was to decompile code for Python
annotations back into strings at compile time, requiring
users of annotations to ``eval()`` those strings to restore
them to their actual Python values. This has several drawbacks:
* It requires Python implementations to stringize their
annotations. This is surprising behavior—unprecedented
for a language-level feature. Also, adding this feature
to CPython was complicated, and this complicated code would
need to be reimplemented independently by every other Python
implementation.
* It requires that all annotations be evaluated at module-level
scope. Annotations under PEP 563 can no longer refer to
* class variables,
* local variables in the current function, or
* local variables in enclosing functions.
* It requires a code change every time existing code uses an
annotation, to handle converting the stringized
annotation back into a useful value.
* ``eval()`` is slow.
* ``eval()`` isn't always available; it's sometimes removed
from Python for space reasons.
* In order to evaluate the annotations on a class,
it requires obtaining a reference to that class's globals,
which PEP 563 suggests should be done by looking up that class
by name in ``sys.modules``—another surprising requirement for
a language-level feature.
* It adds an ongoing maintenance burden to Python implementations.
Every time the language adds a new feature available in expressions,
the implementation's stringizing code must be updated in
tandem in order to support decompiling it.
This PEP also solves the forward reference problem outlined in
PEP 563 while avoiding the problems listed above:
* Python implementations would generate annotations as code
objects. This is simpler than stringizing, and is something
Python implementations are already quite good at. This means:
- alternate implementations would need to write less code to
implement this feature, and
- the implementation would be simpler overall, which should
reduce its ongoing maintenance cost.
* Existing annotations would not need to be changed to only
use global scope. Actually, annotations would become much
easier to use, as they would now also handle forward
references.
* Code examining annotations at runtime would no longer need
to use ``eval()`` or anything else—it would automatically
see the correct values. This is easier, faster, and
removes the dependency on ``eval()``.
Backwards Compatibility
=======================
PEP 563 changed the semantics of annotations. When its semantics
are active, annotations must assume they will be evaluated in
*module-level* scope. They may no longer refer directly
to local variables or class attributes.
This PEP removes that restriction; annotations may refer to globals,
local variables inside functions, local variables defined in enclosing
functions, and class members in the current class. In addition,
annotations may refer to any of these that haven't been defined yet
at the time the annotation is defined, as long as the not-yet-defined
name is created normally (in such a way that it is known to the symbol
table for the relevant block, or is a global or class variable found
using normal name resolution). Thus, this PEP demonstrates *improved*
backwards compatibility over PEP 563.
PEP 563 also requires using ``eval()`` or ``typing.get_type_hints()``
to examine annotations. Code updated to work with PEP 563 that calls
``eval()`` directly would have to be updated simply to remove the
``eval()`` call. Code using ``typing.get_type_hints()`` would
continue to work unchanged, though future use of that function
would become optional in most cases.
Because this PEP makes semantic changes to how annotations are
evaluated, this PEP will be initially gated with a per-module
``from __future__ import co_annotations`` before it eventually
becomes the default behavior.
Apart from the delay in evaluating values stored in annotations
dicts, this PEP preserves nearly all existing behavior of
annotations dicts. Specifically:
* Annotations dicts are mutable, and any changes to them are
preserved.
* The ``__annotations__`` attribute can be explicitly set,
and any value set this way will be preserved.
* The ``__annotations__`` attribute can be deleted using
the ``del`` statement.
However, there are two uncommon interactions possible with class
and module annotations that work today—both with stock semantics,
and with PEP 563 semantics—that would no longer work when this PEP
was active. These two interactions would have to be prohibited.
The good news is, neither is common, and neither is considered good
practice. In fact, they're rarely seen outside of Python's own
regression test suite. They are:
* *Code that sets annotations on module or class attributes
from inside any kind of flow control statement.* It's
currently possible to set module and class attributes with
annotations inside an ``if`` or ``try`` statement, and it works
as one would expect. It's untenable to support this behavior
when this PEP is active.
* *Code in module or class scope that references or modifies the
local* ``__annotations__`` *dict directly.* Currently, when
setting annotations on module or class attributes, the generated
code simply creates a local ``__annotations__`` dict, then sets
mappings in it as needed. It's also possible for user code
to directly modify this dict, though this doesn't seem like it's
an intentional feature. Although it would be possible to support
this after a fashion when this PEP was active, the semantics
would likely be surprising and wouldn't make anyone happy.
Note that these are both also pain points for static type checkers,
and are unsupported by those checkers. It seems reasonable to
declare that both are at the very least unsupported, and their
use results in undefined behavior. It might be worth making a
small effort to explicitly prohibit them with compile-time checks.
In addition, there are a few operators that would no longer be
valid for use in annotations, because their side effects would
affect the *annotation function* instead of the
class/function/module the annotation was nominally defined in:
* ``:=`` (aka the "walrus operator"),
* ``yield`` and ``yield from``, and
* ``await``.
Use of any of these operators in an annotation will result in a
compile-time error.
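For example (a hedged illustration of the rule above, not output from the
prototype):

.. code-block::

   # Would be a compile-time error under this PEP: the assignment
   # would happen inside the hidden annotation function, not in the
   # scope where ``f`` is defined.
   def f(x: (y := int)): ...

   # Likewise, ``yield``, ``yield from`` and ``await`` inside an
   # annotation become compile-time errors when this PEP is active.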
Since delaying the evaluation of annotations until they are
evaluated changes the semantics of the language, it's observable
from within the language. Therefore it's possible to write code
that behaves differently based on whether annotations are
evaluated at binding time or at access time, e.g.
.. code-block::
mytype = str
def foo(a:mytype): pass
mytype = int
print(foo.__annotations__['a'])
This will print ``<class 'str'>`` with stock semantics
and ``<class 'int'>`` when this PEP is active. Since
this is poor programming style to begin with, it seems
acceptable that this PEP changes its behavior.
Finally, there's a standard idiom that's actually somewhat common
when accessing class annotations, and which will become more
problematic when this PEP is active: code often accesses class
annotations via ``cls.__dict__.get("__annotations__", {})``
rather than simply ``cls.__annotations__``. It's due to a flaw
in the original design of annotations themselves. This topic
will be examined in a separate discussion; the outcome of
that discussion will likely guide the future evolution of this
PEP.
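The difference between the two idioms is easy to demonstrate with stock
semantics:

.. code-block::

   class Base:
       a: int

   class Derived(Base):
       pass

   # Attribute lookup walks the MRO, so this finds Base's annotations:
   print(Derived.__annotations__)   # {'a': <class 'int'>}

   # The defensive idiom sees only Derived's own (absent) annotations:
   print(Derived.__dict__.get("__annotations__", {}))   # {}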
Mistaken Rejection Of This Approach In November 2017
====================================================
During the early days of discussion around PEP 563,
using code to delay the evaluation of annotations was
briefly discussed, in a November 2017 thread in
``comp.lang.python-dev``. At the time the
technique was termed an "implicit lambda expression".
Guido van Rossum—Python's BDFL at the time—replied,
asserting that these "implicit lambda expressions" wouldn't
work, because they'd only be able to resolve symbols at
module-level scope:
IMO the inability of referencing class-level definitions
from annotations on methods pretty much kills this idea.
https://mail.python.org/pipermail/python-dev/2017-November/150109.html
This led to a short discussion about extending lambda-ized
annotations for methods to be able to refer to class-level
definitions, by maintaining a reference to the class-level
scope. This idea, too, was quickly rejected.
PEP 563 summarizes the above discussion here:
https://www.python.org/dev/peps/pep-0563/#keeping-the-ability-to-use-function-local-state-when-defining-annotations
What's puzzling is PEP 563's own changes to the scoping rules
of annotations—it *also* doesn't permit annotations to reference
class-level definitions. It's not immediately clear why an
inability to reference class-level definitions was enough to
reject using "implicit lambda expressions" for annotations,
but was acceptable for stringized annotations.
In retrospect there was probably a pivot during the development
of PEP 563. It seems that, early on, there was a prevailing
assumption that PEP 563 would support references to class-level
definitions. But by the time PEP 563 was finalized, this
assumption had apparently been abandoned. And it looks like
"implicit lambda expressions" were never reconsidered in this
new light.
In any case, annotations are still able to refer to class-level
definitions under this PEP, rendering the objection moot.
.. _Implementation:
Implementation
==============
There's a prototype implementation of this PEP, here:
https://github.com/larryhastings/co_annotations/
As of this writing, all features described in this PEP are
implemented, and there are some rudimentary tests in the
test suite. There are still some broken tests, and the
``co_annotations`` repo is many months behind the
CPython repo.
from __future__ import co_annotations
-------------------------------------
In the prototype, the semantics presented in this PEP are gated with:
.. code-block::
from __future__ import co_annotations
__co_annotations__
------------------
Python supports runtime metadata for annotations for three different
types: function, classes, and modules. The basic approach to
implement this PEP is much the same for all three with only minor
variations.
With this PEP, each of these types adds a new attribute,
``__co_annotations__``. ``__co_annotations__`` is a function:
it takes no arguments, and must return either ``None`` or a dict
(or subclass of dict). It adds the following semantics:
* ``__co_annotations__`` is always set, and may contain either
``None`` or a callable.
* ``__co_annotations__`` cannot be deleted.
* ``__annotations__`` and ``__co_annotations__`` can't both
be set to a useful value simultaneously:
- If you set ``__annotations__`` to a dict, this also sets
``__co_annotations__`` to None.
- If you set ``__co_annotations__`` to a callable, this also
deletes ``__annotations__``.
Internally, ``__co_annotations__`` is a "data descriptor",
where functions are called whenever user code gets, sets,
or deletes the attribute. In all three cases, the object
has separate internal storage for the current value
of the ``__co_annotations__`` attribute.
``__annotations__`` is also a data descriptor, with its own
separate internal storage for its internal value. The code
implementing the "get" for ``__annotations__`` works something
like this:
.. code-block::
if (the internal value is set)
return the internal annotations dict
if (__co_annotations__ is not None)
call the __co_annotations__ function
if the result is a dict:
store the result as the internal value
set __co_annotations__ to None
return the internal value
do whatever this object does when there are no annotations
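A rough Python rendering of that logic (a sketch only; the real
implementation lives in C, and the internal storage shown here as plain
instance attributes is not user-visible):

.. code-block::

   class AnnotatedObject:
       def __init__(self):
           self._annotations = None      # internal __annotations__ value
           self._co_annotations = None   # None or a callable

       @property
       def __annotations__(self):
           if self._annotations is not None:
               return self._annotations
           if self._co_annotations is not None:
               result = self._co_annotations()
               if isinstance(result, dict):
                   self._annotations = result
                   self._co_annotations = None
                   return self._annotations
           # whatever this object does when there are no annotations
           return {}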
Unbound code objects
--------------------
When Python code defines one of these three objects with
annotations, the Python compiler generates a separate code
object which builds and returns the appropriate annotations
dict. Wherever possible, the "annotation code object" is
then stored *unbound* as the internal value of
``__co_annotations__``; it is then bound on demand when
the user asks for ``__annotations__``.
This is a useful optimization for both speed and memory
consumption. Python processes rarely examine annotations
at runtime. Therefore, pre-binding these code objects to
function objects would usually be a waste of resources.
When is this optimization not possible?
* When an annotation function contains references to
free variables, in the current function or in an
outer function.
* When an annotation function is defined on a method
(a function defined inside a class) and the annotations
possibly refer directly to class variables.
Note that user code isn't permitted to directly access these
unbound code objects. If the user "gets" the value of
``__co_annotations__``, and the internal value of
``__co_annotations__`` is an unbound code object,
it immediately binds the code object, and the resulting
function object is stored as the new value of
``__co_annotations__`` and returned.
(However, these unbound code objects *are* stored in the
``.pyc`` file. So a determined user could examine them
should that be necessary for some reason.)
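Binding an unbound code object on demand uses ordinary Python machinery.
Conceptually, it is just this (a sketch, not the C implementation):

.. code-block::

   import types

   def annotations_fn():
       return {'x': int}

   # Pretend this code object was stored unbound in __co_annotations__:
   code = annotations_fn.__code__

   # Late binding: attach the appropriate globals at access time.
   bound = types.FunctionType(code, globals())
   print(bound())   # {'x': <class 'int'>}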
Function Annotations
--------------------
When compiling a function, the CPython bytecode compiler
visits the annotations for the function all in one place,
starting with ``compiler_visit_annotations()`` in ``compile.c``.
If there are any annotations, the compiler creates the scope for
the annotations function on demand, and
``compiler_visit_annotations()`` assembles it.
The code object is passed in place of the annotations dict
to the ``MAKE_FUNCTION`` bytecode instruction.
``MAKE_FUNCTION`` supports a new bit in its oparg
bitfield, ``0x10``, which tells it to expect a
``co_annotations`` code object on the stack.
The bitfields for ``annotations`` (``0x04``) and
``co_annotations`` (``0x10``) are mutually exclusive.
When binding an unbound annotation code object, a function will
use its own ``__globals__`` as the new function's globals.
One quirk of Python: you can't actually remove the annotations
from a function object. If you delete the ``__annotations__``
attribute of a function, then get its ``__annotations__`` member,
it will create an empty dict and use that as its
``__annotations__``. The implementation of this PEP maintains
this quirk for backwards compatibility.
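The quirk is observable in current Python:

.. code-block::

   def f(): ...

   del f.__annotations__
   print(f.__annotations__)   # {} -- a fresh empty dict, not an error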
Class Annotations
-----------------
When compiling a class body, the compiler maintains two scopes:
one for the normal class body code, and one for annotations.
(This is facilitated by four new functions: ``compiler.c``
adds ``compiler_push_scope()`` and ``compiler_pop_scope()``,
and ``symtable.c`` adds ``symtable_push_scope()`` and
``symtable_pop_scope()``.)
Once the code generator reaches the end of the class body,
but before it generates the bytecode for the class body,
it assembles the bytecode for ``__co_annotations__``, then
assigns that to ``__co_annotations__`` using ``STORE_NAME``.
It also sets a new ``__globals__`` attribute. Currently it
does this by calling ``globals()`` and storing the result.
(Surely there's a more elegant way to find the class's
globals--but this was good enough for the prototype.) When
binding an unbound annotation code object, a class will use
the value of this ``__globals__`` attribute. When the class
drops its reference to the unbound code object--either because
it has bound it to a function, or because ``__annotations__``
has been explicitly set--it also deletes its ``__globals__``
attribute.
As discussed above, examination or modification of
``__annotations__`` from within the class body is no
longer supported. Also, any flow control (``if`` or ``try`` blocks)
around declarations of members with annotations is unsupported.
If you delete the ``__annotations__`` attribute of a class,
then get its ``__annotations__`` member, it will return the
annotations dict of the first base class with annotations set.
If no base classes have annotations set, it will raise
``AttributeError``.
Although it's an implementation-specific detail, currently
classes store the internal value of ``__co_annotations__``
in their ``tp_dict`` under the same name.
Module Annotations
------------------
Module annotations work much the same as class annotations.
The main difference is, a module uses its own dict as the
``__globals__`` when binding the function.
If you delete the ``__annotations__`` attribute of a module,
then get its ``__annotations__`` member, the module will
raise ``AttributeError``.
Annotations With Closures
-------------------------
It's possible to write annotations that refer to
free variables, and even free variables that have yet
to be defined. For example:
.. code-block::
from __future__ import co_annotations
def outer():
def middle():
def inner(a:mytype, b:mytype2): pass
mytype = str
return inner
mytype2 = int
return middle()
fn = outer()
print(fn.__annotations__)
At the time ``fn`` is set, ``inner.__co_annotations__()``
hasn't been run. So it has to retain a reference to
the *future* definitions of ``mytype`` and ``mytype2`` if
it is to correctly evaluate its annotations.
If an annotation function refers to a local variable
from the current function scope, or a free variable
from an enclosing function scope--if, in CPython, the
annotation function code object contains one or more
``LOAD_DEREF`` opcodes--then the annotation code object
is bound at definition time with references to these
variables. ``LOAD_DEREF`` instructions require the annotation
function to be bound with special run-time information
(in CPython, a ``freevars`` array). Rather than store
that separately and use that to later lazy-bind the
function object, the current implementation simply
early-binds the function object.
Note that, since the annotation function ``inner.__co_annotations__()``
is defined while parsing ``outer()``, from Python's perspective
the annotation function is a "nested function". So "local
variable inside the 'current' function" and "free variable
from an enclosing function" are, from the perspective of
the annotation function, the same thing.
Annotations That Refer To Class Variables
-----------------------------------------
It's possible to write annotations that refer to
class variables, and even class variables that haven't
yet been defined. For example:
.. code-block::
from __future__ import co_annotations
class C:
def method(a:mytype): pass
mytype = str
print(C.method.__annotations__)
Internally, annotation functions are defined as
a new type of "block" in CPython's symbol table
called an ``AnnotationBlock``. An ``AnnotationBlock``
is almost identical to a ``FunctionBlock``. It differs
in that it's permitted to see names from an enclosing
class scope. (Again: annotation functions are functions,
and they're defined *inside* the same scope as
the thing they're being defined on. So in the above
example, the annotation function for ``C.method()``
is defined inside ``C``.)
If it's possible that an annotation function refers
to class variables--if all these conditions are true:
* The annotation function is being defined inside
a class scope.
* The generated code for the annotation function
has at least one ``LOAD_NAME`` instruction.
Then the annotation function is bound at the time
it's set on the class/function, and this binding
includes a reference to the class dict. The class
dict is pushed on the stack, and the ``MAKE_FUNCTION``
bytecode instruction takes a second new bit in its oparg bitfield (``0x20``)
indicating that it should consume that stack argument
and store it as ``__locals__`` on the newly created
function object.
Then, at the time the function is executed, the
``f_locals`` field of the frame object is set to
the function's ``__locals__``, if set. This permits
``LOAD_NAME`` opcodes to work normally, which means
the code generated for annotation functions is nearly
identical to that generated for conventional Python
functions.
Interactive REPL Shell
----------------------
Everything works the same inside Python's interactive REPL shell,
except for module annotations in the interactive module (``__main__``)
itself. Since that module is never "finished", there's no specific
point where we can compile the ``__co_annotations__`` function.
For the sake of simplicity, in this case we forego delayed evaluation.
Module-level annotations in the REPL shell will continue to work
exactly as they do today, evaluating immediately and setting the
result directly inside the ``__annotations__`` dict.
(It might be possible to support delayed evaluation here.
But it gets complicated quickly, and for a nearly-non-existent
use case.)
Annotations On Local Variables Inside Functions
-----------------------------------------------
Python supports syntax for local variable annotations inside
functions. However, these annotations have no runtime
effect--they're discarded at compile-time. Therefore, this
PEP doesn't need to do anything to support them, the same
as stock semantics and PEP 563.
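For example, with stock semantics (and equally under this PEP):

.. code-block::

   def f():
       x: int = 3    # the annotation is discarded at compile-time
       return x

   print(f.__annotations__)   # {} -- only parameter and return
                              # annotations survive to runtime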
Performance Comparison
----------------------
Performance with this PEP should be favorable, when compared with either
stock behavior or PEP 563. In general, resources are only consumed
on demand—"you only pay for what you use".
There are three scenarios to consider:
* the runtime cost when annotations aren't defined,
* the runtime cost when annotations are defined but *not* referenced, and
* the runtime cost when annotations are defined *and* referenced.
We'll examine each of these scenarios in the context of all three
semantics for annotations: stock, PEP 563, and this PEP.
When there are no annotations, all three semantics have the same
runtime cost: zero. No annotations dict is created and no code is
generated for it. This requires no runtime processor time and
consumes no memory.
When annotations are defined but not referenced, the runtime cost
of Python with this PEP should be roughly equal to or slightly better
than PEP 563 semantics, and slightly better than "stock" Python
semantics. The specifics depend on the object being annotated:
* With stock semantics, the annotations dict is always built, and
set as an attribute of the object being annotated.
* In PEP 563 semantics, for function objects, a single constant
(a tuple) is set as an attribute of the function. For class and
module objects, the annotations dict is always built and set as
an attribute of the class or module.
* With this PEP, a single object is set as an attribute of the
object being annotated. Most often, this object is a constant
(a code object). In cases where the annotation refers to local
variables or class variables, the code object will be bound to
a function object, and the function object is set as the attribute
of the object being annotated.
When annotations are both defined and referenced, code using
this PEP should be much faster than code using PEP 563 semantics,
and equivalent to or slightly improved over original Python
semantics. PEP 563 semantics requires invoking ``eval()`` for
every value inside an annotations dict, which is enormously slow.
And, as already mentioned, this PEP generates measurably more
efficient bytecode for class and module annotations than stock
semantics; for function annotations, this PEP and stock semantics
should be roughly equivalent.
Memory use should also be comparable in all three scenarios across
all three semantic contexts. In the first and third scenarios,
memory usage should be roughly equivalent in all cases.
In the second scenario, when annotations are defined but not
referenced, using this PEP's semantics will mean the
function/class/module will store one unused code object (possibly
bound to an unused function object); with the other two semantics,
they'll store one unused dictionary (or constant tuple).
Bytecode Comparison
-------------------
The bytecode generated for annotations functions with
this PEP uses the efficient ``BUILD_CONST_KEY_MAP`` opcode
to build the dict for all annotatable objects:
functions, classes, and modules.
Stock semantics also uses ``BUILD_CONST_KEY_MAP`` bytecode
for function annotations. PEP 563 has an even more efficient
method for building annotations dicts on functions, leveraging
the fact that its annotations dicts only contain strings for
both keys and values. At compile-time it constructs a tuple
containing pairs of keys and values, then
at runtime it converts that tuple into a dict on demand.
This is a faster technique than either stock semantics
or this PEP can employ, because in those two cases
annotations dicts can contain Python values of any type.
Of course, this performance win is negated if the
annotations are examined, due to the overhead of ``eval()``.
For class and module annotations, both stock semantics
and PEP 563 generate a longer and slightly-less-efficient
stanza of bytecode, creating the dict and setting the
annotations individually.
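This is easy to inspect with the ``dis`` module (exact output varies
between CPython versions):

.. code-block::

   import dis

   source = "def f(a: int, b: str) -> float: ..."
   dis.dis(compile(source, "<example>", "exec"))
   # Under stock semantics the disassembly includes a
   # BUILD_CONST_KEY_MAP instruction that assembles the annotations
   # dict consumed by MAKE_FUNCTION.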
For Future Discussion
=====================
Circular Imports
----------------
There is one unfortunately-common scenario where PEP 563
currently provides a better experience, and it has to do
with large code bases, with circular dependencies and
imports, that examine their annotations at run-time.
PEP 563 permitted defining *and examining* invalid
expressions as annotations. Its implementation requires
annotations to be legal Python expressions, which it then
converts into strings at compile-time. But legal Python
expressions may not be computable at runtime, if for
example the expression references a name that isn't defined.
This is a problem for stringized annotations if they're
evaluated, e.g. with ``typing.get_type_hints()``. But
any stringized annotation may be examined harmlessly at
any time--as long as you don't evaluate it, and only
examine it as a string.
Some large organizations have code bases that unfortunately
have circular dependency problems with their annotations--class
A has methods annotated with class B, but class B has methods
annotated with class A--that can be difficult to resolve.
Since PEP 563 stringizes their annotations, it allows them
to leave these circular dependencies in place, and they can
sidestep the circular import problem by never importing the
module that defines the types used in the annotations. Their
annotations can no longer be evaluated, but this appears not
to be a concern in practice. They can then examine the
stringized form of the annotations at runtime and this seems
to be sufficient for their needs.
This PEP allows for many of the same behaviors.
Annotations must be legal Python expressions, which
are compiled into a function at compile-time.
And if the code never examines an annotation, it won't
have any runtime effect, so here too annotations can
harmlessly refer to undefined names. (It's exactly
like defining a function that refers to undefined
names--then never calling that function. Until you
call the function, nothing bad will happen.)
But examining an annotation when this PEP is active
means evaluating it, which means the names evaluated
in that expression must be defined. An undefined name
will throw a ``NameError`` in an annotation function,
just as it would with a stringized annotation passed
in to ``typing.get_type_hints()``, and just like any
other context in Python where an expression is evaluated.
In discussions we have yet to find a solution to this
problem that makes all the participants in the
conversation happy. There are various avenues to explore
here:
* One workaround is to continue to stringize one's
annotations, either by hand or done automatically
by the Python compiler (as it does today with
``from __future__ import annotations``). This might
mean preserving Python's current stringizing of annotations
going forward, although leaving it turned off by default,
only available by explicit request (though likely with
a different mechanism than
``from __future__ import annotations``).
* Another possible workaround involves importing
the circularly-dependent modules separately, then
externally adding ("monkey-patching") their dependencies
to each other after the modules are loaded. As long
as the modules don't examine their annotations until
after they are completely loaded, this should work fine
and be maintainable with a minimum of effort (see the sketch
after this list).
* A third and more radical approach would be to change the
semantics of annotations so that they don't raise a
``NameError`` when an unknown name is evaluated,
but instead create some sort of proxy "reference" object.
* Of course, even if we do deprecate PEP 563, it will be
several releases before the functionality is removed,
giving us several years in which to research and innovate
new solutions for this problem.
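A sketch of the monkey-patching workaround mentioned above (module and
class names here are hypothetical):

.. code-block::

   # shapes.py annotates with transform.Matrix, and transform.py
   # annotates with shapes.Shape -- a circular pair. Import both
   # modules without their cross-imports, then wire the names up:
   import shapes
   import transform

   shapes.Matrix = transform.Matrix
   transform.Shape = shapes.Shape

   # Under this PEP, annotations aren't evaluated until accessed,
   # so any access after this point resolves the names correctly.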
In any case, the participants of the discussion agree that
this PEP should still move forward, even as this issue remains
currently unresolved [1]_.
.. [1] https://github.com/larryhastings/co_annotations/issues/1
cls.__globals__ and fn.__locals__
---------------------------------
Is it permissible to add the ``__globals__`` reference to class
objects as proposed here? It's not clear why this hasn't already
been done; PEP 563 could have made use of class globals, but instead
made do with looking up classes inside ``sys.modules``. Python
seems strangely allergic to adding a ``__globals__`` reference to
class objects.
If adding ``__globals__`` to class objects is indeed a bad idea
(for reasons I don't know), here are two alternatives as to
how classes could get a reference to their globals for the
implementation of this PEP:
* The generated code for a class could bind its annotations code
object to a function at the time the class is bound, rather than
waiting for ``__annotations__`` to be referenced, making them an
exception to the rule (even though "special cases aren't special
enough to break the rules"). This would result in a small
additional runtime cost when annotations were defined but not
referenced on class objects. Honestly I'm more worried about
the lack of symmetry in semantics. (But I wouldn't want to
pre-bind all annotations code objects, as that would become
much more costly for function objects, even as annotations are
rarely used at runtime.)
* Use the class's ``__module__`` attribute to look up its module
by name in ``sys.modules``. This is what PEP 563 advises.
While this is passable for userspace or library code, it seems
like a little bit of a code smell for this to be defined semantics
baked into the language itself.
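For illustration, the ``sys.modules`` lookup that PEP 563 advises
looks roughly like this (a sketch, not code from either PEP)::

    import sys

    class Widget:
        pass

    # Recover the class's module globals by name:
    module = sys.modules[Widget.__module__]
    assert module.__dict__ is globals()  # the defining module's namespace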
Also, the prototype gets globals for class objects by calling
``globals()`` then storing the result. I'm sure there's a much
faster way to do this; I just didn't know what it was when I was
prototyping. We can surely revise this to something much faster
and much more sanitary. I'd prefer to make it completely internal
anyway, and not make it visible to the user (via this new
``__globals__`` attribute). There's possibly already a good place to
put it anyway--``ht_module``.
Similarly, this PEP adds one new dunder member to functions,
classes, and modules (``__co_annotations__``), and a second new
dunder member to functions (``__locals__``). This might be
considered excessive.
Bikeshedding the name
---------------------
During most of the development of this PEP, user code actually
could see the raw annotation code objects. ``__co_annotations__``
could only be set to a code object; functions and other callables
weren't permitted. In that context the name ``co_annotations``
makes a lot of sense. But with this last-minute pivot where
``__co_annotations__`` now presents itself as a callable,
perhaps the name of the attribute and the name of the
``from __future__ import`` need a re-think.
Acknowledgements
================
Thanks to Barry Warsaw, Eric V. Smith, Mark Shannon,
and Guido van Rossum for feedback and encouragement.
Thanks in particular to Mark Shannon for two key
suggestions—build the entire annotations dict inside
a single code object, and only bind it to a function
on demand—that quickly became among the best aspects
of this proposal. Also, thanks in particular to Guido
van Rossum for suggesting that ``__co_annotations__``
functions should duplicate the name visibility rules of
annotations under "stock" semantics--this resulted in
a sizeable improvement to the second draft. Finally,
special thanks to Jelle Zijlstra, who contributed not
just feedback--but code!
Copyright
=========
This document is placed in the public domain or under the
CC0-1.0-Universal license, whichever is more permissive.
..
Local Variables:
mode: indented-text
indent-tabs-mode: nil
sentence-end-double-space: t
fill-column: 70
coding: utf-8
End:

View File

@ -1,308 +0,0 @@
PEP: 659
Title: Specializing Adaptive Interpreter
Author: Mark Shannon <mark@hotpy.org>
Status: Draft
Type: Informational
Content-Type: text/x-rst
Created: 13-Apr-2021
Post-History: 11-May-2021
Abstract
========
In order to perform well, virtual machines for dynamic languages must specialize the code that they execute
to the types and values in the program being run.
This specialization is often associated with "JIT" compilers, but is beneficial even without machine code generation.
A specializing, adaptive interpreter is one that speculatively specializes on the types or values it is currently operating on,
and adapts to changes in those types and values.
Specialization gives us improved performance, and adaptation allows the interpreter to rapidly change when the pattern of usage in a program alters,
limiting the amount of additional work caused by mis-specialization.
This PEP proposes using a specializing, adaptive interpreter that specializes code aggressively, but over a very small region,
and is able to adjust to mis-specialization rapidly and at low cost.
Adding a specializing, adaptive interpreter to CPython will bring significant performance improvements.
It is hard to come up with meaningful numbers, as it depends very much on the benchmarks and on work that has not yet happened.
Extensive experimentation suggests speedups of up to 50%.
Even if the speedup were only 25%, this would still be a worthwhile enhancement.
Motivation
==========
Python is widely acknowledged as slow.
Whilst Python will never attain the performance of low-level languages like C, Fortran, or even Java,
we would like it to be competitive with fast implementations of scripting languages, like V8 for JavaScript or LuaJIT for Lua.
Specifically, we want to achieve these performance goals with CPython to benefit all users of Python
including those unable to use PyPy or other alternative virtual machines.
Achieving these performance goals is a long way off, and will require a lot of engineering effort,
but we can make a significant step towards those goals by speeding up the interpreter.
Both academic research and practical implementations have shown that a fast interpreter is a key part of a fast virtual machine.
Typical optimizations for virtual machines are expensive, so a long "warm up" time is required
to gain confidence that the cost of optimization is justified.
In order to get speed-ups rapidly, without noticeable warmup times,
the VM should speculate that specialization is justified even after a few executions of a function.
To do that effectively, the interpreter must be able to optimize and deoptimize continually and very cheaply.
By using adaptive and speculative specialization at the granularity of individual virtual machine instructions, we get a faster
interpreter that also generates profiling information for more sophisticated optimizations in the future.
Rationale
=========
There are many practical ways to speed up a virtual machine for a dynamic language.
However, specialization is the most important, both in itself and as an enabler of other optimizations.
Therefore it makes sense to focus our efforts on specialization first, if we want to improve the performance of CPython.
Specialization is typically done in the context of a JIT compiler, but research shows specialization in an interpreter
can boost performance significantly, even outperforming a naive compiler [1]_.
There have been several ways of doing this proposed in the academic literature,
but most attempt to optimize regions larger than a single bytecode [1]_ [2]_.
Using regions larger than a single instruction requires code to handle deoptimization in the middle of a region.
Specialization at the level of individual bytecodes makes deoptimization trivial, as it cannot occur in the middle of a region.
By speculatively specializing individual bytecodes, we can gain significant performance improvements without anything but the most local,
and trivial to implement, deoptimizations.
The closest approach to this PEP in the literature is "Inline Caching meets Quickening" [3]_.
This PEP has the advantages of inline caching, but adds the ability to quickly deoptimize, making the performance
more robust in cases where specialization fails or is not stable.
Performance
-----------
The expected speedup of 50% can be roughly broken down as follows:
* In the region of 30% from specialization. Much of that is from specialization of calls,
with improvements in instructions that are already specialized such as ``LOAD_ATTR`` and ``LOAD_GLOBAL``
contributing much of the remainder. Specialization of operations adds a small amount.
* About 10% from improved dispatch such as super-instructions and other optimizations enabled by quickening.
* Further increases in the benefits of other optimizations, as they can exploit, or be exploited by specialization.
Implementation
==============
Overview
--------
Once any instruction in a code object has executed a few times, that code object will be "quickened" by allocating a new array
for the bytecode that can be modified at runtime, and is not constrained as the ``code.co_code`` object is.
From that point onwards, whenever any instruction in that code object is executed, it will use the quickened form.
Any instruction that would benefit from specialization will be replaced by an "adaptive" form of that instruction.
When executed, the adaptive instructions will specialize themselves in response to the types and values that they see.
Quickening
----------
Quickening is the process of replacing slow instructions with faster variants.
Quickened code has a number of advantages over the normal bytecode:
* It can be changed at runtime
* It can use super-instructions that span lines and take multiple operands.
* It does not need to handle tracing as it can fall back to the normal bytecode for that.
So that tracing can be supported, and quickening performed quickly, the quickened instruction format should match the normal
bytecode format: 16-bit instructions consisting of an 8-bit opcode followed by an 8-bit operand.
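As a toy illustration of that format (CPython stores the two bytes
in a C array; packing them into Python ints here is purely for
exposition)::

    def pack(opcode, operand):
        # 16-bit instruction: 8-bit opcode, 8-bit operand.
        assert 0 <= opcode < 256 and 0 <= operand < 256
        return opcode << 8 | operand

    def unpack(instruction):
        # -> (opcode, operand)
        return instruction >> 8, instruction & 0xFF

    assert unpack(pack(116, 3)) == (116, 3)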
Adaptive instructions
---------------------
Each instruction that would benefit from specialization is replaced by an adaptive version during quickening.
For example, the ``LOAD_ATTR`` instruction would be replaced with ``LOAD_ATTR_ADAPTIVE``.
Each adaptive instruction maintains a counter, and periodically attempts to specialize itself.
Specialization
--------------
CPython bytecode contains many bytecodes that represent high-level operations, and would benefit from specialization.
Examples include ``CALL_FUNCTION``, ``LOAD_ATTR``, ``LOAD_GLOBAL`` and ``BINARY_ADD``.
By introducing a "family" of specialized instructions for each of these instructions allows effective specialization,
since each new instruction is specialized to a single task.
Each family will include an "adaptive" instruction, that maintains a counter and periodically attempts to specialize itself.
Each family will also include one or more specialized instructions that perform the equivalent
of the generic operation much faster provided their inputs are as expected.
Each specialized instruction will maintain a saturating counter which will be incremented whenever the inputs are as expected.
Should the inputs not be as expected, the counter will be decremented and the generic operation will be performed.
If the counter reaches the minimum value, the instruction is deoptimized by simply replacing its opcode with the adaptive version.
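The counter mechanics can be sketched in Python as a toy model; the
class name, threshold, and ``Deoptimize`` signal below are
illustrative, not CPython internals::

    SATURATION_MAX = 255

    class Deoptimize(Exception):
        # Signal that the opcode should revert to its adaptive form.
        pass

    class SpecializedLoadAttr:
        def __init__(self, expected_type):
            self.expected_type = expected_type
            self.counter = SATURATION_MAX // 2

        def execute(self, obj, name):
            if type(obj) is self.expected_type:  # inputs as expected
                self.counter = min(self.counter + 1, SATURATION_MAX)
                return obj.__dict__[name]        # fast path
            self.counter -= 1                    # mis-speculation
            if self.counter <= 0:                # reached the minimum
                raise Deoptimize                 # swap in adaptive opcode
            return getattr(obj, name)            # generic fallback this time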
Ancillary data
--------------
Most families of specialized instructions will require more information than can fit in an 8-bit operand.
To do this, an array of specialization data entries will be maintained alongside the new instruction array.
For instructions that need specialization data, the operand in the quickened array will serve as a partial index,
along with the offset of the instruction, to find the first specialization data entry for that instruction.
Each entry will be 8 bytes (for a 64 bit machine). The data in an entry, and the number of entries needed, will vary from instruction to instruction.
Data layout
-----------
Quickened instructions will be stored in an array (it is neither necessary nor desirable to store them in a Python object) with the same
format as the original bytecode. Ancillary data will be stored in a separate array.
Each instruction will use 0 or more data entries. Each instruction within a family must have the same amount of data allocated, although some
instructions may not use all of it. Instructions that cannot be specialized, e.g. ``POP_TOP``, do not need any entries.
Experiments show that 25% to 30% of instructions can be usefully specialized.
Different families will need different amounts of data, but most need 2 entries (16 bytes on a 64 bit machine).
In order to support functions larger than 256 instructions, we compute the offset of the first data entry for instructions
as ``(instruction offset)//2 + (quickened operand)``.
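For illustration, the computation is simply::

    def first_data_entry_index(instruction_offset, quickened_operand):
        # The 8-bit operand is a partial index; adding half the
        # instruction offset locates the instruction's first entry.
        return instruction_offset // 2 + quickened_operand

    assert first_data_entry_index(100, 7) == 57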
Compared to the opcache in Python 3.10, this design:
* is faster; it requires no memory reads to compute the offset. 3.10 requires two reads, which are dependent.
* uses much less memory, as the data can be different sizes for different instruction families, and doesn't need an additional array of offsets.
* can support much larger functions, up to about 5000 instructions per function. 3.10 can support about 1000.
Example families of instructions
--------------------------------
CALL_FUNCTION
'''''''''''''
The ``CALL_FUNCTION`` instruction calls the (N+1)th item on the stack with the top N items on the stack as arguments.
This is an obvious candidate for specialization. For example, the call in ``len(x)`` is represented as the bytecode ``CALL_FUNCTION 1``.
In this case we would always expect the object ``len`` to be the function. We probably don't want to specialize for ``len``
(although we might for ``type`` and ``isinstance``), but it would be beneficial to specialize for builtin functions taking a single argument.
A fast check that the underlying function is a builtin function taking a single argument (``METH_O``) would allow us to avoid a
sequence of checks for number of parameters and keyword arguments.
``CALL_FUNCTION_ADAPTIVE`` would track how often it is executed, and call ``call_function_optimize`` when executed enough times, or jump
to ``CALL_FUNCTION`` otherwise.
When optimizing, the kind of the function would be checked and if a suitable specialized instruction was found,
it would replace ``CALL_FUNCTION_ADAPTIVE`` in place.
Specializations might include:
* ``CALL_FUNCTION_PY_SIMPLE``: Calls to Python functions with exactly matching parameters.
* ``CALL_FUNCTION_PY_DEFAULTS``: Calls to Python functions with more parameters and default values.
Since the exact number of defaults needed is known, the instruction needs to do no additional checking or computation; just copy some defaults.
* ``CALL_BUILTIN_O``: The example given above for calling builtin methods taking exactly one argument.
* ``CALL_BUILTIN_VECTOR``: For calling builtin functions taking vector arguments.
Note how this allows optimizations that complement other optimizations.
For example, if the Python and C call stacks were decoupled and the data stack were contiguous,
then Python-to-Python calls could be made very fast.
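A toy model of the guard such a specialization performs follows;
CPython would check the C-level ``METH_O`` flag, so the Python-level
kind check and the names below are merely illustrative::

    import types

    class Deoptimize(Exception):
        pass

    class CallBuiltinO:
        def execute(self, func, arg):
            if isinstance(func, types.BuiltinFunctionType):  # cheap kind check
                return func(arg)   # fast path, no generic argument handling
            raise Deoptimize       # unexpected kind of callable: deoptimize

    assert CallBuiltinO().execute(len, "abc") == 3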
LOAD_GLOBAL
'''''''''''
The ``LOAD_GLOBAL`` instruction looks up a name in the global namespace and then, if not present in the global namespace,
looks it up in the builtins namespace.
In 3.9, the C code for ``LOAD_GLOBAL`` includes code to check whether the whole code object should be modified to add a cache,
code to check whether either the global or builtins namespace has changed, code to look up the value in a cache, and fallback code.
This makes it complicated and bulky. It also performs many redundant operations even when supposedly optimized.
Using a family of instructions makes the code more maintainable and faster, as each instruction only needs to handle one concern.
Specializations would include:
* ``LOAD_GLOBAL_ADAPTIVE`` would operate like ``CALL_FUNCTION_ADAPTIVE`` above.
* ``LOAD_GLOBAL_MODULE`` can be specialized for the case where the value is in the globals namespace.
After checking that the keys of the namespace have not changed, it can load the value from the stored index.
* ``LOAD_GLOBAL_BUILTIN`` can be specialized for the case where the value is in the builtins namespace.
It needs to check that no keys have been added to the global namespace, and that the builtins namespace has not changed.
Note that we don't care if the values of the global namespace have changed, just the keys.
See [4]_ for a full implementation.
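As a toy illustration of those guards (CPython relies on internal
dictionary version tags; the Python-level snapshots below are a slow
stand-in, and all names are illustrative)::

    class Deoptimize(Exception):
        pass

    class LoadGlobalBuiltin:
        def __init__(self, globals_ns, builtins_ns, name):
            self.global_keys = frozenset(globals_ns)   # keys, not values
            self.builtins_snapshot = dict(builtins_ns)
            self.cached_value = builtins_ns[name]

        def execute(self, globals_ns, builtins_ns):
            if (frozenset(globals_ns) == self.global_keys
                    and builtins_ns == self.builtins_snapshot):
                return self.cached_value   # both guards pass: cached load
            raise Deoptimize               # a namespace changed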
.. note::
This PEP outlines the mechanisms for managing specialization, and does not specify the particular optimizations to be applied.
The above scheme is just one possible scheme. Many others are possible and may well be better.
Compatibility
=============
There will be no change to the language, library or API.
The only way that users will be able to detect the presence of the new interpreter is through timing execution, the use of debugging tools,
or measuring memory use.
Costs
=====
Memory use
----------
An obvious concern with any scheme that performs any sort of caching is "how much more memory does it use?".
The short answer is "none".
Comparing memory use to 3.10
''''''''''''''''''''''''''''
The following table shows the additional bytes per instruction to support the 3.10 opcache
or the proposed adaptive interpreter, on a 64 bit machine.
================ ===== ======== ===== =====
Version 3.10 3.10 opt 3.11 3.11
Specialised 20% 20% 25% 33%
---------------- ----- -------- ----- -----
quickened code 0 0 2 2
opcache_map 1 1 0 0
opcache/data 6.4 4.8 4 5.3
---------------- ----- -------- ----- -----
Total 7.4 5.8 6 7.3
================ ===== ======== ===== =====
``3.10`` is the current version of 3.10 which uses 32 bytes per entry.
``3.10 opt`` is a hypothetical improved version of 3.10 that uses 24 bytes per entry.
Even if one third of all instructions were specialized (a high proportion), memory use would still be less than
that of 3.10. With a more realistic 25%, memory use is basically the same as in the hypothetical improved version of 3.10.
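For illustration, the table's per-instruction figures can be
reproduced from the sizes given above (8-byte entries, two entries
for most instructions, and 32 or 24 bytes per 3.10 opcache entry);
this sketch is one reading of the scheme, not code from the PEP::

    def bytes_per_instruction(specialised, entry_bytes,
                              quickened=0, opcache_map=0):
        # Average additional bytes per instruction.
        return quickened + opcache_map + specialised * entry_bytes

    for label, value in [
        ("3.10", bytes_per_instruction(0.20, 32, opcache_map=1)),
        ("3.10 opt", bytes_per_instruction(0.20, 24, opcache_map=1)),
        ("3.11 at 25%", bytes_per_instruction(0.25, 16, quickened=2)),
        ("3.11 at 33%", bytes_per_instruction(0.33, 16, quickened=2)),
    ]:
        print(label, round(value, 1))  # 7.4, 5.8, 6.0, 7.3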
Security Implications
=====================
None
Rejected Ideas
==============
Too many to list.
References
==========
.. [1] The construction of high-performance virtual machines for dynamic languages, Mark Shannon 2010.
http://theses.gla.ac.uk/2975/1/2011shannonphd.pdf
.. [2] Dynamic Interpretation for Dynamic Scripting Languages
https://www.scss.tcd.ie/publications/tech-reports/reports.09/TCD-CS-2009-37.pdf
.. [3] Inline Caching meets Quickening
http://www.complang.tuwien.ac.at/kps09/pdfs/brunthaler.pdf
.. [4] Adaptive specializing examples (This will be moved to a more permanent location, once this PEP is accepted)
https://gist.github.com/markshannon/556ccc0e99517c25a70e2fe551917c03
Copyright
=========
This document is placed in the public domain or under the
CC0-1.0-Universal license, whichever is more permissive.
..
Local Variables:
mode: indented-text
indent-tabs-mode: nil
sentence-end-double-space: t
fill-column: 70
coding: utf-8
End:

View File

@ -1,343 +0,0 @@
PEP: 661
Title: Sentinel Values
Author: Tal Einat <tal@python.org>
Discussions-To: https://discuss.python.org/t/pep-661-sentinel-values/9126
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 06-Jun-2021
Post-History: 06-Jun-2021
TL;DR: See the `Specification`_ and `Reference Implementation`_.
Abstract
========
Unique placeholder values, commonly known as "sentinel values", are useful in
Python programs for several things, such as default values for function
arguments where ``None`` is a valid input value. These cases are common
enough for several idioms for implementing such "sentinels" to have arisen
over the years, but uncommon enough that there hasn't been a clear need for
standardization. However, the common implementations, including some in the
stdlib, suffer from several significant drawbacks.
This PEP suggests adding a utility for defining sentinel values, to be used
in the stdlib and made publicly available as part of the stdlib.
Note: Changing all existing sentinels in the stdlib to be implemented this
way is not deemed necessary, and whether to do so is left to the discretion
of each maintainer.
Motivation
==========
In May 2021, a question was brought up on the python-dev mailing list
[#python-dev-thread]_ about how to better implement a sentinel value for
``traceback.print_exception``. The existing implementation used the
following common idiom::
_sentinel = object()
However, this object has an uninformative and overly verbose repr, causing the
function's signature to be overly long and hard to read::
>>> help(traceback.print_exception)
Help on function print_exception in module traceback:
print_exception(exc, /, value=<object object at
0x000002825DF09650>, tb=<object object at 0x000002825DF09650>,
limit=None, file=None, chain=True)
Additionally, two other drawbacks of many existing sentinels were brought up
in the discussion:
1. Not having a distinct type, making it impossible to define strict
type signatures for functions with sentinels as default values
2. Incorrect behavior after being copied or unpickled, due to a separate
instance being created and thus comparisons using ``is`` failing
In the ensuing discussion, Victor Stinner supplied a list of currently used
sentinel values in the Python standard library [#list-of-sentinels-in-stdlib]_.
This showed that the need for sentinels is fairly common, that there are
various implementation methods used even within the stdlib, and that many of
these suffer from at least one of the aforementioned drawbacks.
The discussion did not lead to any clear consensus on whether a standard
implementation method is needed or desirable, whether the drawbacks mentioned
are significant, nor which kind of implementation would be good.
A poll was created on discuss.python.org [#poll]_ to get a clearer sense of
the community's opinions. The poll's results were not conclusive, with 40%
voting for "The status-quo is fine / theres no need for consistency in
this", but most voters voting for one or more standardized solutions.
Specifically, 37% of the voters chose "Consistent use of a new, dedicated
sentinel factory / class / meta-class, also made publicly available in the
stdlib".
With such mixed opinions, this PEP was created to facilitate making a decision
on the subject.
Rationale
=========
The criteria guiding the chosen implementation were:
1. The sentinel objects should behave as a sentinel object is expected to: when
compared using the ``is`` operator, a sentinel should always be considered identical
to itself but never to any other object.
2. It should be simple to define as many distinct sentinel values as needed.
3. The sentinel objects should have a clear and short repr.
4. The sentinel objects should each have a *distinct* type, usable in type
annotations to define *strict* type signatures.
5. The sentinel objects should behave correctly after copying and/or
unpickling.
6. Creating a sentinel object should be a simple, straightforward one-liner.
7. Works using CPython and PyPy3. Will hopefully also work with other
implementations.
After researching existing idioms and implementations, and going through many
different possible implementations, an implementation was written which meets
all of these criteria (see `Reference Implementation`_).
Specification
=============
A new ``sentinel`` function will be added to a new ``sentinels`` module.
It will accept a single required argument, the name of the sentinel object,
and a single optional argument, the repr of the object.
::
>>> NotGiven = sentinel('NotGiven')
>>> NotGiven
<NotGiven>
>>> MISSING = sentinel('MISSING', repr='mymodule.MISSING')
>>> MISSING
mymodule.MISSING
Checking if a value is such a sentinel *should* be done using the ``is``
operator, as is recommended for ``None``. Equality checks using ``==`` will
also work as expected, returning ``True`` only when the object is compared
with itself.
The name should be set to the name of the variable used to reference the
object, as in the examples above. Otherwise, the sentinel object won't be
able to survive copying or pickling+unpickling while retaining the above
described behavior. Note that when defined in a class scope, the name must
be the fully-qualified name of the variable in the module, for example::
class MyClass:
NotGiven = sentinel('MyClass.NotGiven')
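Putting the pieces together, usage would look as follows. Note that
the ``sentinels`` module does not exist yet: the import below assumes
this PEP is implemented (or that the reference implementation is
installed in its place)::

    import copy
    import pickle
    from sentinels import sentinel  # the module proposed by this PEP

    NotGiven = sentinel('NotGiven')

    def fetch(timeout=NotGiven):
        if timeout is NotGiven:   # identity check, as with None
            timeout = 5.0
        return timeout

    assert fetch() == 5.0
    assert copy.deepcopy(NotGiven) is NotGiven               # copying
    assert pickle.loads(pickle.dumps(NotGiven)) is NotGiven  # pickling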
Type annotations for sentinel values will use `typing.Literal`_.
For example::
def foo(value: int | Literal[NotGiven]) -> None:
...
.. _typing.Literal: https://docs.python.org/3/library/typing.html#typing.Literal
Reference Implementation
========================
The reference implementation is found in a dedicated GitHub repo
[#reference-github-repo]_. A simplified version follows::
    import sys

    def _get_parent_frame():
        # Helper omitted from this excerpt; a minimal stand-in is the
        # frame of the code that called sentinel(), two frames up.
        return sys._getframe(2)

    def sentinel(name, repr=None):
        """Create a unique sentinel object."""
        repr = repr or f'<{name}>'
        module = _get_parent_frame().f_globals.get('__name__', '__main__')
        class_name = _get_class_name(name, module)
        class_namespace = {
            '__repr__': lambda self: repr,
        }
        cls = type(class_name, (), class_namespace)
        cls.__module__ = module
        _get_parent_frame().f_globals[class_name] = cls
        sentinel = cls()
        cls.__new__ = lambda cls_: sentinel
        return sentinel

    def _get_class_name(sentinel_qualname, module_name):
        return '__'.join(['_sentinel_type',
                          module_name.replace('.', '_'),
                          sentinel_qualname.replace('.', '_')])
Note that a dedicated class is created automatically for each sentinel object.
This class is assigned to the namespace of the module from which the
``sentinel()`` call was made, or to that of the ``sentinels`` module itself as
a fallback. These classes have long names comprised of several parts to
ensure their uniqueness. However, these names usually wouldn't be used, since
type annotations should use ``Literal[]`` as described above, and identity
checks should be preferred over type checks.
Rejected Ideas
==============
Use ``NotGiven = object()``
---------------------------
This suffers from all of the drawbacks mentioned in the `Rationale`_ section.
Add a single new sentinel value, e.g. ``MISSING`` or ``Sentinel``
-----------------------------------------------------------------
Since such a value could be used for various things in various places, one
could not always be confident that it would never be a valid value in some use
cases. On the other hand, a dedicated and distinct sentinel value can be used
with confidence without needing to consider potential edge-cases.
Additionally, it is useful to be able to provide a meaningful name and repr
for a sentinel value, specific to the context where it is used.
Finally, this was a very unpopular option in the poll [#poll]_, with only 12%
of the votes voting for it.
Use the existing ``Ellipsis`` sentinel value
--------------------------------------------
This is not the original intended use of Ellipsis, though it has become
increasingly common to use it to define empty class or function blocks instead
of using ``pass``.
Also, similar to a potential new single sentinel value, ``Ellipsis`` can't be
as confidently used in all cases, unlike a dedicated, distinct value.
Use a single-valued enum
------------------------
The suggested idiom is:
::
class NotGivenType(Enum):
NotGiven = 'NotGiven'
NotGiven = NotGivenType.NotGiven
Besides the excessive repetition, the repr is overly long:
``<NotGivenType.NotGiven: 'NotGiven'>``. A shorter repr can be defined, at
the expense of a bit more code and yet more repetition.
Finally, this option was the least popular among the nine options in the poll
[#poll]_, being the only option to receive no votes.
A sentinel class decorator
--------------------------
The suggested interface:
::
@sentinel(repr='<NotGiven>')
class NotGivenType: pass
NotGiven = NotGivenType()
While this allowed for a very simple and clear implementation, the interface
is too verbose, repetitive, and difficult to remember.
Using class objects
-------------------
Since classes are inherently singletons, using a class as a sentinel value
makes sense and allows for a simple implementation.
The simplest version of this idiom is:
::
class NotGiven: pass
To have a clear repr, one could define ``__repr__``:
::
class NotGiven:
def __repr__(self):
return '<NotGiven>'
... or use a meta-class:
::
class NotGiven(metaclass=SentinelMeta): pass
However, none of these implementations provide a dedicated type for the
sentinel, which is considered desirable for strict typing. A dedicated type
could be created by a meta-class or class decorator, but at that point the
implementation would become much more complex and lose its advantages over
the chosen implementation.
Additionally, using classes this way is unusual and could be confusing.
Define a recommended "standard" idiom, without supplying an implementation
--------------------------------------------------------------------------
Most common existing idioms have significant drawbacks. So far, no idiom
has been found that is clear and concise while avoiding these drawbacks.
Also, in the poll on this subject [#poll]_, the options for recommending an
idiom were unpopular, with the highest-voted option being voted for by only
25% of the voters.
Additional Notes
================
* This PEP and the initial implementation are drafted in a dedicated GitHub
repo [#reference-github-repo]_.
* The support for copying/unpickling works when defined in a module's scope or
a (possibly nested) class's scope. Note that in the latter case, the name
provided as the first parameter must be the fully-qualified name of the
variable in the module::
class MyClass:
NotGiven = sentinel('MyClass.NotGiven', repr='<NotGiven>')
References
==========
.. [#python-dev-thread] Python-Dev mailing list: `The repr of a sentinel <https://mail.python.org/archives/list/python-dev@python.org/thread/ZLVPD2OISI7M4POMTR2FCQTE6TPMPTO3/>`_
.. [#list-of-sentinels-in-stdlib] Python-Dev mailing list: `"The stdlib contains tons of sentinels" <https://mail.python.org/archives/list/python-dev@python.org/message/JBYXQH3NV3YBF7P2HLHB5CD6V3GVTY55/>`_
.. [#poll] discuss.python.org Poll: `Sentinel Values in the Stdlib <https://discuss.python.org/t/sentinel-values-in-the-stdlib/8810/>`_
.. [#reference-github-repo] `Reference implementation at the taleinat/python-stdlib-sentinels GitHub repo <https://github.com/taleinat/python-stdlib-sentinels>`_
.. [5] `bpo-44123: Make function parameter sentinel values true singletons <https://bugs.python.org/issue44123>`_
.. [6] `The "sentinels" package on PyPI <https://pypi.org/project/sentinels/>`_
.. [7] `The "sentinel" package on PyPI <https://pypi.org/project/sentinel/>`_
.. [8] `Discussion thread about type signatures for these sentinels on the typing-sig mailing list <https://mail.python.org/archives/list/typing-sig@python.org/thread/NDEJ7UCDPINP634GXWDARVMTGDVSNBKV/#LVCPTY26JQJW7NKGKGAZXHQKWVW7GOGL>`_
Copyright
=========
This document is placed in the public domain or under the
CC0-1.0-Universal license, whichever is more permissive.
..
Local Variables:
mode: indented-text
indent-tabs-mode: nil
sentence-end-double-space: t
fill-column: 70
coding: utf-8
End:

View File

@ -1,101 +0,0 @@
PEP: 664
Title: Python 3.11 Release Schedule
Version: $Revision$
Last-Modified: $Date$
Author: Pablo Galindo Salgado <pablogsal@python.org>
Status: Draft
Type: Informational
Content-Type: text/x-rst
Created: 12-Jul-2021
Python-Version: 3.11
Abstract
========
This document describes the development and release schedule for
Python 3.11. The schedule primarily concerns itself with PEP-sized
items.
.. Small features may be added up to the first beta
release. Bugs may be fixed until the final release,
which is planned for end of October 2022.
Release Manager and Crew
========================
- 3.11 Release Manager: Pablo Galindo Salgado
- Windows installers: Steve Dower
- Mac installers: Ned Deily
- Documentation: Julien Palard
Release Schedule
================
3.11.0 schedule
---------------
Note: the dates below use a 17-month development period that results
in a 12-month release cadence between major versions, as defined by
PEP 602.
Actual:
- 3.11 development begins: Monday, 2021-05-03
- 3.11.0 alpha 1: Monday, 2021-10-05
Expected:
- 3.11.0 alpha 2: Tuesday, 2021-11-02
- 3.11.0 alpha 3: Monday, 2021-12-06
- 3.11.0 alpha 4: Monday, 2022-01-03
- 3.11.0 alpha 5: Wednesday, 2022-02-02
- 3.11.0 alpha 6: Monday, 2022-02-28
- 3.11.0 alpha 7: Tuesday, 2022-04-05
- 3.11.0 beta 1: Friday, 2022-05-06
(No new features beyond this point.)
- 3.11.0 beta 2: Monday, 2022-05-30
- 3.11.0 beta 3: Thursday, 2022-06-16
- 3.11.0 beta 4: Saturday, 2022-07-09
- 3.11.0 candidate 1: Monday, 2022-08-01
- 3.11.0 candidate 2: Monday, 2022-09-05
- 3.11.0 final: Monday, 2022-10-03
Subsequent bugfix releases every two months.
3.11 Lifespan
-------------
3.11 will receive bugfix updates approximately every 2 months for
approximately 18 months. Some time after the release of 3.12.0 final,
the ninth and final 3.11 bugfix update will be released. After that,
it is expected that security updates (source only) will be released
until 5 years after the release of 3.11.0 final, so until approximately
October 2027.
Features for 3.11
=================
Some of the notable features of Python 3.11 include:
**Watch this space :)**
Copyright
=========
This document has been placed in the public domain.
..
Local Variables:
mode: indented-text
indent-tabs-mode: nil
sentence-end-double-space: t
fill-column: 72
coding: utf-8
End:

View File

@ -1,367 +0,0 @@
PEP: 670
Title: Convert macros to functions in the Python C API
Author: Erlend Egeberg Aasland <erlend.aasland@protonmail.com>,
Victor Stinner <vstinner@python.org>
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 19-Oct-2021
Python-Version: 3.11
Abstract
========
Convert macros to static inline functions or regular functions.
Remove the return value from macros that have one when they should
not, to aid detecting bugs in C extensions when the C API is
misused.
Some function arguments are still cast to ``PyObject*`` to prevent
emitting new compiler warnings.
Rationale
=========
The use of macros may have unintended adverse effects that are hard to
avoid, even for experienced C developers. Some issues have been known
for years, while others have been discovered recently in Python.
Working around macro pitfalls makes the macro code harder to read and
to maintain.
Converting macros to functions has multiple advantages:
* By design, functions don't have macro pitfalls.
* Argument types and return types are well defined.
* Debuggers and profilers can retrieve the name of inlined functions.
* Debuggers can put breakpoints on inlined functions.
* Variables have a well defined scope.
* Code is usually easier to read and to maintain than similar macro
code. Functions don't need the following workarounds for macro
pitfalls:
* Add parentheses around arguments.
* Use line continuation characters if the function is written on
multiple lines.
* Add commas to execute multiple expressions.
* Use ``do { ... } while (0)`` to write multiple statements.
Converting macros and static inline functions to regular functions makes
these regular functions accessible to projects which use Python but
cannot use macros and static inline functions.
Macro Pitfalls
==============
The `GCC documentation
<https://gcc.gnu.org/onlinedocs/cpp/Macro-Pitfalls.html>`_ lists several
common macro pitfalls:
- Misnesting
- Operator precedence problems
- Swallowing the semicolon
- Duplication of side effects
- Self-referential macros
- Argument prescan
- Newlines in arguments
Performance and inlining
========================
Static inline functions are a feature added in the C99 standard. Modern C
compilers have efficient heuristics to decide if a function should be
inlined or not.
When a C compiler decides not to inline, there is likely a good reason.
For example, inlining would reuse a register, which requires
saving and restoring the register value on the stack, increasing
stack memory usage or being less efficient.
Debug build
-----------
When Python is built in debug mode, most compiler optimizations are
disabled. For example, Visual Studio disables inlining. Benchmarks must
not be run on a Python debug build, only on a release build: using LTO and
PGO is recommended for reliable benchmarks. PGO helps the compiler
decide whether a function should be inlined.
Force inlining
--------------
The ``Py_ALWAYS_INLINE`` macro can be used to force inlining. This macro
uses ``__attribute__((always_inline))`` with GCC and Clang, and
``__forceinline`` with MSC.
So far, previous attempts to use ``Py_ALWAYS_INLINE`` didn't show any
benefit and were abandoned. See for example: `bpo-45094
<https://bugs.python.org/issue45094>`_: "Consider using
``__forceinline`` and ``__attribute__((always_inline))`` on static
inline functions (``Py_INCREF``, ``Py_TYPE``) for debug build".
When the ``Py_INCREF()`` macro was converted to a static inline
function in 2018 (`commit
<https://github.com/python/cpython/commit/2aaf0c12041bcaadd7f2cc5a54450eefd7a6ff12>`__),
it was decided not to force inlining. The machine code was analyzed with
multiple C compilers and compiler options: ``Py_INCREF()`` was always
inlined without having to force inlining. The only case where it was not
inlined was the debug build. See discussion in the `bpo-35059
<https://bugs.python.org/issue35059>`_: "Convert ``Py_INCREF()`` and
``PyObject_INIT()`` to inlined functions".
Disable inlining
----------------
On the other side, the ``Py_NO_INLINE`` macro can be used to disable
inlining. It is useful to reduce the stack memory usage. It is
especially useful on a LTO+PGO build which is more aggressive to inline
code: see `bpo-33720 <https://bugs.python.org/issue33720>`_. The
``Py_NO_INLINE`` macro uses ``__attribute__ ((noinline))`` with GCC and
Clang, and ``__declspec(noinline)`` with MSC.
Specification
=============
Convert macros to static inline functions
-----------------------------------------
Most macros should be converted to static inline functions to prevent
`macro pitfalls`_.
The following macros should not be converted:
* Empty macros. Example: ``#define Py_HAVE_CONDVAR``.
* Macros only defining a number, even if a constant with a well defined
type would be better. Example: ``#define METH_VARARGS 0x0001``.
* Compatibility layer for different C compilers, C language extensions,
or recent C features.
Example: ``#define Py_ALWAYS_INLINE __attribute__((always_inline))``.
Convert static inline functions to regular functions
----------------------------------------------------
The performance impact of converting static inline functions to regular
functions should be measured with benchmarks. If there is a significant
slowdown, there should be a good reason to do the conversion. One reason
can be hiding implementation details.
Using static inline functions in the internal C API is fine: the
internal C API exposes implementation details by design and should not be
used outside Python.
Cast to PyObject*
-----------------
When a macro is converted to a function and the macro casts its
arguments to ``PyObject*``, the new function comes with a new macro
which casts its arguments to ``PyObject*`` to prevent emitting new compiler
warnings. So the converted functions still accept pointers to structures
inheriting from ``PyObject`` (ex: ``PyTupleObject``).
For example, the ``Py_TYPE(obj)`` macro casts its ``obj`` argument to
``PyObject*``::
#define _PyObject_CAST_CONST(op) ((const PyObject*)(op))
static inline PyTypeObject* _Py_TYPE(const PyObject *ob) {
return ob->ob_type;
}
#define Py_TYPE(ob) _Py_TYPE(_PyObject_CAST_CONST(ob))
The undocumented private ``_Py_TYPE()`` function must not be called
directly. Only the documented public ``Py_TYPE()`` macro must be used.
Later, the cast can be removed on a case by case basis, but that is out
of scope for this PEP.
Remove the return value
-----------------------
When a macro is implemented as an expression, it has an implicit return
value. In some cases, the macro must not have a return value and can be
misused in third party C extensions. See `bpo-30459
<https://bugs.python.org/issue30459>`_ for the example of
``PyList_SET_ITEM()`` and ``PyCell_SET()`` macros. It is not easy to
notice this issue while reviewing macro code.
These macros are converted to functions using the ``void`` return type
to remove their return value. Removing the return value aids detecting
bugs in C extensions when the C API is misused.
Backwards Compatibility
=======================
Removing the return value of macros is an incompatible API change made
on purpose: see the `Remove the return value`_ section.
Rejected Ideas
==============
Keep macros, but fix some macro issues
--------------------------------------
Converting macros to functions is not needed to `remove the return
value`_: casting a macro return value to ``void`` also fixes the issue.
For example, the ``PyList_SET_ITEM()`` macro was already fixed like
that.
Macros are always "inlined" with any C compiler.
The duplication of side effects can be worked around in the caller of
the macro.
People using macros should be considered "consenting adults". People who
feel unsafe with macros should simply not use them.
Examples of hard to read macros
===============================
_Py_NewReference()
------------------
Example showing the usage of an ``#ifdef`` inside a macro.
Python 3.7 macro (simplified code)::
#ifdef COUNT_ALLOCS
# define _Py_INC_TPALLOCS(OP) inc_count(Py_TYPE(OP))
# define _Py_COUNT_ALLOCS_COMMA ,
#else
# define _Py_INC_TPALLOCS(OP)
# define _Py_COUNT_ALLOCS_COMMA
#endif /* COUNT_ALLOCS */
#define _Py_NewReference(op) ( \
_Py_INC_TPALLOCS(op) _Py_COUNT_ALLOCS_COMMA \
Py_REFCNT(op) = 1)
Python 3.8 function (simplified code)::
static inline void _Py_NewReference(PyObject *op)
{
_Py_INC_TPALLOCS(op);
Py_REFCNT(op) = 1;
}
PyObject_INIT()
---------------
Example showing the usage of commas in a macro.
Python 3.7 macro::
#define PyObject_INIT(op, typeobj) \
( Py_TYPE(op) = (typeobj), _Py_NewReference((PyObject *)(op)), (op) )
Python 3.8 function (simplified code)::
static inline PyObject*
_PyObject_INIT(PyObject *op, PyTypeObject *typeobj)
{
Py_TYPE(op) = typeobj;
_Py_NewReference(op);
return op;
}
#define PyObject_INIT(op, typeobj) \
_PyObject_INIT(_PyObject_CAST(op), (typeobj))
The function doesn't need the line continuation character. It has an
explicit ``"return op;"`` rather than a surprising ``", (op)"`` at the
end of the macro. It uses one short statement per line, rather than a
single long line. Inside the function, the *op* argument has a well
defined type: ``PyObject*``.
Macros converted to functions since Python 3.8
==============================================
Macros converted to static inline functions
-------------------------------------------
Python 3.8:
* ``Py_DECREF()``
* ``Py_INCREF()``
* ``Py_XDECREF()``
* ``Py_XINCREF()``
* ``PyObject_INIT()``
* ``PyObject_INIT_VAR()``
* ``_PyObject_GC_UNTRACK()``
* ``_Py_Dealloc()``
Python 3.10:
* ``Py_REFCNT()``
Python 3.11:
* ``Py_TYPE()``
* ``Py_SIZE()``
Macros converted to regular functions
-------------------------------------
Python 3.9:
* ``PyIndex_Check()``
* ``PyObject_CheckBuffer()``
* ``PyObject_GET_WEAKREFS_LISTPTR()``
* ``PyObject_IS_GC()``
* ``PyObject_NEW()``: alias to ``PyObject_New()``
* ``PyObject_NEW_VAR()``: alias to ``PyObject_NewVar()``
To avoid any risk of performance slowdown on Python built without LTO,
private static inline functions have been added to the internal C API:
* ``_PyIndex_Check()``
* ``_PyObject_IS_GC()``
* ``_PyType_HasFeature()``
* ``_PyType_IS_GC()``
Static inline functions converted to regular functions
-------------------------------------------------------
Python 3.11:
* ``PyObject_CallOneArg()``
* ``PyObject_Vectorcall()``
* ``PyVectorcall_Function()``
* ``_PyObject_FastCall()``
To avoid any risk of performance slowdown on Python built without LTO, a
private static inline function has been added to the internal C API:
* ``_PyVectorcall_FunctionInline()``
References
==========
* `bpo-45490 <https://bugs.python.org/issue45490>`_:
[meta][C API] Avoid C macro pitfalls and usage of static inline
functions (October 2021).
* `What to do with unsafe macros
<https://discuss.python.org/t/what-to-do-with-unsafe-macros/7771>`_
(March 2021).
* `bpo-43502 <https://bugs.python.org/issue43502>`_:
[C-API] Convert obvious unsafe macros to static inline functions
(March 2021).
Copyright
=========
This document is placed in the public domain or under the
CC0-1.0-Universal license, whichever is more permissive.

Binary file not shown.


View File

@ -1,166 +0,0 @@
PEP: 8103
Title: 2022 Term steering council election
Version: $Revision$
Last-Modified: $Date$
Author: Ewa Jodlowska <ewa@python.org>, Ee W. Durbin III <ee@python.org>, Joe Carey <joe@python.org>
Sponsor: Barry Warsaw <barry@python.org>
Status: Draft
Type: Informational
Content-Type: text/x-rst
Created: 04-Oct-2021
Abstract
========
This document describes the schedule and other details of the December
2021 election for the Python steering council, as specified in
PEP 13. This is the steering council election for the 2022 term
(i.e. Python 3.11).
Election Administration
=======================
TBD: Determine election administrators
Schedule
========
There will be a two-week nomination period, followed by a two-week
vote.
The nomination period shall be: November 1, 2021 through November 16,
2021 12:00 UTC (The end of November 15, 2021 `Anywhere on Earth
<https://www.ieee802.org/16/aoe.html>`_).
The voting period shall be: December 1, 2021 12:00 UTC through
December 16, 2021 12:00 UTC (The end of December 15, 2021 `Anywhere on
Earth <https://www.ieee802.org/16/aoe.html>`_).
Candidates
==========
Candidates must be nominated by a core team member. If the candidate
is a core team member, they may nominate themselves.
Nominees (in alphabetical order):
- TBD
Withdrawn nominations:
- TBD
Voter Roll
==========
All active Python core team members are eligible to vote. Active status
is determined as described in `PEP 13 <https://www.python.org/dev/peps/pep-0013/#membership>`_
and implemented via the software at `python/voters <https://github.com/python/voters>`_ [1]_.
Ballots will be distributed based on the `The Python Voter Roll for this
election
<https://github.com/python/voters/blob/master/voter-files/>`_
[1]_.
While this file is not public as it contains private email addresses, the
`Complete Voter Roll`_ by name will be made available when the roll is
created.
Election Implementation
=======================
The election will be conducted using the `Helios Voting Service
<https://heliosvoting.org>`__.
Configuration
-------------
.. note::
These details are subject to change.
Short name: ``2022-python-steering-council``
Name: ``2022 Python Steering Council Election``
Description: ``Election for the Python steering council, as specified
in PEP 13. This is steering council election for the 2022 term.``
type: ``Election``
Use voter aliases: ``[X]``
Randomize answer order: ``[X]``
Private: ``[X]``
Help Email Address: ``psf-election@python.org``
Voting starts at: ``December 1, 2021 00:00 UTC``
Voting ends at: ``December 16, 2021 12:00 UTC``
This will create an election in which:
* Voting is not open to the public, only those on the `Voter Roll`_ may
participate. Ballots will be emailed when voting starts.
* Candidates are presented in random order, to help avoid bias.
* Voter identities and ballots are protected against cryptographic advances.
Questions
---------
Question 1
~~~~~~~~~~
Select between ``0`` and ``- (approval)`` answers. Result Type: ``absolute``
Question: ``Select candidates for the Python Steering Council``
Answer #1 - #N: ``Candidates from Candidates_ Section``
Results
=======
- TBD
Copyright
=========
This document has been placed in the public domain.
Complete Voter Roll
===================
Active Python core developers
-----------------------------
- TBD
.. [1] This repository is private and accessible only to Python Core
Developers, administrators, and Python Software Foundation Staff as it
contains personal email addresses.
..
Local Variables:
mode: indented-text
indent-tabs-mode: nil
sentence-end-double-space: t
fill-column: 70
coding: utf-8
End:

344
pep.css
View File

@ -1,344 +0,0 @@
/*
:Author: David Goodger
:Contact: goodger@python.org
:date: $Date$
:version: $Revision$
:copyright: This stylesheet has been placed in the public domain.
Default cascading style sheet for the PEP HTML output of Docutils.
*/
/* "! important" is used here to override other ``margin-top`` and
``margin-bottom`` styles that are later in the stylesheet or
more specific. See http://www.w3.org/TR/CSS1#the-cascade */
.first {
margin-top: 0 ! important }
.last, .with-subtitle {
margin-bottom: 0 ! important }
.hidden {
display: none }
.navigation {
width: 100% ;
background: #99ccff ;
margin-top: 0px ;
margin-bottom: 0px }
.navigation .navicon {
width: 150px ;
height: 35px }
.navigation .textlinks {
padding-left: 1em ;
text-align: left }
.navigation td, .navigation th {
padding-left: 0em ;
padding-right: 0em ;
vertical-align: middle }
.rfc2822 {
margin-top: 0.5em ;
margin-left: 0.5em ;
margin-right: 0.5em ;
margin-bottom: 0em }
.rfc2822 td {
text-align: left }
.rfc2822 th.field-name {
text-align: right ;
font-family: sans-serif ;
padding-right: 0.5em ;
font-weight: bold ;
margin-bottom: 0em }
a.toc-backref {
text-decoration: none ;
color: black }
blockquote.epigraph {
margin: 2em 5em ; }
body {
margin: 0px ;
margin-bottom: 1em ;
padding: 0px }
dl.docutils dd {
margin-bottom: 0.5em }
div.section {
margin-left: 1em ;
margin-right: 1em ;
margin-bottom: 1.5em }
div.section div.section {
margin-left: 0em ;
margin-right: 0em ;
margin-top: 1.5em }
div.abstract {
margin: 2em 5em }
div.abstract p.topic-title {
font-weight: bold ;
text-align: center }
div.admonition, div.attention, div.caution, div.danger, div.error,
div.hint, div.important, div.note, div.tip, div.warning {
margin: 2em ;
border: medium outset ;
padding: 1em }
div.admonition p.admonition-title, div.hint p.admonition-title,
div.important p.admonition-title, div.note p.admonition-title,
div.tip p.admonition-title {
font-weight: bold ;
font-family: sans-serif }
div.attention p.admonition-title, div.caution p.admonition-title,
div.danger p.admonition-title, div.error p.admonition-title,
div.warning p.admonition-title {
color: red ;
font-weight: bold ;
font-family: sans-serif }
/* Uncomment (and remove this text!) to get reduced vertical space in
compound paragraphs.
div.compound .compound-first, div.compound .compound-middle {
margin-bottom: 0.5em }
div.compound .compound-last, div.compound .compound-middle {
margin-top: 0.5em }
*/
div.dedication {
margin: 2em 5em ;
text-align: center ;
font-style: italic }
div.dedication p.topic-title {
font-weight: bold ;
font-style: normal }
div.figure {
margin-left: 2em ;
margin-right: 2em }
div.footer, div.header {
clear: both;
font-size: smaller }
div.footer {
margin-left: 1em ;
margin-right: 1em }
div.line-block {
display: block ;
margin-top: 1em ;
margin-bottom: 1em }
div.line-block div.line-block {
margin-top: 0 ;
margin-bottom: 0 ;
margin-left: 1.5em }
div.sidebar {
margin-left: 1em ;
border: medium outset ;
padding: 1em ;
background-color: #ffffee ;
width: 40% ;
float: right ;
clear: right }
div.sidebar p.rubric {
font-family: sans-serif ;
font-size: medium }
div.system-messages {
margin: 5em }
div.system-messages h1 {
color: red }
div.system-message {
border: medium outset ;
padding: 1em }
div.system-message p.system-message-title {
color: red ;
font-weight: bold }
div.topic {
margin: 2em }
h1.section-subtitle, h2.section-subtitle, h3.section-subtitle,
h4.section-subtitle, h5.section-subtitle, h6.section-subtitle {
margin-top: 0.4em }
h1 {
font-family: sans-serif ;
font-size: large }
h2 {
font-family: sans-serif ;
font-size: medium }
h3 {
font-family: sans-serif ;
font-size: small }
h4 {
font-family: sans-serif ;
font-style: italic ;
font-size: small }
h5 {
font-family: sans-serif;
font-size: x-small }
h6 {
font-family: sans-serif;
font-style: italic ;
font-size: x-small }
hr.docutils {
width: 75% }
img.align-left {
clear: left }
img.align-right {
clear: right }
img.borderless {
border: 0 }
ol.simple, ul.simple {
margin-bottom: 1em }
ol.arabic {
list-style: decimal }
ol.loweralpha {
list-style: lower-alpha }
ol.upperalpha {
list-style: upper-alpha }
ol.lowerroman {
list-style: lower-roman }
ol.upperroman {
list-style: upper-roman }
p.attribution {
text-align: right ;
margin-left: 50% }
p.caption {
font-style: italic }
p.credits {
font-style: italic ;
font-size: smaller }
p.label {
white-space: nowrap }
p.rubric {
font-weight: bold ;
font-size: larger ;
color: maroon ;
text-align: center }
p.sidebar-title {
font-family: sans-serif ;
font-weight: bold ;
font-size: larger }
p.sidebar-subtitle {
font-family: sans-serif ;
font-weight: bold }
p.topic-title {
font-family: sans-serif ;
font-weight: bold }
pre.address {
margin-bottom: 0 ;
margin-top: 0 ;
font-family: serif ;
font-size: 100% }
pre.literal-block, pre.doctest-block {
margin-left: 2em ;
margin-right: 2em }
span.classifier {
font-family: sans-serif ;
font-style: oblique }
span.classifier-delimiter {
font-family: sans-serif ;
font-weight: bold }
span.interpreted {
font-family: sans-serif }
span.option {
white-space: nowrap }
span.option-argument {
font-style: italic }
span.pre {
white-space: pre }
span.problematic {
color: red }
span.section-subtitle {
/* font-size relative to parent (h1..h6 element) */
font-size: 80% }
table.citation {
border-left: solid 1px gray;
margin-left: 1px }
table.docinfo {
margin: 2em 4em }
table.docutils {
margin-top: 0.5em ;
margin-bottom: 0.5em }
table.footnote {
border-left: solid 1px black;
margin-left: 1px }
table.docutils td, table.docutils th,
table.docinfo td, table.docinfo th {
padding-left: 0.5em ;
padding-right: 0.5em ;
vertical-align: top }
td.num {
text-align: right }
th.field-name {
font-weight: bold ;
text-align: left ;
white-space: nowrap ;
padding-left: 0 }
h1 tt.docutils, h2 tt.docutils, h3 tt.docutils,
h4 tt.docutils, h5 tt.docutils, h6 tt.docutils {
font-size: 100% }
ul.auto-toc {
list-style-type: none }

View File

@ -1 +0,0 @@
# Empty

View File

@ -1,43 +0,0 @@
# -*- coding: utf-8 -*-
text_type = str
title_length = 55
author_length = 40
table_separator = "== ==== " + "="*title_length + " " + "="*author_length
column_format = (
'%(type)1s%(status)1s %(number)4s %(title)-{title_length}s %(authors)-s'
).format(title_length=title_length)
header = """\
PEP: 0
Title: Index of Python Enhancement Proposals (PEPs)
Version: N/A
Last-Modified: %s
Author: python-dev <python-dev@python.org>
Status: Active
Type: Informational
Content-Type: text/x-rst
Created: 13-Jul-2000
"""
intro = """\
This PEP contains the index of all Python Enhancement Proposals,
known as PEPs. PEP numbers are assigned by the PEP editors[1_], and
once assigned are never changed. The version control history [2_] of
the PEP texts represent their historical record.
"""
references = """\
.. [1] PEP 1: PEP Purpose and Guidelines
.. [2] View PEP history online: https://github.com/python/peps
"""
footer = """ \
..
Local Variables:
mode: indented-text
indent-tabs-mode: nil
sentence-end-double-space: t
fill-column: 70
coding: utf-8
End:\
"""

View File

@ -1,290 +0,0 @@
"""Code to handle the output of PEP 0."""
from __future__ import absolute_import
from __future__ import print_function
import datetime
import sys
import unicodedata
from operator import attrgetter
from . import constants
from .pep import PEP, PEPError
# This is a list of reserved PEP numbers. Reservations are not to be used for
# the normal PEP number allocation process - just give out the next available
# PEP number. These are for "special" numbers that may be used for semantic,
# humorous, or other such reasons, e.g. 401, 666, 754.
#
# PEP numbers may only be reserved with the approval of a PEP editor. Fields
# here are the PEP number being reserved and the claimants for the PEP.
# Although the output is sorted when PEP 0 is generated, please keep this list
# sorted as well.
RESERVED = [
(801, 'Warsaw'),
]
indent = u' '
def emit_column_headers(output):
"""Output the column headers for the PEP indices."""
column_headers = {'status': '.', 'type': '.', 'number': 'PEP',
'title': 'PEP Title', 'authors': 'PEP Author(s)'}
print(constants.table_separator, file=output)
print(constants.column_format % column_headers, file=output)
print(constants.table_separator, file=output)
def sort_peps(peps):
"""Sort PEPs into meta, informational, accepted, open, finished,
and essentially dead."""
meta = []
info = []
provisional = []
accepted = []
open_ = []
finished = []
historical = []
deferred = []
dead = []
for pep in peps:
# Order of 'if' statement important. Key Status values take precedence
# over Type value, and vice-versa.
if pep.status == 'Draft':
open_.append(pep)
elif pep.status == 'Deferred':
deferred.append(pep)
elif pep.type_ == 'Process':
if pep.status == "Active":
meta.append(pep)
elif pep.status in ("Withdrawn", "Rejected"):
dead.append(pep)
else:
historical.append(pep)
elif pep.status in ('Rejected', 'Withdrawn',
'Incomplete', 'Superseded'):
dead.append(pep)
elif pep.type_ == 'Informational':
# Hack until the conflict between the use of "Final"
# for both API definition PEPs and other (actually
# obsolete) PEPs is addressed
if (pep.status == "Active" or
"Release Schedule" not in pep.title):
info.append(pep)
else:
historical.append(pep)
elif pep.status == 'Provisional':
provisional.append(pep)
elif pep.status in ('Accepted', 'Active'):
accepted.append(pep)
elif pep.status == 'Final':
finished.append(pep)
else:
raise PEPError("unsorted (%s/%s)" %
(pep.type_, pep.status),
pep.filename, pep.number)
return (meta, info, provisional, accepted, open_,
finished, historical, deferred, dead)
def verify_email_addresses(peps):
authors_dict = {}
for pep in peps:
for author in pep.authors:
# If this is the first time we have come across an author, add them.
if author not in authors_dict:
authors_dict[author] = [author.email]
else:
found_emails = authors_dict[author]
# If no email exists for the author, use the new value.
if not found_emails[0]:
authors_dict[author] = [author.email]
# If the new email is an empty string, move on.
elif not author.email:
continue
# If the email has not been seen, add it to the list.
elif author.email not in found_emails:
authors_dict[author].append(author.email)
valid_authors_dict = {}
too_many_emails = []
for author, emails in authors_dict.items():
if len(emails) > 1:
too_many_emails.append((author.first_last, emails))
else:
valid_authors_dict[author] = emails[0]
if too_many_emails:
err_output = []
for author, emails in too_many_emails:
err_output.append(" %s: %r" % (author, emails))
raise ValueError("some authors have more than one email address "
"listed:\n" + '\n'.join(err_output))
return valid_authors_dict
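# --- Hedged usage sketch (stub objects with illustrative names; not part
# of this diff): two PEPs listing the same author with conflicting
# addresses trigger the ValueError raised above.
class _StubAuthor:
    def __init__(self, first_last, email):
        self.first_last, self.email = first_last, email
    def __hash__(self):
        return hash(self.first_last)
    def __eq__(self, other):
        return self.first_last == other.first_last
class _StubPEP:
    def __init__(self, authors):
        self.authors = authors
try:
    verify_email_addresses([
        _StubPEP([_StubAuthor("Ann Example", "ann@example.org")]),
        _StubPEP([_StubAuthor("Ann Example", "ann@example.net")]),
    ])
except ValueError as exc:
    print(exc)  # some authors have more than one email address listed: ...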
def sort_authors(authors_dict):
authors_list = list(authors_dict.keys())
authors_list.sort(key=attrgetter('sort_by'))
return authors_list
def normalized_last_first(name):
return len(unicodedata.normalize('NFC', name.last_first))
def emit_title(text, anchor, output, *, symbol="="):
print(".. _{anchor}:\n".format(anchor=anchor), file=output)
print(text, file=output)
print(symbol*len(text), file=output)
print(file=output)
def emit_subtitle(text, anchor, output):
emit_title(text, anchor, output, symbol="-")
def emit_pep_category(output, category, anchor, peps):
emit_subtitle(category, anchor, output)
emit_column_headers(output)
for pep in peps:
print(pep, file=output)
print(constants.table_separator, file=output)
print(file=output)
def write_pep0(peps, output=sys.stdout):
# PEP metadata
today = datetime.date.today().strftime("%Y-%m-%d")
print(constants.header % today, file=output)
print(file=output)
# Introduction
emit_title("Introduction", "intro", output)
print(constants.intro, file=output)
print(file=output)
# PEPs by category
(meta, info, provisional, accepted, open_,
finished, historical, deferred, dead) = sort_peps(peps)
emit_title("Index by Category", "by-category", output)
emit_pep_category(
category="Meta-PEPs (PEPs about PEPs or Processes)",
anchor="by-category-meta",
peps=meta,
output=output,
)
emit_pep_category(
category="Other Informational PEPs",
anchor="by-category-other-info",
peps=info,
output=output,
)
emit_pep_category(
category="Provisional PEPs (provisionally accepted; interface may still change)",
anchor="by-category-provisional",
peps=provisional,
output=output,
)
emit_pep_category(
category="Accepted PEPs (accepted; may not be implemented yet)",
anchor="by-category-accepted",
peps=accepted,
output=output,
)
emit_pep_category(
category="Open PEPs (under consideration)",
anchor="by-category-open",
peps=open_,
output=output,
)
emit_pep_category(
category="Finished PEPs (done, with a stable interface)",
anchor="by-category-finished",
peps=finished,
output=output,
)
emit_pep_category(
category="Historical Meta-PEPs and Informational PEPs",
anchor="by-category-historical",
peps=historical,
output=output,
)
emit_pep_category(
category="Deferred PEPs (postponed pending further research or updates)",
anchor="by-category-deferred",
peps=deferred,
output=output,
)
emit_pep_category(
category="Abandoned, Withdrawn, and Rejected PEPs",
anchor="by-category-abandoned",
peps=dead,
output=output,
)
print(file=output)
# PEPs by number
emit_title("Numerical Index", "by-pep-number", output)
emit_column_headers(output)
prev_pep = 0
for pep in peps:
if pep.number - prev_pep > 1:
print(file=output)
print(constants.text_type(pep), file=output)
prev_pep = pep.number
print(constants.table_separator, file=output)
print(file=output)
# Reserved PEP numbers
emit_title('Reserved PEP Numbers', "reserved", output)
emit_column_headers(output)
for number, claimants in sorted(RESERVED):
print(constants.column_format % {
'type': '.',
'status': '.',
'number': number,
'title': 'RESERVED',
'authors': claimants,
}, file=output)
print(constants.table_separator, file=output)
print(file=output)
# PEP types key
emit_title("PEP Types Key", "type-key", output)
for type_ in sorted(PEP.type_values):
print(u" %s - %s PEP" % (type_[0], type_), file=output)
print(file=output)
print(file=output)
# PEP status key
emit_title("PEP Status Key", "status-key", output)
for status in sorted(PEP.status_values):
# Draft PEPs have no status displayed; Active shares a key with Accepted
if status in ("Active", "Draft"):
continue
if status == "Accepted":
msg = " A - Accepted (Standards Track only) or Active proposal"
else:
msg = " {status[0]} - {status} proposal".format(status=status)
print(msg, file=output)
print(file=output)
print(file=output)
# PEP owners
emit_title("Authors/Owners", "authors", output)
authors_dict = verify_email_addresses(peps)
max_name = max(authors_dict.keys(), key=normalized_last_first)
max_name_len = len(max_name.last_first)
author_table_separator = "="*max_name_len + " " + "="*len("email address")
print(author_table_separator, file=output)
_author_header_fmt = "{name:{max_name_len}} Email Address"
print(_author_header_fmt.format(name="Name", max_name_len=max_name_len), file=output)
print(author_table_separator, file=output)
sorted_authors = sort_authors(authors_dict)
_author_fmt = "{author.last_first:{max_name_len}} {author_email}"
for author in sorted_authors:
# Use the email from authors_dict instead of the one from 'author' as
# the author instance may have an empty email.
_entry = _author_fmt.format(
author=author,
author_email=authors_dict[author],
max_name_len=max_name_len,
)
print(_entry, file=output)
print(author_table_separator, file=output)
print(file=output)
print(file=output)
# References for introduction footnotes
emit_title("References", "references", output)
print(constants.references, file=output)
print(constants.footer, file=output)

View File

@ -1,316 +0,0 @@
# -*- coding: utf-8 -*-
"""Code for handling object representation of a PEP."""
from __future__ import absolute_import
import re
import sys
import textwrap
import unicodedata
from email.parser import HeaderParser
from . import constants
class PEPError(Exception):
def __init__(self, error, pep_file, pep_number=None):
super(PEPError, self).__init__(error)
self.filename = pep_file
self.number = pep_number
def __str__(self):
error_msg = super(PEPError, self).__str__()
if self.number is not None:
return "PEP %d: %r" % (self.number, error_msg)
else:
return "(%s): %r" % (self.filename, error_msg)
class PEPParseError(PEPError):
pass
class Author(object):
"""Represent PEP authors.
Attributes:
+ first_last : str
The author's full name.
+ last_first : str
Output the author's name in Last, First, Suffix order.
+ first : str
The author's first name. A middle initial may be included.
+ last : str
The author's last name.
+ suffix : str
A person's suffix (can be the empty string).
+ sort_by : str
Modification of the author's last name that should be used for
sorting.
+ email : str
The author's email address.
"""
def __init__(self, author_and_email_tuple):
"""Parse the name and email address of an author."""
name, email = author_and_email_tuple
self.first_last = name.strip()
self.email = email.lower()
last_name_fragment, suffix = self._last_name(name)
name_sep = name.index(last_name_fragment)
self.first = name[:name_sep].rstrip()
self.last = last_name_fragment
if self.last[1] == u'.':
# Add an escape to avoid docutils turning `v.` into `22.`.
self.last = u'\\' + self.last
self.suffix = suffix
if not self.first:
self.last_first = self.last
else:
self.last_first = u', '.join([self.last, self.first])
if self.suffix:
self.last_first += u', ' + self.suffix
if self.last == "van Rossum":
# Special case for our beloved BDFL. :)
if self.first == "Guido":
self.nick = "GvR"
elif self.first == "Just":
self.nick = "JvR"
else:
raise ValueError("unknown van Rossum %r!" % self)
self.last_first += " (%s)" % (self.nick,)
else:
self.nick = self.last
def __hash__(self):
return hash(self.first_last)
def __eq__(self, other):
return self.first_last == other.first_last
@property
def sort_by(self):
name_parts = self.last.split()
for index, part in enumerate(name_parts):
if part[0].isupper():
base = u' '.join(name_parts[index:]).lower()
break
else:
# If no capitals, use the whole string
base = self.last.lower()
return unicodedata.normalize('NFKD', base).encode('ASCII', 'ignore')
def _last_name(self, full_name):
"""Find the last name (or nickname) of a full name.
If no last name (e.g., 'Aahz') then return the full name. If there is
a leading, lowercase portion to the last name (e.g., 'van' or 'von')
then include it. If there is a suffix (e.g., 'Jr.') that is set off
by a comma, then drop the suffix.
"""
name_partition = full_name.partition(u',')
no_suffix = name_partition[0].strip()
suffix = name_partition[2].strip()
name_parts = no_suffix.split()
part_count = len(name_parts)
if part_count == 1 or part_count == 2:
return name_parts[-1], suffix
else:
assert part_count > 2
if name_parts[-2].islower():
return u' '.join(name_parts[-2:]), suffix
else:
return name_parts[-1], suffix
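# --- Illustrative behaviour of the heuristic above (`self` is unused, so
# None suffices here; the names are examples, not taken from this diff):
assert Author._last_name(None, u'Aahz') == (u'Aahz', u'')
assert Author._last_name(None, u'Guido van Rossum') == (u'van Rossum', u'')
assert Author._last_name(None, u'Jim J. Jewett') == (u'Jewett', u'')
assert Author._last_name(None, u'Ann Example, Jr.') == (u'Example', u'Jr.')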
class PEP(object):
"""Representation of PEPs.
Attributes:
+ number : int
PEP number.
+ title : str
PEP title.
+ type_ : str
The type of PEP. Can only be one of the values from
PEP.type_values.
+ status : str
The PEP's status. Value must be found in PEP.status_values.
+ authors : Sequence(Author)
A list of the authors.
"""
# The various RFC 822 headers that are supported.
# The second item in the nested tuples represents whether the header is
# required.
headers = (('PEP', True), ('Title', True), ('Version', False),
('Last-Modified', False), ('Author', True),
('Sponsor', False), ('BDFL-Delegate', False),
('PEP-Delegate', False),
('Discussions-To', False), ('Status', True), ('Type', True),
('Content-Type', False), ('Requires', False),
('Created', True), ('Python-Version', False),
('Post-History', False), ('Replaces', False),
('Superseded-By', False), ('Resolution', False),
)
# Valid values for the Type header.
type_values = (u"Standards Track", u"Informational", u"Process")
# Valid values for the Status header.
# Active PEPs can only be for Informational or Process PEPs.
status_values = (u"Accepted", u"Provisional",
u"Rejected", u"Withdrawn", u"Deferred",
u"Final", u"Active", u"Draft", u"Superseded")
def __init__(self, pep_file):
"""Init object from an open PEP file object."""
# Parse the headers.
self.filename = pep_file
pep_parser = HeaderParser()
metadata = pep_parser.parse(pep_file)
header_order = iter(self.headers)
try:
for header_name in metadata.keys():
current_header, required = next(header_order)
while header_name != current_header and not required:
current_header, required = next(header_order)
if header_name != current_header:
raise PEPError("did not deal with "
"%r before having to handle %r" %
(header_name, current_header),
pep_file.name)
except StopIteration:
raise PEPError("headers missing or out of order",
pep_file.name)
required = False
try:
while not required:
current_header, required = next(header_order)
else:
raise PEPError("PEP is missing its %r" % (current_header,),
pep_file.name)
except StopIteration:
pass
# 'PEP'.
try:
self.number = int(metadata['PEP'])
except ValueError:
raise PEPParseError("PEP number isn't an integer", pep_file.name)
# 'Title'.
self.title = metadata['Title']
# 'Type'.
type_ = metadata['Type']
if type_ not in self.type_values:
raise PEPError('%r is not a valid Type value' % (type_,),
pep_file.name, self.number)
self.type_ = type_
# 'Status'.
status = metadata['Status']
if status not in self.status_values:
if status == "April Fool!":
# See PEP 401 :)
status = "Rejected"
else:
raise PEPError("%r is not a valid Status value" %
(status,), pep_file.name, self.number)
# Special case for Active PEPs.
if (status == u"Active" and
self.type_ not in ("Process", "Informational")):
raise PEPError("Only Process and Informational PEPs may "
"have an Active status", pep_file.name,
self.number)
# Special case for Provisional PEPs.
if (status == u"Provisional" and self.type_ != "Standards Track"):
raise PEPError("Only Standards Track PEPs may "
"have a Provisional status", pep_file.name,
self.number)
self.status = status
# 'Author'.
authors_and_emails = self._parse_author(metadata['Author'])
if len(authors_and_emails) < 1:
raise PEPError("no authors found", pep_file.name,
self.number)
self.authors = list(map(Author, authors_and_emails))
def _parse_author(self, data):
"""Return a list of author names and emails."""
# XXX Consider using email.utils.parseaddr (doesn't work with names
# lacking an email address).
angled = constants.text_type(r'(?P<author>.+?) <(?P<email>.+?)>')
paren = constants.text_type(r'(?P<email>.+?) \((?P<author>.+?)\)')
simple = constants.text_type(r'(?P<author>[^,]+)')
author_list = []
for regex in (angled, paren, simple):
# Watch out for commas separating multiple names.
regex += r'(,\s*)?'
for match in re.finditer(regex, data):
# Watch out for suffixes like 'Jr.' when they are comma-separated
# from the name and thus cause issues when *all* names are only
# separated by commas.
match_dict = match.groupdict()
author = match_dict['author']
if not author.partition(' ')[1] and author.endswith('.'):
prev_author = author_list.pop()
author = ', '.join([prev_author, author])
if u'email' not in match_dict:
email = ''
else:
email = match_dict['email']
author_list.append((author, email))
else:
# If authors were found then stop searching as only expect one
# style of author citation.
if author_list:
break
return author_list
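# --- Hedged sketch of the three citation styles handled above (example
# addresses; assumes constants.text_type is str under Python 3):
assert PEP._parse_author(None, 'Ann Example <ann@example.org>') == [
    ('Ann Example', 'ann@example.org')]
assert PEP._parse_author(None, 'ann@example.org (Ann Example)') == [
    ('Ann Example', 'ann@example.org')]
assert PEP._parse_author(None, 'Ann Example') == [('Ann Example', '')]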
@property
def type_abbr(self):
"""Return the how the type is to be represented in the index."""
return self.type_[0].upper()
@property
def status_abbr(self):
"""Return how the status should be represented in the index."""
if self.status in ('Draft', 'Active'):
return u' '
else:
return self.status[0].upper()
@property
def author_abbr(self):
"""Return the author list as a comma-separated with only last names."""
return u', '.join(x.nick for x in self.authors)
@property
def title_abbr(self):
"""Shorten the title to be no longer than the max title length."""
if len(self.title) <= constants.title_length:
return self.title
wrapped_title = textwrap.wrap(self.title, constants.title_length - 4)
return wrapped_title[0] + u' ...'
def __unicode__(self):
"""Return the line entry for the PEP."""
pep_info = {'type': self.type_abbr, 'number': str(self.number),
'title': self.title_abbr, 'status': self.status_abbr,
'authors': self.author_abbr}
return constants.column_format % pep_info
if sys.version_info[0] > 2:
__str__ = __unicode__

View File

@ -1,710 +0,0 @@
#!/usr/bin/env python3.9
"""Convert PEPs to (X)HTML - courtesy of /F
Usage: %(PROGRAM)s [options] [<peps> ...]
Options:
-u, --user
python.org username
-b, --browse
After generating the HTML, direct your web browser to view it
(using the Python webbrowser module). If both -i and -b are
given, this will browse the on-line HTML; otherwise it will
browse the local HTML. If no pep arguments are given, this
will browse PEP 0.
-i, --install
After generating the HTML, install it and the plaintext source file
(.txt) on python.org. In that case the user's name is used in the scp
and ssh commands, unless "-u username" is given (in which case, it is
used instead). Without -i, -u is ignored.
-l, --local
Same as -i/--install, except install on the local machine. Use this
when logged in to the python.org machine (dinsdale).
-q, --quiet
Turn off verbose messages.
-h, --help
Print this help message and exit.
The optional arguments ``peps`` are either pep numbers, .rst or .txt files.
"""
from __future__ import print_function, unicode_literals
import sys
import os
import re
import glob
import getopt
import errno
import random
import time
from io import open
try:
from html import escape
except ImportError:
from cgi import escape
from docutils import core, nodes, utils
from docutils.readers import standalone
from docutils.transforms import peps, frontmatter, Transform
from docutils.parsers import rst
class DataError(Exception):
pass
REQUIRES = {'python': '2.6',
'docutils': '0.2.7'}
PROGRAM = sys.argv[0]
RFCURL = 'http://www.faqs.org/rfcs/rfc%d.html'
PEPURL = 'pep-%04d.html'
PEPCVSURL = ('https://hg.python.org/peps/file/tip/pep-%04d.txt')
PEPDIRURL = 'http://www.python.org/peps/'
HOST = "dinsdale.python.org" # host for update
HDIR = "/data/ftp.python.org/pub/www.python.org/peps" # target host directory
LOCALVARS = "Local Variables:"
COMMENT = """<!--
This HTML is auto-generated. DO NOT EDIT THIS FILE! If you are writing a new
PEP, see http://www.python.org/peps/pep-0001.html for instructions and links
to templates. DO NOT USE THIS HTML FILE AS YOUR TEMPLATE!
-->"""
# The generated HTML doesn't validate -- you cannot use <hr> and <h3> inside
# <pre> tags. But if I change that, the result doesn't look very nice...
DTD = ('<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"\n'
' "http://www.w3.org/TR/REC-html40/loose.dtd">')
fixpat = re.compile(r"((https?|ftp):[-_a-zA-Z0-9/.+~:?#$=&,]+)|(pep-\d+(.txt|.rst)?)|"
r"(RFC[- ]?(?P<rfcnum>\d+))|"
r"(PEP\s+(?P<pepnum>\d+))|"
r".")
EMPTYSTRING = ''
SPACE = ' '
COMMASPACE = ', '
def usage(code, msg=''):
"""Print usage message and exit. Uses stderr if code != 0."""
if code == 0:
out = sys.stdout
else:
out = sys.stderr
print(__doc__ % globals(), file=out)
if msg:
print(msg, file=out)
sys.exit(code)
def fixanchor(current, match):
text = match.group(0)
link = None
if (text.startswith('http:') or text.startswith('https:')
or text.startswith('ftp:')):
# Strip off trailing punctuation. Pattern taken from faqwiz.
ltext = list(text)
while ltext:
c = ltext.pop()
if c not in '''();:,.?'"<>''':
ltext.append(c)
break
link = EMPTYSTRING.join(ltext)
elif text.startswith('pep-') and text != current:
link = os.path.splitext(text)[0] + ".html"
elif text.startswith('PEP'):
pepnum = int(match.group('pepnum'))
link = PEPURL % pepnum
elif text.startswith('RFC'):
rfcnum = int(match.group('rfcnum'))
link = RFCURL % rfcnum
if link:
return '<a href="%s">%s</a>' % (escape(link), escape(text))
return escape(match.group(0)) # really slow, but it works...
NON_MASKED_EMAILS = [
'peps@python.org',
'python-list@python.org',
'python-dev@python.org',
]
def fixemail(address, pepno):
if address.lower() in NON_MASKED_EMAILS:
# return hyperlinked version of email address
return linkemail(address, pepno)
else:
# return masked version of email address
parts = address.split('@', 1)
return '%s&#32;&#97;t&#32;%s' % (parts[0], parts[1])
def linkemail(address, pepno):
parts = address.split('@', 1)
return ('<a href="mailto:%s&#64;%s?subject=PEP%%20%s">'
'%s&#32;&#97;t&#32;%s</a>'
% (parts[0], parts[1], pepno, parts[0], parts[1]))
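# --- Illustrative outputs (example address and PEP number, not from this
# diff); unlisted addresses are masked, NON_MASKED_EMAILS are hyperlinked:
assert fixemail('ann@example.org', '1') == 'ann&#32;&#97;t&#32;example.org'
assert fixemail('peps@python.org', '1') == (
    '<a href="mailto:peps&#64;python.org?subject=PEP%201">'
    'peps&#32;&#97;t&#32;python.org</a>')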
def fixfile(inpath, input_lines, outfile):
try:
from email.Utils import parseaddr
except ImportError:
from email.utils import parseaddr
basename = os.path.basename(inpath)
infile = iter(input_lines)
# convert plaintext pep to minimal XHTML markup
print(DTD, file=outfile)
print('<html>', file=outfile)
print(COMMENT, file=outfile)
print('<head>', file=outfile)
# head
header = []
pep = ""
title = ""
for line in infile:
if not line.strip():
break
if line[0].strip():
if ":" not in line:
break
key, value = line.split(":", 1)
value = value.strip()
header.append((key, value))
else:
# continuation line
key, value = header[-1]
value = value + line
header[-1] = key, value
if key.lower() == "title":
title = value
elif key.lower() == "pep":
pep = value
if pep:
title = "PEP " + pep + " -- " + title
if title:
print(' <title>%s</title>' % escape(title), file=outfile)
r = random.choice(list(range(64)))
print((
' <link rel="STYLESHEET" href="style.css" type="text/css" />\n'
'</head>\n'
'<body bgcolor="white">\n'
'<table class="navigation" cellpadding="0" cellspacing="0"\n'
' width="100%%" border="0">\n'
'<tr><td class="navicon" width="150" height="35">\n'
'<a href="../" title="Python Home Page">\n'
'<img src="../pics/PyBanner%03d.gif" alt="[Python]"\n'
' border="0" width="150" height="35" /></a></td>\n'
'<td class="textlinks" align="left">\n'
'[<b><a href="../">Python Home</a></b>]' % r), file=outfile)
if basename != 'pep-0000.txt':
print('[<b><a href=".">PEP Index</a></b>]', file=outfile)
if pep:
try:
print(('[<b><a href="pep-%04d.txt">PEP Source</a>'
'</b>]' % int(pep)), file=outfile)
except ValueError as error:
print(('ValueError (invalid PEP number): %s'
% error), file=sys.stderr)
print('</td></tr></table>', file=outfile)
print('<div class="header">\n<table border="0">', file=outfile)
for k, v in header:
if k.lower() in ('author', 'pep-delegate', 'bdfl-delegate', 'discussions-to',
'sponsor'):
mailtos = []
for part in re.split(r',\s*', v):
if '@' in part:
realname, addr = parseaddr(part)
if k.lower() == 'discussions-to':
m = linkemail(addr, pep)
else:
m = fixemail(addr, pep)
mailtos.append('%s &lt;%s&gt;' % (realname, m))
elif part.startswith('http:'):
mailtos.append(
'<a href="%s">%s</a>' % (part, part))
else:
mailtos.append(part)
v = COMMASPACE.join(mailtos)
elif k.lower() in ('replaces', 'superseded-by', 'requires'):
otherpeps = ''
for otherpep in re.split(r',?\s+', v):
otherpep = int(otherpep)
otherpeps += '<a href="pep-%04d.html">%i</a> ' % (otherpep,
otherpep)
v = otherpeps
elif k.lower() in ('last-modified',):
date = v or time.strftime('%d-%b-%Y',
time.localtime(os.stat(inpath)[8]))
if date.startswith('$' 'Date: ') and date.endswith(' $'):
date = date[6:-2]
if basename == 'pep-0000.txt':
v = date
else:
try:
url = PEPCVSURL % int(pep)
v = '<a href="%s">%s</a> ' % (url, escape(date))
except ValueError as error:
v = date
elif k.lower() in ('content-type',):
url = PEPURL % 9
pep_type = v or 'text/plain'
v = '<a href="%s">%s</a> ' % (url, escape(pep_type))
elif k.lower() == 'version':
if v.startswith('$' 'Revision: ') and v.endswith(' $'):
v = escape(v[11:-2])
else:
v = escape(v)
print(' <tr><th>%s:&nbsp;</th><td>%s</td></tr>' \
% (escape(k), v), file=outfile)
print('</table>', file=outfile)
print('</div>', file=outfile)
print('<hr />', file=outfile)
print('<div class="content">', file=outfile)
need_pre = 1
for line in infile:
if line[0] == '\f':
continue
if line.strip() == LOCALVARS:
break
if line[0].strip():
if not need_pre:
print('</pre>', file=outfile)
print('<h3>%s</h3>' % line.strip(), file=outfile)
need_pre = 1
elif not line.strip() and need_pre:
continue
else:
# PEP 0 has some special treatment
if basename == 'pep-0000.txt':
parts = line.split()
if len(parts) > 1 and re.match(r'\s*\d{1,4}', parts[1]):
# This is a PEP summary line, which we need to hyperlink
url = PEPURL % int(parts[1])
if need_pre:
print('<pre>', file=outfile)
need_pre = 0
print(re.sub(
parts[1],
'<a href="%s">%s</a>' % (url, parts[1]),
line, 1), end='', file=outfile)
continue
elif parts and '@' in parts[-1]:
# This is a pep email address line, so filter it.
url = fixemail(parts[-1], pep)
if need_pre:
print('<pre>', file=outfile)
need_pre = 0
print(re.sub(
parts[-1], url, line, 1), end='', file=outfile)
continue
line = fixpat.sub(lambda x, c=inpath: fixanchor(c, x), line)
if need_pre:
print('<pre>', file=outfile)
need_pre = 0
outfile.write(line)
if not need_pre:
print('</pre>', file=outfile)
print('</div>', file=outfile)
print('</body>', file=outfile)
print('</html>', file=outfile)
docutils_settings = None
"""Runtime settings object used by Docutils. Can be set by the client
application when this module is imported."""
class PEPHeaders(Transform):
"""
Process fields in a PEP's initial RFC-2822 header.
"""
default_priority = 360
pep_url = 'pep-%04d'
pep_cvs_url = PEPCVSURL
rcs_keyword_substitutions = (
(re.compile(r'\$' r'RCSfile: (.+),v \$$', re.IGNORECASE), r'\1'),
(re.compile(r'\$[a-zA-Z]+: (.+) \$$'), r'\1'),)
def apply(self):
if not len(self.document):
# @@@ replace these DataErrors with proper system messages
raise DataError('Document tree is empty.')
header = self.document[0]
if not isinstance(header, nodes.field_list) or \
'rfc2822' not in header['classes']:
raise DataError('Document does not begin with an RFC-2822 '
'header; it is not a PEP.')
pep = None
for field in header:
if field[0].astext().lower() == 'pep': # should be the first field
value = field[1].astext()
try:
pep = int(value)
cvs_url = self.pep_cvs_url % pep
except ValueError:
pep = value
cvs_url = None
msg = self.document.reporter.warning(
'"PEP" header must contain an integer; "%s" is an '
'invalid value.' % pep, base_node=field)
msgid = self.document.set_id(msg)
prb = nodes.problematic(value, value or '(none)',
refid=msgid)
prbid = self.document.set_id(prb)
msg.add_backref(prbid)
if len(field[1]):
field[1][0][:] = [prb]
else:
field[1] += nodes.paragraph('', '', prb)
break
if pep is None:
raise DataError('Document does not contain an RFC-2822 "PEP" '
'header.')
if pep == 0:
# Special processing for PEP 0.
pending = nodes.pending(peps.PEPZero)
self.document.insert(1, pending)
self.document.note_pending(pending)
if len(header) < 2 or header[1][0].astext().lower() != 'title':
raise DataError('No title!')
for field in header:
name = field[0].astext().lower()
body = field[1]
if len(body) > 1:
raise DataError('PEP header field body contains multiple '
'elements:\n%s' % field.pformat(level=1))
elif len(body) == 1:
if not isinstance(body[0], nodes.paragraph):
raise DataError('PEP header field body may only contain '
'a single paragraph:\n%s'
% field.pformat(level=1))
elif name == 'last-modified':
date = time.strftime(
'%d-%b-%Y',
time.localtime(os.stat(self.document['source'])[8]))
if cvs_url:
body += nodes.paragraph(
'', '', nodes.reference('', date, refuri=cvs_url))
else:
# empty
continue
para = body[0]
if name in ('author', 'bdfl-delegate', 'pep-delegate', 'sponsor'):
for node in para:
if isinstance(node, nodes.reference):
node.replace_self(peps.mask_email(node))
elif name == 'discussions-to':
for node in para:
if isinstance(node, nodes.reference):
node.replace_self(peps.mask_email(node, pep))
elif name in ('replaces', 'superseded-by', 'requires'):
newbody = []
space = nodes.Text(' ')
for refpep in re.split(r',?\s+', body.astext()):
pepno = int(refpep)
newbody.append(nodes.reference(
refpep, refpep,
refuri=(self.document.settings.pep_base_url
+ self.pep_url % pepno)))
newbody.append(space)
para[:] = newbody[:-1] # drop trailing space
elif name == 'last-modified':
utils.clean_rcs_keywords(para, self.rcs_keyword_substitutions)
if cvs_url:
date = para.astext()
para[:] = [nodes.reference('', date, refuri=cvs_url)]
elif name == 'content-type':
pep_type = para.astext()
uri = self.document.settings.pep_base_url + self.pep_url % 12
para[:] = [nodes.reference('', pep_type, refuri=uri)]
elif name == 'version' and len(body):
utils.clean_rcs_keywords(para, self.rcs_keyword_substitutions)
class PEPReader(standalone.Reader):
supported = ('pep',)
"""Contexts this reader supports."""
settings_spec = (
'PEP Reader Option Defaults',
'The --pep-references and --rfc-references options (for the '
'reStructuredText parser) are on by default.',
())
config_section = 'pep reader'
config_section_dependencies = ('readers', 'standalone reader')
def get_transforms(self):
transforms = standalone.Reader.get_transforms(self)
# We have PEP-specific frontmatter handling.
transforms.remove(frontmatter.DocTitle)
transforms.remove(frontmatter.SectionSubTitle)
transforms.remove(frontmatter.DocInfo)
transforms.extend([PEPHeaders, peps.Contents, peps.TargetNotes])
return transforms
settings_default_overrides = {'pep_references': 1, 'rfc_references': 1}
inliner_class = rst.states.Inliner
def __init__(self, parser=None, parser_name=None):
"""`parser` should be ``None``."""
if parser is None:
parser = rst.Parser(rfc2822=True, inliner=self.inliner_class())
standalone.Reader.__init__(self, parser, '')
def fix_rst_pep(inpath, input_lines, outfile):
output = core.publish_string(
source=''.join(input_lines),
source_path=inpath,
destination_path=outfile.name,
reader=PEPReader(),
parser_name='restructuredtext',
writer_name='pep_html',
settings=docutils_settings,
# Allow Docutils traceback if there's an exception:
settings_overrides={'traceback': 1, 'halt_level': 2})
outfile.write(output.decode('utf-8'))
def get_pep_type(input_lines):
"""
Return the Content-Type of the input. "text/plain" is the default.
Return ``None`` if the input is not a PEP.
"""
pep_type = None
for line in input_lines:
line = line.rstrip().lower()
if not line:
# End of the RFC 2822 header (first blank line).
break
elif line.startswith('content-type: '):
pep_type = line.split()[1] or 'text/plain'
break
elif line.startswith('pep: '):
# Default PEP type, used if no explicit content-type specified:
pep_type = 'text/plain'
return pep_type
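# --- Illustrative header snippets (values are examples):
assert get_pep_type(['PEP: 8', 'Title: Example', '', 'body']) == 'text/plain'
assert get_pep_type(['PEP: 12', 'Content-Type: text/x-rst', '']) == 'text/x-rst'
assert get_pep_type(['not an rfc-2822 header']) is None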
def get_input_lines(inpath):
try:
infile = open(inpath, encoding='utf-8')
except IOError as e:
if e.errno != errno.ENOENT: raise
print('Error: Skipping missing PEP file:', e.filename, file=sys.stderr)
sys.stderr.flush()
return None
lines = infile.read().splitlines(True) # handles x-platform line endings
infile.close()
return lines
def find_pep(pep_str):
"""Find the .rst or .txt file indicated by a cmd line argument"""
if os.path.exists(pep_str):
return pep_str
num = int(pep_str)
rstpath = "pep-%04d.rst" % num
if os.path.exists(rstpath):
return rstpath
return "pep-%04d.txt" % num
def make_html(inpath, verbose=0):
input_lines = get_input_lines(inpath)
if input_lines is None:
return None
pep_type = get_pep_type(input_lines)
if pep_type is None:
print('Error: Input file %s is not a PEP.' % inpath, file=sys.stderr)
sys.stdout.flush()
return None
elif pep_type not in PEP_TYPE_DISPATCH:
print(('Error: Unknown PEP type for input file %s: %s'
% (inpath, pep_type)), file=sys.stderr)
sys.stdout.flush()
return None
elif PEP_TYPE_DISPATCH[pep_type] is None:
pep_type_error(inpath, pep_type)
return None
outpath = os.path.splitext(inpath)[0] + ".html"
if verbose:
print(inpath, "(%s)" % pep_type, "->", outpath)
sys.stdout.flush()
outfile = open(outpath, "w", encoding='utf-8')
PEP_TYPE_DISPATCH[pep_type](inpath, input_lines, outfile)
outfile.close()
os.chmod(outfile.name, 0o664)
return outpath
def push_pep(htmlfiles, txtfiles, username, verbose, local=0):
quiet = ""
if local:
if verbose:
quiet = "-v"
target = HDIR
copy_cmd = "cp"
chmod_cmd = "chmod"
else:
if not verbose:
quiet = "-q"
if username:
username = username + "@"
target = username + HOST + ":" + HDIR
copy_cmd = "scp"
chmod_cmd = "ssh %s%s chmod" % (username, HOST)
files = htmlfiles[:]
files.extend(txtfiles)
files.append("style.css")
files.append("pep.css")
filelist = SPACE.join(files)
rc = os.system("%s %s %s %s" % (copy_cmd, quiet, filelist, target))
if rc:
sys.exit(rc)
## rc = os.system("%s 664 %s/*" % (chmod_cmd, HDIR))
## if rc:
## sys.exit(rc)
PEP_TYPE_DISPATCH = {'text/plain': fixfile,
'text/x-rst': fix_rst_pep}
PEP_TYPE_MESSAGES = {}
def check_requirements():
# Check Python:
# This is pretty much covered by the __future__ imports...
if sys.version_info < (2, 6, 0):
PEP_TYPE_DISPATCH['text/plain'] = None
PEP_TYPE_MESSAGES['text/plain'] = (
'Python %s or better required for "%%(pep_type)s" PEP '
'processing; %s present (%%(inpath)s).'
% (REQUIRES['python'], sys.version.split()[0]))
# Check Docutils:
try:
import docutils
except ImportError:
PEP_TYPE_DISPATCH['text/x-rst'] = None
PEP_TYPE_MESSAGES['text/x-rst'] = (
'Docutils not present for "%(pep_type)s" PEP file %(inpath)s. '
'See README.rst for installation.')
else:
installed = [int(part) for part in docutils.__version__.split('.')]
required = [int(part) for part in REQUIRES['docutils'].split('.')]
if installed < required:
PEP_TYPE_DISPATCH['text/x-rst'] = None
PEP_TYPE_MESSAGES['text/x-rst'] = (
'Docutils must be reinstalled for "%%(pep_type)s" PEP '
'processing (%%(inpath)s). Version %s or better required; '
'%s present. See README.rst for installation.'
% (REQUIRES['docutils'], docutils.__version__))
def pep_type_error(inpath, pep_type):
print('Error: ' + PEP_TYPE_MESSAGES[pep_type] % locals(), file=sys.stderr)
sys.stdout.flush()
def browse_file(pep):
import webbrowser
file = find_pep(pep)
if file.startswith('pep-') and file.endswith((".txt", '.rst')):
file = file[:-3] + "html"
file = os.path.abspath(file)
url = "file:" + file
webbrowser.open(url)
def browse_remote(pep):
import webbrowser
file = find_pep(pep)
if file.startswith('pep-') and file.endswith((".txt", '.rst')):
file = file[:-3] + "html"
url = PEPDIRURL + file
webbrowser.open(url)
def main(argv=None):
# defaults
update = 0
local = 0
username = ''
verbose = 1
browse = 0
check_requirements()
if argv is None:
argv = sys.argv[1:]
try:
opts, args = getopt.getopt(
argv, 'bilhqu:',
['browse', 'install', 'local', 'help', 'quiet', 'user='])
except getopt.error as msg:
usage(1, msg)
for opt, arg in opts:
if opt in ('-h', '--help'):
usage(0)
elif opt in ('-i', '--install'):
update = 1
elif opt in ('-l', '--local'):
update = 1
local = 1
elif opt in ('-u', '--user'):
username = arg
elif opt in ('-q', '--quiet'):
verbose = 0
elif opt in ('-b', '--browse'):
browse = 1
if args:
pep_list = []
html = []
for pep in args:
file = find_pep(pep)
pep_list.append(file)
newfile = make_html(file, verbose=verbose)
if newfile:
html.append(newfile)
if browse and not update:
browse_file(pep)
else:
# do them all
pep_list = []
html = []
files = glob.glob("pep-*.txt") + glob.glob("pep-*.rst")
files.sort()
for file in files:
pep_list.append(file)
newfile = make_html(file, verbose=verbose)
if newfile:
html.append(newfile)
if browse and not update:
browse_file("0")
if update:
push_pep(html, pep_list, username, verbose, local=local)
if browse:
if args:
for pep in args:
browse_remote(pep)
else:
browse_remote("0")
if __name__ == "__main__":
main()

View File

@ -1,126 +0,0 @@
#!/usr/bin/env python3
# usage: python3 pep2rss.py .
import datetime
import glob
import os
import re
import sys
import time
import PyRSS2Gen as rssgen
import docutils.frontend
import docutils.nodes
import docutils.parsers.rst
import docutils.utils
RSS_PATH = os.path.join(sys.argv[1], 'peps.rss')
def remove_prefix(text: str, prefix: str) -> str:
try:
# Python 3.9+
return text.removeprefix(prefix)
except AttributeError:
if text.startswith(prefix):
return text[len(prefix):]
return text
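# Examples (illustrative file names):
assert remove_prefix('pep-0008.rst', 'pep-') == '0008.rst'
assert remove_prefix('readme.rst', 'pep-') == 'readme.rst'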
def parse_rst(text: str) -> docutils.nodes.document:
parser = docutils.parsers.rst.Parser()
components = (docutils.parsers.rst.Parser,)
settings = docutils.frontend.OptionParser(components=components).get_default_values()
document = docutils.utils.new_document('<rst-doc>', settings=settings)
parser.parse(text, document)
return document
def pep_abstract(full_path: str) -> str:
"""Return the first paragraph of the PEP abstract"""
abstract = None
with open(full_path, encoding="utf-8") as f:
text = f.read()
document = parse_rst(text)
nodes = list(document)
for node in nodes:
if "<title>Abstract</title>" in str(node):
for child in node:
if child.tagname == "paragraph":
abstract = child.astext()
# Just fetch the first paragraph
break
return abstract
def firstline_startingwith(full_path, text):
for line in open(full_path, encoding="utf-8"):
if line.startswith(text):
return line[len(text):].strip()
return None
# get list of peps with creation time
# (from "Created:" string in pep .rst or .txt)
peps = glob.glob('pep-*.txt')
peps.extend(glob.glob('pep-*.rst'))
def pep_creation_dt(full_path):
created_str = firstline_startingwith(full_path, 'Created:')
# bleh, I was hoping to avoid re but some PEPs editorialize
# on the Created line
m = re.search(r'''(\d+-\w+-\d{4})''', created_str)
if not m:
# some older ones have an empty line, that's okay, if it's old
# we ipso facto don't care about it.
# "return None" would make the most sense but datetime objects
# refuse to compare with that. :-|
return datetime.datetime(*time.localtime(0)[:6])
created_str = m.group(1)
try:
t = time.strptime(created_str, '%d-%b-%Y')
except ValueError:
t = time.strptime(created_str, '%d-%B-%Y')
return datetime.datetime(*t[:6])
peps_with_dt = [(pep_creation_dt(full_path), full_path) for full_path in peps]
# sort peps by date, newest first
peps_with_dt.sort(reverse=True)
# generate rss items for 10 most recent peps
items = []
for dt, full_path in peps_with_dt[:10]:
try:
n = int(full_path.split('-')[-1].split('.')[0])
except ValueError:
continue  # no numeric PEP suffix; skip this file
title = firstline_startingwith(full_path, 'Title:')
author = firstline_startingwith(full_path, 'Author:')
abstract = pep_abstract(full_path)
url = 'https://www.python.org/dev/peps/pep-%0.4d/' % n
item = rssgen.RSSItem(
title='PEP %d: %s' % (n, title),
link=url,
description=abstract,
author=author,
guid=rssgen.Guid(url),
pubDate=dt)
items.append(item)
# the rss envelope
desc = """
Newest Python Enhancement Proposals (PEPs) - Information on new
language features, and some meta-information like release
procedure and schedules
""".strip()
rss = rssgen.RSS2(
title='Newest Python PEPs',
link = 'https://www.python.org/dev/peps/',
description=desc,
lastBuildDate=datetime.datetime.now(),
items=items)
with open(RSS_PATH, 'w', encoding="utf-8") as fp:
fp.write(rss.to_xml(encoding="utf-8"))

View File

@ -1,145 +0,0 @@
import datetime
import email.utils
from pathlib import Path
import re
from dateutil import parser
import docutils.frontend
import docutils.nodes
import docutils.parsers.rst
import docutils.utils
from feedgen import entry
from feedgen import feed
# Monkeypatch feedgen.util.formatRFC2822
def _format_rfc_2822(dt: datetime.datetime) -> str:
return email.utils.format_datetime(dt, usegmt=True)
entry.formatRFC2822 = feed.formatRFC2822 = _format_rfc_2822
line_cache: dict[Path, dict[str, str]] = {}
def first_line_starting_with(full_path: Path, text: str) -> str:
# Try and retrieve from cache
if full_path in line_cache:
return line_cache[full_path].get(text, "")
# Else read source
line_cache[full_path] = path_cache = {}
for line in full_path.open(encoding="utf-8"):
if line.startswith("Created:"):
path_cache["Created:"] = line.removeprefix("Created:").strip()
elif line.startswith("Title:"):
path_cache["Title:"] = line.removeprefix("Title:").strip()
elif line.startswith("Author:"):
path_cache["Author:"] = line.removeprefix("Author:").strip()
# Once all have been found, exit loop
if path_cache.keys() == {"Created:", "Title:", "Author:"}:
break
return path_cache.get(text, "")
def pep_creation(full_path: Path) -> datetime.datetime:
created_str = first_line_starting_with(full_path, "Created:")
# bleh, I was hoping to avoid re but some PEPs editorialize on the Created line
# (note as of Aug 2020 only PEP 102 has additional content on the Created line)
m = re.search(r"(\d+[- ][\w\d]+[- ]\d{2,4})", created_str)
if not m:
# some older ones have an empty line, that's okay, if it's old we ipso facto don't care about it.
# "return None" would make the most sense but datetime objects refuse to compare with that. :-|
return datetime.datetime(1900, 1, 1)
created_str = m.group(1)
try:
return parser.parse(created_str, dayfirst=True)
except (ValueError, OverflowError):
return datetime.datetime(1900, 1, 1)
def parse_rst(text: str) -> docutils.nodes.document:
rst_parser = docutils.parsers.rst.Parser()
components = (docutils.parsers.rst.Parser,)
settings = docutils.frontend.OptionParser(components=components).get_default_values()
document = docutils.utils.new_document('<rst-doc>', settings=settings)
rst_parser.parse(text, document)
return document
def pep_abstract(full_path: Path) -> str:
"""Return the first paragraph of the PEP abstract"""
text = full_path.read_text(encoding="utf-8")
for node in parse_rst(text):
if "<title>Abstract</title>" in str(node):
for child in node:
if child.tagname == "paragraph":
return child.astext().strip().replace("\n", " ")
return ""
def main():
# get the directory with the PEP sources
pep_dir = Path(__file__).parent
# get list of peps with creation time (from "Created:" string in pep source)
peps_with_dt = sorted((pep_creation(path), path) for path in pep_dir.glob("pep-????.*"))
# generate rss items for 10 most recent peps
items = []
for dt, full_path in peps_with_dt[-10:]:
try:
pep_num = int(full_path.stem.split("-")[-1])
except ValueError:
continue
title = first_line_starting_with(full_path, "Title:")
author = first_line_starting_with(full_path, "Author:")
if "@" in author or " at " in author:
parsed_authors = email.utils.getaddresses([author])
# ideal would be to pass as a list of dicts with names and emails to
# item.author, but FeedGen's RSS <author/> output doesn't pass W3C
# validation (as of 12/06/2021)
joined_authors = ", ".join(f"{name} ({email_address})" for name, email_address in parsed_authors)
else:
joined_authors = author
url = f"https://www.python.org/dev/peps/pep-{pep_num:0>4}"
item = entry.FeedEntry()
item.title(f"PEP {pep_num}: {title}")
item.link(href=url)
item.description(pep_abstract(full_path))
item.guid(url, permalink=True)
item.published(dt.replace(tzinfo=datetime.timezone.utc)) # ensure datetime has a timezone
item.author(email=joined_authors)
items.append(item)
# The rss envelope
desc = """
Newest Python Enhancement Proposals (PEPs) - Information on new
language features, and some meta-information like release
procedure and schedules.
""".replace("\n ", " ").strip()
# Setup feed generator
fg = feed.FeedGenerator()
fg.language("en")
fg.generator("")
fg.docs("https://cyber.harvard.edu/rss/rss.html")
# Add metadata
fg.title("Newest Python PEPs")
fg.link(href="https://www.python.org/dev/peps")
fg.link(href="https://www.python.org/dev/peps/peps.rss", rel="self")
fg.description(desc)
fg.lastBuildDate(datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc))
# Add PEP information (ordered by newest first)
for item in items:
fg.add_entry(item)
pep_dir.joinpath("peps.rss").write_bytes(fg.rss_str(pretty=True))
if __name__ == "__main__":
main()

View File

@ -0,0 +1,2 @@
The files in this directory are placed in the public domain or under the
CC0-1.0-Universal license, whichever is more permissive.

View File

@ -5,54 +5,78 @@ from __future__ import annotations
from typing import TYPE_CHECKING
from docutils.writers.html5_polyglot import HTMLTranslator
from sphinx.environment import BuildEnvironment
from sphinx.environment import default_settings
from sphinx import environment
from pep_sphinx_extensions import config
from pep_sphinx_extensions.generate_rss import create_rss_feed
from pep_sphinx_extensions.pep_processor.html import pep_html_builder
from pep_sphinx_extensions.pep_processor.html import pep_html_translator
from pep_sphinx_extensions.pep_processor.parsing import pep_banner_directive
from pep_sphinx_extensions.pep_processor.parsing import pep_parser
from pep_sphinx_extensions.pep_processor.parsing import pep_role
from pep_sphinx_extensions.pep_processor.transforms import pep_references
from pep_sphinx_extensions.pep_zero_generator.pep_index_generator import create_pep_zero
if TYPE_CHECKING:
from sphinx.application import Sphinx
# Monkeypatch sphinx.environment.default_settings as Sphinx doesn't allow custom settings or Readers
# These settings should go in docutils.conf, but are overridden here for now so as not to affect
# pep2html.py
default_settings |= {
"pep_references": True,
"rfc_references": True,
"pep_base_url": "",
"pep_file_url_template": "pep-%04d.html",
"_disable_config": True, # disable using docutils.conf whilst running both PEP generators
}
# Monkeypatch sphinx.environment.BuildEnvironment.collect_relations, as it takes a long time
# and we don't use the parent/next/prev functionality
BuildEnvironment.collect_relations = lambda self: {}
def _depart_maths():
pass # No-op callable for the type checker
def _update_config_for_builder(app: Sphinx):
def _update_config_for_builder(app: Sphinx) -> None:
app.env.document_ids = {} # For PEPReferenceRoleTitleText
app.env.settings["builder"] = app.builder.name
if app.builder.name == "dirhtml":
config.pep_url = f"../{config.pep_stem}"
app.env.settings["pep_file_url_template"] = "../pep-%04d"
app.env.settings["pep_url"] = "pep-{:0>4}/"
app.connect("build-finished", _post_build) # Post-build tasks
def _post_build(app: Sphinx, exception: Exception | None) -> None:
from pathlib import Path
from build import create_index_file
if exception is not None:
return
# internal_builder exists if Sphinx is run by build.py
if "internal_builder" not in app.tags:
create_index_file(Path(app.outdir), app.builder.name)
create_rss_feed(app.doctreedir, app.outdir)
def setup(app: Sphinx) -> dict[str, bool]:
"""Initialize Sphinx extension."""
environment.default_settings["pep_url"] = "pep-{:0>4}.html"
environment.default_settings["halt_level"] = 2 # Fail on Docutils warning
# Register plugin logic
app.add_builder(pep_html_builder.FileBuilder, override=True)
app.add_builder(pep_html_builder.DirectoryBuilder, override=True)
app.add_source_parser(pep_parser.PEPParser) # Add PEP transforms
app.add_role("pep", pep_role.PEPRole(), override=True) # Transform PEP references to links
app.set_translator("html", pep_html_translator.PEPTranslator) # Docutils Node Visitor overrides (html builder)
app.set_translator("dirhtml", pep_html_translator.PEPTranslator) # Docutils Node Visitor overrides (dirhtml builder)
app.connect("env-before-read-docs", create_pep_zero) # PEP 0 hook
app.add_role("pep", pep_role.PEPRole(), override=True) # Transform PEP references to links
app.add_post_transform(pep_references.PEPReferenceRoleTitleText)
# Register custom directives
app.add_directive(
"pep-banner", pep_banner_directive.PEPBanner)
app.add_directive(
"canonical-doc", pep_banner_directive.CanonicalDocBanner)
app.add_directive(
"canonical-pypa-spec", pep_banner_directive.CanonicalPyPASpecBanner)
# Register event callbacks
app.connect("builder-inited", _update_config_for_builder) # Update configuration values for builder used
app.connect("env-before-read-docs", create_pep_zero) # PEP 0 hook
# Mathematics rendering
inline_maths = HTMLTranslator.visit_math, _depart_maths

View File

@ -1,6 +0,0 @@
"""Miscellaneous configuration variables for the PEP Sphinx extensions."""
pep_stem = "pep-{:0>4}"
pep_url = f"{pep_stem}.html"
pep_vcs_url = "https://github.com/python/peps/blob/master/"
pep_commits_url = "https://github.com/python/peps/commits/master/"

View File

@ -0,0 +1,117 @@
# This file is placed in the public domain or under the
# CC0-1.0-Universal license, whichever is more permissive.
from __future__ import annotations
import datetime as dt
import pickle
from email.utils import format_datetime, getaddresses
from html import escape
from pathlib import Path
from docutils import nodes
RSS_DESCRIPTION = (
"Newest Python Enhancement Proposals (PEPs): "
"Information on new language features "
"and some meta-information like release procedure and schedules."
)
def _format_rfc_2822(datetime: dt.datetime) -> str:
datetime = datetime.replace(tzinfo=dt.timezone.utc)
return format_datetime(datetime, usegmt=True)
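# Example (illustrative date; 1 Jan 2021 fell on a Friday):
assert _format_rfc_2822(dt.datetime(2021, 1, 1)) == 'Fri, 01 Jan 2021 00:00:00 GMT'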
document_cache: dict[Path, dict[str, str]] = {}
def get_from_doctree(full_path: Path, text: str) -> str:
# Try and retrieve from cache
if full_path in document_cache:
return document_cache[full_path].get(text, "")
# Else load doctree
document = pickle.loads(full_path.read_bytes())
# Store the headers (populated in the PEPHeaders transform)
document_cache[full_path] = path_cache = document.get("headers", {})
# Store the Abstract
path_cache["Abstract"] = pep_abstract(document)
# Return the requested key
return path_cache.get(text, "")
def pep_creation(full_path: Path) -> dt.datetime:
created_str = get_from_doctree(full_path, "Created")
try:
return dt.datetime.strptime(created_str, "%d-%b-%Y")
except ValueError:
return dt.datetime.min
def pep_abstract(document: nodes.document) -> str:
"""Return the first paragraph of the PEP abstract"""
for node in document.findall(nodes.section):
title_node = node.next_node(nodes.title)
if title_node is None:
continue
if title_node.astext() == "Abstract":
return node.next_node(nodes.paragraph).astext().strip().replace("\n", " ")
return ""
def _generate_items(doctree_dir: Path):
# get list of peps with creation time (from "Created:" string in pep source)
peps_with_dt = sorted((pep_creation(path), path) for path in doctree_dir.glob("pep-????.doctree"))
# generate rss items for 10 most recent peps (in reverse order)
for datetime, full_path in reversed(peps_with_dt[-10:]):
try:
pep_num = int(get_from_doctree(full_path, "PEP"))
except ValueError:
continue
title = get_from_doctree(full_path, "Title")
url = f"https://peps.python.org/pep-{pep_num:0>4}/"
abstract = get_from_doctree(full_path, "Abstract")
author = get_from_doctree(full_path, "Author")
if "@" in author or " at " in author:
parsed_authors = getaddresses([author])
joined_authors = ", ".join(f"{name} ({email_address})" for name, email_address in parsed_authors)
else:
joined_authors = author
item = f"""\
<item>
<title>PEP {pep_num}: {escape(title, quote=False)}</title>
<link>{escape(url, quote=False)}</link>
<description>{escape(abstract, quote=False)}</description>
<author>{escape(joined_authors, quote=False)}</author>
<guid isPermaLink="true">{url}</guid>
<pubDate>{_format_rfc_2822(datetime)}</pubDate>
</item>"""
yield item
def create_rss_feed(doctree_dir: Path, output_dir: Path):
# The rss envelope
last_build_date = _format_rfc_2822(dt.datetime.now(dt.timezone.utc))
items = "\n".join(_generate_items(Path(doctree_dir)))
output = f"""\
<?xml version='1.0' encoding='UTF-8'?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" version="2.0">
<channel>
<title>Newest Python PEPs</title>
<link>https://peps.python.org/peps.rss</link>
<description>{RSS_DESCRIPTION}</description>
<atom:link href="https://peps.python.org/peps.rss" rel="self"/>
<docs>https://cyber.harvard.edu/rss/rss.html</docs>
<language>en</language>
<lastBuildDate>{last_build_date}</lastBuildDate>
{items}
</channel>
</rss>
"""
# output directory for target HTML files
Path(output_dir, "peps.rss").write_text(output, encoding="utf-8")
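# Hedged usage sketch (paths are examples, not taken from this diff):
# create_rss_feed(Path("_build/doctrees"), Path("_build/html"))
# reads the pickled pep-????.doctree files and writes _build/html/peps.rss.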

View File

@ -0,0 +1,50 @@
from docutils import nodes
from docutils.frontend import OptionParser
from sphinx.builders.html import StandaloneHTMLBuilder
from sphinx.writers.html import HTMLWriter
from sphinx.builders.dirhtml import DirectoryHTMLBuilder
class FileBuilder(StandaloneHTMLBuilder):
copysource = False # Prevent unneeded source copying - we link directly to GitHub
search = False # Disable search
# Things we don't use but that need to exist:
indexer = None
relations = {}
_script_files = _css_files = []
globalcontext = {"script_files": [], "css_files": []}
def prepare_writing(self, _doc_names: set[str]) -> None:
self.docwriter = HTMLWriter(self)
_opt_parser = OptionParser([self.docwriter], defaults=self.env.settings, read_config_files=True)
self.docsettings = _opt_parser.get_default_values()
self._orig_css_files = self._orig_js_files = []
def get_doc_context(self, docname: str, body: str, _metatags: str) -> dict:
"""Collect items for the template context of a page."""
try:
title = self.env.longtitles[docname].astext()
except KeyError:
title = ""
# local table of contents
toc_tree = self.env.tocs[docname].deepcopy()
if len(toc_tree) and len(toc_tree[0]) > 1:
toc_tree = toc_tree[0][1] # don't include document title
del toc_tree[0] # remove contents node
for node in toc_tree.findall(nodes.reference):
node["refuri"] = node["anchorname"] or '#' # fix targets
toc = self.render_partial(toc_tree)["fragment"]
else:
toc = "" # PEPs with no sections -- 9, 210
return {"title": title, "toc": toc, "body": body}
class DirectoryBuilder(FileBuilder):
# sync all overwritten things from DirectoryHTMLBuilder
name = DirectoryHTMLBuilder.name
get_target_uri = DirectoryHTMLBuilder.get_target_uri
get_outfilename = DirectoryHTMLBuilder.get_outfilename

View File

@ -57,30 +57,49 @@ class PEPTranslator(html5.HTML5Translator):
"""Add corresponding end tag from `visit_paragraph`."""
self.body.append(self.context.pop())
def visit_footnote_reference(self, node):
self.body.append(self.starttag(node, "a", suffix="[",
CLASS=f"footnote-reference {self.settings.footnote_references}",
href=f"#{node['refid']}"
))
def depart_footnote_reference(self, node):
self.body.append(']</a>')
def visit_label(self, node):
# pass parent node to get id into starttag:
self.body.append(self.starttag(node.parent, "dt", suffix="[", CLASS="label"))
# footnote/citation backrefs:
back_refs = node.parent["backrefs"]
if self.settings.footnote_backlinks and len(back_refs) == 1:
self.body.append(f'<a href="#{back_refs[0]}">')
self.context.append("</a>]")
else:
self.context.append("]")
def depart_label(self, node) -> None:
"""PEP link/citation block cleanup with italicised backlinks."""
if not self.settings.footnote_backlinks:
self.body.append("</span>")
self.body.append("</dt>\n<dd>")
return
# If only one reference to this footnote
back_references = node.parent["backrefs"]
if len(back_references) == 1:
self.body.append("</a>")
# Close the tag
self.body.append("</span>")
# If more than one reference
if len(back_references) > 1:
back_links = [f"<a href='#{ref}'>{i}</a>" for i, ref in enumerate(back_references, start=1)]
back_links_str = ", ".join(back_links)
self.body.append(f"<span class='fn-backref''><em> ({back_links_str}) </em></span>")
self.body.append(self.context.pop())
back_refs = node.parent["backrefs"]
if self.settings.footnote_backlinks and len(back_refs) > 1:
back_links = ", ".join(f"<a href='#{ref}'>{i}</a>" for i, ref in enumerate(back_refs, start=1))
self.body.append(f"<em> ({back_links}) </em>")
# Close the def tags
self.body.append("</dt>\n<dd>")
def visit_bullet_list(self, node):
if isinstance(node.parent, nodes.section) and "contents" in node.parent["names"]:
self.body.append("<details><summary>Table of Contents</summary>")
self.context.append("</details>")
super().visit_bullet_list(node)
def depart_bullet_list(self, node):
super().depart_bullet_list(node)
if isinstance(node.parent, nodes.section) and "contents" in node.parent["names"]:
self.body.append(self.context.pop())
def unknown_visit(self, node: nodes.Node) -> None:
"""No processing for unknown node types."""
pass

View File

@ -0,0 +1,101 @@
"""Roles to insert custom admonitions pointing readers to canonical content."""
from __future__ import annotations
from docutils import nodes
from docutils.parsers import rst
PYPA_SPEC_BASE_URL = "https://packaging.python.org/en/latest/specifications/"
class PEPBanner(rst.Directive):
"""Insert a special banner admonition in a PEP document."""
has_content = True
required_arguments = 0
optional_arguments = 1
final_argument_whitespace = True
option_spec = {}
admonition_pre_template = ""
admonition_pre_text = ""
admonition_post_text = ""
admonition_class = nodes.important
css_classes = []
def run(self) -> list[nodes.admonition]:
if self.arguments:
link_content = self.arguments[0]
pre_text = self.admonition_pre_template.format(
link_content=link_content)
else:
pre_text = self.admonition_pre_text
pre_text_node = nodes.paragraph(pre_text)
pre_text_node.line = self.lineno
pre_node, pre_msg = self.state.inline_text(pre_text, self.lineno)
pre_text_node.extend(pre_node + pre_msg)
post_text = self.admonition_post_text
post_text_node = nodes.paragraph(post_text)
post_text_node.line = self.lineno
post_node, post_msg = self.state.inline_text(post_text, self.lineno)
post_text_node.extend(post_node + post_msg)
source_lines = [pre_text] + list(self.content or []) + [post_text]
admonition_node = self.admonition_class(
"\n".join(source_lines), classes=["pep-banner"] + self.css_classes)
admonition_node.append(pre_text_node)
if self.content:
self.state.nested_parse(
self.content, self.content_offset, admonition_node)
admonition_node.append(post_text_node)
return [admonition_node]
class CanonicalDocBanner(PEPBanner):
"""Insert an admonition pointing readers to a PEP's canonical docs."""
admonition_pre_template = (
"This PEP is a historical document. "
"The up-to-date, canonical documentation can now be found "
"at {link_content}."
)
admonition_pre_text = (
"This PEP is a historical document. "
"The up-to-date, canonical documentation can now be found elsewhere."
)
admonition_post_text = (
"See :pep:`1` for how to propose changes."
)
css_classes = ["canonical-doc", "sticky-banner"]
class CanonicalPyPASpecBanner(PEPBanner):
"""Insert a specialized admonition for PyPA packaging specifications."""
admonition_pre_template = (
"This PEP is a historical document. "
"The up-to-date, canonical spec, {link_content}, is maintained on "
f"the `PyPA specs page <{PYPA_SPEC_BASE_URL}>`__."
)
admonition_pre_text = (
"This PEP is a historical document. "
"The up-to-date, canonical specifications are maintained on "
f"the `PyPA specs page <{PYPA_SPEC_BASE_URL}>`__."
)
admonition_post_text = (
"See the `PyPA specification update process "
"<https://www.pypa.io/en/latest/specifications/#handling-fixes-and-other-minor-updates>`__ "
"for how to propose changes."
)
admonition_class = nodes.attention
css_classes = ["canonical-pypa-spec", "sticky-banner"]

View File

@ -1,15 +1,39 @@
from docutils import nodes
from sphinx import roles
from pep_sphinx_extensions import config
class PEPRole(roles.PEP):
class PEPRole(roles.ReferenceRole):
"""Override the :pep: role"""
def build_uri(self) -> str:
"""Get PEP URI from role text."""
def run(self) -> tuple[list[nodes.Node], list[nodes.system_message]]:
# Get PEP URI from role text.
pep_str, _, fragment = self.target.partition("#")
pep_base = config.pep_url.format(int(pep_str))
try:
pep_num = int(pep_str)
except ValueError:
msg = self.inliner.reporter.error(f'invalid PEP number {self.target}', line=self.lineno)
prb = self.inliner.problematic(self.rawtext, self.rawtext, msg)
return [prb], [msg]
pep_base = self.inliner.document.settings.pep_url.format(pep_num)
if self.inliner.document.settings.builder == "dirhtml":
pep_base = "../" + pep_base
if "topic" in self.get_location():
pep_base = "../" + pep_base
if fragment:
return f"{pep_base}#{fragment}"
return pep_base
ref_uri = f"{pep_base}#{fragment}"
else:
ref_uri = pep_base
if self.has_explicit_title:
title = self.title
else:
title = f"PEP {pep_num}"
return [
nodes.reference(
"", title,
internal=True,
refuri=ref_uri,
classes=["pep"],
_title_tuple=(pep_num, fragment)
)
], []
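Illustratively, this is the URI assembly the role performs for a target such as "8#abstract", assuming a pep_url template of "pep-{:0>4}" (the real template comes from the app settings):
pep_str, _, fragment = "8#abstract".partition("#")
pep_base = "pep-{:0>4}".format(int(pep_str))    # "pep-0008"
ref_uri = f"{pep_base}#{fragment}" if fragment else pep_base
print(ref_uri)  # pep-0008#abstract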

View File

@ -17,8 +17,7 @@ class PEPContents(transforms.Transform):
if not Path(self.document["source"]).match("pep-*"):
return # not a PEP file, exit early
# Create the contents placeholder section
title = nodes.title("", "", nodes.Text("Contents"))
contents_section = nodes.section("", title)
contents_section = nodes.section("")
if not self.document.has_name("contents"):
contents_section["names"].append("contents")
self.document.note_implicit_target(contents_section)

View File

@ -1,25 +1,16 @@
import datetime
import time
from pathlib import Path
import subprocess
from docutils import nodes
from docutils import transforms
from docutils.transforms import misc
from docutils.transforms import references
from pep_sphinx_extensions import config
class PEPFooter(transforms.Transform):
"""Footer transforms for PEPs.
- Appends external links to footnotes.
- Creates a link to the (GitHub) source text.
TargetNotes:
Locate the `References` section, insert a placeholder at the end
for an external target footnote insertion transform, and schedule
the transform to run immediately.
- Remove the References/Footnotes section if it is empty when rendered.
- Create a link to the (GitHub) source text.
Source Link:
Create the link to the source file from the document source path,
@ -32,80 +23,86 @@ class PEPFooter(transforms.Transform):
def apply(self) -> None:
pep_source_path = Path(self.document["source"])
if not pep_source_path.match("pep-*"):
if not pep_source_path.match("pep-????.???"):
return # not a PEP file, exit early
doc = self.document[0]
reference_section = copyright_section = None
# Iterate through sections from the end of the document
num_sections = len(doc)
for i, section in enumerate(reversed(doc)):
for section in reversed(self.document[0]):
if not isinstance(section, nodes.section):
continue
title_words = section[0].astext().lower().split()
if "references" in title_words:
reference_section = section
break
elif "copyright" in title_words:
copyright_section = num_sections - i - 1
# Add a references section if we didn't find one
if not reference_section:
reference_section = nodes.section()
reference_section += nodes.title("", "References")
self.document.set_id(reference_section)
if copyright_section:
# Put the new "References" section before "Copyright":
doc.insert(copyright_section, reference_section)
else:
# Put the new "References" section at end of doc:
doc.append(reference_section)
# Add and schedule execution of the TargetNotes transform
pending = nodes.pending(references.TargetNotes)
reference_section.append(pending)
self.document.note_pending(pending, priority=0)
# If there are no references after TargetNotes has finished, remove the
# references section
pending = nodes.pending(misc.CallBack, details={"callback": _cleanup_callback})
reference_section.append(pending)
self.document.note_pending(pending, priority=1)
title_words = {*section[0].astext().lower().split()}
if {"references", "footnotes"} & title_words:
# Remove references/footnotes sections if there is no displayed
# content (i.e. they only have title & link target nodes)
to_hoist = []
types = set()
for node in section:
types.add(type(node))
if isinstance(node, nodes.target):
to_hoist.append(node)
if types <= {nodes.title, nodes.target, nodes.system_message}:
section.parent.extend(to_hoist)
section.parent.remove(section)
# Add link to source text and last modified date
if pep_source_path.stem != "pep-0000":
if pep_source_path.stem != "pep-0210": # 210 is entirely empty, skip
self.document += nodes.transition()
self.document += _add_source_link(pep_source_path)
self.document += _add_commit_history_info(pep_source_path)
def _cleanup_callback(pending: nodes.pending) -> None:
"""Remove an empty "References" section.
Called after the `references.TargetNotes` transform is complete.
"""
if len(pending.parent) == 2: # <title> and <pending>
pending.parent.parent.remove(pending.parent)
def _add_source_link(pep_source_path: Path) -> nodes.paragraph:
"""Add link to source text on VCS (GitHub)"""
source_link = config.pep_vcs_url + pep_source_path.name
source_link = f"https://github.com/python/peps/blob/main/peps/{pep_source_path.name}"
link_node = nodes.reference("", source_link, refuri=source_link)
return nodes.paragraph("", "Source: ", link_node)
def _add_commit_history_info(pep_source_path: Path) -> nodes.paragraph:
"""Use local git history to find last modified date."""
args = ["git", "--no-pager", "log", "-1", "--format=%at", pep_source_path.name]
try:
file_modified = subprocess.check_output(args)
since_epoch = file_modified.decode("utf-8").strip()
dt = datetime.datetime.utcfromtimestamp(float(since_epoch))
except (subprocess.CalledProcessError, ValueError):
iso_time = _LAST_MODIFIED_TIMES[pep_source_path.stem]
except KeyError:
return nodes.paragraph()
commit_link = config.pep_commits_url + pep_source_path.name
link_node = nodes.reference("", f"{dt.isoformat(sep=' ')} GMT", refuri=commit_link)
commit_link = f"https://github.com/python/peps/commits/main/{pep_source_path.name}"
link_node = nodes.reference("", f"{iso_time} GMT", refuri=commit_link)
return nodes.paragraph("", "Last modified: ", link_node)
def _get_last_modified_timestamps():
# get timestamps and changed files from all commits (without paging results)
args = ("git", "--no-pager", "log", "--format=#%at", "--name-only")
ret = subprocess.run(args, stdout=subprocess.PIPE, text=True, encoding="utf-8")
if ret.returncode: # non-zero return code
return {}
all_modified = ret.stdout
# remove "peps/" prefix from file names
all_modified = all_modified.replace("\npeps/", "\n")
# set up the dictionary with the *current* files
peps_dir = Path(__file__, "..", "..", "..", "..", "peps").resolve()
last_modified = {path.stem: "" for path in peps_dir.glob("pep-????.rst")}
# iterate through newest to oldest, updating per file timestamps
change_sets = all_modified.removeprefix("#").split("#")
for change_set in change_sets:
timestamp, files = change_set.split("\n", 1)
for file in files.strip().split("\n"):
if not file.startswith("pep-") or not file.endswith((".rst", ".txt")):
continue # not a PEP
file = file[:-4]
if last_modified.get(file) != "":
continue # most recent modified date already found
try:
y, m, d, hh, mm, ss, *_ = time.gmtime(float(timestamp))
except ValueError:
continue # if float conversion fails
last_modified[file] = f"{y:04}-{m:02}-{d:02} {hh:02}:{mm:02}:{ss:02}"
return last_modified
_LAST_MODIFIED_TIMES = _get_last_modified_timestamps()
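To see what _get_last_modified_timestamps is parsing, here is the same logic run on a fabricated snippet of `git log --format=#%at --name-only` output (timestamps and file names invented for illustration):
sample = (
    "#1694000000\n"       # newest commit
    "peps/pep-0008.rst\n"
    "\n"
    "#1600000000\n"       # older commit
    "peps/pep-0001.rst\n"
    "peps/pep-0008.rst\n"
)
sample = sample.replace("\npeps/", "\n")
for change_set in sample.removeprefix("#").split("#"):
    timestamp, files = change_set.split("\n", 1)
    for file in files.strip().split("\n"):
        print(file, "->", timestamp.strip())
# pep-0008.rst -> 1694000000   (iteration is newest-first, so the first hit wins)
# pep-0001.rst -> 1600000000
# pep-0008.rst -> 1600000000   (skipped by the real code: already recorded)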

View File

@ -1,5 +1,3 @@
from __future__ import annotations
from pathlib import Path
import re
@ -7,9 +5,44 @@ from docutils import nodes
from docutils import transforms
from sphinx import errors
from pep_sphinx_extensions import config
from pep_sphinx_extensions.pep_processor.transforms import pep_zero
from pep_sphinx_extensions.pep_processor.transforms.pep_zero import _mask_email
from pep_sphinx_extensions.pep_zero_generator.constants import (
SPECIAL_STATUSES,
STATUS_ACCEPTED,
STATUS_ACTIVE,
STATUS_DEFERRED,
STATUS_DRAFT,
STATUS_FINAL,
STATUS_PROVISIONAL,
STATUS_REJECTED,
STATUS_SUPERSEDED,
STATUS_WITHDRAWN,
TYPE_INFO,
TYPE_PROCESS,
TYPE_STANDARDS,
)
ABBREVIATED_STATUSES = {
STATUS_DRAFT: "Proposal under active discussion and revision",
STATUS_DEFERRED: "Inactive draft that may be taken up again at a later time",
STATUS_ACCEPTED: "Normative proposal accepted for implementation",
STATUS_ACTIVE: "Currently valid informational guidance, or an in-use process",
STATUS_FINAL: "Accepted and implementation complete, or no longer active",
STATUS_WITHDRAWN: "Removed from consideration by sponsor or authors",
STATUS_REJECTED: "Formally declined and will not be accepted",
STATUS_SUPERSEDED: "Replaced by another succeeding PEP",
STATUS_PROVISIONAL: "Provisionally accepted but additional feedback needed",
}
ABBREVIATED_TYPES = {
TYPE_STANDARDS: "Normative PEP with a new feature for Python, implementation "
"change for CPython or interoperability standard for the ecosystem",
TYPE_INFO: "Non-normative PEP containing background, guidelines or other "
"information relevant to the Python ecosystem",
TYPE_PROCESS: "Normative PEP describing or proposing a change to a Python "
"community process, workflow or governance",
}
class PEPParsingError(errors.SphinxError):
pass
@ -39,14 +72,14 @@ class PEPHeaders(transforms.Transform):
raise PEPParsingError("Document does not contain an RFC-2822 'PEP' header!")
# Extract PEP number
value = pep_field[1].astext()
pep_num_str = pep_field[1].astext()
try:
pep = int(value)
pep_num = int(pep_num_str)
except ValueError:
raise PEPParsingError(f"'PEP' header must contain an integer. '{value}' is invalid!")
raise PEPParsingError(f"PEP header must contain an integer. '{pep_num_str}' is invalid!")
# Special processing for PEP 0.
if pep == 0:
if pep_num == 0:
pending = nodes.pending(pep_zero.PEPZero)
self.document.insert(1, pending)
self.document.note_pending(pending)
@ -56,7 +89,11 @@ class PEPHeaders(transforms.Transform):
raise PEPParsingError("No title!")
fields_to_remove = []
self.document["headers"] = headers = {}
for field in header:
row_attributes = {sub.tagname: sub.rawsource for sub in field}
headers[row_attributes["field_name"]] = row_attributes["field_body"]
name = field[0].astext().lower()
body = field[1]
if len(body) == 0:
@ -70,45 +107,197 @@ class PEPHeaders(transforms.Transform):
raise PEPParsingError(msg)
para = body[0]
if name in {"author", "bdfl-delegate", "pep-delegate", "discussions-to", "sponsor"}:
if name in {"author", "bdfl-delegate", "pep-delegate", "sponsor"}:
# mask emails
for node in para:
if isinstance(node, nodes.reference):
pep_num = pep if name == "discussions-to" else None
node.replace_self(_mask_email(node, pep_num))
if not isinstance(node, nodes.reference):
continue
node.replace_self(_mask_email(node))
elif name in {"discussions-to", "resolution", "post-history"}:
# Prettify mailing list and Discourse links
for node in para:
if (not isinstance(node, nodes.reference)
or not node["refuri"]):
continue
# Have known mailto links link to their main list pages
if node["refuri"].lower().startswith("mailto:"):
node["refuri"] = _generate_list_url(node["refuri"])
parts = node["refuri"].lower().split("/")
if len(parts) <= 2 or parts[2] not in LINK_PRETTIFIERS:
continue
pretty_title = _make_link_pretty(str(node["refuri"]))
if name == "post-history":
node["reftitle"] = pretty_title
else:
node[0] = nodes.Text(pretty_title)
elif name in {"replaces", "superseded-by", "requires"}:
# replace PEP numbers with normalised list of links to PEPs
new_body = []
for ref_pep in re.split(r",?\s+", body.astext()):
new_body += [nodes.reference("", ref_pep, refuri=config.pep_url.format(int(ref_pep)))]
new_body += [nodes.Text(", ")]
for pep_str in re.split(r",?\s+", body.astext()):
target = self.document.settings.pep_url.format(int(pep_str))
if self.document.settings.builder == "dirhtml":
target = f"../{target}"
new_body += [nodes.reference("", pep_str, refuri=target), nodes.Text(", ")]
para[:] = new_body[:-1] # drop trailing space
elif name == "topic":
new_body = []
for topic_name in body.astext().split(","):
if topic_name:
target = f"topic/{topic_name.lower().strip()}"
if self.document.settings.builder == "html":
target = f"{target}.html"
else:
target = f"../{target}/"
new_body += [
nodes.reference("", topic_name, refuri=target),
nodes.Text(", "),
]
if new_body:
para[:] = new_body[:-1] # Drop trailing space/comma
elif name == "status":
para[:] = [
nodes.abbreviation(
body.astext(),
body.astext(),
explanation=_abbreviate_status(body.astext()),
)
]
elif name == "type":
para[:] = [
nodes.abbreviation(
body.astext(),
body.astext(),
explanation=_abbreviate_type(body.astext()),
)
]
elif name in {"last-modified", "content-type", "version"}:
# Mark unneeded fields
fields_to_remove.append(field)
# Remove any trailing commas and whitespace in the headers
if para and isinstance(para[-1], nodes.Text):
last_node = para[-1]
if last_node.astext().strip() == ",":
last_node.parent.remove(last_node)
else:
para[-1] = last_node.rstrip().rstrip(",")
# Remove unneeded fields
for field in fields_to_remove:
field.parent.remove(field)
def _mask_email(ref: nodes.reference, pep_num: int | None = None) -> nodes.reference:
    """Mask the email address in `ref` and return a replacement node.

    `ref` is returned unchanged if it contains no email address.
    If given an email not explicitly whitelisted, process it such that
    `user@host` -> `user at host`.
    If given a PEP number `pep_num`, add a default email subject.
    """
    if "refuri" not in ref or not ref["refuri"].startswith("mailto:"):
        return ref
    non_masked_addresses = {"peps@python.org", "python-list@python.org", "python-dev@python.org"}
    if ref["refuri"].removeprefix("mailto:").strip() not in non_masked_addresses:
        ref[0] = nodes.raw("", ref[0].replace("@", "&#32;&#97;t&#32;"), format="html")
    if pep_num is None:
        return ref[0]  # return email text without mailto link
    ref["refuri"] += f"?subject=PEP%20{pep_num}"
    return ref

def _generate_list_url(mailto: str) -> str:
    list_name_domain = mailto.lower().removeprefix("mailto:").strip()
    list_name = list_name_domain.split("@")[0]

    if list_name_domain.endswith("@googlegroups.com"):
        return f"https://groups.google.com/g/{list_name}"

    if not list_name_domain.endswith("@python.org"):
        return mailto

    # Active lists not yet on Mailman3; this URL will redirect if/when they are
    if list_name in {"csv", "db-sig", "doc-sig", "python-list", "web-sig"}:
        return f"https://mail.python.org/mailman/listinfo/{list_name}"
    # Retired lists that are closed for posting, so only the archive matters
    if list_name in {"import-sig", "python-3000"}:
        return f"https://mail.python.org/pipermail/{list_name}/"
    # The remaining lists (and any new ones) are all on Mailman3/Hyperkitty
    return f"https://mail.python.org/archives/list/{list_name}@python.org/"
def _process_list_url(parts: list[str]) -> tuple[str, str]:
item_type = "list"
# HyperKitty (Mailman3) archive structure is
# https://mail.python.org/archives/list/<list_name>/thread/<id>
if "archives" in parts:
list_name = (
parts[parts.index("archives") + 2].removesuffix("@python.org"))
if len(parts) > 6 and parts[6] in {"message", "thread"}:
item_type = parts[6]
# Mailman3 list info structure is
# https://mail.python.org/mailman3/lists/<list_name>.python.org/
elif "mailman3" in parts:
list_name = (
parts[parts.index("mailman3") + 2].removesuffix(".python.org"))
# Pipermail (Mailman) archive structure is
# https://mail.python.org/pipermail/<list_name>/<month>-<year>/<id>
elif "pipermail" in parts:
list_name = parts[parts.index("pipermail") + 1]
item_type = "message" if len(parts) > 6 else "list"
# Mailman listinfo structure is
# https://mail.python.org/mailman/listinfo/<list_name>
elif "listinfo" in parts:
list_name = parts[parts.index("listinfo") + 1]
# Not a link to a mailing list, message or thread
else:
raise ValueError(
f"{'/'.join(parts)} not a link to a list, message or thread")
return list_name, item_type
def _process_discourse_url(parts: list[str]) -> tuple[str, str]:
item_name = "discourse"
if len(parts) < 5 or ("t" not in parts and "c" not in parts):
raise ValueError(
f"{'/'.join(parts)} not a link to a Discourse thread or category")
first_subpart = parts[4]
has_title = not first_subpart.isnumeric()
if "t" in parts:
item_type = "message" if len(parts) > (5 + has_title) else "thread"
elif "c" in parts:
item_type = "category"
if has_title:
item_name = f"{first_subpart.replace('-', ' ')} {item_name}"
return item_name, item_type
# Domains supported for pretty URL parsing
LINK_PRETTIFIERS = {
"mail.python.org": _process_list_url,
"discuss.python.org": _process_discourse_url,
}
def _process_pretty_url(url: str) -> tuple[str, str]:
parts = url.lower().strip().strip("/").split("/")
try:
item_name, item_type = LINK_PRETTIFIERS[parts[2]](parts)
except KeyError as error:
raise ValueError(
f"{url} not a link to a recognized domain to prettify") from error
item_name = item_name.title().replace("Sig", "SIG").replace("Pep", "PEP")
return item_name, item_type
def _make_link_pretty(url: str) -> str:
item_name, item_type = _process_pretty_url(url)
return f"{item_name} {item_type}"
def _abbreviate_status(status: str) -> str:
if status in SPECIAL_STATUSES:
status = SPECIAL_STATUSES[status]
try:
return ABBREVIATED_STATUSES[status]
except KeyError:
raise PEPParsingError(f"Unknown status: {status}")
def _abbreviate_type(type_: str) -> str:
try:
return ABBREVIATED_TYPES[type_]
except KeyError:
raise PEPParsingError(f"Unknown type: {type_}")

View File

@ -0,0 +1,36 @@
from pathlib import Path
from docutils import nodes
from docutils import transforms
class PEPReferenceRoleTitleText(transforms.Transform):
"""Add title text of document titles to reference role references."""
default_priority = 730
def apply(self) -> None:
if not Path(self.document["source"]).match("pep-*"):
return # not a PEP file, exit early
for node in self.document.findall(nodes.reference):
if "_title_tuple" not in node:
continue
# get pep number and section target (fragment)
pep_num, fragment = node.attributes.pop("_title_tuple")
filename = f"pep-{pep_num:0>4}"
# Cache target_ids
env = self.document.settings.env
try:
target_ids = env.document_ids[filename]
except KeyError:
env.document_ids[filename] = target_ids = env.get_doctree(filename).ids
# Create title text string. We hijack the 'reftitle' attribute so
# that we don't have to change things in the HTML translator
node["reftitle"] = env.titles[filename].astext()
try:
node["reftitle"] += f" § {target_ids[fragment][0].astext()}"
except KeyError:
pass
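A standalone sketch of the title assembly above, with the Sphinx environment lookups replaced by fabricated plain strings (the real code calls .astext() on title nodes):
titles = {"pep-0008": "PEP 8 - Style Guide for Python Code"}  # stands in for env.titles
target_ids = {"imports": ["Imports"]}                          # stands in for the cached doctree ids
reftitle = titles["pep-0008"]
fragment = "imports"
if fragment in target_ids:
    reftitle += f" § {target_ids[fragment][0]}"
print(reftitle)  # PEP 8 - Style Guide for Python Code § Imports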

View File

@ -22,13 +22,19 @@ class PEPTitle(transforms.Transform):
pep_header_details = {}
# Iterate through the header fields, which are the first section of the document
desired_fields = {"PEP", "Title"}
fields_to_remove = []
for field in self.document[0]:
# Hold details of the attribute's tag against its details
row_attributes = {sub.tagname: sub.rawsource for sub in field}
pep_header_details[row_attributes["field_name"]] = row_attributes["field_body"]
# Store the redundant fields in the table for removal
if row_attributes["field_name"] in desired_fields:
fields_to_remove.append(field)
# We only need the PEP number and title
if pep_header_details.keys() >= {"PEP", "Title"}:
if pep_header_details.keys() >= desired_fields:
break
# Create the title string for the PEP
@ -46,6 +52,10 @@ class PEPTitle(transforms.Transform):
pep_title_node.extend(document_children)
self.document.note_implicit_target(pep_title_node, pep_title_node)
# Remove the now-redundant fields
for field in fields_to_remove:
field.parent.remove(field)
def _line_to_nodes(text: str) -> list[nodes.Node]:
"""Parse RST string to nodes."""

View File

@ -1,74 +1,34 @@
from __future__ import annotations
from docutils import nodes
from docutils import transforms
from docutils.transforms import peps
from pep_sphinx_extensions import config
class PEPZero(transforms.Transform):
"""Schedule PEP 0 processing."""
# Run during sphinx post processing
# Run during sphinx post-processing
default_priority = 760
def apply(self) -> None:
# Walk document and then remove this node
visitor = PEPZeroSpecial(self.document)
self.document.walk(visitor)
# Walk document and mask email addresses if present.
for reference_node in self.document.findall(nodes.reference):
reference_node.replace_self(_mask_email(reference_node))
# Remove this node
self.startnode.parent.remove(self.startnode)
class PEPZeroSpecial(nodes.SparseNodeVisitor):
    """Perform the special processing needed by PEP 0:

    - Mask email addresses.
    - Link PEP numbers in the second column of 4-column tables to the PEPs themselves.
    """

    def __init__(self, document: nodes.document):
        super().__init__(document)
        self.pep_table: int = 0
        self.entry: int = 0

    def unknown_visit(self, node: nodes.Node) -> None:
        """No processing for undefined node types."""
        pass

    @staticmethod
    def visit_reference(node: nodes.reference) -> None:
        """Mask email addresses if present."""
        node.replace_self(peps.mask_email(node))

    @staticmethod
    def visit_field_list(node: nodes.field_list) -> None:
        """Skip PEP headers."""
        if "rfc2822" in node["classes"]:
            raise nodes.SkipNode

    def visit_tgroup(self, node: nodes.tgroup) -> None:
        """Set column counter and PEP table marker."""
        self.pep_table = node["cols"] == 4
        self.entry = 0  # reset column number

    def visit_colspec(self, node: nodes.colspec) -> None:
        self.entry += 1
        if self.pep_table and self.entry == 2:
            node["classes"].append("num")

    def visit_row(self, _node: nodes.row) -> None:
        self.entry = 0  # reset column number

    def visit_entry(self, node: nodes.entry) -> None:
        self.entry += 1
        if self.pep_table and self.entry == 2 and len(node) == 1:
            node["classes"].append("num")
            # if this is the PEP number column, replace the number with a link to the PEP
            para = node[0]
            if isinstance(para, nodes.paragraph) and len(para) == 1:
                pep_str = para.astext()
                try:
                    ref = config.pep_url.format(int(pep_str))
                    para[0] = nodes.reference(pep_str, pep_str, refuri=ref)
                except ValueError:
                    pass

def _mask_email(ref: nodes.reference) -> nodes.reference:
    """Mask the email address in `ref` and return a replacement node.

    `ref` is returned unchanged if it contains no email address.
    If given an email not explicitly whitelisted, process it such that
    `user@host` -> `user at host`.
    The returned node has no refuri link attribute.
    """
    if not ref.get("refuri", "").startswith("mailto:"):
        return ref
    return nodes.raw("", ref[0].replace("@", "&#32;&#97;t&#32;"), format="html")
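A fabricated reference node shows the effect of the new module-level _mask_email:
from docutils import nodes

ref = nodes.reference("", "user@example.org", refuri="mailto:user@example.org")
masked = _mask_email(ref)
# `masked` is a raw HTML node: the "@" becomes the entity-encoded " at " and
# the mailto refuri is dropped, so the rendered page shows
# "user at example.org" without a harvestable link.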

View File

@ -0,0 +1,35 @@
// Handle setting and changing the site's color scheme (light/dark)
"use strict";
const prefersDark = window.matchMedia("(prefers-color-scheme: dark)")
const getColourScheme = () => document.documentElement.dataset.colour_scheme
const setColourScheme = (colourScheme = getColourScheme()) => {
document.documentElement.dataset.colour_scheme = colourScheme
localStorage.setItem("colour_scheme", colourScheme)
setPygments(colourScheme)
}
// Map system theme to a cycle of steps
const cycles = {
dark: ["auto", "light", "dark"], // auto (dark) → light → dark
light: ["auto", "dark", "light"], // auto (light) → dark → light
}
const nextColourScheme = (colourScheme = getColourScheme()) => {
const cycle = cycles[prefersDark.matches ? "dark" : "light"]
return cycle[(cycle.indexOf(colourScheme) + 1) % cycle.length]
}
const setPygments = (colourScheme = getColourScheme()) => {
const pygmentsDark = document.getElementById("pyg-dark")
const pygmentsLight = document.getElementById("pyg-light")
pygmentsDark.disabled = colourScheme === "light"
pygmentsLight.disabled = colourScheme === "dark"
pygmentsDark.media = colourScheme === "auto" ? "(prefers-color-scheme: dark)" : ""
pygmentsLight.media = colourScheme === "auto" ? "(prefers-color-scheme: light)" : ""
}
// Update Pygments state (the page theme is initialised inline, see page.html)
document.addEventListener("DOMContentLoaded", () => setColourScheme())

View File

@ -1,5 +0,0 @@
/* JavaScript utilities for all documentation. */
// Footnote fixer
document.querySelectorAll("span.brackets").forEach(el => el.innerHTML = "[" + el.innerHTML + "]")
document.querySelectorAll("a.brackets").forEach(el => el.innerHTML = "[" + el.innerHTML + "]")

View File

@ -1,7 +1,12 @@
@charset "UTF-8";
/* Media Queries */
@media (max-width: 32.5em) {
/* Reduce padding & margins for the smallest screens */
/* Reduce padding & margins for smaller screens */
@media (max-width: 40em) {
section#pep-page-section {
padding: 1rem;
}
section#pep-page-section > header > h1 {
padding-right: 0;
border-right: none;
@ -12,25 +17,25 @@
nav#pep-sidebar {
display: none;
}
pre {
font-size: 0.8175rem;
}
table th,
table td {
padding: 0 0.1rem;
}
}
@media (min-width: 32.5em) {
@media (min-width: 40em) {
section#pep-page-section {
max-width: 40em;
width: 100%;
display: table;
margin: 0 auto;
padding: .5rem 1rem 0;
}
}
@media (min-width: 54em) {
section#pep-page-section {
max-width: 75em;
padding: 0.5rem 1rem 0;
width: 100%;
}
section#pep-page-section > article {
max-width: 40em;
max-width: 37em;
width: 74%;
float: right;
margin-right: 0;
@ -41,10 +46,15 @@
float: left;
margin-right: 2%;
}
/* Make less prominent when sidebar ToC is available */
details > summary {
font-size: 1rem;
width: max-content;
}
}
@media (min-width: 60em) {
section#pep-page-section > article {
max-width: none;
max-width: 56em;
padding-left: 3.2%;
padding-right: 3.2%;
}

View File

@ -0,0 +1,28 @@
"use strict";
// Inject a style element into the document head that gives the :target
// element a scroll-margin-top equal to the height of the sticky banner,
// so that content jumped to via a fragment link is not hidden underneath
// the banner.
document.addEventListener("DOMContentLoaded", () => {
const stickyBanners = document.getElementsByClassName("sticky-banner");
if (!stickyBanners.length) {
return;
}
const stickyBanner = stickyBanners[0];
const node = document.createElement("style");
node.id = "sticky-banner-style";
document.head.appendChild(node);
function adjustBannerMargin() {
const text = document.createTextNode(
":target { scroll-margin-top: " + stickyBanner.offsetHeight + "px; }"
);
node.replaceChildren(text);
}
adjustBannerMargin();
document.addEventListener("resize", adjustBannerMargin);
document.addEventListener("load", adjustBannerMargin);
});

View File

@ -1,114 +1,120 @@
@charset "UTF-8";
/* Styles for PEPs

Colours:
white:
    background
    footnotes/references vertical border
#333
    body text
#888
    blockquote left line
    header breadcrumbs separator
    link underline (hovered/focused)
#ccc:
    scrollbar
#ddd
    header bottom border
    horizontal rule
    table vertical border
#eee:
    link underline
    table rows & top/bottom border
    PEP header rows
    footnotes/references rows
    admonition note background
#f8f8f8:
    inline code background
#0072aa:
    links
#fee:
    admonition warning background
*/
/* Styles for PEPs */
/*
 * `initial` works like undefined variables, so `var(initial, x)` will resolve to `x`.
 * A space means an empty value, so `var( , x) y` will resolve to `y`.
 */
@media (prefers-color-scheme: dark) {
:root {
--light: ;
--dark: initial;
}
}
@media (prefers-color-scheme: light) {
:root {
--dark: ;
--light: initial;
}
}
:root[data-colour_scheme="dark"] {
--light: ;
--dark: initial;
}
:root[data-colour_scheme="light"] {
--dark: ;
--light: initial;
}
/* Set master colours */
:root {
--colour-background: var(--light, white) var(--dark, #111);
--colour-background-accent-strong: var(--light, #ccc) var(--dark, #444);
--colour-background-accent-medium: var(--light, #ddd) var(--dark, #333);
--colour-background-accent-light: var(--light, #eee) var(--dark, #222);
--colour-text: var(--light, #333) var(--dark, #ccc);
--colour-text-strong: var(--light, #222) var(--dark, #ddd);
--colour-links: var(--light, #069) var(--dark, #8bf);
--colour-links-light: var(--light, #057) var(--dark, #acf);
--colour-scrollbar: var(--light, #ccc) var(--dark, #333);
--colour-rule-strong: var(--light, #888) var(--dark, #777);
--colour-rule-light: var(--light, #ddd) var(--dark, #222);
--colour-inline-code-bg: var(--light, #eee) var(--dark, #333);
--colour-inline-code-text: var(--light, #222) var(--dark, #ccc);
--colour-error: var(--light, #faa) var(--dark, #800);
--colour-warning: var(--light, #fca) var(--dark, #840);
--colour-caution: var(--light, #ffa) var(--dark, #550);
--colour-attention: var(--light, #bdf) var(--dark, #045);
--colour-tip: var(--light, #bfc) var(--dark, #041);
}
img.invert-in-dark-mode {
filter: var(--dark, invert(1) hue-rotate(.5turn));
}
/* Set master rules */
* {box-sizing: border-box}
:root {color-scheme: light dark}
html {
overflow-y: scroll;
-webkit-font-smoothing: antialiased;
margin: 0;
line-height: 1.4rem;
font-weight: normal;
line-height: 1.5;
font-size: 1rem;
font-family: "Source Sans Pro", Arial, sans-serif;
font-family: -apple-system, BlinkMacSystemFont, avenir next, avenir, segoe ui, helvetica neue, helvetica, Cantarell, Ubuntu, roboto, noto, arial, sans-serif;
}
body {
margin: 0;
color: #333;
background-color: white;
color: var(--colour-text);
background-color: var(--colour-background);
}
section#pep-page-section {
padding: 0.25rem 0.25rem 0;
display: table;
padding: 0.25rem;
}
/* Reduce margin sizes for body text */
p {margin: .5rem 0}
/* Header rules */
h1.page-title {
line-height: 2.3rem;
h1 {
font-size: 2rem;
font-weight: bold;
margin-top: 2rem;
margin-bottom: 1.5rem;
}
h2 {
font-size: 1.6rem;
font-weight: bold;
margin-top: 1rem;
margin-bottom: .5rem;
}
h3 {
font-size: 1.4rem;
font-weight: normal;
margin-top: 1rem;
margin-bottom: 0.5rem;
}
h4 {
font-size: 1.2rem;
font-weight: normal;
margin-top: .5rem;
margin-bottom: 0;
}
h5,
h6 {
font-size: 1rem;
font-weight: bold;
margin-top: 0;
margin-bottom: 0;
}
/* Anchor link rules */
a,
a:active,
a:visited {
color: #0072aa;
text-decoration-color: #eee;
color: var(--colour-links);
display: inline;
overflow-wrap: anywhere;
text-decoration-color: var(--colour-background-accent-strong);
}
a:hover,
a:focus {
text-decoration-color: #888;
text-decoration-color: var(--colour-rule-strong);
}
/* Blockquote rules */
blockquote {
font-style: italic;
border-left: 1px solid #888;
margin: .5rem;
border-left: 1px solid var(--colour-rule-strong);
padding: .5rem 1rem;
}
blockquote em {
@ -120,20 +126,37 @@ cite {
}
/* Code rules (code literals and Pygments highlighting blocks) */
pre,
code {
font-family: ui-monospace, "Cascadia Mono", "Segoe UI Mono", "DejaVu Sans Mono", Consolas, monospace;
white-space: pre-wrap;
word-wrap: break-word;
code,
pre {
font-family: Menlo, Consolas, Monaco, Liberation Mono, Lucida Console, monospace;
font-size: 0.875rem;
-webkit-hyphens: none;
hyphens: none;
}
code {
overflow-wrap: anywhere;
}
code.literal {
background-color: var(--colour-inline-code-bg);
color: var(--colour-inline-code-text);
font-size: .8em;
background-color: #f8f8f8;
padding: 1px 2px 1px;
}
pre {
overflow-x: auto;
padding: .5rem .75rem;
white-space: pre;
}
/* Contents rules */
details > summary {
cursor: pointer;
font-size: 1.6rem;
font-weight: bold;
margin-bottom: 1em;
}
details > summary:hover {
text-decoration: underline;
}
/* Definition list rules */
@ -141,16 +164,15 @@ dl dt {
font-weight: bold;
}
dl dd {
margin: 0;
margin-bottom: 0.5rem;
}
/* Horizontal rule rule */
hr {
border: 0;
border-top: 1px solid #ddd;
margin: 1.75rem 0;
border-top: 1px solid var(--colour-rule-light);
}
/*Image rules */
/* Image rules */
img {
max-width: 100%;
}
@ -160,13 +182,6 @@ a img {
}
/* List rules */
ul,
ol {
padding: 0;
margin: 0 0 0 1.5rem;
}
ul {list-style: square}
ol.arabic {list-style: decimal}
ol.loweralpha {list-style: lower-alpha}
ol.upperalpha {list-style: upper-alpha}
ol.lowerroman {list-style: lower-roman}
@ -184,37 +199,64 @@ sup {top: -0.5em}
sub {bottom: -0.25em}
/* Table rules */
div.table-wrapper {
overflow-x: auto;
}
table {
width: 100%;
border-collapse: collapse;
border-top: 1px solid #eee;
border-bottom: 1px solid #eee;
border: 1px solid var(--colour-background-accent-strong);
}
table caption {
margin: 1rem 0 .75rem;
}
table tbody tr:nth-of-type(odd) {
background-color: #eee;
table thead tr {
background-color: var(--colour-background-accent-medium);
color: var(--colour-text-strong);
}
table tbody tr {
border-top: 1px solid var(--colour-background-accent-strong);
}
table th,
table td {
text-align: left;
padding: 0.25rem 0.5rem 0.2rem;
}
table.pep-zero-table tr td:nth-child(1),
table.pep-zero-table tr td:nth-child(2) {
white-space: nowrap;
}
table th + th,
table td + td {
border-left: 1px solid #ddd;
border-left: 1px solid var(--colour-background-accent-strong);
}
/* Common column widths for PEP status tables */
table.pep-zero-table tr td:nth-child(1) {
width: 5.5%;
}
table.pep-zero-table tr td:nth-child(2) {
width: 6.5%;
}
table.pep-zero-table tr td:nth-child(3),
table.pep-zero-table tr td:nth-child(4){
width: 44%;
}
/* Authors & Sponsors table */
#authors-owners table td,
#authors-owners table th {
width: 50%;
}
/* Breadcrumbs rules */
section#pep-page-section > header {
border-bottom: 1px solid #ddd;
border-bottom: 1px solid var(--colour-rule-light);
}
section#pep-page-section > header > h1 {
font-size: 1.1rem;
margin: 0;
display: inline-block;
padding-right: .6rem;
border-right: 1px solid #888;
border-right: 1px solid var(--colour-rule-strong);
}
ul.breadcrumbs {
margin: 0;
@ -229,19 +271,57 @@ ul.breadcrumbs a {
text-decoration: none;
}
/* Dark mode toggle rules */
#colour-scheme-cycler {
background: transparent;
border: none;
padding: 0;
cursor: pointer;
width: 1.2rem;
height: 1.2rem;
float: right;
transform: translate(0, 50%);
}
#colour-scheme-cycler svg {
color: var(--colour-rule-strong);
height: 1.2rem;
width: 1.2rem;
display: none;
}
:root[data-colour_scheme="auto"] #colour-scheme-cycler svg.colour-scheme-icon-when-auto {display: initial}
:root[data-colour_scheme="dark"] #colour-scheme-cycler svg.colour-scheme-icon-when-dark {display: initial}
:root[data-colour_scheme="light"] #colour-scheme-cycler svg.colour-scheme-icon-when-light {display: initial}
/* Admonitions rules */
div.note,
div.warning {
padding: 0.5rem 0.75rem;
margin-top: 1rem;
div.admonition {
background-color: var(--colour-background-accent-medium);
margin-bottom: 1rem;
margin-top: 1rem;
padding: 0.5rem 0.75rem;
}
div.note {
background-color: #eee;
div.admonition a {
color: var(--colour-links-light);
}
div.danger,
div.error {
background-color: var(--colour-error);
}
div.warning {
background-color: #fee;
background-color: var(--colour-warning);
}
div.attention,
div.caution {
background-color: var(--colour-caution);
}
div.important {
background-color: var(--colour-attention);
}
div.hint,
div.tip {
background-color: var(--colour-tip);
}
p.admonition-title {
font-weight: bold;
}
@ -251,42 +331,81 @@ dl.rfc2822,
dl.footnote {
display: grid;
grid-template-columns: fit-content(30%) auto;
line-height: 1.875;
width: 100%;
border-top: 1px solid #eee;
border-bottom: 1px solid #eee;
}
dl.rfc2822 > dt, dl.rfc2822 > dd,
dl.footnote > dt, dl.footnote > dd {
dl.footnote {
border-top: 1px solid var(--colour-rule-strong);
line-height: 1.875;
}
dl.rfc2822 > dt,
dl.rfc2822 > dd {
padding: .1rem .3rem .1rem;
}
dl.footnote > dt,
dl.footnote > dd {
padding: .25rem .5rem .2rem;
border-bottom: 1px solid var(--colour-rule-strong);
}
dl.rfc2822 > dt:nth-of-type(even), dl.rfc2822 > dd:nth-of-type(even),
dl.footnote > dt:nth-of-type(even), dl.footnote > dd:nth-of-type(even) {
background-color: #eee;
dl.rfc2822 > dt {
text-align: right;
}
dl.footnote > dt {
font-weight: normal;
border-right: 1px solid white;
border-right: 1px solid var(--colour-background);
}
dl.rfc2822 > dd,
dl.footnote > dd {
margin: 0;
}
/* Sidebar formatting */
nav#pep-sidebar {
overflow-y: scroll;
#pep-sidebar {
overflow-y: auto;
position: sticky;
top: 0;
height: 100vh;
scrollbar-width: thin; /* CSS Standards, not *yet* widely supported */
scrollbar-color: #ccc transparent;
}
nav#pep-sidebar::-webkit-scrollbar {width: 6px}
nav#pep-sidebar::-webkit-scrollbar-track {background: transparent}
nav#pep-sidebar::-webkit-scrollbar-thumb {background: #ccc}
nav#pep-sidebar > h2 {
#pep-sidebar > h2 {
font-size: 1.4rem;
}
nav#pep-sidebar ul {
#contents ol,
#contents ul,
#pep-sidebar ol,
#pep-sidebar ul {
padding: 0;
margin: 0 0 0 1.5rem;
}
#pep-sidebar ul {
font-size: .9rem;
margin-left: 1rem;
}
nav#pep-sidebar ul a {
#pep-sidebar ul a {
text-decoration: none;
}
#source {
padding-bottom: 2rem;
font-weight: bold;
}
.reference.external > strong {
font-weight: normal; /* Fix strong links for :pep: and :rfc: roles */
}
.visually-hidden {
position: absolute !important;
width: 1px !important;
height: 1px !important;
padding: 0 !important;
margin: -1px !important;
overflow: hidden !important;
clip-path: polygon(0px 0px, 0px 0px, 0px 0px, 0px 0px) !important;
white-space: nowrap !important;
border: 0 !important;
}
/* Sticky banners */
.sticky-banner {
top: 0;
position: sticky;
z-index: 1;
}

View File

@ -0,0 +1,30 @@
// Wrap the tables in PEP bodies in a div, to allow for responsive scrolling
"use strict";
const pepContentId = "pep-content";
// Wrap passed table element in wrapper divs
function wrapTable (table) {
const wrapper = document.createElement("div");
wrapper.classList.add("table-wrapper");
table.parentNode.insertBefore(wrapper, table);
wrapper.appendChild(table);
}
// Wrap all tables in the PEP content in wrapper divs
function wrapPepContentTables () {
const pepContent = document.getElementById(pepContentId);
const bodyTables = pepContent.getElementsByTagName("table");
Array.from(bodyTables).forEach(wrapTable);
}
// Wrap the tables as soon as the DOM is loaded
document.addEventListener("DOMContentLoaded", () => {
if (document.getElementById(pepContentId)) {
wrapPepContentTables();
}
})

View File

@ -1,26 +1,40 @@
{# Master template for simple pages (e.g. RST files) #}
<!DOCTYPE html>
<html lang="en-GB">
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>{{ title + " | "|safe + docstitle }}</title>
<link rel="shortcut icon" href="{{ pathto('_static/py.png', resource=True) }}"/>
<link rel="stylesheet" href="{{ pathto('_static/style.css', resource=True) }}" type="text/css" />
<link rel="stylesheet" href="{{ pathto('_static/mq.css', resource=True) }}" type="text/css" />
<link rel="stylesheet" href="{{ pathto('_static/pygments.css', resource=True) }}" type="text/css" />
<link href="https://fonts.googleapis.com/css2?family=Source+Sans+Pro:ital,wght@0,400;0,700;1,400&display=swap" rel="stylesheet">
<meta name="description" content="Python Enhancement Proposals (PEPs)"/>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="color-scheme" content="light dark">
<title>{{ title + " | peps.python.org"|safe }}</title>
<link rel="shortcut icon" href="{{ pathto('_static/py.png', resource=True) }}">
<link rel="canonical" href="https://peps.python.org/{{ pagename }}/">
<link rel="stylesheet" href="{{ pathto('_static/style.css', resource=True) }}" type="text/css">
<link rel="stylesheet" href="{{ pathto('_static/mq.css', resource=True) }}" type="text/css">
<link rel="stylesheet" href="{{ pathto('_static/pygments.css', resource=True) }}" type="text/css" media="(prefers-color-scheme: light)" id="pyg-light">
<link rel="stylesheet" href="{{ pathto('_static/pygments_dark.css', resource=True) }}" type="text/css" media="(prefers-color-scheme: dark)" id="pyg-dark">
<link rel="alternate" type="application/rss+xml" title="Latest PEPs" href="https://peps.python.org/peps.rss">
<meta name="description" content="Python Enhancement Proposals (PEPs)">
</head>
<body>
{% include "partials/icons.html" %}
<script>
{# set colour scheme from local storage synchronously to avoid a flash of unstyled content #}
document.documentElement.dataset.colour_scheme = localStorage.getItem("colour_scheme") || "auto"
</script>
<section id="pep-page-section">
<header>
<h1>Python Enhancement Proposals</h1>
<ul class="breadcrumbs">
<li><a href="https://www.python.org/" title="The Python Programming Language">Python</a> &raquo; </li>
<li><a href="{{ pathto("pep-0000") }}">PEP Index</a> &raquo; </li>
<li>{{ title }}</li>
<li>{{ title.split("")[0].strip() }}</li>
</ul>
<button id="colour-scheme-cycler" onClick="setColourScheme(nextColourScheme())">
<svg aria-hidden="true" class="colour-scheme-icon-when-auto"><use href="#svg-sun-half"></use></svg>
<svg aria-hidden="true" class="colour-scheme-icon-when-dark"><use href="#svg-moon"></use></svg>
<svg aria-hidden="true" class="colour-scheme-icon-when-light"><use href="#svg-sun"></use></svg>
<span class="visually-hidden">Toggle light / dark / auto colour theme</span>
</button>
</header>
<article>
{{ body }}
@ -28,10 +42,14 @@
<nav id="pep-sidebar">
<h2>Contents</h2>
{{ toc }}
<br />
<strong><a href="https://github.com/python/peps/blob/master/{{sourcename}}">Page Source (GitHub)</a></strong>
<br>
{%- if not pagename.startswith(("pep-0000", "topic")) %}
<a id="source" href="https://github.com/python/peps/blob/main/peps/{{pagename}}.rst">Page Source (GitHub)</a>
{%- endif %}
</nav>
</section>
<script src="{{ pathto('_static/doctools.js', resource=True) }}"></script>
<script src="{{ pathto('_static/colour_scheme.js', resource=True) }}"></script>
<script src="{{ pathto('_static/wrap_tables.js', resource=True) }}"></script>
<script src="{{ pathto('_static/sticky_banner.js', resource=True) }}"></script>
</body>
</html>

View File

@ -0,0 +1,34 @@
{# Adapted from Just the Docs → Furo #}
<svg xmlns="http://www.w3.org/2000/svg" style="display: none;">
<symbol id="svg-sun-half" viewBox="0 0 24 24" pointer-events="all">
<title>Following system colour scheme</title>
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="none"
stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<circle cx="12" cy="12" r="9"></circle>
<path d="M12 3v18m0-12l4.65-4.65M12 14.3l7.37-7.37M12 19.6l8.85-8.85"></path>
</svg>
</symbol>
<symbol id="svg-moon" viewBox="0 0 24 24" pointer-events="all">
<title>Selected dark colour scheme</title>
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="none"
stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<path stroke="none" d="M0 0h24v24H0z" fill="none"></path>
<path d="M12 3c.132 0 .263 0 .393 0a7.5 7.5 0 0 0 7.92 12.446a9 9 0 1 1 -8.313 -12.454z"></path>
</svg>
</symbol>
<symbol id="svg-sun" viewBox="0 0 24 24" pointer-events="all">
<title>Selected light colour scheme</title>
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="none"
stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<circle cx="12" cy="12" r="5"></circle>
<line x1="12" y1="1" x2="12" y2="3"></line>
<line x1="12" y1="21" x2="12" y2="23"></line>
<line x1="4.22" y1="4.22" x2="5.64" y2="5.64"></line>
<line x1="18.36" y1="18.36" x2="19.78" y2="19.78"></line>
<line x1="1" y1="12" x2="3" y2="12"></line>
<line x1="21" y1="12" x2="23" y2="12"></line>
<line x1="4.22" y1="19.78" x2="5.64" y2="18.36"></line>
<line x1="18.36" y1="5.64" x2="19.78" y2="4.22"></line>
</svg>
</symbol>
</svg>

View File

@ -2,3 +2,4 @@
# Theme options
inherit = none
pygments_style = tango
pygments_dark_style = native

View File

@ -1,93 +0,0 @@
from __future__ import annotations
from typing import NamedTuple
class _Name(NamedTuple):
mononym: str = None
forename: str = None
surname: str = None
suffix: str = None
class Author(NamedTuple):
"""Represent PEP authors."""
last_first: str # The author's name in Surname, Forename, Suffix order.
nick: str # Author's nickname for PEP tables. Defaults to surname.
email: str # The author's email address.
def parse_author_email(author_email_tuple: tuple[str, str], authors_overrides: dict[str, dict[str, str]]) -> Author:
"""Parse the name and email address of an author."""
name, email = author_email_tuple
_first_last = name.strip()
email = email.lower()
if _first_last in authors_overrides:
name_dict = authors_overrides[_first_last]
last_first = name_dict["Surname First"]
nick = name_dict["Name Reference"]
return Author(last_first, nick, email)
name_parts = _parse_name(_first_last)
if name_parts.mononym is not None:
return Author(name_parts.mononym, name_parts.mononym, email)
if name_parts.surname[1] == ".":
# Add an escape to avoid docutils turning `v.` into `22.`.
name_parts.surname = f"\\{name_parts.surname}"
if name_parts.suffix:
last_first = f"{name_parts.surname}, {name_parts.forename}, {name_parts.suffix}"
return Author(last_first, name_parts.surname, email)
last_first = f"{name_parts.surname}, {name_parts.forename}"
return Author(last_first, name_parts.surname, email)
def _parse_name(full_name: str) -> _Name:
"""Decompose a full name into parts.
If a mononym (e.g, 'Aahz') then return the full name. If there are
suffixes in the name (e.g. ', Jr.' or 'II'), then find and extract
them. If there is a middle initial followed by a full stop, then
combine the following words into a surname (e.g. N. Vander Weele). If
there is a leading, lowercase portion to the last name (e.g. 'van' or
'von') then include it in the surname.
"""
possible_suffixes = {"Jr", "Jr.", "II", "III"}
pre_suffix, _, raw_suffix = full_name.partition(",")
name_parts = pre_suffix.strip().split(" ")
num_parts = len(name_parts)
suffix = raw_suffix.strip()
if num_parts == 0:
raise ValueError("Name is empty!")
elif num_parts == 1:
return _Name(mononym=name_parts[0], suffix=suffix)
elif num_parts == 2:
return _Name(forename=name_parts[0].strip(), surname=name_parts[1], suffix=suffix)
# handles rogue uncaught suffixes
if name_parts[-1] in possible_suffixes:
suffix = f"{name_parts.pop(-1)} {suffix}".strip()
# handles von, van, v. etc.
if name_parts[-2].islower():
forename = " ".join(name_parts[:-2]).strip()
surname = " ".join(name_parts[-2:])
return _Name(forename=forename, surname=surname, suffix=suffix)
# handles double surnames after a middle initial (e.g. N. Vander Weele)
elif any(s.endswith(".") for s in name_parts):
split_position = [i for i, x in enumerate(name_parts) if x.endswith(".")][-1] + 1
forename = " ".join(name_parts[:split_position]).strip()
surname = " ".join(name_parts[split_position:])
return _Name(forename=forename, surname=surname, suffix=suffix)
# default to using the last item as the surname
else:
forename = " ".join(name_parts[:-1]).strip()
return _Name(forename=forename, surname=name_parts[-1], suffix=suffix)
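Fabricated names run through the decomposition above (this module is removed in this diff, but the behaviour it implemented is worth recording):
_parse_name("Aahz")                     # _Name(mononym="Aahz")
_parse_name("Guido van Rossum")         # forename "Guido", surname "van Rossum"
_parse_name("Michael N. Vander Weele")  # forename "Michael N.", surname "Vander Weele"
_parse_name("Tim Example, Jr.")         # forename "Tim", surname "Example", suffix "Jr."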

View File

@ -19,8 +19,8 @@ STATUS_VALUES = {
SPECIAL_STATUSES = {
"April Fool!": STATUS_REJECTED, # See PEP 401 :)
}
# Draft PEPs have no status displayed, Active shares a key with Accepted
HIDE_STATUS = {STATUS_DRAFT, STATUS_ACTIVE}
# Draft PEPs have no status displayed
HIDE_STATUS = {STATUS_DRAFT}
# Dead PEP statuses
DEAD_STATUSES = {STATUS_REJECTED, STATUS_WITHDRAWN, STATUS_SUPERSEDED}
@ -32,3 +32,32 @@ TYPE_STANDARDS = "Standards Track"
TYPE_VALUES = {TYPE_STANDARDS, TYPE_INFO, TYPE_PROCESS}
# Active PEPs can only be for Informational or Process PEPs.
ACTIVE_ALLOWED = {TYPE_PROCESS, TYPE_INFO}
# map of topic -> additional description
SUBINDICES_BY_TOPIC = {
"governance": """\
These PEPs detail Python's governance, including governance model proposals
and selection, and the results of the annual steering council elections.
""",
"packaging": """\
Packaging PEPs follow the `PyPA specification update process`_.
They are used to propose major additions or changes to the PyPA specifications.
The canonical, up-to-date packaging specifications can be found on the
`Python Packaging Authority`_ (PyPA) `specifications`_ page.
.. _Python Packaging Authority: https://www.pypa.io/
.. _specifications: https://packaging.python.org/en/latest/specifications/
.. _PyPA specification update process: https://www.pypa.io/en/latest/specifications/#specification-update-process
""",
"release": """\
A PEP is written to specify the release cycle for each feature release of Python.
See the `developer's guide`_ for more information.
.. _developer's guide: https://devguide.python.org/devcycle/
""",
"typing": """\
Many recent PEPs propose changes to Python's static type system
or otherwise relate to type annotations.
They are listed here for reference.
""",
}

View File

@ -2,13 +2,10 @@
from __future__ import annotations
import dataclasses
from email.parser import HeaderParser
from pathlib import Path
import re
import textwrap
from typing import TYPE_CHECKING
from pep_sphinx_extensions.pep_zero_generator.author import parse_author_email
from pep_sphinx_extensions.pep_zero_generator.constants import ACTIVE_ALLOWED
from pep_sphinx_extensions.pep_zero_generator.constants import HIDE_STATUS
from pep_sphinx_extensions.pep_zero_generator.constants import SPECIAL_STATUSES
@ -19,8 +16,12 @@ from pep_sphinx_extensions.pep_zero_generator.constants import TYPE_STANDARDS
from pep_sphinx_extensions.pep_zero_generator.constants import TYPE_VALUES
from pep_sphinx_extensions.pep_zero_generator.errors import PEPError
if TYPE_CHECKING:
from pep_sphinx_extensions.pep_zero_generator.author import Author
@dataclasses.dataclass(order=True, frozen=True)
class _Author:
"""Represent PEP authors."""
full_name: str # The author's name.
email: str # The author's email address.
class PEP:
@ -38,7 +39,7 @@ class PEP:
# The required RFC 822 headers for all PEPs.
required_headers = {"PEP", "Title", "Author", "Status", "Type", "Created"}
def __init__(self, filename: Path, authors_overrides: dict):
def __init__(self, filename: Path):
"""Init object from an open PEP file object.
pep_file is full text of the PEP file, filename is path of the PEP file, author_lookup is author exceptions file
@ -89,7 +90,27 @@ class PEP:
self.status: str = status
# Parse PEP authors
self.authors: list[Author] = _parse_authors(self, metadata["Author"], authors_overrides)
self.authors: list[_Author] = _parse_author(metadata["Author"])
if not self.authors:
raise _raise_pep_error(self, "no authors found", pep_num=True)
# Topic (for sub-indices)
_topic = metadata.get("Topic", "").lower().split(",")
self.topic: set[str] = {topic for topic_raw in _topic if (topic := topic_raw.strip())}
# Other headers
self.created = metadata["Created"]
self.discussions_to = metadata["Discussions-To"]
self.python_version = metadata["Python-Version"]
self.replaces = metadata["Replaces"]
self.requires = metadata["Requires"]
self.resolution = metadata["Resolution"]
self.superseded_by = metadata["Superseded-By"]
if metadata["Post-History"]:
# Squash duplicate whitespace
self.post_history = " ".join(metadata["Post-History"].split())
else:
self.post_history = None
def __repr__(self) -> str:
return f"<PEP {self.number:0>4} - {self.title}>"
@ -100,17 +121,46 @@ class PEP:
def __eq__(self, other):
return self.number == other.number
def details(self, *, title_length) -> dict[str, str | int]:
@property
def shorthand(self) -> str:
"""Return reStructuredText tooltip for the PEP type and status."""
type_code = self.pep_type[0].upper()
if self.status in HIDE_STATUS:
return f":abbr:`{type_code} ({self.pep_type}, {self.status})`"
status_code = self.status[0].upper()
return f":abbr:`{type_code}{status_code} ({self.pep_type}, {self.status})`"
@property
def details(self) -> dict[str, str | int]:
"""Return the line entry for the PEP."""
return {
# how the type is to be represented in the index
"type": self.pep_type[0].upper(),
"number": self.number,
"title": _title_abbr(self.title, title_length),
# how the status should be represented in the index
"status": " " if self.status in HIDE_STATUS else self.status[0].upper(),
"title": self.title,
# a tooltip representing the type and status
"shorthand": self.shorthand,
# the author list as a comma-separated with only last names
"authors": ", ".join(author.nick for author in self.authors),
"authors": ", ".join(author.full_name for author in self.authors),
}
@property
def full_details(self) -> dict[str, str | int]:
"""Returns all headers of the PEP as a dict."""
return {
"number": self.number,
"title": self.title,
"authors": ", ".join(author.full_name for author in self.authors),
"discussions_to": self.discussions_to,
"status": self.status,
"type": self.pep_type,
"topic": ", ".join(sorted(self.topic)),
"created": self.created,
"python_version": self.python_version,
"post_history": self.post_history,
"resolution": self.resolution,
"requires": self.requires,
"replaces": self.replaces,
"superseded_by": self.superseded_by,
"url": f"https://peps.python.org/pep-{self.number:0>4}/",
}
@ -120,49 +170,27 @@ def _raise_pep_error(pep: PEP, msg: str, pep_num: bool = False) -> None:
raise PEPError(msg, pep.filename)
def _parse_authors(pep: PEP, author_header: str, authors_overrides: dict) -> list[Author]:
"""Parse Author header line"""
authors_and_emails = _parse_author(author_header)
if not authors_and_emails:
raise _raise_pep_error(pep, "no authors found", pep_num=True)
return [parse_author_email(author_tuple, authors_overrides) for author_tuple in authors_and_emails]
jr_placeholder = ",Jr"
author_angled = re.compile(r"(?P<author>.+?) <(?P<email>.+?)>(,\s*)?")
author_paren = re.compile(r"(?P<email>.+?) \((?P<author>.+?)\)(,\s*)?")
author_simple = re.compile(r"(?P<author>[^,]+)(,\s*)?")
def _parse_author(data: str) -> list[tuple[str, str]]:
def _parse_author(data: str) -> list[_Author]:
"""Return a list of author names and emails."""
author_list = []
for regex in (author_angled, author_paren, author_simple):
for match in regex.finditer(data):
# Watch out for suffixes like 'Jr.' when they are comma-separated
# from the name and thus cause issues when *all* names are only
# separated by commas.
match_dict = match.groupdict()
author = match_dict["author"]
if not author.partition(" ")[1] and author.endswith("."):
prev_author = author_list.pop()
author = ", ".join([prev_author, author])
if "email" not in match_dict:
email = ""
else:
email = match_dict["email"]
author_list.append((author, email))
data = (data.replace("\n", " ")
.replace(", Jr", jr_placeholder)
.rstrip().removesuffix(","))
for author_email in data.split(", "):
if ' <' in author_email:
author, email = author_email.removesuffix(">").split(" <")
else:
author, email = author_email, ""
# If authors were found then stop searching as only expect one
# style of author citation.
if author_list:
break
author = author.strip()
if author == "":
raise ValueError("Name is empty!")
author = author.replace(jr_placeholder, ", Jr")
email = email.lower()
author_list.append(_Author(author, email))
return author_list
def _title_abbr(title, title_length) -> str:
"""Shorten the title to be no longer than the max title length."""
if len(title) <= title_length:
return title
wrapped_title, *_excess = textwrap.wrap(title, title_length - 4)
return f"{wrapped_title} ..."

View File

@ -17,49 +17,56 @@ to allow it to be processed as normal.
"""
from __future__ import annotations
import csv
import json
import os
from pathlib import Path
import re
from typing import TYPE_CHECKING
from pep_sphinx_extensions.pep_zero_generator import parser
from pep_sphinx_extensions.pep_zero_generator import subindices
from pep_sphinx_extensions.pep_zero_generator import writer
from pep_sphinx_extensions.pep_zero_generator.constants import SUBINDICES_BY_TOPIC
if TYPE_CHECKING:
from sphinx.application import Sphinx
from sphinx.environment import BuildEnvironment
def create_pep_zero(_: Sphinx, env: BuildEnvironment, docnames: list[str]) -> None:
# Sphinx app object is unneeded by this function
def _parse_peps(path: Path) -> list[parser.PEP]:
# Read from root directory
path = Path(".")
pep_zero_filename = "pep-0000"
peps: list[parser.PEP] = []
pep_pat = re.compile(r"pep-\d{4}") # Path.match() doesn't support regular expressions
# AUTHOR_OVERRIDES.csv is an exception file for PEP0 name parsing
with open("AUTHOR_OVERRIDES.csv", "r", encoding="utf-8") as f:
authors_overrides = {}
for line in csv.DictReader(f):
full_name = line.pop("Overridden Name")
authors_overrides[full_name] = line
for file_path in path.iterdir():
if not file_path.is_file():
continue # Skip directories etc.
if file_path.match("pep-0000*"):
continue # Skip pre-existing PEP 0 files
if pep_pat.match(str(file_path)) and file_path.suffix in {".txt", ".rst"}:
pep = parser.PEP(path.joinpath(file_path).absolute(), authors_overrides)
if file_path.match("pep-????.rst"):
pep = parser.PEP(path.joinpath(file_path).absolute())
peps.append(pep)
pep0_text = writer.PEPZeroWriter().write_pep0(sorted(peps))
Path(f"{pep_zero_filename}.rst").write_text(pep0_text, encoding="utf-8")
return sorted(peps)
# Add to files for builder
docnames.insert(1, pep_zero_filename)
# Add to files for writer
env.found_docs.add(pep_zero_filename)
def create_pep_json(peps: list[parser.PEP]) -> str:
return json.dumps({pep.number: pep.full_details for pep in peps}, indent=1)
def write_peps_json(peps: list[parser.PEP], path: Path) -> None:
# Create peps.json
json_peps = create_pep_json(peps)
Path(path, "peps.json").write_text(json_peps, encoding="utf-8")
os.makedirs(os.path.join(path, "api"), exist_ok=True)
Path(path, "api", "peps.json").write_text(json_peps, encoding="utf-8")
def create_pep_zero(app: Sphinx, env: BuildEnvironment, docnames: list[str]) -> None:
peps = _parse_peps(Path(app.srcdir))
pep0_text = writer.PEPZeroWriter().write_pep0(peps, builder=env.settings["builder"])
pep0_path = subindices.update_sphinx("pep-0000", pep0_text, docnames, env)
peps.append(parser.PEP(pep0_path))
subindices.generate_subindices(SUBINDICES_BY_TOPIC, peps, docnames, env)
write_peps_json(peps, Path(app.outdir))
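The generated api/peps.json maps PEP numbers to their full headers; a single-entry sketch of the shape, with a fabricated subset of the fields:
import json

# indent=1, and int keys are serialised as strings, matching create_pep_json above:
print(json.dumps({8: {"number": 8, "title": "Style Guide for Python Code"}}, indent=1))
# {
#  "8": {
#   "number": 8,
#   "title": "Style Guide for Python Code"
#  }
# }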

View File

@ -0,0 +1,76 @@
"""Utilities to support sub-indices for PEPs."""
from __future__ import annotations
import os
from pathlib import Path
from typing import TYPE_CHECKING
from pep_sphinx_extensions.pep_zero_generator import writer
if TYPE_CHECKING:
from sphinx.environment import BuildEnvironment
from pep_sphinx_extensions.pep_zero_generator.parser import PEP
def update_sphinx(filename: str, text: str, docnames: list[str], env: BuildEnvironment) -> Path:
file_path = Path(env.srcdir, f"{filename}.rst")
file_path.write_text(text, encoding="utf-8")
# Add to files for builder
docnames.append(filename)
# Add to files for writer
env.found_docs.add(filename)
return file_path
def generate_subindices(
subindices: dict[str, str],
peps: list[PEP],
docnames: list[str],
env: BuildEnvironment,
) -> None:
# create topic directory
os.makedirs(os.path.join(env.srcdir, "topic"), exist_ok=True)
# Create sub index page
generate_topic_contents(docnames, env)
for subindex, additional_description in subindices.items():
header_text = f"{subindex.title()} PEPs"
header_line = "#" * len(header_text)
header = header_text + "\n" + header_line + "\n"
topic = subindex.lower()
filtered_peps = [pep for pep in peps if topic in pep.topic]
subindex_intro = f"""\
This is the index of all Python Enhancement Proposals (PEPs) labelled
under the '{subindex.title()}' topic. This is a sub-index of :pep:`0`,
the PEP index.
{additional_description}
"""
subindex_text = writer.PEPZeroWriter().write_pep0(
filtered_peps, header, subindex_intro, is_pep0=False,
)
update_sphinx(f"topic/{subindex}", subindex_text, docnames, env)
def generate_topic_contents(docnames: list[str], env: BuildEnvironment):
update_sphinx("topic/index", """\
.. _topic-index:
Topic Index
***********
PEPs are indexed by topic on the pages below:
.. toctree::
:maxdepth: 1
:titlesonly:
:glob:
*
""", docnames, env)

Some files were not shown because too many files have changed in this diff Show More