commit 4afb0ade0d
Merge remote-tracking branch 'upstream/main' into pep-716
@@ -3,3 +3,7 @@
 *.png binary
 *.pptx binary
 *.odp binary
+
+# Instruct linguist not to ignore the PEPs
+# https://github.com/github-linguist/linguist/blob/master/docs/overrides.md
+peps/*.rst text linguist-detectable
(File diff suppressed because it is too large)
@@ -10,7 +10,7 @@ If your PEP is not Standards Track, remove the corresponding section.
 ## Basic requirements (all PEP Types)

 * [ ] Read and followed [PEP 1](https://peps.python.org/1) & [PEP 12](https://peps.python.org/12)
-* [ ] File created from the [latest PEP template](https://github.com/python/peps/blob/main/pep-0012/pep-NNNN.rst?plain=1)
+* [ ] File created from the [latest PEP template](https://github.com/python/peps/blob/main/peps/pep-0012/pep-NNNN.rst?plain=1)
 * [ ] PEP has next available number, & set in filename (``pep-NNNN.rst``), PR title (``PEP 123: <Title of PEP>``) and ``PEP`` header
 * [ ] Title clearly, accurately and concisely describes the content in 79 characters or less
 * [ ] Core dev/PEP editor listed as ``Author`` or ``Sponsor``, and formally confirmed their approval
@@ -9,4 +9,4 @@ If you're unsure about something, just leave it blank and we'll take a look.
 * [ ] Any substantial changes since the accepted version approved by the SC/PEP delegate
 * [ ] Pull request title in appropriate format (``PEP 123: Mark Final``)
 * [ ] ``Status`` changed to ``Final`` (and ``Python-Version`` is correct)
-* [ ] Canonical docs/spec linked with a ``canonical-doc`` directive (or ``pypa-spec``, for packaging PEPs)
+* [ ] Canonical docs/spec linked with a ``canonical-doc`` directive (or ``canonical-pypa-spec``, for packaging PEPs)
@@ -1,12 +1,18 @@
 name: Read the Docs PR preview

 on:
   pull_request_target:
     types:
       - opened

+permissions:
+  contents: read
+  pull-requests: write

+concurrency:
+  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
+  cancel-in-progress: true

 jobs:
   documentation-links:
     runs-on: ubuntu-latest
@@ -1,6 +1,20 @@
-name: Lint
+name: Lint PEPs

-on: [push, pull_request, workflow_dispatch]
+on:
+  push:
+  pull_request:
+  workflow_dispatch:

+permissions:
+  contents: read
+
+concurrency:
+  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
+  cancel-in-progress: true
+
+env:
+  FORCE_COLOR: 1
+  RUFF_FORMAT: github
+
 jobs:
   pre-commit:
@@ -8,13 +22,12 @@ jobs:
     runs-on: ubuntu-latest

     steps:
-      - name: Check out repo
-        uses: actions/checkout@v3
+      - uses: actions/checkout@v4
       - name: Set up Python 3
         uses: actions/setup-python@v4
         with:
-          python-version: '3.x'
+          python-version: "3.x"
           cache: pip

       - name: Run pre-commit hooks
         uses: pre-commit/action@v3.0.0
@@ -23,3 +36,17 @@ jobs:
         uses: pre-commit/action@v3.0.0
         with:
           extra_args: --all-files --hook-stage manual codespell || true
+
+  check-peps:
+    name: Run check-peps
+    runs-on: ubuntu-latest
+
+    steps:
+      - uses: actions/checkout@v3
+      - name: Set up Python 3
+        uses: actions/setup-python@v4
+        with:
+          python-version: "3"
+
+      - name: Run check-peps
+        run: python check-peps.py --detailed
@@ -1,48 +1,65 @@
 name: Render PEPs

-on: [push, pull_request, workflow_dispatch]
+on:
+  push:
+  pull_request:
+  workflow_dispatch:

+permissions:
+  contents: read

+concurrency:
+  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
+  cancel-in-progress: true

+env:
+  FORCE_COLOR: 1

 jobs:
   render-peps:
+    name: Render PEPs
     runs-on: ubuntu-latest
+    permissions:
+      contents: write
     strategy:
       fail-fast: false
       matrix:
-        python-version: ["3.x", "3.12-dev"]
+        python-version:
+          - "3.x"
+          - "3.12-dev"

     steps:
-      - name: 🛎️ Checkout
-        uses: actions/checkout@v3
+      - name: Checkout
+        uses: actions/checkout@v4
+        with:
+          fetch-depth: 0  # fetch all history so that last modified date-times are accurate

-      - name: 🐍 Set up Python ${{ matrix.python-version }}
+      - name: Set up Python ${{ matrix.python-version }}
         uses: actions/setup-python@v4
         with:
           python-version: ${{ matrix.python-version }}
           cache: pip

-      - name: 👷 Install dependencies
+      - name: Update pip
         run: |
           python -m pip install --upgrade pip

-      - name: 🔧 Render PEPs
+      - name: Render PEPs
         run: make dirhtml JOBS=$(nproc)

       # remove the .doctrees folder when building for deployment as it takes two thirds of disk space
-      - name: 🔥 Clean up files
+      - name: Clean up files
         run: rm -r build/.doctrees/

-      - name: 🚀 Deploy to GitHub pages
+      - name: Deploy to GitHub pages
         # This allows CI to build branches for testing
         if: (github.ref == 'refs/heads/main') && (matrix.python-version == '3.x')
         uses: JamesIves/github-pages-deploy-action@v4
         with:
-          folder: build # Synchronise with build.py -> build_directory
+          folder: build # Synchronise with Makefile -> OUTPUT_DIR
           single-commit: true # Delete existing files

-      - name: ♻️ Purge CDN cache
+      - name: Purge CDN cache
         if: github.ref == 'refs/heads/main'
         run: |
           curl -H "Accept: application/json" -H "Fastly-Key: $FASTLY_TOKEN" -X POST "https://api.fastly.com/service/$FASTLY_SERVICE_ID/purge_all"
@@ -13,6 +13,13 @@ on:
       - "tox.ini"
   workflow_dispatch:

+permissions:
+  contents: read

+concurrency:
+  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
+  cancel-in-progress: true

+env:
+  FORCE_COLOR: 1

@@ -22,12 +29,18 @@ jobs:
     strategy:
       fail-fast: false
       matrix:
-        python-version: ["3.9", "3.10", "3.11", "3.12-dev"]
-        os: [windows-latest, macos-latest, ubuntu-latest]
+        python-version:
+          - "3.9"
+          - "3.10"
+          - "3.11"
+          - "3.12-dev"
+        os:
+          - "windows-latest"
+          - "macos-latest"
+          - "ubuntu-latest"

     steps:
-      - uses: actions/checkout@v3
-
+      - uses: actions/checkout@v4
       - name: Set up Python ${{ matrix.python-version }}
         uses: actions/setup-python@v4
         with:
@@ -40,7 +53,7 @@ jobs:
           python -m pip install -U wheel
           python -m pip install -U tox

-      - name: Run tests with tox
+      - name: Run tests
        run: |
          tox -e py -- -v --cov-report term
@@ -1,18 +1,27 @@
coverage.xml
pep-0000.txt
# PEPs
pep-0000.rst
pep-????.html
peps.rss
topic
/build

# Bytecode
__pycache__
*.pyc
*.pyo
*.py[co]

# Editors
*~
*env
.coverage
.tox
.idea
.vscode
*.swp
/build
/package
/topic

# Tests
coverage.xml
.coverage
.tox

# Virtual environments
*env
/venv

# Builds
/sphinx-warnings.txt
@@ -43,7 +43,7 @@ repos:
         name: "Check YAML"

   - repo: https://github.com/psf/black
-    rev: 22.12.0
+    rev: 23.7.0
     hooks:
       - id: black
         name: "Format with Black"
@@ -52,22 +52,23 @@ repos:
           - '--target-version=py310'
         files: 'pep_sphinx_extensions/tests/.*'

-  - repo: https://github.com/PyCQA/isort
-    rev: 5.12.0
+  - repo: https://github.com/astral-sh/ruff-pre-commit
+    rev: v0.0.287
     hooks:
-      - id: isort
-        name: "Sort imports with isort"
-        args: ['--profile=black', '--atomic']
-        files: 'pep_sphinx_extensions/tests/.*'
+      - id: ruff
+        name: "Lint with Ruff"
+        args:
+          - '--exit-non-zero-on-fix'
+        files: '^pep_sphinx_extensions/tests/'

   - repo: https://github.com/tox-dev/tox-ini-fmt
-    rev: 0.6.1
+    rev: 1.3.1
     hooks:
       - id: tox-ini-fmt
         name: "Format tox.ini"

   - repo: https://github.com/sphinx-contrib/sphinx-lint
-    rev: v0.6.7
+    rev: v0.6.8
     hooks:
       - id: sphinx-lint
         name: "Sphinx lint"
@@ -79,20 +80,16 @@ repos:
     hooks:
       - id: rst-backticks
         name: "Check RST: No single backticks"
         files: '^pep-\d\.txt|\.rst$'
-        types: [text]

       - id: rst-inline-touching-normal
         name: "Check RST: No backticks touching text"
         files: '^pep-\d+\.txt|\.rst$'
-        types: [text]

       - id: rst-directive-colons
         name: "Check RST: 2 colons after directives"
         files: '^pep-\d+\.txt|\.rst$'
-        types: [text]

   # Manual codespell check
   - repo: https://github.com/codespell-project/codespell
-    rev: v2.2.2
+    rev: v2.2.5
     hooks:
       - id: codespell
         name: "Check for common misspellings in text files"
@@ -101,152 +98,134 @@ repos:
   # Local checks for PEP headers and more
   - repo: local
     hooks:
       - id: check-no-tabs
         name: "Check tabs not used in PEPs"
         language: pygrep
         entry: '\t'
         files: '^pep-\d+\.(rst|txt)$'
         types: [text]
       # # Hook to run "check-peps.py"
       # - id: "check-peps"
       #   name: "Check PEPs for metadata and content enforcement"
       #   entry: "python check-peps.py"
       #   language: "system"
       #   files: "^pep-\d{4}\.(rst|txt)$"
       #   require_serial: true

       - id: check-required-headers
         name: "PEPs must have all required headers"
         language: pygrep
-        entry: '(?-m:^PEP:(?=[\s\S]*\nTitle:)(?=[\s\S]*\nAuthor:)(?=[\s\S]*\nStatus:)(?=[\s\S]*\nType:)(?=[\s\S]*\nContent-Type:)(?=[\s\S]*\nCreated:))'
+        entry: '(?-m:^PEP:(?=[\s\S]*\nTitle:)(?=[\s\S]*\nAuthor:)(?=[\s\S]*\nStatus:)(?=[\s\S]*\nType:)(?=[\s\S]*\nCreated:))'
         args: ['--negate', '--multiline']
-        files: '^pep-\d+\.(rst|txt)$'
-        types: [text]
+        files: '^peps/pep-\d+\.rst$'

       - id: check-header-order
         name: "PEP header order must follow PEP 12"
         language: pygrep
-        entry: '^PEP:[^\n]+\nTitle:[^\n]+\n(Version:[^\n]+\n)?(Last-Modified:[^\n]+\n)?Author:[^\n]+\n( +\S[^\n]+\n)*(Sponsor:[^\n]+\n)?((PEP|BDFL)-Delegate:[^\n]*\n)?(Discussions-To:[^\n]*\n)?Status:[^\n]+\nType:[^\n]+\n(Topic:[^\n]+\n)?Content-Type:[^\n]+\n(Requires:[^\n]+\n)?Created:[^\n]+\n(Python-Version:[^\n]*\n)?(Post-History:[^\n]*\n( +\S[^\n]*\n)*)?(Replaces:[^\n]+\n)?(Superseded-By:[^\n]+\n)?(Resolution:[^\n]*\n)?\n'
+        entry: '^PEP:[^\n]+\nTitle:[^\n]+\n(Version:[^\n]+\n)?(Last-Modified:[^\n]+\n)?Author:[^\n]+\n( +\S[^\n]+\n)*(Sponsor:[^\n]+\n)?((PEP|BDFL)-Delegate:[^\n]*\n)?(Discussions-To:[^\n]*\n)?Status:[^\n]+\nType:[^\n]+\n(Topic:[^\n]+\n)?(Content-Type:[^\n]+\n)?(Requires:[^\n]+\n)?Created:[^\n]+\n(Python-Version:[^\n]*\n)?(Post-History:[^\n]*\n( +\S[^\n]*\n)*)?(Replaces:[^\n]+\n)?(Superseded-By:[^\n]+\n)?(Resolution:[^\n]*\n)?\n'
         args: ['--negate', '--multiline']
-        files: '^pep-\d+\.(rst|txt)$'
-        types: [text]
+        files: '^peps/pep-\d+\.rst$'

       - id: validate-pep-number
         name: "'PEP' header must be a number 1-9999"
         language: pygrep
         entry: '(?-m:^PEP:(?:(?! +(0|[1-9][0-9]{0,3})\n)))'
         args: ['--multiline']
-        files: '^pep-\d+\.(rst|txt)$'
-        types: [text]
+        files: '^peps/pep-\d+\.rst$'

       - id: validate-title
         name: "'Title' must be 1-79 characters"
         language: pygrep
         entry: '(?<=\n)Title:(?:(?! +\S.{1,78}\n(?=[A-Z])))'
         args: ['--multiline']
-        files: '^pep-\d+\.(rst|txt)$'
-        exclude: '^pep-(0499)\.(rst|txt)$'
-        types: [text]
+        files: '^peps/pep-\d+\.rst$'
+        exclude: '^peps/pep-(0499)\.rst$'

       - id: validate-author
         name: "'Author' must be list of 'Name <email@example.com>, ...'"
         language: pygrep
         entry: '(?<=\n)Author:(?:(?!((( +|\n {1,8})[^!#$%&()*+,/:;<=>?@\[\\\]\^_`{|}~]+( <[\w!#$%&''*+\-/=?^_{|}~.]+(@| at )[\w\-.]+\.[A-Za-z0-9]+>)?)(,|(?=\n[^ ])))+\n(?=[A-Z])))'
-        args: [--multiline]
-        files: '^pep-\d+\.(rst|txt)$'
-        types: [text]
+        args: ["--multiline"]
+        files: '^peps/pep-\d+\.rst$'

       - id: validate-sponsor
         name: "'Sponsor' must have format 'Name <email@example.com>'"
         language: pygrep
         entry: '^Sponsor:(?: (?! *[^!#$%&()*+,/:;<=>?@\[\\\]\^_`{|}~]+( <[\w!#$%&''*+\-/=?^_{|}~.]+(@| at )[\w\-.]+\.[A-Za-z0-9]+>)?$))'
-        files: '^pep-\d+\.(rst|txt)$'
-        types: [text]
+        files: '^peps/pep-\d+\.rst$'

       - id: validate-delegate
         name: "'Delegate' must have format 'Name <email@example.com>'"
         language: pygrep
         entry: '^(PEP|BDFL)-Delegate: (?:(?! *[^!#$%&()*+,/:;<=>?@\[\\\]\^_`{|}~]+( <[\w!#$%&''*+\-/=?^_{|}~.]+(@| at )[\w\-.]+\.[A-Za-z0-9]+>)?$))'
-        files: '^pep-\d+\.(rst|txt)$'
-        exclude: '^pep-(0451)\.(rst|txt)$'
-        types: [text]
+        files: '^peps/pep-\d+\.rst$'
+        exclude: '^peps/pep-(0451)\.rst$'

       - id: validate-discussions-to
         name: "'Discussions-To' must be a thread URL"
         language: pygrep
         entry: '^Discussions-To: (?:(?!([\w\-]+@(python\.org|googlegroups\.com))|https://((discuss\.python\.org/t/([\w\-]+/)?\d+/?)|(mail\.python\.org/pipermail/[\w\-]+/\d{4}-[A-Za-z]+/[A-Za-z0-9]+\.html)|(mail\.python\.org/archives/list/[\w\-]+@python\.org/thread/[A-Za-z0-9]+/?))$))'
-        files: '^pep-\d+\.(rst|txt)$'
-        types: [text]
+        files: '^peps/pep-\d+\.rst$'

       - id: validate-status
         name: "'Status' must be a valid PEP status"
         language: pygrep
         entry: '^Status:(?:(?! +(Draft|Withdrawn|Rejected|Accepted|Final|Active|Provisional|Deferred|Superseded|April Fool!)$))'
-        files: '^pep-\d+\.(rst|txt)$'
-        types: [text]
+        files: '^peps/pep-\d+\.rst$'

       - id: validate-type
         name: "'Type' must be a valid PEP type"
         language: pygrep
         entry: '^Type:(?:(?! +(Standards Track|Informational|Process)$))'
-        files: '^pep-\d+\.(rst|txt)$'
-        types: [text]
+        files: '^peps/pep-\d+\.rst$'

       - id: validate-topic
         name: "'Topic' must be for a valid sub-index"
         language: pygrep
         entry: '^Topic:(?:(?! +(Governance|Packaging|Typing|Release)(, (Governance|Packaging|Typing|Release))*$))'
-        files: '^pep-\d+\.(rst|txt)$'
-        types: [text]
+        files: '^peps/pep-\d+\.rst$'

       - id: validate-content-type
         name: "'Content-Type' must be 'text/x-rst'"
         language: pygrep
         entry: '^Content-Type:(?:(?! +text/x-rst$))'
-        files: '^pep-\d+\.(rst|txt)$'
-        types: [text]
+        files: '^peps/pep-\d+\.rst$'

       - id: validate-pep-references
         name: "`Requires`/`Replaces`/`Superseded-By` must be 'NNN' PEP IDs"
         language: pygrep
         entry: '^(Requires|Replaces|Superseded-By):(?:(?! *( (0|[1-9][0-9]{0,3})(,|$))+$))'
-        files: '^pep-\d+\.(rst|txt)$'
-        types: [text]
+        files: '^peps/pep-\d+\.rst$'

       - id: validate-created
         name: "'Created' must be a 'DD-mmm-YYYY' date"
         language: pygrep
         entry: '^Created:(?:(?! +([0-2][0-9]|(3[01]))-(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)-(199[0-9]|20[0-9][0-9])$))'
-        files: '^pep-\d+\.(rst|txt)$'
-        types: [text]
+        files: '^peps/pep-\d+\.rst$'

       - id: validate-python-version
         name: "'Python-Version' must be a 'X.Y[.Z]` version"
         language: pygrep
         entry: '^Python-Version:(?:(?! *( [1-9]\.([0-9][0-9]?|x)(\.[1-9][0-9]?)?(,|$))+$))'
-        files: '^pep-\d+\.(rst|txt)$'
-        types: [text]
+        files: '^peps/pep-\d+\.rst$'

       - id: validate-post-history
         name: "'Post-History' must be '`DD-mmm-YYYY <Thread URL>`__, ...'"
         language: pygrep
         entry: '(?<=\n)Post-History:(?:(?! ?\n|((( +|\n {1,14})(([0-2][0-9]|(3[01]))-(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)-(199[0-9]|20[0-9][0-9])|`([0-2][0-9]|(3[01]))-(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)-(199[0-9]|20[0-9][0-9]) <https://((discuss\.python\.org/t/([\w\-]+/)?\d+(?:/\d+/|/?))|(mail\.python\.org/pipermail/[\w\-]+/\d{4}-[A-Za-z]+/[A-Za-z0-9]+\.html)|(mail\.python\.org/archives/list/[\w\-]+@python\.org/thread/[A-Za-z0-9]+/?(#[A-Za-z0-9]+)?))>`__)(,|(?=\n[^ ])))+\n(?=[A-Z\n]))))'
         args: [--multiline]
-        files: '^pep-\d+\.(rst|txt)$'
-        types: [text]
+        files: '^peps/pep-\d+\.rst$'

       - id: validate-resolution
         name: "'Resolution' must be a direct thread/message URL"
         language: pygrep
         entry: '(?<!\n\n)(?<=\n)Resolution: (?:(?!https://((discuss\.python\.org/t/([\w\-]+/)?\d+(/\d+)?/?)|(mail\.python\.org/pipermail/[\w\-]+/\d{4}-[A-Za-z]+/[A-Za-z0-9]+\.html)|(mail\.python\.org/archives/list/[\w\-]+@python\.org/(message|thread)/[A-Za-z0-9]+/?(#[A-Za-z0-9]+)?))\n))'
         args: ['--multiline']
-        files: '^pep-\d+\.(rst|txt)$'
-        types: [text]
+        files: '^peps/pep-\d+\.rst$'

       - id: check-direct-pep-links
         name: "Check that PEPs aren't linked directly"
         language: pygrep
         entry: '(dev/peps|peps\.python\.org)/pep-\d+'
-        files: '^pep-\d+\.(rst|txt)$'
-        exclude: '^pep-(0009|0287|0676|0684|8001)\.(rst|txt)$'
-        types: [text]
+        files: '^peps/pep-\d+\.rst$'
+        exclude: '^peps/pep-(0009|0287|0676|0684|8001)\.rst$'

       - id: check-direct-rfc-links
         name: "Check that RFCs aren't linked directly"
         language: pygrep
         entry: '(rfc-editor\.org|ietf\.org)/[\.\-_\?\&\#\w/]*[Rr][Ff][Cc][\-_]?\d+'
-        files: '\.(rst|txt)$'
-        types: [text]
+        types: ['rst']
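The reworked ``check-required-headers`` hook above drops ``Content-Type`` from the required set. As a sketch of how these pygrep hooks behave: with ``--negate --multiline``, a file fails the hook when the pattern does *not* match, i.e. when any required header is missing. The sample header block below is hypothetical:

```python
import re

# New "check-required-headers" pattern: each lookahead demands one
# required header somewhere after the leading "PEP:" line.
PATTERN = re.compile(
    r'(?-m:^PEP:(?=[\s\S]*\nTitle:)(?=[\s\S]*\nAuthor:)'
    r'(?=[\s\S]*\nStatus:)(?=[\s\S]*\nType:)(?=[\s\S]*\nCreated:))'
)

good = (
    "PEP: 9999\nTitle: Example\nAuthor: A. Person\n"
    "Status: Draft\nType: Process\nCreated: 01-Jan-2023\n"
)
bad = good.replace("Status: Draft\n", "")  # drop a required header

assert PATTERN.match(good) is not None  # all required headers present
assert PATTERN.match(bad) is None       # missing Status -> pattern fails
```

Because the check is a set of independent lookaheads, header *order* is not enforced here; that is left to the separate ``check-header-order`` hook.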
@@ -0,0 +1,15 @@
ignore = [
  "E501",  # Line too long
]

select = [
  "E",   # pycodestyle errors
  "F",   # pyflakes
  "I",   # isort
  "PT",  # flake8-pytest-style
  "W",   # pycodestyle warnings
]

show-source = true

target-version = "py39"
@@ -1,13 +0,0 @@
Overridden Name,Surname First,Name Reference
The Python core team and community,"The Python core team and community",python-dev
Erik De Bonte,"De Bonte, Erik",De Bonte
Greg Ewing,"Ewing, Gregory",Ewing
Guido van Rossum,"van Rossum, Guido (GvR)",GvR
Inada Naoki,"Inada, Naoki",Inada
Jim Jewett,"Jewett, Jim J.",Jewett
Just van Rossum,"van Rossum, Just (JvR)",JvR
Martin v. Löwis,"von Löwis, Martin",von Löwis
Nathaniel Smith,"Smith, Nathaniel J.",Smith
P.J. Eby,"Eby, Phillip J.",Eby
Germán Méndez Bravo,"Méndez Bravo, Germán",Méndez Bravo
Amethyst Reese,"Reese, Amethyst",Amethyst
@@ -37,11 +37,30 @@ which don't significantly impair meaning and understanding.

 If you're still unsure, we encourage you to reach out first before opening a
 PR here. For example, you could contact the PEP author(s), propose your idea in
-a discussion venue appropriate to the PEP (such as `Typing-SIG
-<https://mail.python.org/archives/list/typing-sig@python.org/>`__ for static
+a discussion venue appropriate to the PEP (such as `Typing Discourse
+<https://discuss.python.org/c/typing/>`__ for static
 typing, or `Packaging Discourse <https://discuss.python.org/c/packaging/>`__
 for packaging), or `open an issue <https://github.com/python/peps/issues>`__.

+Opening a pull request
+----------------------
+
+The PEPs repository defines a set of pull request templates, which should be
+used when opening a PR.
+
+If you use Git from the command line, you may be accustomed to creating PRs
+by following the URL that is provided after pushing a new branch. **Do not use
+this link**, as it does not provide the option to populate the PR template.
+
+However, you *can* use the ``gh`` command line tool. ``gh pr create`` will allow
+you to create a pull request, will prompt you for the template you wish to use,
+and then give you the option of continuing editing in your browser.
+
+Alternatively, after pushing your branch, you can visit
+`https://github.com/python/peps <https://github.com/python/peps>`__, and follow
+the link in the notification about recent changes to your branch to
+create a new PR. The in-browser interface will allow you to select a PR template
+for your new PR.
+
 Commit messages and PR titles
 -----------------------------
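The ``gh`` workflow described in the CONTRIBUTING text above can be sketched as follows; the branch name and PEP number are hypothetical, and the commands are wrapped in ``echo`` so the sketch is side-effect free (drop the ``echo`` to run them for real):

```shell
# Hypothetical: after committing on a feature branch, push it and open
# a PR with the GitHub CLI; `gh pr create` prompts for one of the
# repository's PR templates when several are defined.
echo git push -u origin pep-9999-example
echo gh pr create --title "PEP 9999: Example title"
# Add --web to `gh pr create` to finish editing the PR in your browser.
```

Note that the PR title format shown matches the checklist requirement above (``PEP 123: <Title of PEP>``).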
Makefile | 48
@@ -1,15 +1,23 @@
 # Builds PEP files to HTML using sphinx

-PYTHON=python3
-VENVDIR=.venv
-JOBS=8
-OUTPUT_DIR=build
-RENDER_COMMAND=$(VENVDIR)/bin/python3 build.py -j $(JOBS) -o $(OUTPUT_DIR)
+# You can set these variables from the command line.
+PYTHON = python3
+VENVDIR = .venv
+SPHINXBUILD = PATH=$(VENVDIR)/bin:$$PATH sphinx-build
+BUILDER = html
+JOBS = 8
+SOURCES =
+# synchronise with render.yml -> deploy step
+OUTPUT_DIR = build
+SPHINXERRORHANDLING = -W --keep-going -w sphinx-warnings.txt
+
+ALLSPHINXOPTS = -b $(BUILDER) -j $(JOBS) \
+                $(SPHINXOPTS) $(SPHINXERRORHANDLING) peps $(OUTPUT_DIR) $(SOURCES)

 ## html to render PEPs to "pep-NNNN.html" files
 .PHONY: html
 html: venv
-	$(RENDER_COMMAND)
+	$(SPHINXBUILD) $(ALLSPHINXOPTS)

 ## htmlview to open the index page built by the html target in your browser
 .PHONY: htmlview
@@ -18,23 +26,15 @@ htmlview: html

 ## dirhtml to render PEPs to "index.html" files within "pep-NNNN" directories
 .PHONY: dirhtml
-dirhtml: venv rss
-	$(RENDER_COMMAND) --build-dirs
-
-## fail-warning to render PEPs to "pep-NNNN.html" files and fail the Sphinx build on any warning
-.PHONY: fail-warning
-fail-warning: venv
-	$(RENDER_COMMAND) --fail-on-warning
+dirhtml: BUILDER = dirhtml
+dirhtml: venv
+	$(SPHINXBUILD) $(ALLSPHINXOPTS)

 ## check-links to check validity of links within PEP sources
 .PHONY: check-links
+check-links: BUILDER = linkcheck
 check-links: venv
-	$(RENDER_COMMAND) --check-links
-
-## rss to generate the peps.rss file
-.PHONY: rss
-rss: venv
-	$(VENVDIR)/bin/python3 generate_rss.py -o $(OUTPUT_DIR)
+	$(SPHINXBUILD) $(ALLSPHINXOPTS)

 ## clean to remove the venv and build files
 .PHONY: clean
@@ -76,16 +76,6 @@ spellcheck: venv
 	$(VENVDIR)/bin/python3 -m pre_commit --version > /dev/null || $(VENVDIR)/bin/python3 -m pip install pre-commit
 	$(VENVDIR)/bin/python3 -m pre_commit run --all-files --hook-stage manual codespell

-## render (deprecated: use 'make html' alias instead)
-.PHONY: render
-render: html
-	@echo "\033[0;33mWarning:\033[0;31m 'make render' \033[0;33mis deprecated, use\033[0;32m 'make html' \033[0;33malias instead\033[0m"
-
-## pages (deprecated: use 'make dirhtml' alias instead)
-.PHONY: pages
-pages: dirhtml
-	@echo "\033[0;33mWarning:\033[0;31m 'make pages' \033[0;33mis deprecated, use\033[0;32m 'make dirhtml' \033[0;33malias instead\033[0m"
-
 .PHONY: help
 help : Makefile
 	@echo "Please use \`make <target>' where <target> is one of"
build.py | 38
@@ -5,6 +5,7 @@
 """Build script for Sphinx documentation"""

 import argparse
+import os
 from pathlib import Path

 from sphinx.application import Sphinx
@@ -27,19 +28,10 @@ def create_parser():
         help='Render PEPs to "index.html" files within "pep-NNNN" directories. '
              'Cannot be used with "-f" or "-l".')

-    # flags / options
-    parser.add_argument("-w", "--fail-on-warning", action="store_true",
-                        help="Fail the Sphinx build on any warning.")
-    parser.add_argument("-n", "--nitpicky", action="store_true",
-                        help="Run Sphinx in 'nitpicky' mode, "
-                             "warning on every missing reference target.")
-    parser.add_argument("-j", "--jobs", type=int, default=1,
-                        help="How many parallel jobs to run (if supported). "
-                             "Integer, default 1.")
     parser.add_argument(
         "-o",
         "--output-dir",
-        default="build",  # synchronise with render.yaml -> deploy step
+        default="build",
         help="Output directory, relative to root. Default 'build'.",
     )
@@ -61,33 +53,23 @@ def create_index_file(html_root: Path, builder: str) -> None:
 if __name__ == "__main__":
     args = create_parser()

-    root_directory = Path(".").absolute()
-    source_directory = root_directory
+    root_directory = Path(__file__).resolve().parent
+    source_directory = root_directory / "peps"
     build_directory = root_directory / args.output_dir
-    doctree_directory = build_directory / ".doctrees"

-    # builder configuration
-    if args.builder is not None:
-        sphinx_builder = args.builder
-    else:
-        # default builder
-        sphinx_builder = "html"
-
-    # other configuration
-    config_overrides = {}
-    if args.nitpicky:
-        config_overrides["nitpicky"] = True
+    sphinx_builder = args.builder or "html"

     app = Sphinx(
         source_directory,
         confdir=source_directory,
-        outdir=build_directory,
-        doctreedir=doctree_directory,
+        outdir=build_directory / sphinx_builder,
+        doctreedir=build_directory / "doctrees",
         buildername=sphinx_builder,
-        confoverrides=config_overrides,
-        warningiserror=args.fail_on_warning,
-        parallel=args.jobs,
+        warningiserror=True,
+        parallel=os.cpu_count() or 1,
         tags=["internal_builder"],
+        keep_going=True,
     )
     app.build()
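The build.py hunk above collapses the old if/else builder selection into ``args.builder or "html"``. A quick check that the two forms agree for the values argparse actually produces (the only divergence is an explicit empty string, which this option never yields unless passed deliberately):

```python
def old_style(builder):
    # previous build.py logic
    if builder is not None:
        return builder
    return "html"

def new_style(builder):
    # simplified logic after the refactor
    return builder or "html"

# Equivalent for None and any non-empty builder name
for b in (None, "html", "dirhtml", "linkcheck"):
    assert old_style(b) == new_style(b)

# The single divergence: an explicit empty string
assert old_style("") == "" and new_style("") == "html"
```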
@ -0,0 +1,605 @@
|
|||
#!/usr/bin/env python3
|
||||
|
||||
# This file is placed in the public domain or under the
|
||||
# CC0-1.0-Universal license, whichever is more permissive.
|
||||
|
||||
"""check-peps: Check PEPs for common mistakes.
|
||||
|
||||
Usage: check-peps [-d | --detailed] <PEP files...>
|
||||
|
||||
Only the PEPs specified are checked.
|
||||
If none are specified, all PEPs are checked.
|
||||
|
||||
Use "--detailed" to show the contents of lines where errors were found.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import datetime as dt
|
||||
import re
|
||||
import sys
|
||||
from pathlib import Path
|
||||
|
||||
TYPE_CHECKING = False
|
||||
if TYPE_CHECKING:
|
||||
from collections.abc import Iterable, Iterator, KeysView, Sequence
|
||||
from typing import TypeAlias
|
||||
|
||||
# (line number, warning message)
|
||||
Message: TypeAlias = tuple[int, str]
|
||||
MessageIterator: TypeAlias = Iterator[Message]
|
||||
|
||||
|
||||
# get the directory with the PEP sources
|
||||
ROOT_DIR = Path(__file__).resolve().parent
|
||||
PEP_ROOT = ROOT_DIR / "peps"
|
||||
|
||||
# See PEP 12 for the order
|
||||
# Note we retain "BDFL-Delegate"
|
||||
ALL_HEADERS = (
|
||||
"PEP",
|
||||
"Title",
|
||||
"Version",
|
||||
"Last-Modified",
|
||||
"Author",
|
||||
"Sponsor",
|
||||
"BDFL-Delegate", "PEP-Delegate",
|
||||
"Discussions-To",
|
||||
"Status",
|
||||
"Type",
|
||||
"Topic",
|
||||
"Content-Type",
|
||||
"Requires",
|
||||
"Created",
|
||||
"Python-Version",
|
||||
"Post-History",
|
||||
"Replaces",
|
||||
"Superseded-By",
|
||||
"Resolution",
|
||||
)
|
||||
REQUIRED_HEADERS = frozenset({"PEP", "Title", "Author", "Status", "Type", "Created"})
|
||||
|
||||
# See PEP 1 for the full list
|
||||
ALL_STATUSES = frozenset({
|
||||
"Accepted",
|
||||
"Active",
|
||||
"April Fool!",
|
||||
"Deferred",
|
||||
"Draft",
|
||||
"Final",
|
||||
"Provisional",
|
||||
"Rejected",
|
||||
"Superseded",
|
||||
"Withdrawn",
|
||||
})
|
||||
|
||||
# PEPs that are allowed to link directly to PEPs
|
||||
SKIP_DIRECT_PEP_LINK_CHECK = frozenset({"0009", "0287", "0676", "0684", "8001"})
|
||||
|
||||
DEFAULT_FLAGS = re.ASCII | re.IGNORECASE # Insensitive latin
|
||||
|
||||
# any sequence of letters or '-', followed by a single ':' and a space or end of line
|
||||
HEADER_PATTERN = re.compile(r"^([a-z\-]+):(?: |$)", DEFAULT_FLAGS)
|
||||
# any sequence of unicode letters or legal special characters
|
||||
NAME_PATTERN = re.compile(r"(?:[^\W\d_]|[ ',\-.])+(?: |$)")
|
||||
# any sequence of ASCII letters, digits, or legal special characters
|
||||
EMAIL_LOCAL_PART_PATTERN = re.compile(r"[\w!#$%&'*+\-/=?^{|}~.]+", DEFAULT_FLAGS)
|
||||
|
||||
DISCOURSE_THREAD_PATTERN = re.compile(r"([\w\-]+/)?\d+", DEFAULT_FLAGS)
|
||||
DISCOURSE_POST_PATTERN = re.compile(r"([\w\-]+/)?\d+(/\d+)?", DEFAULT_FLAGS)
|
||||
|
||||
MAILMAN_2_PATTERN = re.compile(r"[\w\-]+/\d{4}-[a-z]+/\d+\.html", DEFAULT_FLAGS)
|
||||
MAILMAN_3_THREAD_PATTERN = re.compile(r"[\w\-]+@python\.org/thread/[a-z0-9]+/?", DEFAULT_FLAGS)
|
||||
MAILMAN_3_MESSAGE_PATTERN = re.compile(r"[\w\-]+@python\.org/message/[a-z0-9]+/?(#[a-z0-9]+)?", DEFAULT_FLAGS)
|
||||
|
||||
# Controlled by the "--detailed" flag
|
||||
DETAILED_ERRORS = False
|
||||
|
||||
|
||||
def check(filenames: Sequence[str] = (), /) -> int:
|
||||
"""The main entry-point."""
|
||||
if filenames:
|
||||
filenames = map(Path, filenames)
|
||||
else:
|
||||
filenames = PEP_ROOT.glob("pep-????.rst")
|
||||
if (count := sum(map(check_file, filenames))) > 0:
|
||||
s = "s" * (count != 1)
|
||||
print(f"check-peps failed: {count} error{s}", file=sys.stderr)
|
||||
return 1
|
||||
return 0
|
||||
|
||||
|
||||
def check_file(filename: Path, /) -> int:
|
||||
filename = filename.resolve()
|
||||
try:
|
||||
content = filename.read_text(encoding="utf-8")
|
||||
except FileNotFoundError:
|
||||
return _output_error(filename, [""], [(0, "Could not read PEP!")])
|
||||
else:
|
||||
lines = content.splitlines()
|
||||
return _output_error(filename, lines, check_peps(filename, lines))
|
||||
|
||||
|
||||
def check_peps(filename: Path, lines: Sequence[str], /) -> MessageIterator:
|
||||
yield from check_headers(lines)
|
||||
for line_num, line in enumerate(lines, start=1):
|
||||
if filename.stem.removeprefix("pep-") in SKIP_DIRECT_PEP_LINK_CHECK:
|
||||
continue
|
||||
yield from check_direct_links(line_num, line.lstrip())
|
||||
|
||||
|
||||
def check_headers(lines: Sequence[str], /) -> MessageIterator:
    yield from _validate_pep_number(next(iter(lines), ""))

    found_headers = {}
    line_num = 0
    for line_num, line in enumerate(lines, start=1):
        if line.strip() == "":
            headers_end_line_num = line_num
            break
        if match := HEADER_PATTERN.match(line):
            header = match[1]
            if header in ALL_HEADERS:
                if header not in found_headers:
                    found_headers[match[1]] = line_num
                else:
                    yield line_num, f"Must not have duplicate header: {header}"
            else:
                yield line_num, f"Must not have invalid header: {header}"
    else:
        headers_end_line_num = line_num

    yield from _validate_required_headers(found_headers.keys())

    shifted_line_nums = list(found_headers.values())[1:]
    for i, (header, line_num) in enumerate(found_headers.items()):
        start = line_num - 1
        end = headers_end_line_num - 1
        if i < len(found_headers) - 1:
            end = shifted_line_nums[i] - 1
        remainder = "\n".join(lines[start:end]).removeprefix(f"{header}:")
        if remainder != "":
            if remainder[0] not in {" ", "\n"}:
                yield line_num, f"Headers must have a space after the colon: {header}"
            remainder = remainder.lstrip()
        yield from _validate_header(header, line_num, remainder)
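The header scan above can be sketched in isolation. Note that `HEADER_PATTERN` is defined elsewhere in check-peps.py, so the regex below is a hypothetical stand-in, not the real pattern:

```python
import re

# Hypothetical stand-in for HEADER_PATTERN; the real pattern lives elsewhere
# in check-peps.py and may differ.
HEADER_PATTERN = re.compile(r"([A-Z][A-Za-z-]*):")


def scan_headers(lines):
    """Collect first-seen header positions and flag duplicates."""
    found, duplicates = {}, []
    for line_num, line in enumerate(lines, start=1):
        if line.strip() == "":
            break  # a blank line ends the RFC 2822 header block
        if match := HEADER_PATTERN.match(line):
            if match[1] in found:
                duplicates.append((line_num, match[1]))
            else:
                found[match[1]] = line_num
    return found, duplicates


found, dupes = scan_headers(["PEP: 9999", "Title: X", "Title: Y", "", "Body"])
print(found)  # {'PEP': 1, 'Title': 2}
print(dupes)  # [(3, 'Title')]
```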
def _validate_header(header: str, line_num: int, content: str) -> MessageIterator:
    if header == "Title":
        yield from _validate_title(line_num, content)
    elif header == "Author":
        yield from _validate_author(line_num, content)
    elif header == "Sponsor":
        yield from _validate_sponsor(line_num, content)
    elif header in {"BDFL-Delegate", "PEP-Delegate"}:
        yield from _validate_delegate(line_num, content)
    elif header == "Discussions-To":
        yield from _validate_discussions_to(line_num, content)
    elif header == "Status":
        yield from _validate_status(line_num, content)
    elif header == "Type":
        yield from _validate_type(line_num, content)
    elif header == "Topic":
        yield from _validate_topic(line_num, content)
    elif header == "Content-Type":
        yield from _validate_content_type(line_num, content)
    elif header in {"Requires", "Replaces", "Superseded-By"}:
        yield from _validate_pep_references(line_num, content)
    elif header == "Created":
        yield from _validate_created(line_num, content)
    elif header == "Python-Version":
        yield from _validate_python_version(line_num, content)
    elif header == "Post-History":
        yield from _validate_post_history(line_num, content)
    elif header == "Resolution":
        yield from _validate_resolution(line_num, content)


def check_direct_links(line_num: int, line: str) -> MessageIterator:
    """Check that PEPs and RFCs aren't linked directly"""
    line = line.lower()
    if "dev/peps/pep-" in line or "peps.python.org/pep-" in line:
        yield line_num, "Use the :pep:`NNN` role to refer to PEPs"
    if "rfc-editor.org/rfc/" in line or "ietf.org/doc/html/rfc" in line:
        yield line_num, "Use the :rfc:`NNN` role to refer to RFCs"
def _output_error(filename: Path, lines: Sequence[str], errors: Iterable[Message]) -> int:
    relative_filename = filename.relative_to(ROOT_DIR)
    err_count = 0
    for line_num, msg in errors:
        err_count += 1

        print(f"{relative_filename}:{line_num}: {msg}")
        if not DETAILED_ERRORS:
            continue

        line = lines[line_num - 1]
        print(" |")
        print(f"{line_num: >4} | '{line}'")
        print(" |")

    return err_count


###########################
#  PEP Header Validators  #
###########################


def _validate_required_headers(found_headers: KeysView[str]) -> MessageIterator:
    """PEPs must have all required headers, in the PEP 12 order"""

    if missing := REQUIRED_HEADERS.difference(found_headers):
        for missing_header in sorted(missing, key=ALL_HEADERS.index):
            yield 1, f"Must have required header: {missing_header}"

    ordered_headers = sorted(found_headers, key=ALL_HEADERS.index)
    if list(found_headers) != ordered_headers:
        order_str = ", ".join(ordered_headers)
        yield 1, "Headers must be in PEP 12 order. Correct order: " + order_str


def _validate_pep_number(line: str) -> MessageIterator:
    """'PEP' header must be a number 1-9999"""

    if not line.startswith("PEP: "):
        yield 1, "PEP must begin with the 'PEP:' header"
        return

    pep_number = line.removeprefix("PEP: ").lstrip()
    yield from _pep_num(1, pep_number, "'PEP:' header")
def _validate_title(line_num: int, line: str) -> MessageIterator:
    """'Title' must be 1-79 characters"""

    if len(line) == 0:
        yield line_num, "PEP must have a title"
    elif len(line) > 79:
        yield line_num, "PEP title must be less than 80 characters"


def _validate_author(line_num: int, body: str) -> MessageIterator:
    """'Author' must be list of 'Name <email@example.com>, …'"""

    lines = body.split("\n")
    for offset, line in enumerate(lines):
        if offset >= 1 and line[:9].isspace():
            # Checks for:
            # Author: Alice
            #             Bob
            #         ^^^^
            # Note that len("Author: ") == 8
            yield line_num + offset, "Author line must not be over-indented"
        if offset < len(lines) - 1:
            if not line.endswith(","):
                yield line_num + offset, "Author continuation lines must end with a comma"
        for part in line.removesuffix(",").split(", "):
            yield from _email(line_num + offset, part, "Author")


def _validate_sponsor(line_num: int, line: str) -> MessageIterator:
    """'Sponsor' must have format 'Name <email@example.com>'"""

    yield from _email(line_num, line, "Sponsor")


def _validate_delegate(line_num: int, line: str) -> MessageIterator:
    """'Delegate' must have format 'Name <email@example.com>'"""

    if line == "":
        return

    # PEP 451
    if ", " in line:
        for part in line.removesuffix(",").split(", "):
            yield from _email(line_num, part, "Delegate")
        return

    yield from _email(line_num, line, "Delegate")


def _validate_discussions_to(line_num: int, line: str) -> MessageIterator:
    """'Discussions-To' must be a thread URL"""

    yield from _thread(line_num, line, "Discussions-To", discussions_to=True)
    if line.startswith("https://"):
        return
    for suffix in "@python.org", "@googlegroups.com":
        if line.endswith(suffix):
            remainder = line.removesuffix(suffix)
            if re.fullmatch(r"[\w\-]+", remainder) is None:
                yield line_num, "Discussions-To must be a valid mailing list"
            return
    yield line_num, "Discussions-To must be a valid thread URL or mailing list"
def _validate_status(line_num: int, line: str) -> MessageIterator:
    """'Status' must be a valid PEP status"""

    if line not in ALL_STATUSES:
        yield line_num, "Status must be a valid PEP status"


def _validate_type(line_num: int, line: str) -> MessageIterator:
    """'Type' must be a valid PEP type"""

    if line not in {"Standards Track", "Informational", "Process"}:
        yield line_num, "Type must be a valid PEP type"


def _validate_topic(line_num: int, line: str) -> MessageIterator:
    """'Topic' must be for a valid sub-index"""

    topics = line.split(", ")
    unique_topics = set(topics)
    if len(topics) > len(unique_topics):
        yield line_num, "Topic must not contain duplicates"

    if unique_topics - {"Governance", "Packaging", "Typing", "Release"}:
        if not all(map(str.istitle, unique_topics)):
            yield line_num, "Topic must be properly capitalised (Title Case)"
        if unique_topics - {"governance", "packaging", "typing", "release"}:
            yield line_num, "Topic must be for a valid sub-index"
    if sorted(topics) != topics:
        yield line_num, "Topic must be sorted lexicographically"


def _validate_content_type(line_num: int, line: str) -> MessageIterator:
    """'Content-Type' must be 'text/x-rst'"""

    if line != "text/x-rst":
        yield line_num, "Content-Type must be 'text/x-rst'"


def _validate_pep_references(line_num: int, line: str) -> MessageIterator:
    """`Requires`/`Replaces`/`Superseded-By` must be 'NNN' PEP IDs"""

    line = line.removesuffix(",").rstrip()
    if line.count(", ") != line.count(","):
        yield line_num, "PEP references must be separated by comma-spaces (', ')"
        return

    references = line.split(", ")
    for reference in references:
        yield from _pep_num(line_num, reference, "PEP reference")


def _validate_created(line_num: int, line: str) -> MessageIterator:
    """'Created' must be a 'DD-mmm-YYYY' date"""

    yield from _date(line_num, line, "Created")
def _validate_python_version(line_num: int, line: str) -> MessageIterator:
    """'Python-Version' must be an ``X.Y[.Z]`` version"""

    versions = line.split(", ")
    for version in versions:
        if version.count(".") not in {1, 2}:
            yield line_num, f"Python-Version must have two or three segments: {version}"
            continue

        try:
            major, minor, micro = version.split(".", 2)
        except ValueError:
            major, minor = version.split(".", 1)
            micro = ""

        if major not in "123":
            yield line_num, f"Python-Version major part must be 1, 2, or 3: {version}"
        if not _is_digits(minor) and minor != "x":
            yield line_num, f"Python-Version minor part must be numeric: {version}"
        elif minor != "0" and minor[0] == "0":
            yield line_num, f"Python-Version minor part must not have leading zeros: {version}"

        if micro == "":
            return
        if minor == "x":
            yield line_num, f"Python-Version micro part must be empty if minor part is 'x': {version}"
        elif micro[0] == "0":
            yield line_num, f"Python-Version micro part must not have leading zeros: {version}"
        elif not _is_digits(micro):
            yield line_num, f"Python-Version micro part must be numeric: {version}"
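The Python-Version rules above can be condensed into a standalone sketch. This is not the real validator (which yields `(line, message)` pairs); it just restates the per-version checks as a list of error strings:

```python
def version_errors(version: str) -> list[str]:
    """Condensed re-statement of the Python-Version checks above."""
    if version.count(".") not in {1, 2}:
        return [f"must have two or three segments: {version}"]
    parts = version.split(".")
    major, minor = parts[0], parts[1]
    micro = parts[2] if len(parts) == 3 else ""
    errors = []
    if major not in "123":
        errors.append(f"major part must be 1, 2, or 3: {version}")
    if not (minor.isascii() and minor.isdigit()) and minor != "x":
        errors.append(f"minor part must be numeric: {version}")
    elif minor != "0" and minor.startswith("0"):
        errors.append(f"minor part must not have leading zeros: {version}")
    if micro and minor == "x":
        errors.append(f"micro part must be empty if minor part is 'x': {version}")
    return errors


print(version_errors("3.12"))   # []
print(version_errors("3.012"))  # leading-zero error for the minor part
print(version_errors("3"))      # segment-count error
```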
def _validate_post_history(line_num: int, body: str) -> MessageIterator:
    """'Post-History' must be '`DD-mmm-YYYY <Thread URL>`__, …'"""

    if body == "":
        return

    for offset, line in enumerate(body.removesuffix(",").split("\n"), start=line_num):
        for post in line.removesuffix(",").strip().split(", "):
            if not post.startswith("`") and not post.endswith(">`__"):
                yield from _date(offset, post, "Post-History")
            else:
                post_date, post_url = post[1:-4].split(" <")
                yield from _date(offset, post_date, "Post-History")
                yield from _thread(offset, post_url, "Post-History")


def _validate_resolution(line_num: int, line: str) -> MessageIterator:
    """'Resolution' must be a direct thread/message URL"""

    yield from _thread(line_num, line, "Resolution", allow_message=True)


########################
#  Validation Helpers  #
########################


def _pep_num(line_num: int, pep_number: str, prefix: str) -> MessageIterator:
    if pep_number == "":
        yield line_num, f"{prefix} must not be blank: {pep_number!r}"
        return
    if pep_number.startswith("0") and pep_number != "0":
        yield line_num, f"{prefix} must not contain leading zeros: {pep_number!r}"
    if not _is_digits(pep_number):
        yield line_num, f"{prefix} must be numeric: {pep_number!r}"
    elif not 0 <= int(pep_number) <= 9999:
        yield line_num, f"{prefix} must be between 0 and 9999: {pep_number!r}"
def _is_digits(string: str) -> bool:
    """Match a string of ASCII digits ([0-9]+)."""
    return string.isascii() and string.isdigit()
def _email(line_num: int, author_email: str, prefix: str) -> MessageIterator:
    author_email = author_email.strip()

    if author_email.count("<") > 1:
        msg = f"{prefix} entries must not contain multiple '<': {author_email!r}"
        yield line_num, msg
    if author_email.count(">") > 1:
        msg = f"{prefix} entries must not contain multiple '>': {author_email!r}"
        yield line_num, msg
    if author_email.count("@") > 1:
        msg = f"{prefix} entries must not contain multiple '@': {author_email!r}"
        yield line_num, msg

    author = author_email.split("<", 1)[0].rstrip()
    if NAME_PATTERN.fullmatch(author) is None:
        msg = f"{prefix} entries must begin with a valid 'Name': {author_email!r}"
        yield line_num, msg
        return

    email_text = author_email.removeprefix(author)
    if not email_text:
        # Does not have the optional email part
        return

    if not email_text.startswith(" <") or not email_text.endswith(">"):
        msg = f"{prefix} entries must be formatted as 'Name <email@example.com>': {author_email!r}"
        yield line_num, msg
    email_text = email_text.removeprefix(" <").removesuffix(">")

    if "@" in email_text:
        local, domain = email_text.rsplit("@", 1)
    elif " at " in email_text:
        local, domain = email_text.rsplit(" at ", 1)
    else:
        yield line_num, f"{prefix} entries must contain a valid email address: {author_email!r}"
        return
    if EMAIL_LOCAL_PART_PATTERN.fullmatch(local) is None or _invalid_domain(domain):
        yield line_num, f"{prefix} entries must contain a valid email address: {author_email!r}"
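The address-splitting step above accepts both a literal `@` and the spam-obfuscated `" at "` form used in older PEPs. A minimal sketch of just that split (the real validator then checks the two halves against patterns defined elsewhere in the file):

```python
def split_address(email_text: str):
    """Split an address into (local, domain), accepting 'user at host' obfuscation."""
    if "@" in email_text:
        return tuple(email_text.rsplit("@", 1))
    if " at " in email_text:
        return tuple(email_text.rsplit(" at ", 1))
    return None  # no recognisable address to validate


print(split_address("alice@example.com"))     # ('alice', 'example.com')
print(split_address("alice at example.com"))  # ('alice', 'example.com')
print(split_address("alice"))                 # None
```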
def _invalid_domain(domain_part: str) -> bool:
    *labels, root = domain_part.split(".")
    for label in labels:
        if not label.replace("-", "").isalnum():
            return True
    return not root.isalnum() or not root.isascii()


def _thread(line_num: int, url: str, prefix: str, *, allow_message: bool = False, discussions_to: bool = False) -> MessageIterator:
    if allow_message and discussions_to:
        msg = "allow_message and discussions_to cannot both be True"
        raise ValueError(msg)

    msg = f"{prefix} must be a valid thread URL"

    if not url.startswith("https://"):
        if not discussions_to:
            yield line_num, msg
        return

    if url.startswith("https://discuss.python.org/t/"):
        remainder = url.removeprefix("https://discuss.python.org/t/").removesuffix("/")

        # Discussions-To links must be the thread itself, not a post
        if discussions_to:
            # The equivalent pattern is similar to '([\w\-]+/)?\d+',
            # but the topic name must contain a non-numeric character

            # We use ``str.rpartition`` as the topic name is optional
            topic_name, _, topic_id = remainder.rpartition("/")
            if topic_name == "" and _is_digits(topic_id):
                return
            topic_name = topic_name.replace("-", "0").replace("_", "0")
            # the topic name must not be entirely numeric
            valid_topic_name = not _is_digits(topic_name) and topic_name.isalnum()
            if valid_topic_name and _is_digits(topic_id):
                return
        else:
            # The equivalent pattern is similar to '([\w\-]+/)?\d+(/\d+)?',
            # but the topic name must contain a non-numeric character
            if remainder.count("/") == 2:
                # When there are three parts, the URL must be "topic-name/topic-id/post-id".
                topic_name, topic_id, post_id = remainder.rsplit("/", 2)
                topic_name = topic_name.replace("-", "0").replace("_", "0")
                # the topic name must not be entirely numeric
                valid_topic_name = not _is_digits(topic_name) and topic_name.isalnum()
                if valid_topic_name and _is_digits(topic_id) and _is_digits(post_id):
                    return
            elif remainder.count("/") == 1:
                # When there are only two parts, there's an ambiguity between
                # "topic-name/topic-id" and "topic-id/post-id".
                # We disambiguate by checking if the LHS is a valid name and
                # the RHS is a valid topic ID (for the former),
                # and then if both the LHS and RHS are valid IDs (for the latter).
                left, right = remainder.rsplit("/")
                left = left.replace("-", "0").replace("_", "0")
                # the topic name must not be entirely numeric
                left_is_name = not _is_digits(left) and left.isalnum()
                if left_is_name and _is_digits(right):
                    return
                elif _is_digits(left) and _is_digits(right):
                    return
            else:
                # When there's only one part, it must be a valid topic ID.
                if _is_digits(remainder):
                    return

    if url.startswith("https://mail.python.org/pipermail/"):
        remainder = url.removeprefix("https://mail.python.org/pipermail/")
        if MAILMAN_2_PATTERN.fullmatch(remainder) is not None:
            return

    if url.startswith("https://mail.python.org/archives/list/"):
        remainder = url.removeprefix("https://mail.python.org/archives/list/")
        if allow_message and MAILMAN_3_MESSAGE_PATTERN.fullmatch(remainder) is not None:
            return
        if MAILMAN_3_THREAD_PATTERN.fullmatch(remainder) is not None:
            return

    yield line_num, msg
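The discuss.python.org branch above is the trickiest part of `_thread`. A rough re-implementation collapsed to a boolean (the real validator also handles the Discussions-To restriction and the Mailman URL forms, so this is only a sketch of the one-, two-, and three-part cases):

```python
def _is_digits(s: str) -> bool:
    return s.isascii() and s.isdigit()


def is_discuss_thread(url: str) -> bool:
    """Sketch of the discuss.python.org thread-URL check above."""
    prefix = "https://discuss.python.org/t/"
    if not url.startswith(prefix):
        return False
    parts = url.removeprefix(prefix).removesuffix("/").split("/")
    # Hyphens/underscores are folded to "0" so isalnum() can test names
    names = [p.replace("-", "0").replace("_", "0") for p in parts]
    if len(parts) == 1:  # "150095": a bare topic ID
        return _is_digits(parts[0])
    if len(parts) == 2:  # "topic-name/150095" or "topic-id/post-id"
        name_form = not _is_digits(names[0]) and names[0].isalnum()
        return _is_digits(parts[1]) and (name_form or _is_digits(parts[0]))
    if len(parts) == 3:  # "topic-name/topic-id/post-id"
        return (not _is_digits(names[0]) and names[0].isalnum()
                and _is_digits(parts[1]) and _is_digits(parts[2]))
    return False


print(is_discuss_thread("https://discuss.python.org/t/pep-705-typedmapping/24827"))  # True
print(is_discuss_thread("https://discuss.python.org/t/24827"))                       # True
print(is_discuss_thread("https://example.com/thread/1"))                             # False
```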
def _date(line_num: int, date_str: str, prefix: str) -> MessageIterator:
    try:
        parsed_date = dt.datetime.strptime(date_str, "%d-%b-%Y")
    except ValueError:
        yield line_num, f"{prefix} must be a 'DD-mmm-YYYY' date: {date_str!r}"
        return
    else:
        if date_str[1] == "-":  # Date must be zero-padded
            yield line_num, f"{prefix} must be a 'DD-mmm-YYYY' date: {date_str!r}"
            return

    if parsed_date.year < 1990:
        yield line_num, f"{prefix} must not be before Python was invented: {date_str!r}"
    if parsed_date > (dt.datetime.now() + dt.timedelta(days=14)):
        yield line_num, f"{prefix} must not be in the future: {date_str!r}"
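The extra `date_str[1] == "-"` test exists because `strptime`'s `%d` directive accepts both `"1"` and `"01"`, so parsing alone cannot enforce zero-padding. A standalone sketch of just that check:

```python
import datetime as dt


def bad_date(date_str: str) -> bool:
    """True if date_str is not a zero-padded 'DD-mmm-YYYY' date."""
    try:
        dt.datetime.strptime(date_str, "%d-%b-%Y")
    except ValueError:
        return True
    return date_str[1] == "-"  # single-digit day such as "1-Jan-2000"


print(bad_date("01-Jan-2000"))  # False: valid and zero-padded
print(bad_date("1-Jan-2000"))   # True: parses, but the day is not zero-padded
print(bad_date("2000-01-01"))   # True: wrong format entirely
```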
if __name__ == "__main__":
    if {"-h", "--help", "-?"}.intersection(sys.argv[1:]):
        print(__doc__, file=sys.stderr)
        raise SystemExit(0)

    files = {}
    for arg in sys.argv[1:]:
        if not arg.startswith("-"):
            files[arg] = None
        elif arg in {"-d", "--detailed"}:
            DETAILED_ERRORS = True
        else:
            print(f"Unknown option: {arg!r}", file=sys.stderr)
            raise SystemExit(1)
    raise SystemExit(check(files))
@@ -1,5 +1,4 @@
 ..
-   Author: Adam Turner
+   :author: Adam Turner
 
-
 Building PEPs Locally
@@ -10,8 +9,8 @@ This can also be used to check that the PEP is valid reStructuredText before
 submission to the PEP editors.
 
 The rest of this document assumes you are working from a local clone of the
-`PEPs repository <https://github.com/python/peps>`__, with
-**Python 3.9 or later** installed.
+`PEPs repository <https://github.com/python/peps>`__,
+with **Python 3.9 or later** installed.
 
 
 Render PEPs locally
@@ -51,11 +50,6 @@ Render PEPs locally
 
       (venv) PS> python build.py
 
-.. note::
-
-   There may be a series of warnings about unreferenced citations or labels.
-   Whilst these are valid warnings, they do not impact the build process.
-
 4. Navigate to the ``build`` directory of your PEPs repo to find the HTML pages.
    PEP 0 provides a formatted index, and may be a useful reference.
 
@@ -87,28 +81,8 @@ Check the validity of links within PEP sources (runs the `Sphinx linkchecker
 
 .. code-block:: shell
 
-   python build.py --check-links
+   make check-links
 
 
-Stricter rendering
-''''''''''''''''''
-
-Run in `nit-picky <https://www.sphinx-doc.org/en/master/usage/configuration.html#confval-nitpicky>`__
-mode.
-This generates warnings for all missing references.
-
-.. code-block:: shell
-
-   python build.py --nitpicky
-
-Fail the build on any warning.
-As of January 2022, there are around 250 warnings when building the PEPs.
-
-.. code-block:: shell
-
-   python build.py --fail-on-warning
+   make fail-warning
 
 
 ``build.py`` usage
@@ -118,4 +92,4 @@ For details on the command-line options to the ``build.py`` script, run:
 
 .. code-block:: shell
 
-    python build.py --help
+   python build.py --help
@@ -1,6 +1,6 @@
 ..
-   Author: Adam Turner
+   :author: Adam Turner
 
 ..
    We can't use :pep:`N` references in this document, as they use links relative
    to the current file, which doesn't work in a subdirectory like this one.
 
@@ -9,7 +9,7 @@ An Overview of the PEP Rendering System
 =======================================
 
 This document provides an overview of the PEP rendering system, as a companion
-to :doc:`PEP 676 <../pep-0676>`.
+to `PEP 676 <https://peps.python.org/pep-0676/>`__.
 
 
 1. Configuration
@@ -17,14 +17,14 @@ to :doc:`PEP 676 <../pep-0676>`.
 
 Configuration is stored in three files:
 
-- ``conf.py`` contains the majority of the Sphinx configuration
-- ``contents.rst`` creates the Sphinx-mandated table of contents directive
+- ``peps/conf.py`` contains the majority of the Sphinx configuration
+- ``peps/contents.rst`` contains the compulsory table of contents directive
 - ``pep_sphinx_extensions/pep_theme/theme.conf`` sets the Pygments themes
 
 The configuration:
 
 - registers the custom Sphinx extension
-- sets both ``.txt`` and ``.rst`` suffixes to be parsed as PEPs
+- sets the ``.rst`` suffix to be parsed as PEPs
 - tells Sphinx which source files to use
 - registers the PEP theme, maths renderer, and template
 - disables some default settings that are covered in the extension
@@ -35,7 +35,7 @@ The configuration:
 ----------------
 
 ``build.py`` manages the rendering process.
-Usage is covered in :doc:`build`.
+Usage is covered in `Building PEPs Locally <./build.rst>`_.
 
 
 3. Extension
@@ -110,7 +110,8 @@ This overrides the built-in ``:pep:`` role to return the correct URL.
 3.4.2 ``PEPHeaders`` transform
 ******************************
 
-PEPs start with a set of :rfc:`2822` headers, per :doc:`PEP 1 <../pep-0001>`.
+PEPs start with a set of :rfc:`2822` headers,
+per `PEP 1 <https://peps.python.org/pep-0001/>`__.
 This transform validates that the required headers are present and of the
 correct data type, and removes headers not for display.
 It must run before the ``PEPTitle`` transform.
@@ -122,7 +123,7 @@ It must run before the ``PEPTitle`` transform.
 We generate the title node from the parsed title in the PEP headers, and make
 all nodes in the document children of the new title node.
 This transform must also handle parsing reStructuredText markup within PEP
-titles, such as :doc:`PEP 604 <../pep-0604>`.
+titles, such as `PEP 604 <https://peps.python.org/pep-0604/>`__.
 
 
 3.4.4 ``PEPContents`` transform
 
@@ -216,12 +217,9 @@ parse and validate that metadata.
 After collecting and validating all the PEP data, the index itself is created in
 three steps:
 
 1. Output the header text
 2. Output the category and numerical indices
 3. Output the author index
 
-The ``AUTHOR_OVERRIDES.csv`` file can be used to override an author's name in
-the PEP 0 output.
-
 We then add the newly created PEP 0 file to two Sphinx variables so that it will
 be processed as a normal source document.
generate_rss.py (deleted file, 210 lines)

@@ -1,210 +0,0 @@
#!/usr/bin/env python3
# This file is placed in the public domain or under the
# CC0-1.0-Universal license, whichever is more permissive.

import argparse
import datetime as dt
import email.utils
import re
from html import escape
from pathlib import Path

import docutils.frontend
from docutils import nodes
from docutils import utils
from docutils.parsers import rst
from docutils.parsers.rst import roles

# get the directory with the PEP sources
PEP_ROOT = Path(__file__).parent


def _format_rfc_2822(datetime: dt.datetime) -> str:
    datetime = datetime.replace(tzinfo=dt.timezone.utc)
    return email.utils.format_datetime(datetime, usegmt=True)
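`_format_rfc_2822` leans on the standard library for the heavy lifting: `email.utils.format_datetime(..., usegmt=True)` produces an RFC 2822 date string with a literal "GMT" zone, which is the form RSS `<pubDate>`/`<lastBuildDate>` elements expect. For example:

```python
import datetime as dt
import email.utils

# A fixed UTC timestamp, formatted the way the RSS feed needs it.
stamp = dt.datetime(2024, 1, 1, 12, 30, tzinfo=dt.timezone.utc)
print(email.utils.format_datetime(stamp, usegmt=True))
# Mon, 01 Jan 2024 12:30:00 GMT
```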
line_cache: dict[Path, dict[str, str]] = {}

# Monkeypatch PEP and RFC reference roles to match Sphinx behaviour
EXPLICIT_TITLE_RE = re.compile(r'^(.+?)\s*(?<!\x00)<(.*?)>$', re.DOTALL)


def _pep_reference_role(role, rawtext, text, lineno, inliner,
                        options={}, content=[]):
    matched = EXPLICIT_TITLE_RE.match(text)
    if matched:
        title = utils.unescape(matched.group(1))
        target = utils.unescape(matched.group(2))
    else:
        target = utils.unescape(text)
        title = "PEP " + utils.unescape(text)
    pep_str, _, fragment = target.partition("#")
    try:
        pepnum = int(pep_str)
        if pepnum < 0 or pepnum > 9999:
            raise ValueError
    except ValueError:
        msg = inliner.reporter.error(
            f'PEP number must be a number from 0 to 9999; "{pep_str}" is invalid.',
            line=lineno)
        prb = inliner.problematic(rawtext, rawtext, msg)
        return [prb], [msg]
    # Base URL mainly used by inliner.pep_reference; so this is correct:
    ref = (inliner.document.settings.pep_base_url
           + inliner.document.settings.pep_file_url_template % pepnum)
    if fragment:
        ref += "#" + fragment
    roles.set_classes(options)
    return [nodes.reference(rawtext, title, refuri=ref, **options)], []


def _rfc_reference_role(role, rawtext, text, lineno, inliner,
                        options={}, content=[]):
    matched = EXPLICIT_TITLE_RE.match(text)
    if matched:
        title = utils.unescape(matched.group(1))
        target = utils.unescape(matched.group(2))
    else:
        target = utils.unescape(text)
        title = "RFC " + utils.unescape(text)
    pep_str, _, fragment = target.partition("#")
    try:
        rfcnum = int(pep_str)
        if rfcnum < 0 or rfcnum > 9999:
            raise ValueError
    except ValueError:
        msg = inliner.reporter.error(
            f'RFC number must be a number from 0 to 9999; "{pep_str}" is invalid.',
            line=lineno)
        prb = inliner.problematic(rawtext, rawtext, msg)
        return [prb], [msg]
    ref = (inliner.document.settings.rfc_base_url + inliner.rfc_url % rfcnum)
    if fragment:
        ref += "#" + fragment
    roles.set_classes(options)
    return [nodes.reference(rawtext, title, refuri=ref, **options)], []


roles.register_canonical_role("pep-reference", _pep_reference_role)
roles.register_canonical_role("rfc-reference", _rfc_reference_role)
def first_line_starting_with(full_path: Path, text: str) -> str:
    # Try and retrieve from cache
    if full_path in line_cache:
        return line_cache[full_path].get(text, "")

    # Else read source
    line_cache[full_path] = path_cache = {}
    for line in full_path.open(encoding="utf-8"):
        if line.startswith("Created:"):
            path_cache["Created:"] = line.removeprefix("Created:").strip()
        elif line.startswith("Title:"):
            path_cache["Title:"] = line.removeprefix("Title:").strip()
        elif line.startswith("Author:"):
            path_cache["Author:"] = line.removeprefix("Author:").strip()

        # Once all have been found, exit loop.
        # Note the call parentheses: comparing the bound ``keys`` method
        # itself to a set (``path_cache.keys == {...}``) would always be
        # False, so the early exit would never trigger.
        if path_cache.keys() == {"Created:", "Title:", "Author:"}:
            break
    return path_cache.get(text, "")
def pep_creation(full_path: Path) -> dt.datetime:
    created_str = first_line_starting_with(full_path, "Created:")
    if full_path.stem == "pep-0102":
        # remove additional content on the Created line
        created_str = created_str.split(" ", 1)[0]
    return dt.datetime.strptime(created_str, "%d-%b-%Y")


def parse_rst(full_path: Path) -> nodes.document:
    text = full_path.read_text(encoding="utf-8")
    settings = docutils.frontend.get_default_settings(rst.Parser)
    document = utils.new_document(f'<{full_path}>', settings=settings)
    rst.Parser(rfc2822=True).parse(text, document)
    return document


def pep_abstract(full_path: Path) -> str:
    """Return the first paragraph of the PEP abstract"""
    for node in parse_rst(full_path).findall(nodes.section):
        if node.next_node(nodes.title).astext() == "Abstract":
            return node.next_node(nodes.paragraph).astext().strip().replace("\n", " ")
    return ""
def main():
|
||||
parser = argparse.ArgumentParser(description="Generate RSS feed")
|
||||
parser.add_argument(
|
||||
"-o",
|
||||
"--output-dir",
|
||||
default="build", # synchronise with render.yaml -> deploy step
|
||||
help="Output directory, relative to root. Default 'build'.",
|
||||
)
|
||||
args = parser.parse_args()
|
||||
|
||||
# get list of peps with creation time (from "Created:" string in pep source)
|
||||
peps_with_dt = sorted((pep_creation(path), path) for path in PEP_ROOT.glob("pep-????.???"))
|
||||
|
||||
# generate rss items for 10 most recent peps
|
||||
items = []
|
||||
for datetime, full_path in peps_with_dt[-10:]:
|
||||
try:
|
||||
pep_num = int(full_path.stem.split("-")[-1])
|
||||
        except ValueError:
            continue

        title = first_line_starting_with(full_path, "Title:")
        author = first_line_starting_with(full_path, "Author:")
        if "@" in author or " at " in author:
            parsed_authors = email.utils.getaddresses([author])
            joined_authors = ", ".join(f"{name} ({email_address})" for name, email_address in parsed_authors)
        else:
            joined_authors = author
        url = f"https://peps.python.org/pep-{pep_num:0>4}/"

        item = f"""\
  <item>
    <title>PEP {pep_num}: {escape(title, quote=False)}</title>
    <link>{escape(url, quote=False)}</link>
    <description>{escape(pep_abstract(full_path), quote=False)}</description>
    <author>{escape(joined_authors, quote=False)}</author>
    <guid isPermaLink="true">{url}</guid>
    <pubDate>{_format_rfc_2822(datetime)}</pubDate>
  </item>"""
        items.append(item)

    # The rss envelope
    desc = """
    Newest Python Enhancement Proposals (PEPs) - Information on new
    language features, and some meta-information like release
    procedure and schedules.
    """
    last_build_date = _format_rfc_2822(dt.datetime.now(dt.timezone.utc))
    items = "\n".join(reversed(items))
    output = f"""\
<?xml version='1.0' encoding='UTF-8'?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" version="2.0">
<channel>
  <title>Newest Python PEPs</title>
  <link>https://peps.python.org/peps.rss</link>
  <description>{" ".join(desc.split())}</description>
  <atom:link href="https://peps.python.org/peps.rss" rel="self"/>
  <docs>https://cyber.harvard.edu/rss/rss.html</docs>
  <language>en</language>
  <lastBuildDate>{last_build_date}</lastBuildDate>
{items}
</channel>
</rss>
"""

    # output directory for target HTML files
    out_dir = PEP_ROOT / args.output_dir
    out_dir.mkdir(exist_ok=True, parents=True)
    out_dir.joinpath("peps.rss").write_text(output)


if __name__ == "__main__":
    main()
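The author splitting above relies on `email.utils.getaddresses`, which parses a comma-separated RFC 2822 address list into `(name, address)` pairs. A minimal standalone sketch (the author string below is an illustrative sample, not taken from a real PEP header):

```python
from email.utils import getaddresses

# A comma-separated author list, as it appears in a PEP "Author:" header.
author = "Alice Purcell <alicederyn@gmail.com>, Pablo Galindo <pablogsal@gmail.com>"

# getaddresses takes a list of header values and returns (name, address) pairs.
parsed_authors = getaddresses([author])
joined = ", ".join(f"{name} ({addr})" for name, addr in parsed_authors)
print(joined)
```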
pep-0705.rst (295 lines removed)
@@ -1,295 +0,0 @@
PEP: 705
Title: TypedMapping: Type Hints for Mappings with a Fixed Set of Keys
Author: Alice Purcell <alicederyn@gmail.com>
Sponsor: Pablo Galindo <pablogsal@gmail.com>
Discussions-To: https://discuss.python.org/t/pep-705-typedmapping/24827
Status: Draft
Type: Standards Track
Topic: Typing
Content-Type: text/x-rst
Created: 07-Nov-2022
Python-Version: 3.12
Post-History: `30-Sep-2022 <https://mail.python.org/archives/list/typing-sig@python.org/thread/6FR6RKNUZU4UY6B6RXC2H4IAHKBU3UKV/>`__,
              `02-Nov-2022 <https://mail.python.org/archives/list/python-dev@python.org/thread/2P26R4VH2ZCNNNOQCBZWEM4RNF35OXOW/>`__,
              `14-Mar-2023 <https://discuss.python.org/t/pep-705-typedmapping/24827>`__


Abstract
========

:pep:`589` defines the structural type :class:`~typing.TypedDict` for dictionaries with a fixed set of keys.
As ``TypedDict`` is a mutable type, it is difficult to correctly annotate methods which accept read-only parameters in a way that doesn't prevent valid inputs.
This PEP proposes a type constructor ``typing.TypedMapping`` to support this use case.


Motivation
==========

Representing structured data using (potentially nested) dictionaries with string keys is a common pattern in Python programs. :pep:`589` allows these values to be type checked when the exact type is known up-front, but it is hard to write read-only code that accepts more specific variants: for instance, where fields may be subtypes or restrict a union of possible types. This is an especially common issue when writing APIs for services, which may support a wide range of input structures, and typically do not need to modify their input.

For illustration, we will try to add type hints to a function ``movie_string``::

    def movie_string(movie: Movie) -> str:
        if movie.get("year") is None:
            return movie["name"]
        else:
            return f'{movie["name"]} ({movie["year"]})'

We could define this ``Movie`` type using a ``TypedDict``::

    from typing import NotRequired, TypedDict

    class Movie(TypedDict):
        name: str
        year: NotRequired[int | None]

But suppose we have another type where year is required::

    class MovieRecord(TypedDict):
        name: str
        year: int

Attempting to pass a ``MovieRecord`` into ``movie_string`` results in the error (using mypy):

.. code-block:: text

    Argument 1 to "movie_string" has incompatible type "MovieRecord"; expected "Movie"

This particular use case should be type-safe, but the type checker correctly stops the
user from passing a ``MovieRecord`` into a ``Movie`` parameter in the general case, because
the ``Movie`` class has mutator methods that could potentially allow the function to break
the type constraints in ``MovieRecord`` (e.g. with ``movie["year"] = None`` or ``del movie["year"]``).
The problem disappears if we don't have mutator methods in ``Movie``. This could be achieved by defining an immutable interface using a :pep:`544` :class:`~typing.Protocol`::

    from typing import Literal, Protocol, overload

    class Movie(Protocol):
        @overload
        def get(self, key: Literal["name"]) -> str: ...

        @overload
        def get(self, key: Literal["year"]) -> int | None: ...

        @overload
        def __getitem__(self, key: Literal["name"]) -> str: ...

        @overload
        def __getitem__(self, key: Literal["year"]) -> int | None: ...

This is very repetitive, easy to get wrong, and is still missing important method definitions like ``__contains__()`` and ``keys()``.

Rationale
=========

The proposed ``TypedMapping`` type allows a straightforward way of defining these types that should be familiar to existing users of ``TypedDict`` and support the cases exemplified above::

    from typing import NotRequired, TypedMapping

    class Movie(TypedMapping):
        name: str
        year: NotRequired[int | None]

In addition to those benefits, by flagging arguments of a function as ``TypedMapping``, it makes explicit not just to type checkers but also to users that the function is not going to modify its inputs, which is usually a desirable property of a function interface.
Finally, this allows bringing the benefits of ``TypedDict`` to other mapping types that are unrelated to ``dict``.


Specification
=============

A ``TypedMapping`` type defines a protocol with the same methods as :class:`~collections.abc.Mapping`, but with value types determined per-key as with ``TypedDict``.

Notable similarities to ``TypedDict``:

* A ``TypedMapping`` protocol can be declared using class-based or alternative syntax.
* Keys must be strings.
* By default, all specified keys must be present in a ``TypedMapping`` instance. It is possible to override this by specifying totality, or by using ``NotRequired`` from :pep:`655`.
* Methods are not allowed in the declaration (though they may be inherited).

Notable differences from ``TypedDict``:

* The runtime type of a ``TypedMapping`` object is not constrained to be a ``dict``.
* No mutator methods (``__setitem__``, ``__delitem__``, ``update``, etc.) will be generated.
* The ``|`` operator is not supported.
* A class definition defines a ``TypedMapping`` protocol if and only if ``TypedMapping`` appears directly in its class bases.
* Subclasses can narrow value types, in the same manner as other protocols.

As with :pep:`589`, this PEP provides a sketch of how a type checker is expected to support type checking operations involving ``TypedMapping`` and ``TypedDict`` objects, but details are left to implementors. In particular, type compatibility should be based on structural compatibility.

Multiple inheritance and TypedDict
----------------------------------

A type that inherits from a ``TypedMapping`` protocol and from ``TypedDict`` (either directly or indirectly):

* is the structural intersection of its parents, or invalid if no such intersection exists
* instances must be a ``dict`` subclass
* adds mutator methods only for fields it explicitly (re)declares

For example::

    class Movie(TypedMapping):
        name: str
        year: int | None

    class MovieRecord(Movie, TypedDict):
        year: int

    movie: MovieRecord = { "name": "Blade Runner",
                           "year": 1982 }

    movie["year"] = 1985  # Fine; mutator methods added in definition
    movie["name"] = "Terminator"  # Type check error; "name" mutator not declared

Inheriting, directly or indirectly, from both ``TypedDict`` and ``Protocol`` will continue to fail at runtime, and should continue to be rejected by type checkers.

Multiple inheritance and Protocol
---------------------------------

* A type that inherits from a ``TypedMapping`` protocol and from a ``Protocol`` protocol must satisfy the protocols defined by both, but is not itself a protocol unless it inherits directly from ``TypedMapping`` or ``Protocol``.
* A type that inherits from a ``TypedMapping`` protocol and from ``Protocol`` itself is configured as a ``Protocol``. Methods and properties may be defined; keys may not::

      class A(Movie, Protocol):
          # Declare a mutable property called 'year'
          # This does not affect the dictionary key 'year'
          year: str

* A type that inherits from a ``Protocol`` protocol and from ``TypedMapping`` itself is configured as a ``TypedMapping``. Keys may be defined; methods and properties may not::

      class B(A, TypedMapping):
          # Declare a key 'year'
          # This does not affect the property 'year'
          year: int

Type consistency rules
----------------------

Informally speaking, *type consistency* is a generalization of the is-subtype-of relation to support the ``Any`` type. It is defined more formally in :pep:`483`. This section introduces the new, non-trivial rules needed to support type consistency for ``TypedMapping`` types.

First, any ``TypedMapping`` type is consistent with ``Mapping[str, object]``.
Second, a ``TypedMapping`` or ``TypedDict`` type ``A`` is consistent with ``TypedMapping`` ``B`` if ``A`` is structurally compatible with ``B``. This is true if and only if both of these conditions are satisfied:

* For each key in ``B``, ``A`` has the corresponding key and the corresponding value type in ``A`` is consistent with the value type in ``B``.

* For each required key in ``B``, the corresponding key is required in ``A``.

Discussion:

* Value types behave covariantly, since ``TypedMapping`` objects have no mutator methods. This is similar to container types such as ``Mapping``, and different from relationships between two ``TypedDict`` types. Example::

      class A(TypedMapping):
          x: int | None

      class B(TypedDict):
          x: int

      def f(a: A) -> None:
          print(a['x'] or 0)

      b: B = {'x': 0}
      f(b)  # Accepted by type checker

* A ``TypedDict`` or ``TypedMapping`` type with a required key is consistent with a ``TypedMapping`` type where the same key is a non-required key, again unlike relationships between two ``TypedDict`` types. Example::

      class A(TypedMapping, total=False):
          x: int

      class B(TypedDict):
          x: int

      def f(a: A) -> None:
          print(a.get('x', 0))

      b: B = {'x': 0}
      f(b)  # Accepted by type checker

* A ``TypedMapping`` type ``A`` with no key ``'x'`` is not consistent with a ``TypedMapping`` type with a non-required key ``'x'``, since at runtime the key ``'x'`` could be present and have an incompatible type (which may not be visible through ``A`` due to structural subtyping). This is the same as for ``TypedDict`` types. Example::

      class A(TypedMapping, total=False):
          x: int
          y: int

      class B(TypedMapping, total=False):
          x: int

      class C(TypedMapping, total=False):
          x: int
          y: str

      def f(a: A) -> None:
          print(a.get('y') + 1)

      def g(b: B) -> None:
          f(b)  # Type check error: 'B' incompatible with 'A'

      c: C = {'x': 0, 'y': 'foo'}
      g(c)  # Runtime error: str + int

* A ``TypedMapping`` with all ``int`` values is not consistent with ``Mapping[str, int]``, since there may be additional non-``int`` values not visible through the type, due to structural subtyping. This mirrors ``TypedDict``. Example::

      class A(TypedMapping):
          x: int

      class B(TypedMapping):
          x: int
          y: str

      def sum_values(m: Mapping[str, int]) -> int:
          return sum(m.values())

      def f(a: A) -> None:
          sum_values(a)  # Type check error: 'A' incompatible with Mapping[str, int]

      b: B = {'x': 0, 'y': 'foo'}
      f(b)  # Runtime error: int + str
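Since ``TypedMapping`` has no implementation yet, the first consistency rule (every ``TypedMapping`` is consistent with ``Mapping[str, object]``) can only be demonstrated today through the analogous ``TypedDict`` behaviour specified in :pep:`589`. A runnable sketch of the read-only access pattern (``describe`` is a hypothetical helper, not from the PEP):

```python
from collections.abc import Mapping
from typing import TypedDict


class Movie(TypedDict):
    name: str
    year: int


def describe(m: Mapping[str, object]) -> str:
    # Read-only access only: Mapping exposes no mutator methods, so any
    # TypedDict (and, per this PEP, any TypedMapping) may be passed in.
    return ", ".join(f"{key}={m[key]!r}" for key in sorted(m))


movie: Movie = {"name": "Blade Runner", "year": 1982}
print(describe(movie))
```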
Backwards Compatibility
=======================

This PEP changes the rules for how ``TypedDict`` behaves (allowing subclasses to
inherit from ``TypedMapping`` protocols in a way that changes the resulting
overloads), so code that inspects ``TypedDict`` types will have to change. This
is expected to mainly affect type checkers.

The ``TypedMapping`` type will be added to the ``typing_extensions`` module,
enabling its use in older versions of Python.


Security Implications
=====================

There are no known security consequences arising from this PEP.


How to Teach This
=================

Class documentation should be added to the :mod:`typing` module's documentation, using
that for :class:`~collections.abc.Mapping`, :class:`~typing.Protocol` and
:class:`~typing.TypedDict` as examples. Suggested introductory sentence: "Base class
for read-only mapping protocol classes."

This PEP could be added to the others listed in the :mod:`typing` module's documentation.


Reference Implementation
========================

No reference implementation exists yet.


Rejected Alternatives
=====================

Several variations were considered and discarded:

* A ``readonly`` parameter to ``TypedDict``, behaving much like ``TypedMapping`` but with the additional constraint that instances must be dictionaries at runtime. This was discarded as less flexible due to the extra constraint; additionally, the new type nicely mirrors the existing ``Mapping``/``Dict`` types.
* Inheriting from a ``TypedMapping`` subclass and ``TypedDict`` resulting in mutator methods being added for all fields, not just those actively (re)declared in the class body. Discarded as less flexible, and not matching how inheritance works in other cases for ``TypedDict`` (e.g. ``total=False`` and ``total=True`` do not affect fields not specified in the class body).
* A generic type that removes mutator methods from its parameter, e.g. ``Readonly[MovieRecord]``. This would naturally want to be defined for a wider set of types than just ``TypedDict`` subclasses, and also raises questions about whether and how it applies to nested types. We decided to keep the scope of this PEP narrower.
* Declaring methods directly on a ``TypedMapping`` class. Methods are a kind of property, but declarations on a ``TypedMapping`` class define keys, so mixing the two is potentially confusing. Banning methods also makes it very easy to decide whether a ``TypedDict`` subclass can mix in a protocol or not (yes if there are only ``TypedMapping`` superclasses, no if there's a ``Protocol``).


Copyright
=========

This document is placed in the public domain or under the
CC0-1.0-Universal license, whichever is more permissive.
@@ -7,6 +7,7 @@ from typing import TYPE_CHECKING
 from docutils.writers.html5_polyglot import HTMLTranslator
 from sphinx import environment

+from pep_sphinx_extensions.generate_rss import create_rss_feed
 from pep_sphinx_extensions.pep_processor.html import pep_html_builder
 from pep_sphinx_extensions.pep_processor.html import pep_html_translator
 from pep_sphinx_extensions.pep_processor.parsing import pep_banner_directive

@@ -27,11 +28,9 @@ def _update_config_for_builder(app: Sphinx) -> None:
     app.env.document_ids = {}  # For PEPReferenceRoleTitleText
     app.env.settings["builder"] = app.builder.name
     if app.builder.name == "dirhtml":
-        app.env.settings["pep_url"] = "pep-{:0>4}"
+        app.env.settings["pep_url"] = "pep-{:0>4}/"

-    # internal_builder exists if Sphinx is run by build.py
-    if "internal_builder" not in app.tags:
-        app.connect("build-finished", _post_build)  # Post-build tasks
+    app.connect("build-finished", _post_build)  # Post-build tasks


 def _post_build(app: Sphinx, exception: Exception | None) -> None:

@@ -41,7 +40,11 @@ def _post_build(app: Sphinx, exception: Exception | None) -> None:

     if exception is not None:
         return
-    create_index_file(Path(app.outdir), app.builder.name)
+
+    # internal_builder exists if Sphinx is run by build.py
+    if "internal_builder" not in app.tags:
+        create_index_file(Path(app.outdir), app.builder.name)
+        create_rss_feed(app.doctreedir, app.outdir)


 def setup(app: Sphinx) -> dict[str, bool]:
@@ -0,0 +1,117 @@
+# This file is placed in the public domain or under the
+# CC0-1.0-Universal license, whichever is more permissive.
+
+from __future__ import annotations
+
+import datetime as dt
+import pickle
+from email.utils import format_datetime, getaddresses
+from html import escape
+from pathlib import Path
+
+from docutils import nodes
+
+RSS_DESCRIPTION = (
+    "Newest Python Enhancement Proposals (PEPs): "
+    "Information on new language features "
+    "and some meta-information like release procedure and schedules."
+)
+
+
+def _format_rfc_2822(datetime: dt.datetime) -> str:
+    datetime = datetime.replace(tzinfo=dt.timezone.utc)
+    return format_datetime(datetime, usegmt=True)
+
+
+document_cache: dict[Path, dict[str, str]] = {}
+
+
+def get_from_doctree(full_path: Path, text: str) -> str:
+    # Try and retrieve from cache
+    if full_path in document_cache:
+        return document_cache[full_path].get(text, "")
+
+    # Else load doctree
+    document = pickle.loads(full_path.read_bytes())
+    # Store the headers (populated in the PEPHeaders transform)
+    document_cache[full_path] = path_cache = document.get("headers", {})
+    # Store the Abstract
+    path_cache["Abstract"] = pep_abstract(document)
+    # Return the requested key
+    return path_cache.get(text, "")
+
+
+def pep_creation(full_path: Path) -> dt.datetime:
+    created_str = get_from_doctree(full_path, "Created")
+    try:
+        return dt.datetime.strptime(created_str, "%d-%b-%Y")
+    except ValueError:
+        return dt.datetime.min
+
+
+def pep_abstract(document: nodes.document) -> str:
+    """Return the first paragraph of the PEP abstract"""
+    for node in document.findall(nodes.section):
+        title_node = node.next_node(nodes.title)
+        if title_node is None:
+            continue
+        if title_node.astext() == "Abstract":
+            return node.next_node(nodes.paragraph).astext().strip().replace("\n", " ")
+    return ""
+
+
+def _generate_items(doctree_dir: Path):
+    # get list of peps with creation time (from "Created:" string in pep source)
+    peps_with_dt = sorted((pep_creation(path), path) for path in doctree_dir.glob("pep-????.doctree"))
+
+    # generate rss items for 10 most recent peps (in reverse order)
+    for datetime, full_path in reversed(peps_with_dt[-10:]):
+        try:
+            pep_num = int(get_from_doctree(full_path, "PEP"))
+        except ValueError:
+            continue
+
+        title = get_from_doctree(full_path, "Title")
+        url = f"https://peps.python.org/pep-{pep_num:0>4}/"
+        abstract = get_from_doctree(full_path, "Abstract")
+        author = get_from_doctree(full_path, "Author")
+        if "@" in author or " at " in author:
+            parsed_authors = getaddresses([author])
+            joined_authors = ", ".join(f"{name} ({email_address})" for name, email_address in parsed_authors)
+        else:
+            joined_authors = author
+
+        item = f"""\
+  <item>
+    <title>PEP {pep_num}: {escape(title, quote=False)}</title>
+    <link>{escape(url, quote=False)}</link>
+    <description>{escape(abstract, quote=False)}</description>
+    <author>{escape(joined_authors, quote=False)}</author>
+    <guid isPermaLink="true">{url}</guid>
+    <pubDate>{_format_rfc_2822(datetime)}</pubDate>
+  </item>"""
+        yield item
+
+
+def create_rss_feed(doctree_dir: Path, output_dir: Path):
+    # The rss envelope
+    last_build_date = _format_rfc_2822(dt.datetime.now(dt.timezone.utc))
+    items = "\n".join(_generate_items(Path(doctree_dir)))
+    output = f"""\
+<?xml version='1.0' encoding='UTF-8'?>
+<rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" version="2.0">
+<channel>
+  <title>Newest Python PEPs</title>
+  <link>https://peps.python.org/peps.rss</link>
+  <description>{RSS_DESCRIPTION}</description>
+  <atom:link href="https://peps.python.org/peps.rss" rel="self"/>
+  <docs>https://cyber.harvard.edu/rss/rss.html</docs>
+  <language>en</language>
+  <lastBuildDate>{last_build_date}</lastBuildDate>
+{items}
+</channel>
+</rss>
+"""
+
+    # output directory for target HTML files
+    Path(output_dir, "peps.rss").write_text(output, encoding="utf-8")
@@ -1,5 +1,3 @@
-from pathlib import Path
-
 from docutils import nodes
 from docutils.frontend import OptionParser
 from sphinx.builders.html import StandaloneHTMLBuilder

@@ -22,6 +20,7 @@ class FileBuilder(StandaloneHTMLBuilder):
         self.docwriter = HTMLWriter(self)
         _opt_parser = OptionParser([self.docwriter], defaults=self.env.settings, read_config_files=True)
         self.docsettings = _opt_parser.get_default_values()
+        self._orig_css_files = self._orig_js_files = []

     def get_doc_context(self, docname: str, body: str, _metatags: str) -> dict:
         """Collect items for the template context of a page."""

@@ -30,10 +29,6 @@ class FileBuilder(StandaloneHTMLBuilder):
         except KeyError:
             title = ""

-        # source filename
-        file_is_rst = Path(self.env.srcdir, docname + ".rst").exists()
-        source_name = f"{docname}.rst" if file_is_rst else f"{docname}.txt"
-
         # local table of contents
         toc_tree = self.env.tocs[docname].deepcopy()
         if len(toc_tree) and len(toc_tree[0]) > 1:

@@ -45,7 +40,7 @@ class FileBuilder(StandaloneHTMLBuilder):
         else:
             toc = ""  # PEPs with no sections -- 9, 210

-        return {"title": title, "sourcename": source_name, "toc": toc, "body": body}
+        return {"title": title, "toc": toc, "body": body}


 class DirectoryBuilder(FileBuilder):
@@ -5,7 +5,6 @@ from __future__ import annotations
 from docutils import nodes
 from docutils.parsers import rst
-

 PYPA_SPEC_BASE_URL = "https://packaging.python.org/en/latest/specifications/"

@@ -1,4 +1,4 @@
-import datetime as dt
+import time
 from pathlib import Path
 import subprocess

@@ -23,7 +23,7 @@ class PEPFooter(transforms.Transform):

     def apply(self) -> None:
         pep_source_path = Path(self.document["source"])
-        if not pep_source_path.match("pep-*"):
+        if not pep_source_path.match("pep-????.???"):
             return  # not a PEP file, exit early

         # Iterate through sections from the end of the document

@@ -54,7 +54,7 @@ class PEPFooter(transforms.Transform):

 def _add_source_link(pep_source_path: Path) -> nodes.paragraph:
     """Add link to source text on VCS (GitHub)"""
-    source_link = f"https://github.com/python/peps/blob/main/{pep_source_path.name}"
+    source_link = f"https://github.com/python/peps/blob/main/peps/{pep_source_path.name}"
     link_node = nodes.reference("", source_link, refuri=source_link)
     return nodes.paragraph("", "Source: ", link_node)

@@ -62,12 +62,10 @@ def _add_source_link(pep_source_path: Path) -> nodes.paragraph:
 def _add_commit_history_info(pep_source_path: Path) -> nodes.paragraph:
     """Use local git history to find last modified date."""
     try:
-        since_epoch = LAST_MODIFIED_TIMES[pep_source_path.name]
+        iso_time = _LAST_MODIFIED_TIMES[pep_source_path.stem]
     except KeyError:
         return nodes.paragraph()

-    epoch_dt = dt.datetime.fromtimestamp(since_epoch, dt.timezone.utc)
-    iso_time = epoch_dt.isoformat(sep=" ")
     commit_link = f"https://github.com/python/peps/commits/main/{pep_source_path.name}"
     link_node = nodes.reference("", f"{iso_time} GMT", refuri=commit_link)
     return nodes.paragraph("", "Last modified: ", link_node)

@@ -75,29 +73,36 @@ def _add_commit_history_info(pep_source_path: Path) -> nodes.paragraph:

 def _get_last_modified_timestamps():
     # get timestamps and changed files from all commits (without paging results)
-    args = ["git", "--no-pager", "log", "--format=#%at", "--name-only"]
-    with subprocess.Popen(args, stdout=subprocess.PIPE) as process:
-        all_modified = process.stdout.read().decode("utf-8")
-        process.stdout.close()
-        if process.wait():  # non-zero return code
-            return {}
+    args = ("git", "--no-pager", "log", "--format=#%at", "--name-only")
+    ret = subprocess.run(args, stdout=subprocess.PIPE, text=True, encoding="utf-8")
+    if ret.returncode:  # non-zero return code
+        return {}
+    all_modified = ret.stdout
+
+    # remove "peps/" prefix from file names
+    all_modified = all_modified.replace("\npeps/", "\n")

     # set up the dictionary with the *current* files
-    last_modified = {path.name: 0 for path in Path().glob("pep-*") if path.suffix in {".txt", ".rst"}}
+    peps_dir = Path(__file__, "..", "..", "..", "..", "peps").resolve()
+    last_modified = {path.stem: "" for path in peps_dir.glob("pep-????.rst")}

     # iterate through newest to oldest, updating per file timestamps
     change_sets = all_modified.removeprefix("#").split("#")
     for change_set in change_sets:
         timestamp, files = change_set.split("\n", 1)
         for file in files.strip().split("\n"):
-            if file.startswith("pep-") and file[-3:] in {"txt", "rst"}:
-                if last_modified.get(file) == 0:
-                    try:
-                        last_modified[file] = float(timestamp)
-                    except ValueError:
-                        pass  # if float conversion fails
+            if not file.startswith("pep-") or not file.endswith((".rst", ".txt")):
+                continue  # not a PEP
+            file = file[:-4]
+            if last_modified.get(file) != "":
+                continue  # most recent modified date already found
+            try:
+                y, m, d, hh, mm, ss, *_ = time.gmtime(float(timestamp))
+            except ValueError:
+                continue  # if float conversion fails
+            last_modified[file] = f"{y:04}-{m:02}-{d:02} {hh:02}:{mm:02}:{ss:02}"

     return last_modified


-LAST_MODIFIED_TIMES = _get_last_modified_timestamps()
+_LAST_MODIFIED_TIMES = _get_last_modified_timestamps()
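The new timestamp formatting in `_get_last_modified_timestamps` (unpacking `time.gmtime` into a "YYYY-MM-DD HH:MM:SS" UTC string) can be checked in isolation; a small sketch, where the epoch value is an arbitrary example:

```python
import time

# Format a Unix timestamp (as produced by `git log --format=#%at`)
# the same way the transform does: an ISO-like UTC string.
timestamp = "1678795200"
y, m, d, hh, mm, ss, *_ = time.gmtime(float(timestamp))
iso_time = f"{y:04}-{m:02}-{d:02} {hh:02}:{mm:02}:{ss:02}"
print(iso_time)
```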
@@ -72,11 +72,11 @@ class PEPHeaders(transforms.Transform):
         raise PEPParsingError("Document does not contain an RFC-2822 'PEP' header!")

     # Extract PEP number
-    value = pep_field[1].astext()
+    pep_num_str = pep_field[1].astext()
     try:
-        pep_num = int(value)
+        pep_num = int(pep_num_str)
     except ValueError:
-        raise PEPParsingError(f"'PEP' header must contain an integer. '{value}' is invalid!")
+        raise PEPParsingError(f"PEP header must contain an integer. '{pep_num_str}' is invalid!")

     # Special processing for PEP 0.
     if pep_num == 0:

@@ -89,7 +89,11 @@ class PEPHeaders(transforms.Transform):
             raise PEPParsingError("No title!")

     fields_to_remove = []
+    self.document["headers"] = headers = {}
     for field in header:
+        row_attributes = {sub.tagname: sub.rawsource for sub in field}
+        headers[row_attributes["field_name"]] = row_attributes["field_body"]
+
         name = field[0].astext().lower()
         body = field[1]
         if len(body) == 0:
@@ -103,7 +103,6 @@ a:active,
 a:visited {
     color: var(--colour-links);
-    display: inline;
-    overflow-wrap: break-word;
+    overflow-wrap: anywhere;
     text-decoration-color: var(--colour-background-accent-strong);
 }

@@ -135,7 +134,6 @@ pre {
     hyphens: none;
 }
 code {
-    overflow-wrap: break-word;
+    overflow-wrap: anywhere;
 }
 code.literal {

@@ -232,6 +230,25 @@ table th + th,
 table td + td {
     border-left: 1px solid var(--colour-background-accent-strong);
 }
+/* Common column widths for PEP status tables */
+table.pep-zero-table tr td:nth-child(1) {
+    width: 5%;
+}
+table.pep-zero-table tr td:nth-child(2) {
+    width: 7%;
+}
+table.pep-zero-table tr td:nth-child(3),
+table.pep-zero-table tr td:nth-child(4) {
+    width: 41%;
+}
+table.pep-zero-table tr td:nth-child(5) {
+    width: 6%;
+}
+/* Authors & Sponsors table */
+#authors-owners table td,
+#authors-owners table th {
+    width: 50%;
+}

 /* Breadcrumbs rules */
 section#pep-page-section > header {
|
|||
<h2>Contents</h2>
|
||||
{{ toc }}
|
||||
<br>
|
||||
{%- if not (sourcename.startswith("pep-0000") or sourcename.startswith("topic")) %}
|
||||
<a id="source" href="https://github.com/python/peps/blob/main/{{sourcename}}">Page Source (GitHub)</a>
|
||||
{%- if not pagename.startswith(("pep-0000", "topic")) %}
|
||||
<a id="source" href="https://github.com/python/peps/blob/main/peps/{{pagename}}.rst">Page Source (GitHub)</a>
|
||||
{%- endif %}
|
||||
</nav>
|
||||
</section>
|
||||
|
|
|
@@ -1,89 +0,0 @@
-from __future__ import annotations
-
-from typing import NamedTuple
-
-
-class _Name(NamedTuple):
-    mononym: str = None
-    forename: str = None
-    surname: str = None
-    suffix: str = None
-
-
-class Author(NamedTuple):
-    """Represent PEP authors."""
-    last_first: str  # The author's name in Surname, Forename, Suffix order.
-    nick: str  # Author's nickname for PEP tables. Defaults to surname.
-    email: str  # The author's email address.
-
-
-def parse_author_email(author_email_tuple: tuple[str, str], authors_overrides: dict[str, dict[str, str]]) -> Author:
-    """Parse the name and email address of an author."""
-    name, email = author_email_tuple
-    _first_last = name.strip()
-    email = email.lower()
-
-    if _first_last in authors_overrides:
-        name_dict = authors_overrides[_first_last]
-        last_first = name_dict["Surname First"]
-        nick = name_dict["Name Reference"]
-        return Author(last_first, nick, email)
-
-    name_parts = _parse_name(_first_last)
-    if name_parts.mononym is not None:
-        return Author(name_parts.mononym, name_parts.mononym, email)
-
-    if name_parts.suffix:
-        last_first = f"{name_parts.surname}, {name_parts.forename}, {name_parts.suffix}"
-        return Author(last_first, name_parts.surname, email)
-
-    last_first = f"{name_parts.surname}, {name_parts.forename}"
-    return Author(last_first, name_parts.surname, email)
-
-
-def _parse_name(full_name: str) -> _Name:
-    """Decompose a full name into parts.
-
-    If a mononym (e.g, 'Aahz') then return the full name. If there are
-    suffixes in the name (e.g. ', Jr.' or 'II'), then find and extract
-    them. If there is a middle initial followed by a full stop, then
-    combine the following words into a surname (e.g. N. Vander Weele). If
-    there is a leading, lowercase portion to the last name (e.g. 'van' or
-    'von') then include it in the surname.
-
-    """
-    possible_suffixes = {"Jr", "Jr.", "II", "III"}
-
-    pre_suffix, _, raw_suffix = full_name.partition(",")
-    name_parts = pre_suffix.strip().split(" ")
-    num_parts = len(name_parts)
-    suffix = raw_suffix.strip()
-
-    if name_parts == [""]:
-        raise ValueError("Name is empty!")
-    elif num_parts == 1:
-        return _Name(mononym=name_parts[0], suffix=suffix)
-    elif num_parts == 2:
-        return _Name(forename=name_parts[0].strip(), surname=name_parts[1], suffix=suffix)
-
-    # handles rogue uncaught suffixes
-    if name_parts[-1] in possible_suffixes:
-        suffix = f"{name_parts.pop(-1)} {suffix}".strip()
-
-    # handles von, van, v. etc.
-    if name_parts[-2].islower():
-        forename = " ".join(name_parts[:-2]).strip()
-        surname = " ".join(name_parts[-2:])
-        return _Name(forename=forename, surname=surname, suffix=suffix)
-
-    # handles double surnames after a middle initial (e.g. N. Vander Weele)
-    elif any(s.endswith(".") for s in name_parts):
-        split_position = [i for i, x in enumerate(name_parts) if x.endswith(".")][-1] + 1
-        forename = " ".join(name_parts[:split_position]).strip()
-        surname = " ".join(name_parts[split_position:])
-        return _Name(forename=forename, surname=surname, suffix=suffix)
-
-    # default to using the last item as the surname
-    else:
-        forename = " ".join(name_parts[:-1]).strip()
-        return _Name(forename=forename, surname=name_parts[-1], suffix=suffix)
|
|
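The name-decomposition heuristics in the removed `_parse_name` (mononyms, comma-separated suffixes, rogue `Jr`/`II` tokens, lowercase particles such as `van`, and middle-initial compound surnames) can be condensed into a standalone form. `split_name` below is a hypothetical re-implementation for illustration only, not part of the repository:

```python
def split_name(full_name: str) -> tuple[str, str, str]:
    """Return (forename, surname, suffix) using the heuristics above (sketch)."""
    pre_suffix, _, raw_suffix = full_name.partition(",")
    parts = pre_suffix.strip().split(" ")
    suffix = raw_suffix.strip()
    if len(parts) == 1:
        return ("", parts[0], suffix)  # mononym: the whole name is the surname
    if len(parts) == 2:
        return (parts[0], parts[1], suffix)
    if parts[-1] in {"Jr", "Jr.", "II", "III"}:
        # rogue suffix that was not separated from the name by a comma
        suffix = f"{parts.pop()} {suffix}".strip()
    if parts[-2].islower():
        # lowercase particles ('van', 'von', ...) belong to the surname
        return (" ".join(parts[:-2]), " ".join(parts[-2:]), suffix)
    if any(p.endswith(".") for p in parts):
        # everything after the last initial forms a compound surname
        split = max(i for i, p in enumerate(parts) if p.endswith(".")) + 1
        return (" ".join(parts[:split]), " ".join(parts[split:]), suffix)
    return (" ".join(parts[:-1]), parts[-1], suffix)
```

For example, `split_name("Guido van Rossum")` keeps the particle with the surname, while a trailing `", III"` is peeled off into the suffix slot.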
@@ -2,13 +2,10 @@

from __future__ import annotations

-import csv
+import dataclasses
from email.parser import HeaderParser
from pathlib import Path
-import re
from typing import TYPE_CHECKING

-from pep_sphinx_extensions.pep_zero_generator.author import parse_author_email
from pep_sphinx_extensions.pep_zero_generator.constants import ACTIVE_ALLOWED
from pep_sphinx_extensions.pep_zero_generator.constants import HIDE_STATUS
from pep_sphinx_extensions.pep_zero_generator.constants import SPECIAL_STATUSES

@@ -19,16 +16,12 @@ from pep_sphinx_extensions.pep_zero_generator.constants import TYPE_STANDARDS
from pep_sphinx_extensions.pep_zero_generator.constants import TYPE_VALUES
from pep_sphinx_extensions.pep_zero_generator.errors import PEPError

if TYPE_CHECKING:
-    from pep_sphinx_extensions.pep_zero_generator.author import Author


-# AUTHOR_OVERRIDES.csv is an exception file for PEP 0 name parsing
-AUTHOR_OVERRIDES: dict[str, dict[str, str]] = {}
-with open("AUTHOR_OVERRIDES.csv", "r", encoding="utf-8") as f:
-    for line in csv.DictReader(f):
-        full_name = line.pop("Overridden Name")
-        AUTHOR_OVERRIDES[full_name] = line
+@dataclasses.dataclass(order=True, frozen=True)
+class _Author:
+    """Represent PEP authors."""
+    full_name: str  # The author's name.
+    email: str  # The author's email address.


class PEP:

@@ -97,7 +90,9 @@ class PEP:
        self.status: str = status

        # Parse PEP authors
-        self.authors: list[Author] = _parse_authors(self, metadata["Author"], AUTHOR_OVERRIDES)
+        self.authors: list[_Author] = _parse_author(metadata["Author"])
+        if not self.authors:
+            raise _raise_pep_error(self, "no authors found", pep_num=True)

        # Topic (for sub-indices)
        _topic = metadata.get("Topic", "").lower().split(",")

@@ -144,7 +139,9 @@ class PEP:
            # a tooltip representing the type and status
            "shorthand": self.shorthand,
            # the author list as a comma-separated with only last names
-            "authors": ", ".join(author.nick for author in self.authors),
+            "authors": ", ".join(author.full_name for author in self.authors),
+            # The targeted Python-Version (if present) or the empty string
+            "python_version": self.python_version or "",
        }

    @property

@@ -153,7 +150,7 @@ class PEP:
        return {
            "number": self.number,
            "title": self.title,
-            "authors": ", ".join(author.nick for author in self.authors),
+            "authors": ", ".join(author.full_name for author in self.authors),
            "discussions_to": self.discussions_to,
            "status": self.status,
            "type": self.pep_type,

@@ -175,41 +172,27 @@ def _raise_pep_error(pep: PEP, msg: str, pep_num: bool = False) -> None:
    raise PEPError(msg, pep.filename)


-def _parse_authors(pep: PEP, author_header: str, authors_overrides: dict) -> list[Author]:
-    """Parse Author header line"""
-    authors_and_emails = _parse_author(author_header)
-    if not authors_and_emails:
-        raise _raise_pep_error(pep, "no authors found", pep_num=True)
-    return [parse_author_email(author_tuple, authors_overrides) for author_tuple in authors_and_emails]
+jr_placeholder = ",Jr"


-author_angled = re.compile(r"(?P<author>.+?) <(?P<email>.+?)>(,\s*)?")
-author_paren = re.compile(r"(?P<email>.+?) \((?P<author>.+?)\)(,\s*)?")
-author_simple = re.compile(r"(?P<author>[^,]+)(,\s*)?")


-def _parse_author(data: str) -> list[tuple[str, str]]:
+def _parse_author(data: str) -> list[_Author]:
    """Return a list of author names and emails."""

    author_list = []
-    for regex in (author_angled, author_paren, author_simple):
-        for match in regex.finditer(data):
-            # Watch out for suffixes like 'Jr.' when they are comma-separated
-            # from the name and thus cause issues when *all* names are only
-            # separated by commas.
-            match_dict = match.groupdict()
-            author = match_dict["author"]
-            if not author.partition(" ")[1] and author.endswith("."):
-                prev_author = author_list.pop()
-                author = ", ".join([prev_author, author])
-            if "email" not in match_dict:
-                email = ""
-            else:
-                email = match_dict["email"]
-            author_list.append((author, email))
+    data = (data.replace("\n", " ")
+                .replace(", Jr", jr_placeholder)
+                .rstrip().removesuffix(","))
+    for author_email in data.split(", "):
+        if ' <' in author_email:
+            author, email = author_email.removesuffix(">").split(" <")
+        else:
+            author, email = author_email, ""

-        # If authors were found then stop searching as only expect one
-        # style of author citation.
-        if author_list:
-            break
+        author = author.strip()
+        if author == "":
+            raise ValueError("Name is empty!")

+        author = author.replace(jr_placeholder, ", Jr")
+        email = email.lower()
+        author_list.append(_Author(author, email))
    return author_list

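The replacement `_parse_author` drops the three regexes in favour of a plain comma split, shielding `", Jr"` behind a placeholder so it survives the split. A self-contained sketch of the same approach (hypothetical names, returning plain tuples instead of `_Author`):

```python
JR_PLACEHOLDER = ",Jr"  # shields ", Jr" inside a name from the comma split


def parse_authors(data: str) -> list[tuple[str, str]]:
    """Split an Author header into (name, email) pairs (standalone sketch)."""
    data = (data.replace("\n", " ")
                .replace(", Jr", JR_PLACEHOLDER)
                .rstrip().removesuffix(","))
    authors = []
    for author_email in data.split(", "):
        if " <" in author_email:
            # "Name <email>" form: drop the trailing '>' and split once
            author, email = author_email.removesuffix(">").split(" <")
        else:
            author, email = author_email, ""
        author = author.strip()
        if not author:
            raise ValueError("Name is empty!")
        # restore the protected suffix and normalise the email
        authors.append((author.replace(JR_PLACEHOLDER, ", Jr"), email.lower()))
    return authors
```

The placeholder round-trip is the key design choice: without it, `"Davey Jones, Jr <dj@example.com>"` would be split into two bogus authors.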
@@ -18,22 +18,22 @@ to allow it to be processed as normal.
from __future__ import annotations

import json
+import os
from pathlib import Path
from typing import TYPE_CHECKING

-from pep_sphinx_extensions.pep_zero_generator.constants import SUBINDICES_BY_TOPIC
from pep_sphinx_extensions.pep_zero_generator import parser
from pep_sphinx_extensions.pep_zero_generator import subindices
from pep_sphinx_extensions.pep_zero_generator import writer
+from pep_sphinx_extensions.pep_zero_generator.constants import SUBINDICES_BY_TOPIC

if TYPE_CHECKING:
    from sphinx.application import Sphinx
    from sphinx.environment import BuildEnvironment


-def _parse_peps() -> list[parser.PEP]:
-    # Read from root directory
-    path = Path(".")
+def _parse_peps(path: Path) -> list[parser.PEP]:
    peps: list[parser.PEP] = []

    for file_path in path.iterdir():

@@ -41,7 +41,7 @@ def _parse_peps() -> list[parser.PEP]:
            continue  # Skip directories etc.
        if file_path.match("pep-0000*"):
            continue  # Skip pre-existing PEP 0 files
-        if file_path.match("pep-????.???") and file_path.suffix in {".txt", ".rst"}:
+        if file_path.match("pep-????.rst"):
            pep = parser.PEP(path.joinpath(file_path).absolute())
            peps.append(pep)

@@ -52,8 +52,16 @@ def create_pep_json(peps: list[parser.PEP]) -> str:
    return json.dumps({pep.number: pep.full_details for pep in peps}, indent=1)


+def write_peps_json(peps: list[parser.PEP], path: Path) -> None:
+    # Create peps.json
+    json_peps = create_pep_json(peps)
+    Path(path, "peps.json").write_text(json_peps, encoding="utf-8")
+    os.makedirs(os.path.join(path, "api"), exist_ok=True)
+    Path(path, "api", "peps.json").write_text(json_peps, encoding="utf-8")
+
+
def create_pep_zero(app: Sphinx, env: BuildEnvironment, docnames: list[str]) -> None:
-    peps = _parse_peps()
+    peps = _parse_peps(Path(app.srcdir))

    pep0_text = writer.PEPZeroWriter().write_pep0(peps, builder=env.settings["builder"])
    pep0_path = subindices.update_sphinx("pep-0000", pep0_text, docnames, env)

@@ -61,7 +69,4 @@ def create_pep_zero(app: Sphinx, env: BuildEnvironment, docnames: list[str]) ->

    subindices.generate_subindices(SUBINDICES_BY_TOPIC, peps, docnames, env)

-    # Create peps.json
-    json_path = Path(app.outdir, "api", "peps.json").resolve()
-    json_path.parent.mkdir(exist_ok=True)
-    json_path.write_text(create_pep_json(peps), encoding="utf-8")
+    write_peps_json(peps, Path(app.outdir))

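The tightened filename filter in `_parse_peps` (only `pep-????.rst`, never the generated PEP 0 files) is easy to check in isolation. `is_pep_source` below is a hypothetical helper mirroring those two `match` calls, not a function from the repository:

```python
from pathlib import Path


def is_pep_source(file_path: Path) -> bool:
    """Mirror the index generator's filter: pep-NNNN.rst, excluding PEP 0."""
    if file_path.match("pep-0000*"):
        return False  # pre-existing PEP 0 files are regenerated, not parsed
    return file_path.match("pep-????.rst")
```

Note that `Path.match` uses glob semantics, so `?` matches exactly one character: the four-digit PEP number is enforced by the pattern itself.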
@@ -2,6 +2,7 @@

from __future__ import annotations

+import os
from pathlib import Path
from typing import TYPE_CHECKING


@@ -14,8 +15,7 @@ if TYPE_CHECKING:


def update_sphinx(filename: str, text: str, docnames: list[str], env: BuildEnvironment) -> Path:
-    file_path = Path(f"{filename}.rst").resolve()
-    file_path.parent.mkdir(parents=True, exist_ok=True)
+    file_path = Path(env.srcdir, f"{filename}.rst")
    file_path.write_text(text, encoding="utf-8")

    # Add to files for builder

@@ -32,6 +32,9 @@ def generate_subindices(
    docnames: list[str],
    env: BuildEnvironment,
) -> None:
+    # create topic directory
+    os.makedirs(os.path.join(env.srcdir, "topic"), exist_ok=True)
+
    # Create sub index page
    generate_topic_contents(docnames, env)

@@ -2,14 +2,11 @@

from __future__ import annotations

-import datetime as dt
from typing import TYPE_CHECKING
import unicodedata

-from pep_sphinx_extensions.pep_processor.transforms.pep_headers import (
-    ABBREVIATED_STATUSES,
-    ABBREVIATED_TYPES,
-)
+from pep_sphinx_extensions.pep_processor.transforms.pep_headers import ABBREVIATED_STATUSES
+from pep_sphinx_extensions.pep_processor.transforms.pep_headers import ABBREVIATED_TYPES
from pep_sphinx_extensions.pep_zero_generator.constants import DEAD_STATUSES
from pep_sphinx_extensions.pep_zero_generator.constants import STATUS_ACCEPTED
from pep_sphinx_extensions.pep_zero_generator.constants import STATUS_ACTIVE

@@ -29,11 +26,10 @@ from pep_sphinx_extensions.pep_zero_generator.errors import PEPError
if TYPE_CHECKING:
    from pep_sphinx_extensions.pep_zero_generator.parser import PEP

-HEADER = f"""\
+HEADER = """\
PEP: 0
Title: Index of Python Enhancement Proposals (PEPs)
-Last-Modified: {dt.date.today()}
-Author: python-dev <python-dev@python.org>
+Author: The PEP Editors
Status: Active
Type: Informational
Content-Type: text/x-rst

@@ -78,14 +74,22 @@ class PEPZeroWriter:
        self.output.append(author_table_separator)

    def emit_pep_row(
-        self, *, shorthand: str, number: int, title: str, authors: str
+        self,
+        *,
+        shorthand: str,
+        number: int,
+        title: str,
+        authors: str,
+        python_version: str | None = None,
    ) -> None:
        self.emit_text(f"   * - {shorthand}")
        self.emit_text(f"     - :pep:`{number} <{number}>`")
        self.emit_text(f"     - :pep:`{title.replace('`', '')} <{number}>`")
        self.emit_text(f"     - {authors}")
+        if python_version is not None:
+            self.emit_text(f"     - {python_version}")

-    def emit_column_headers(self) -> None:
+    def emit_column_headers(self, *, include_version=True) -> None:
        """Output the column headers for the PEP indices."""
        self.emit_text(".. list-table::")
        self.emit_text("   :header-rows: 1")

@@ -96,6 +100,8 @@ class PEPZeroWriter:
        self.emit_text("     - PEP")
        self.emit_text("     - Title")
        self.emit_text("     - Authors")
+        if include_version:
+            self.emit_text("     - ")  # for Python-Version

    def emit_title(self, text: str, *, symbol: str = "=") -> None:
        self.output.append(text)

@@ -105,17 +111,25 @@ class PEPZeroWriter:
    def emit_subtitle(self, text: str) -> None:
        self.emit_title(text, symbol="-")

+    def emit_table(self, peps: list[PEP]) -> None:
+        include_version = any(pep.details["python_version"] for pep in peps)
+        self.emit_column_headers(include_version=include_version)
+        for pep in peps:
+            details = pep.details
+            if not include_version:
+                details.pop("python_version")
+            self.emit_pep_row(**details)
+
    def emit_pep_category(self, category: str, peps: list[PEP]) -> None:
        self.emit_subtitle(category)
-        self.emit_column_headers()
-        for pep in peps:
-            self.emit_pep_row(**pep.details)
+        self.emit_table(peps)
+        # list-table must have at least one body row
+        if len(peps) == 0:
+            self.emit_text("   * -")
+            self.emit_text("     -")
+            self.emit_text("     -")
+            self.emit_text("     -")
+            self.emit_text("     -")
        self.emit_newline()

    def write_pep0(

@@ -149,7 +163,7 @@ class PEPZeroWriter:
            target = (
                f"topic/{subindex}.html"
                if builder == "html"
-                else f"../topic/{subindex}"
+                else f"../topic/{subindex}/"
            )
            self.emit_text(f"* `{subindex.title()} PEPs <{target}>`_")
        self.emit_newline()

@@ -184,19 +198,21 @@ class PEPZeroWriter:

        # PEPs by number
        self.emit_title("Numerical Index")
-        self.emit_column_headers()
-        for pep in peps:
-            self.emit_pep_row(**pep.details)
+        self.emit_table(peps)

        self.emit_newline()

        # Reserved PEP numbers
        if is_pep0:
            self.emit_title("Reserved PEP Numbers")
-            self.emit_column_headers()
+            self.emit_column_headers(include_version=False)
            for number, claimants in sorted(self.RESERVED.items()):
                self.emit_pep_row(
-                    shorthand="", number=number, title="RESERVED", authors=claimants
+                    shorthand="",
+                    number=number,
+                    title="RESERVED",
+                    authors=claimants,
+                    python_version=None,
                )

            self.emit_newline()

@@ -241,7 +257,7 @@ class PEPZeroWriter:
        self.emit_newline()
        self.emit_newline()

-        pep0_string = "\n".join([str(s) for s in self.output])
+        pep0_string = "\n".join(map(str, self.output))
        return pep0_string


@@ -297,22 +313,22 @@ def _verify_email_addresses(peps: list[PEP]) -> dict[str, str]:
    for pep in peps:
        for author in pep.authors:
            # If this is the first time we have come across an author, add them.
-            if author.last_first not in authors_dict:
-                authors_dict[author.last_first] = set()
+            if author.full_name not in authors_dict:
+                authors_dict[author.full_name] = set()

            # If the new email is an empty string, move on.
            if not author.email:
                continue
            # If the email has not been seen, add it to the list.
-            authors_dict[author.last_first].add(author.email)
+            authors_dict[author.full_name].add(author.email)

    valid_authors_dict: dict[str, str] = {}
    too_many_emails: list[tuple[str, set[str]]] = []
-    for last_first, emails in authors_dict.items():
+    for full_name, emails in authors_dict.items():
        if len(emails) > 1:
-            too_many_emails.append((last_first, emails))
+            too_many_emails.append((full_name, emails))
        else:
-            valid_authors_dict[last_first] = next(iter(emails), "")
+            valid_authors_dict[full_name] = next(iter(emails), "")
    if too_many_emails:
        err_output = []
        for author, emails in too_many_emails:

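The new `emit_table` decides once whether any PEP in the group targets a Python version and only then emits the extra column. A simplified stand-in showing that design choice (plain dicts instead of `PEP` objects, and only two of the real columns):

```python
def table_rows(peps: list[dict]) -> list[str]:
    """Emit reST list-table rows; add a version column only when any PEP has one."""
    include_version = any(pep["python_version"] for pep in peps)
    lines = []
    for pep in peps:
        lines.append(f"   * - {pep['number']}")
        lines.append(f"     - {pep['authors']}")
        if include_version:
            # empty string still gets a cell: every row must have the same width
            lines.append(f"     - {pep['python_version']}")
    return lines
```

Deciding per-table rather than per-row keeps every row the same width, which reST's `list-table` directive requires.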
@@ -0,0 +1,12 @@
import importlib.util
import sys
from pathlib import Path

_ROOT_PATH = Path(__file__, "..", "..", "..").resolve()
PEP_ROOT = _ROOT_PATH / "peps"

# Import "check-peps.py" as "check_peps"
CHECK_PEPS_PATH = _ROOT_PATH / "check-peps.py"
spec = importlib.util.spec_from_file_location("check_peps", CHECK_PEPS_PATH)
sys.modules["check_peps"] = check_peps = importlib.util.module_from_spec(spec)
spec.loader.exec_module(check_peps)

@@ -0,0 +1,105 @@
import datetime as dt

import check_peps  # NoQA: inserted into sys.modules in conftest.py
import pytest


@pytest.mark.parametrize(
    "line",
    [
        # valid entries
        "01-Jan-2000",
        "29-Feb-2016",
        "31-Dec-2000",
        "01-Apr-2003",
        "01-Apr-2007",
        "01-Apr-2009",
        "01-Jan-1990",
    ],
)
def test_validate_created(line: str):
    warnings = [warning for (_, warning) in check_peps._validate_created(1, line)]
    assert warnings == [], warnings


@pytest.mark.parametrize(
    "date_str",
    [
        # valid entries
        "01-Jan-2000",
        "29-Feb-2016",
        "31-Dec-2000",
        "01-Apr-2003",
        "01-Apr-2007",
        "01-Apr-2009",
        "01-Jan-1990",
    ],
)
def test_date_checker_valid(date_str: str):
    warnings = [warning for (_, warning) in check_peps._date(1, date_str, "<Prefix>")]
    assert warnings == [], warnings


@pytest.mark.parametrize(
    "date_str",
    [
        # malformed
        "2000-01-01",
        "01 January 2000",
        "1 Jan 2000",
        "1-Jan-2000",
        "1-January-2000",
        "Jan-1-2000",
        "January 1 2000",
        "January 01 2000",
        "01/01/2000",
        "01/Jan/2000",  # 🇬🇧, 🇦🇺, 🇨🇦, 🇳🇿, 🇮🇪, ...
        "Jan/01/2000",  # 🇺🇸
        "1st January 2000",
        "The First day of January in the year of Our Lord Two Thousand",
        "Jan, 1, 2000",
        "2000-Jan-1",
        "2000-Jan-01",
        "2000-January-1",
        "2000-January-01",
        "00 Jan 2000",
        "00-Jan-2000",
    ],
)
def test_date_checker_malformed(date_str: str):
    warnings = [warning for (_, warning) in check_peps._date(1, date_str, "<Prefix>")]
    expected = f"<Prefix> must be a 'DD-mmm-YYYY' date: {date_str!r}"
    assert warnings == [expected], warnings


@pytest.mark.parametrize(
    "date_str",
    [
        # too early
        "31-Dec-1989",
        "01-Apr-1916",
        "01-Jan-0020",
        "01-Jan-0023",
    ],
)
def test_date_checker_too_early(date_str: str):
    warnings = [warning for (_, warning) in check_peps._date(1, date_str, "<Prefix>")]
    expected = f"<Prefix> must not be before Python was invented: {date_str!r}"
    assert warnings == [expected], warnings


@pytest.mark.parametrize(
    "date_str",
    [
        # the future
        "31-Dec-2999",
        "01-Jan-2042",
        "01-Jan-2100",
        (dt.datetime.now() + dt.timedelta(days=15)).strftime("%d-%b-%Y"),
        (dt.datetime.now() + dt.timedelta(days=100)).strftime("%d-%b-%Y"),
    ],
)
def test_date_checker_too_late(date_str: str):
    warnings = [warning for (_, warning) in check_peps._date(1, date_str, "<Prefix>")]
    expected = f"<Prefix> must not be in the future: {date_str!r}"
    assert warnings == [expected], warnings

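The date tests above pin down three failure modes: malformed strings, dates before 1990, and dates in the future. A hypothetical re-implementation of that check (the real logic lives in `check-peps.py` and may differ in detail) can be sketched with a strict pattern plus `strptime` for calendar validity:

```python
import datetime as dt
import re

# Strict DD-mmm-YYYY shape; strptime alone would also accept "1-Jan-2000".
DATE = re.compile(
    r"(0[1-9]|[12][0-9]|3[01])"
    r"-(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)"
    r"-(\d{4})"
)


def check_date(date_str: str) -> list[str]:
    """Return the warnings the tests above expect, or [] for a valid date."""
    if DATE.fullmatch(date_str) is None:
        return [f"must be a 'DD-mmm-YYYY' date: {date_str!r}"]
    try:
        date = dt.datetime.strptime(date_str, "%d-%b-%Y").date()
    except ValueError:  # e.g. 31-Apr: day out of range for the month
        return [f"must be a 'DD-mmm-YYYY' date: {date_str!r}"]
    if date < dt.date(1990, 1, 1):
        return [f"must not be before Python was invented: {date_str!r}"]
    if date > dt.date.today():
        return [f"must not be in the future: {date_str!r}"]
    return []
```

The regex rejects single-digit days up front because `%d` is lenient when parsing, which would otherwise let `"1-Jan-2000"` through.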
@@ -0,0 +1,30 @@
import check_peps  # NoQA: inserted into sys.modules in conftest.py
import pytest


@pytest.mark.parametrize(
    "line",
    [
        "http://www.python.org/dev/peps/pep-0000/",
        "https://www.python.org/dev/peps/pep-0000/",
        "http://peps.python.org/pep-0000/",
        "https://peps.python.org/pep-0000/",
    ],
)
def test_check_direct_links_pep(line: str):
    warnings = [warning for (_, warning) in check_peps.check_direct_links(1, line)]
    assert warnings == ["Use the :pep:`NNN` role to refer to PEPs"], warnings


@pytest.mark.parametrize(
    "line",
    [
        "http://www.rfc-editor.org/rfc/rfc2324",
        "https://www.rfc-editor.org/rfc/rfc2324",
        "http://datatracker.ietf.org/doc/html/rfc2324",
        "https://datatracker.ietf.org/doc/html/rfc2324",
    ],
)
def test_check_direct_links_rfc(line: str):
    warnings = [warning for (_, warning) in check_peps.check_direct_links(1, line)]
    assert warnings == ["Use the :rfc:`NNN` role to refer to RFCs"], warnings

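The direct-link tests above flag raw PEP and RFC URLs so authors use the `:pep:` and `:rfc:` roles instead. `direct_link_warnings` below is a hypothetical stand-in for that behaviour, covering only the URL shapes the tests exercise; the real `check_direct_links` in `check-peps.py` may match more patterns:

```python
import re

PEP_LINK = re.compile(r"https?://(www\.python\.org/dev/peps|peps\.python\.org)/")
RFC_LINK = re.compile(r"https?://(www\.rfc-editor\.org/rfc/|datatracker\.ietf\.org/doc/html/rfc)")


def direct_link_warnings(line: str) -> list[str]:
    """Warn when a line links to a PEP or RFC by URL instead of by role."""
    warnings = []
    if PEP_LINK.search(line):
        warnings.append("Use the :pep:`NNN` role to refer to PEPs")
    if RFC_LINK.search(line):
        warnings.append("Use the :rfc:`NNN` role to refer to RFCs")
    return warnings
```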
@@ -0,0 +1,238 @@
import check_peps  # NoQA: inserted into sys.modules in conftest.py
import pytest


@pytest.mark.parametrize(
    "line",
    [
        "Alice",
        "Alice,",
        "Alice, Bob, Charlie",
        "Alice,\nBob,\nCharlie",
        "Alice,\n Bob,\n Charlie",
        "Alice,\n Bob,\n Charlie",
        "Cardinal Ximénez",
        "Alice <alice@domain.example>",
        "Cardinal Ximénez <Cardinal.Ximenez@spanish.inquisition>",
    ],
    ids=repr,  # the default calls str and renders newlines.
)
def test_validate_author(line: str):
    warnings = [warning for (_, warning) in check_peps._validate_author(1, line)]
    assert warnings == [], warnings


@pytest.mark.parametrize(
    "line",
    [
        "Alice,\n Bob,\n Charlie",
        "Alice,\n Bob,\n Charlie",
        "Alice,\n Bob,\n Charlie",
        "Alice,\n Bob",
    ],
    ids=repr,  # the default calls str and renders newlines.
)
def test_validate_author_over__indented(line: str):
    warnings = [warning for (_, warning) in check_peps._validate_author(1, line)]
    assert {*warnings} == {"Author line must not be over-indented"}, warnings


@pytest.mark.parametrize(
    "line",
    [
        "Cardinal Ximénez\nCardinal Biggles\nCardinal Fang",
        "Cardinal Ximénez,\nCardinal Biggles\nCardinal Fang",
        "Cardinal Ximénez\nCardinal Biggles,\nCardinal Fang",
    ],
    ids=repr,  # the default calls str and renders newlines.
)
def test_validate_author_continuation(line: str):
    warnings = [warning for (_, warning) in check_peps._validate_author(1, line)]
    assert {*warnings} == {"Author continuation lines must end with a comma"}, warnings


@pytest.mark.parametrize(
    "line",
    [
        "Alice",
        "Cardinal Ximénez",
        "Alice <alice@domain.example>",
        "Cardinal Ximénez <Cardinal.Ximenez@spanish.inquisition>",
    ],
)
def test_validate_sponsor(line: str):
    warnings = [warning for (_, warning) in check_peps._validate_sponsor(1, line)]
    assert warnings == [], warnings


@pytest.mark.parametrize(
    "line",
    [
        "",
        "Alice, Bob, Charlie",
        "Alice, Bob, Charlie,",
        "Alice <alice@domain.example>",
        "Cardinal Ximénez <Cardinal.Ximenez@spanish.inquisition>",
    ],
)
def test_validate_delegate(line: str):
    warnings = [warning for (_, warning) in check_peps._validate_delegate(1, line)]
    assert warnings == [], warnings


@pytest.mark.parametrize(
    ("email", "expected_warnings"),
    [
        # ... entries must not contain multiple '...'
        ("Cardinal Ximénez <<", {"multiple <"}),
        ("Cardinal Ximénez <<<", {"multiple <"}),
        ("Cardinal Ximénez >>", {"multiple >"}),
        ("Cardinal Ximénez >>>", {"multiple >"}),
        ("Cardinal Ximénez <<<>>>", {"multiple <", "multiple >"}),
        ("Cardinal Ximénez @@", {"multiple @"}),
        ("Cardinal Ximénez <<@@@>", {"multiple <", "multiple @"}),
        ("Cardinal Ximénez <@@@>>", {"multiple >", "multiple @"}),
        ("Cardinal Ximénez <<@@>>", {"multiple <", "multiple >", "multiple @"}),
        # valid names
        ("Cardinal Ximénez", set()),
        (" Cardinal Ximénez", set()),
        ("\t\tCardinal Ximénez", set()),
        ("Cardinal Ximénez ", set()),
        ("Cardinal Ximénez\t\t", set()),
        ("Cardinal O'Ximénez", set()),
        ("Cardinal Ximénez, Inquisitor", set()),
        ("Cardinal Ximénez-Biggles", set()),
        ("Cardinal Ximénez-Biggles, Inquisitor", set()),
        ("Cardinal T. S. I. Ximénez", set()),
        # ... entries must have a valid 'Name'
        ("Cardinal_Ximénez", {"valid name"}),
        ("Cardinal Ximénez 3", {"valid name"}),
        ("~ Cardinal Ximénez ~", {"valid name"}),
        ("Cardinal Ximénez!", {"valid name"}),
        ("@Cardinal Ximénez", {"valid name"}),
        ("Cardinal_Ximénez <>", {"valid name"}),
        ("Cardinal Ximénez 3 <>", {"valid name"}),
        ("~ Cardinal Ximénez ~ <>", {"valid name"}),
        ("Cardinal Ximénez! <>", {"valid name"}),
        ("@Cardinal Ximénez <>", {"valid name"}),
        # ... entries must be formatted as 'Name <email@example.com>'
        ("Cardinal Ximénez<>", {"name <email>"}),
        ("Cardinal Ximénez<", {"name <email>"}),
        ("Cardinal Ximénez <", {"name <email>"}),
        ("Cardinal Ximénez <", {"name <email>"}),
        ("Cardinal Ximénez <>", {"name <email>"}),
        # ... entries must contain a valid email address (missing)
        ("Cardinal Ximénez <>", {"valid email"}),
        ("Cardinal Ximénez <> ", {"valid email"}),
        ("Cardinal Ximénez <@> ", {"valid email"}),
        ("Cardinal Ximénez <at> ", {"valid email"}),
        ("Cardinal Ximénez < at > ", {"valid email"}),
        # ... entries must contain a valid email address (local)
        ("Cardinal Ximénez <Cardinal.Ximénez@spanish.inquisition>", {"valid email"}),
        ("Cardinal Ximénez <Cardinal.Ximénez at spanish.inquisition>", {"valid email"}),
        ("Cardinal Ximénez <Cardinal.Ximenez AT spanish.inquisition>", {"valid email"}),
        ("Cardinal Ximénez <Cardinal.Ximenez @spanish.inquisition> ", {"valid email"}),
        ("Cardinal Ximénez <Cardinal Ximenez@spanish.inquisition> ", {"valid email"}),
        ("Cardinal Ximénez < Cardinal Ximenez @spanish.inquisition> ", {"valid email"}),
        ("Cardinal Ximénez <(Cardinal.Ximenez)@spanish.inquisition>", {"valid email"}),
        ("Cardinal Ximénez <Cardinal,Ximenez@spanish.inquisition>", {"valid email"}),
        ("Cardinal Ximénez <Cardinal:Ximenez@spanish.inquisition>", {"valid email"}),
        ("Cardinal Ximénez <Cardinal;Ximenez@spanish.inquisition>", {"valid email"}),
        (
            "Cardinal Ximénez <Cardinal><Ximenez@spanish.inquisition>",
            {"multiple <", "multiple >", "valid email"},
        ),
        (
            "Cardinal Ximénez <Cardinal@Ximenez@spanish.inquisition>",
            {"multiple @", "valid email"},
        ),
        (r"Cardinal Ximénez <Cardinal\Ximenez@spanish.inquisition>", {"valid email"}),
        ("Cardinal Ximénez <[Cardinal.Ximenez]@spanish.inquisition>", {"valid email"}),
        ('Cardinal Ximénez <"Cardinal"Ximenez"@spanish.inquisition>', {"valid email"}),
        ("Cardinal Ximenez <Cardinal;Ximenez@spanish.inquisition>", {"valid email"}),
        ("Cardinal Ximénez <Cardinal£Ximénez@spanish.inquisition>", {"valid email"}),
        ("Cardinal Ximénez <Cardinal§Ximenez@spanish.inquisition>", {"valid email"}),
        # ... entries must contain a valid email address (domain)
        (
            "Cardinal Ximénez <Cardinal.Ximenez@spanish+american.inquisition>",
            {"valid email"},
        ),
        ("Cardinal Ximénez <Cardinal.Ximenez@spani$h.inquisition>", {"valid email"}),
        ("Cardinal Ximénez <Cardinal.Ximenez@spanish.inquisitioñ>", {"valid email"}),
        (
            "Cardinal Ximénez <Cardinal.Ximenez@th£.spanish.inquisition>",
            {"valid email"},
        ),
        # valid name-emails
        ("Cardinal Ximénez <Cardinal.Ximenez@spanish.inquisition>", set()),
        ("Cardinal Ximénez <Cardinal.Ximenez at spanish.inquisition>", set()),
        ("Cardinal Ximénez <Cardinal_Ximenez@spanish.inquisition>", set()),
        ("Cardinal Ximénez <Cardinal-Ximenez@spanish.inquisition>", set()),
        ("Cardinal Ximénez <Cardinal!Ximenez@spanish.inquisition>", set()),
        ("Cardinal Ximénez <Cardinal#Ximenez@spanish.inquisition>", set()),
        ("Cardinal Ximénez <Cardinal$Ximenez@spanish.inquisition>", set()),
        ("Cardinal Ximénez <Cardinal%Ximenez@spanish.inquisition>", set()),
        ("Cardinal Ximénez <Cardinal&Ximenez@spanish.inquisition>", set()),
        ("Cardinal Ximénez <Cardinal'Ximenez@spanish.inquisition>", set()),
        ("Cardinal Ximénez <Cardinal*Ximenez@spanish.inquisition>", set()),
        ("Cardinal Ximénez <Cardinal+Ximenez@spanish.inquisition>", set()),
        ("Cardinal Ximénez <Cardinal/Ximenez@spanish.inquisition>", set()),
        ("Cardinal Ximénez <Cardinal=Ximenez@spanish.inquisition>", set()),
        ("Cardinal Ximénez <Cardinal?Ximenez@spanish.inquisition>", set()),
        ("Cardinal Ximénez <Cardinal^Ximenez@spanish.inquisition>", set()),
        ("Cardinal Ximénez <{Cardinal.Ximenez}@spanish.inquisition>", set()),
        ("Cardinal Ximénez <Cardinal|Ximenez@spanish.inquisition>", set()),
        ("Cardinal Ximénez <Cardinal~Ximenez@spanish.inquisition>", set()),
        ("Cardinal Ximénez <Cardinal.Ximenez@español.inquisition>", set()),
        ("Cardinal Ximénez <Cardinal.Ximenez at español.inquisition>", set()),
        ("Cardinal Ximénez <Cardinal.Ximenez@spanish-american.inquisition>", set()),
    ],
    # call str() on each parameterised value in the test ID.
    ids=str,
)
def test_email_checker(email: str, expected_warnings: set):
    warnings = [warning for (_, warning) in check_peps._email(1, email, "<Prefix>")]

    found_warnings = set()
    email = email.strip()

    if "multiple <" in expected_warnings:
        found_warnings.add("multiple <")
        expected = f"<Prefix> entries must not contain multiple '<': {email!r}"
        matching = [w for w in warnings if w == expected]
        assert matching == [expected], warnings

    if "multiple >" in expected_warnings:
        found_warnings.add("multiple >")
        expected = f"<Prefix> entries must not contain multiple '>': {email!r}"
        matching = [w for w in warnings if w == expected]
        assert matching == [expected], warnings

    if "multiple @" in expected_warnings:
        found_warnings.add("multiple @")
        expected = f"<Prefix> entries must not contain multiple '@': {email!r}"
        matching = [w for w in warnings if w == expected]
        assert matching == [expected], warnings

    if "valid name" in expected_warnings:
        found_warnings.add("valid name")
        expected = f"<Prefix> entries must begin with a valid 'Name': {email!r}"
        matching = [w for w in warnings if w == expected]
        assert matching == [expected], warnings

    if "name <email>" in expected_warnings:
        found_warnings.add("name <email>")
        expected = f"<Prefix> entries must be formatted as 'Name <email@example.com>': {email!r}"
        matching = [w for w in warnings if w == expected]
        assert matching == [expected], warnings

    if "valid email" in expected_warnings:
        found_warnings.add("valid email")
        expected = f"<Prefix> entries must contain a valid email address: {email!r}"
        matching = [w for w in warnings if w == expected]
        assert matching == [expected], warnings

    if expected_warnings == set():
        assert warnings == [], warnings

    assert found_warnings == expected_warnings

@@ -0,0 +1,408 @@
import check_peps  # NoQA: inserted into sys.modules in conftest.py
import pytest


@pytest.mark.parametrize(
    ("test_input", "expected"),
    [
        # capitalisation
        ("Header:", "Header"),
        ("header:", "header"),
        ("hEADER:", "hEADER"),
        ("hEaDeR:", "hEaDeR"),
        # trailing spaces
        ("Header: ", "Header"),
        ("Header:  ", "Header"),
        ("Header: \t", "Header"),
        # trailing content
        ("Header: Text", "Header"),
        ("Header: 123", "Header"),
        ("Header: !", "Header"),
        # separators
        ("Hyphenated-Header:", "Hyphenated-Header"),
    ],
)
def test_header_pattern(test_input, expected):
    assert check_peps.HEADER_PATTERN.match(test_input)[1] == expected


@pytest.mark.parametrize(
    "test_input",
    [
        # trailing content
        "Header:Text",
        "Header:123",
        "Header:!",
        # colon position
        "Header",
        "Header : ",
        "Header :",
        "SemiColonHeader;",
        # separators
        "Underscored_Header:",
        "Spaced Header:",
        "Plus+Header:",
    ],
)
def test_header_pattern_no_match(test_input):
    assert check_peps.HEADER_PATTERN.match(test_input) is None


def test_validate_required_headers():
    found_headers = dict.fromkeys(
        ("PEP", "Title", "Author", "Status", "Type", "Created")
    )
    warnings = [
        warning for (_, warning) in check_peps._validate_required_headers(found_headers)
    ]
    assert warnings == [], warnings


def test_validate_required_headers_missing():
    found_headers = dict.fromkeys(("PEP", "Title", "Author", "Type"))
    warnings = [
        warning for (_, warning) in check_peps._validate_required_headers(found_headers)
    ]
    assert warnings == [
        "Must have required header: Status",
        "Must have required header: Created",
    ], warnings


def test_validate_required_headers_order():
    found_headers = dict.fromkeys(
        ("PEP", "Title", "Sponsor", "Author", "Type", "Status", "Replaces", "Created")
    )
    warnings = [
        warning for (_, warning) in check_peps._validate_required_headers(found_headers)
    ]
    assert warnings == [
        "Headers must be in PEP 12 order. Correct order: PEP, Title, Author, Sponsor, Status, Type, Created, Replaces"
    ], warnings


@pytest.mark.parametrize(
    "line",
    [
        "!",
        "The Zen of Python",
        "A title that is exactly 79 characters long, but shorter than 80 characters long",
    ],
)
def test_validate_title(line: str):
    warnings = [warning for (_, warning) in check_peps._validate_title(1, line)]
    assert warnings == [], warnings


def test_validate_title_too_long():
    warnings = [warning for (_, warning) in check_peps._validate_title(1, "-" * 80)]
    assert warnings == ["PEP title must be less than 80 characters"], warnings


def test_validate_title_blank():
    warnings = [warning for (_, warning) in check_peps._validate_title(1, "")]
    assert warnings == ["PEP must have a title"], warnings


@pytest.mark.parametrize(
    "line",
    [
        "Accepted",
        "Active",
        "April Fool!",
        "Deferred",
        "Draft",
        "Final",
        "Provisional",
        "Rejected",
        "Superseded",
        "Withdrawn",
    ],
)
def test_validate_status_valid(line: str):
    warnings = [warning for (_, warning) in check_peps._validate_status(1, line)]
    assert warnings == [], warnings


@pytest.mark.parametrize(
    "line",
    [
        "Standards Track",
        "Informational",
        "Process",
        "accepted",
        "active",
        "april fool!",
        "deferred",
        "draft",
        "final",
        "provisional",
        "rejected",
        "superseded",
        "withdrawn",
    ],
)
def test_validate_status_invalid(line: str):
    warnings = [warning for (_, warning) in check_peps._validate_status(1, line)]
    assert warnings == ["Status must be a valid PEP status"], warnings


@pytest.mark.parametrize(
    "line",
    [
        "Standards Track",
        "Informational",
        "Process",
    ],
)
def test_validate_type_valid(line: str):
    warnings = [warning for (_, warning) in check_peps._validate_type(1, line)]
    assert warnings == [], warnings


@pytest.mark.parametrize(
    "line",
    [
        "standards track",
        "informational",
        "process",
        "Accepted",
        "Active",
        "April Fool!",
        "Deferred",
        "Draft",
        "Final",
        "Provisional",
        "Rejected",
        "Superseded",
        "Withdrawn",
    ],
)
def test_validate_type_invalid(line: str):
    warnings = [warning for (_, warning) in check_peps._validate_type(1, line)]
    assert warnings == ["Type must be a valid PEP type"], warnings


@pytest.mark.parametrize(
    ("line", "expected_warnings"),
    [
        # valid entries
        ("Governance", set()),
        ("Packaging", set()),
        ("Typing", set()),
        ("Release", set()),
        ("Governance, Packaging", set()),
        ("Packaging, Typing", set()),
        # duplicates
        ("Governance, Governance", {"duplicates"}),
        ("Release, Release", {"duplicates"}),
        ("Packaging, Packaging", {"duplicates"}),
        ("Spam, Spam", {"duplicates", "valid"}),
        ("lobster, lobster", {"duplicates", "capitalisation", "valid"}),
        ("governance, governance", {"duplicates", "capitalisation"}),
        # capitalisation
        ("governance", {"capitalisation"}),
        ("packaging", {"capitalisation"}),
        ("typing", {"capitalisation"}),
        ("release", {"capitalisation"}),
        ("Governance, release", {"capitalisation"}),
        # validity
        ("Spam", {"valid"}),
        ("lobster", {"capitalisation", "valid"}),
        # sorted
        ("Packaging, Governance", {"sorted"}),
        ("Typing, Release", {"sorted"}),
        ("Release, Governance", {"sorted"}),
        ("spam, packaging", {"capitalisation", "valid", "sorted"}),
    ],
    # call str() on each parameterised value in the test ID.
    ids=str,
)
def test_validate_topic(line: str, expected_warnings: set):
    warnings = [warning for (_, warning) in check_peps._validate_topic(1, line)]

    found_warnings = set()

    if "duplicates" in expected_warnings:
        found_warnings.add("duplicates")
        expected = "Topic must not contain duplicates"
        matching = [w for w in warnings if w == expected]
        assert matching == [expected], warnings

    if "capitalisation" in expected_warnings:
        found_warnings.add("capitalisation")
        expected = "Topic must be properly capitalised (Title Case)"
        matching = [w for w in warnings if w == expected]
        assert matching == [expected], warnings

    if "valid" in expected_warnings:
        found_warnings.add("valid")
        expected = "Topic must be for a valid sub-index"
        matching = [w for w in warnings if w == expected]
        assert matching == [expected], warnings

    if "sorted" in expected_warnings:
        found_warnings.add("sorted")
        expected = "Topic must be sorted lexicographically"
        matching = [w for w in warnings if w == expected]
        assert matching == [expected], warnings

    if expected_warnings == set():
        assert warnings == [], warnings

    assert found_warnings == expected_warnings


def test_validate_content_type_valid():
    warnings = [
        warning for (_, warning) in check_peps._validate_content_type(1, "text/x-rst")
    ]
    assert warnings == [], warnings


@pytest.mark.parametrize(
    "line",
    [
        "text/plain",
        "text/markdown",
        "text/csv",
        "text/rtf",
        "text/javascript",
        "text/html",
        "text/xml",
    ],
)
def test_validate_content_type_invalid(line: str):
    warnings = [warning for (_, warning) in check_peps._validate_content_type(1, line)]
    assert warnings == ["Content-Type must be 'text/x-rst'"], warnings


@pytest.mark.parametrize(
    "line",
    [
        "0, 1, 8, 12, 20,",
        "101, 801,",
        "3099, 9999",
    ],
)
def test_validate_pep_references(line: str):
    warnings = [
        warning for (_, warning) in check_peps._validate_pep_references(1, line)
    ]
    assert warnings == [], warnings


@pytest.mark.parametrize(
    "line",
    [
        "0,1,8, 12, 20,",
        "101,801,",
        "3099, 9998,9999",
    ],
)
def test_validate_pep_references_separators(line: str):
    warnings = [
        warning for (_, warning) in check_peps._validate_pep_references(1, line)
    ]
    assert warnings == [
        "PEP references must be separated by comma-spaces (', ')"
    ], warnings


@pytest.mark.parametrize(
    ("line", "expected_warnings"),
    [
        # valid entries
        ("1.0, 2.4, 2.7, 2.8, 3.0, 3.1, 3.4, 3.7, 3.11, 3.14", set()),
        ("2.x", set()),
        ("3.x", set()),
        ("3.0.1", set()),
        # segments
        ("", {"segments"}),
        ("1", {"segments"}),
        ("1.2.3.4", {"segments"}),
        # major
        ("0.0", {"major"}),
        ("4.0", {"major"}),
        ("9.0", {"major"}),
        # minor number
        ("3.a", {"minor numeric"}),
        ("3.spam", {"minor numeric"}),
        ("3.0+", {"minor numeric"}),
        ("3.0-9", {"minor numeric"}),
        ("9.Z", {"major", "minor numeric"}),
        # minor leading zero
        ("3.01", {"minor zero"}),
        ("0.00", {"major", "minor zero"}),
        # micro empty
        ("3.x.1", {"micro empty"}),
        ("9.x.1", {"major", "micro empty"}),
        # micro leading zero
        ("3.3.0", {"micro zero"}),
        ("3.3.00", {"micro zero"}),
        ("3.3.01", {"micro zero"}),
        ("3.0.0", {"micro zero"}),
        ("3.00.0", {"minor zero", "micro zero"}),
        ("0.00.0", {"major", "minor zero", "micro zero"}),
        # micro number
        ("3.0.a", {"micro numeric"}),
        ("0.3.a", {"major", "micro numeric"}),
    ],
    # call str() on each parameterised value in the test ID.
    ids=str,
)
def test_validate_python_version(line: str, expected_warnings: set):
    warnings = [
        warning for (_, warning) in check_peps._validate_python_version(1, line)
    ]

    found_warnings = set()

    if "segments" in expected_warnings:
        found_warnings.add("segments")
        expected = f"Python-Version must have two or three segments: {line}"
        matching = [w for w in warnings if w == expected]
        assert matching == [expected], warnings

    if "major" in expected_warnings:
        found_warnings.add("major")
        expected = f"Python-Version major part must be 1, 2, or 3: {line}"
        matching = [w for w in warnings if w == expected]
        assert matching == [expected], warnings

    if "minor numeric" in expected_warnings:
        found_warnings.add("minor numeric")
        expected = f"Python-Version minor part must be numeric: {line}"
        matching = [w for w in warnings if w == expected]
        assert matching == [expected], warnings

    if "minor zero" in expected_warnings:
        found_warnings.add("minor zero")
        expected = f"Python-Version minor part must not have leading zeros: {line}"
        matching = [w for w in warnings if w == expected]
        assert matching == [expected], warnings

    if "micro empty" in expected_warnings:
        found_warnings.add("micro empty")
        expected = (
            f"Python-Version micro part must be empty if minor part is 'x': {line}"
        )
        matching = [w for w in warnings if w == expected]
        assert matching == [expected], warnings

    if "micro zero" in expected_warnings:
        found_warnings.add("micro zero")
        expected = f"Python-Version micro part must not have leading zeros: {line}"
        matching = [w for w in warnings if w == expected]
        assert matching == [expected], warnings

    if "micro numeric" in expected_warnings:
        found_warnings.add("micro numeric")
        expected = f"Python-Version micro part must be numeric: {line}"
        matching = [w for w in warnings if w == expected]
        assert matching == [expected], warnings

    if expected_warnings == set():
        assert warnings == [], warnings

    assert found_warnings == expected_warnings
@@ -0,0 +1,48 @@
from pathlib import Path

import check_peps  # NoQA: inserted into sys.modules in conftest.py

PEP_9002 = Path(__file__).parent.parent / "peps" / "pep-9002.rst"


def test_with_fake_pep():
    content = PEP_9002.read_text(encoding="utf-8").splitlines()
    warnings = list(check_peps.check_peps(PEP_9002, content))
    assert warnings == [
        (1, "PEP must begin with the 'PEP:' header"),
        (9, "Must not have duplicate header: Sponsor "),
        (10, "Must not have invalid header: Horse-Guards"),
        (1, "Must have required header: PEP"),
        (1, "Must have required header: Type"),
        (
            1,
            "Headers must be in PEP 12 order. Correct order: Title, Version, "
            "Author, Sponsor, BDFL-Delegate, Discussions-To, Status, Topic, "
            "Content-Type, Requires, Created, Python-Version, Post-History, "
            "Resolution",
        ),
        (4, "Author continuation lines must end with a comma"),
        (5, "Author line must not be over-indented"),
        (7, "Python-Version major part must be 1, 2, or 3: 4.0"),
        (
            8,
            "Sponsor entries must begin with a valid 'Name': "
            r"'Sponsor:\nHorse-Guards: Parade'",
        ),
        (11, "Created must be a 'DD-mmm-YYYY' date: '1-Jan-1989'"),
        (12, "Delegate entries must begin with a valid 'Name': 'Barry!'"),
        (13, "Status must be a valid PEP status"),
        (14, "Topic must not contain duplicates"),
        (14, "Topic must be properly capitalised (Title Case)"),
        (14, "Topic must be for a valid sub-index"),
        (14, "Topic must be sorted lexicographically"),
        (15, "Content-Type must be 'text/x-rst'"),
        (16, "PEP references must be separated by comma-spaces (', ')"),
        (17, "Discussions-To must be a valid thread URL or mailing list"),
        (18, "Post-History must be a 'DD-mmm-YYYY' date: '2-Feb-2000'"),
        (18, "Post-History must be a valid thread URL"),
        (19, "Post-History must be a 'DD-mmm-YYYY' date: '3-Mar-2001'"),
        (19, "Post-History must be a valid thread URL"),
        (20, "Resolution must be a valid thread URL"),
        (23, "Use the :pep:`NNN` role to refer to PEPs"),
    ]
@@ -0,0 +1,108 @@
import check_peps  # NoQA: inserted into sys.modules in conftest.py
import pytest


@pytest.mark.parametrize(
    "line",
    [
        "PEP: 0",
        "PEP: 12",
    ],
)
def test_validate_pep_number(line: str):
    warnings = [warning for (_, warning) in check_peps._validate_pep_number(line)]
    assert warnings == [], warnings


@pytest.mark.parametrize(
    "line",
    [
        "0",
        "PEP:12",
        "PEP 0",
        "PEP 12",
        "PEP:0",
    ],
)
def test_validate_pep_number_invalid_header(line: str):
    warnings = [warning for (_, warning) in check_peps._validate_pep_number(line)]
    assert warnings == ["PEP must begin with the 'PEP:' header"], warnings


@pytest.mark.parametrize(
    ("pep_number", "expected_warnings"),
    [
        # valid entries
        ("0", set()),
        ("1", set()),
        ("12", set()),
        ("20", set()),
        ("101", set()),
        ("801", set()),
        ("3099", set()),
        ("9999", set()),
        # empty
        ("", {"not blank"}),
        # leading zeros
        ("01", {"leading zeros"}),
        ("001", {"leading zeros"}),
        ("0001", {"leading zeros"}),
        ("00001", {"leading zeros"}),
        # non-numeric
        ("a", {"non-numeric"}),
        ("123abc", {"non-numeric"}),
        ("0123A", {"leading zeros", "non-numeric"}),
        ("０", {"non-numeric"}),
        ("１０１", {"non-numeric"}),
        ("９９９９", {"non-numeric"}),
        ("𝟎", {"non-numeric"}),
        ("𝟘", {"non-numeric"}),
        ("𝟏𝟚", {"non-numeric"}),
        ("𝟸𝟬", {"non-numeric"}),
        ("-1", {"non-numeric"}),
        ("+1", {"non-numeric"}),
        # out of bounds
        ("10000", {"range"}),
        ("54321", {"range"}),
        ("99999", {"range"}),
        ("32768", {"range"}),
    ],
    # call str() on each parameterised value in the test ID.
    ids=str,
)
def test_pep_num_checker(pep_number: str, expected_warnings: set):
    warnings = [
        warning for (_, warning) in check_peps._pep_num(1, pep_number, "<Prefix>")
    ]

    found_warnings = set()
    pep_number = pep_number.strip()

    if "not blank" in expected_warnings:
        found_warnings.add("not blank")
        expected = f"<Prefix> must not be blank: {pep_number!r}"
        matching = [w for w in warnings if w == expected]
        assert matching == [expected], warnings

    if "leading zeros" in expected_warnings:
        found_warnings.add("leading zeros")
        expected = f"<Prefix> must not contain leading zeros: {pep_number!r}"
        matching = [w for w in warnings if w == expected]
        assert matching == [expected], warnings

    if "non-numeric" in expected_warnings:
        found_warnings.add("non-numeric")
        expected = f"<Prefix> must be numeric: {pep_number!r}"
        matching = [w for w in warnings if w == expected]
        assert matching == [expected], warnings

    if "range" in expected_warnings:
        found_warnings.add("range")
        expected = f"<Prefix> must be between 0 and 9999: {pep_number!r}"
        matching = [w for w in warnings if w == expected]
        assert matching == [expected], warnings

    if expected_warnings == set():
        assert warnings == [], warnings

    assert found_warnings == expected_warnings
@@ -0,0 +1,305 @@
import check_peps # NoQA: inserted into sys.modules in conftest.py
|
||||
import pytest
|
||||
|
||||
|
||||
@pytest.mark.parametrize(
|
||||
"line",
|
||||
[
|
||||
"list-name@python.org",
|
||||
"distutils-sig@python.org",
|
||||
"csv@python.org",
|
||||
"python-3000@python.org",
|
||||
"ipaddr-py-dev@googlegroups.com",
|
||||
"python-tulip@googlegroups.com",
|
||||
"https://discuss.python.org/t/thread-name/123456",
|
||||
"https://discuss.python.org/t/thread-name/123456/",
|
||||
"https://discuss.python.org/t/thread_name/123456",
|
||||
"https://discuss.python.org/t/thread_name/123456/",
|
||||
"https://discuss.python.org/t/123456/",
|
||||
"https://discuss.python.org/t/123456",
|
||||
],
|
||||
)
|
||||
def test_validate_discussions_to_valid(line: str):
|
||||
warnings = [
|
||||
warning for (_, warning) in check_peps._validate_discussions_to(1, line)
|
||||
]
|
||||
assert warnings == [], warnings
|
||||
|
||||
|
||||
@pytest.mark.parametrize(
|
||||
"line",
|
||||
[
|
||||
"$pecial+chars@python.org",
|
||||
"a-discussions-to-list!@googlegroups.com",
|
||||
],
|
||||
)
|
||||
def test_validate_discussions_to_list_name(line: str):
|
||||
warnings = [
|
||||
warning for (_, warning) in check_peps._validate_discussions_to(1, line)
|
||||
]
|
||||
assert warnings == ["Discussions-To must be a valid mailing list"], warnings
|
||||
|
||||
|
||||
@pytest.mark.parametrize(
|
||||
"line",
|
||||
[
|
||||
"list-name@python.org.uk",
|
||||
"distutils-sig@mail-server.example",
|
||||
],
|
||||
)
|
||||
def test_validate_discussions_to_invalid_list_domain(line: str):
|
||||
warnings = [
|
||||
warning for (_, warning) in check_peps._validate_discussions_to(1, line)
|
||||
]
|
||||
assert warnings == [
|
||||
"Discussions-To must be a valid thread URL or mailing list"
|
||||
], warnings
|
||||
|
||||
|
||||
@pytest.mark.parametrize(
|
||||
"body",
|
||||
[
|
||||
"",
|
||||
(
|
||||
"01-Jan-2001, 02-Feb-2002,\n "
|
||||
"03-Mar-2003, 04-Apr-2004,\n "
|
||||
"05-May-2005,"
|
||||
),
|
||||
(
|
||||
"`01-Jan-2000 <https://mail.python.org/pipermail/list-name/0000-Month/0123456.html>`__,\n "
|
||||
"`11-Mar-2005 <https://mail.python.org/archives/list/list-name@python.org/thread/abcdef0123456789/>`__,\n "
|
||||
"`21-May-2010 <https://discuss.python.org/t/thread-name/123456/654321>`__,\n "
|
||||
"`31-Jul-2015 <https://discuss.python.org/t/123456>`__,"
|
||||
),
|
||||
"01-Jan-2001, `02-Feb-2002 <https://discuss.python.org/t/123456>`__,\n03-Mar-2003",
|
||||
],
|
||||
)
|
||||
def test_validate_post_history_valid(body: str):
|
||||
warnings = [warning for (_, warning) in check_peps._validate_post_history(1, body)]
|
||||
assert warnings == [], warnings
|
||||
|
||||
|
||||
@pytest.mark.parametrize(
|
||||
"line",
|
||||
[
|
||||
"https://mail.python.org/archives/list/list-name@python.org/thread/abcXYZ123",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/thread/abcXYZ123/",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/message/abcXYZ123",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/message/abcXYZ123/",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/message/abcXYZ123#Anchor",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/message/abcXYZ123/#Anchor",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/message/abcXYZ123#Anchor123",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/message/abcXYZ123/#Anchor123",
|
||||
],
|
||||
)
|
||||
def test_validate_resolution_valid(line: str):
|
||||
warnings = [warning for (_, warning) in check_peps._validate_resolution(1, line)]
|
||||
assert warnings == [], warnings
|
||||
|
||||
|
||||
@pytest.mark.parametrize(
|
||||
"line",
|
||||
[
|
||||
"https://mail.python.org/archives/list/list-name@python.org/thread",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/message",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/thread/",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/message/",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/thread/abcXYZ123#anchor",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/thread/abcXYZ123/#anchor",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/message/#abcXYZ123",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/message/#abcXYZ123/",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/spam/abcXYZ123",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/spam/abcXYZ123/",
|
||||
],
|
||||
)
|
||||
def test_validate_resolution_invalid(line: str):
|
||||
warnings = [warning for (_, warning) in check_peps._validate_resolution(1, line)]
|
||||
assert warnings == ["Resolution must be a valid thread URL"], warnings
|
||||
|
||||
|
||||
@pytest.mark.parametrize(
|
||||
"thread_url",
|
||||
[
|
||||
"https://discuss.python.org/t/thread-name/123456",
|
||||
"https://discuss.python.org/t/thread-name/123456/",
|
||||
"https://discuss.python.org/t/thread_name/123456",
|
||||
"https://discuss.python.org/t/thread_name/123456/",
|
||||
"https://discuss.python.org/t/thread-name/123456/654321/",
|
||||
"https://discuss.python.org/t/thread-name/123456/654321",
|
||||
"https://discuss.python.org/t/123456",
|
||||
"https://discuss.python.org/t/123456/",
|
||||
"https://discuss.python.org/t/123456/654321/",
|
||||
"https://discuss.python.org/t/123456/654321",
|
||||
"https://discuss.python.org/t/1",
|
||||
"https://discuss.python.org/t/1/",
|
||||
"https://mail.python.org/pipermail/list-name/0000-Month/0123456.html",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/thread/abcXYZ123",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/thread/abcXYZ123/",
|
||||
],
|
||||
)
|
||||
def test_thread_checker_valid(thread_url: str):
|
||||
warnings = [
|
||||
warning for (_, warning) in check_peps._thread(1, thread_url, "<Prefix>")
|
||||
]
|
||||
assert warnings == [], warnings
|
||||
|
||||
|
||||
@pytest.mark.parametrize(
|
||||
"thread_url",
|
||||
[
|
||||
"http://link.example",
|
||||
"list-name@python.org",
|
||||
"distutils-sig@python.org",
|
||||
"csv@python.org",
|
||||
"python-3000@python.org",
|
||||
"ipaddr-py-dev@googlegroups.com",
|
||||
"python-tulip@googlegroups.com",
|
||||
"https://link.example",
|
||||
"https://discuss.python.org",
|
||||
"https://discuss.python.org/",
|
||||
"https://discuss.python.org/c/category",
|
||||
"https://discuss.python.org/t/thread_name/123456//",
|
||||
"https://discuss.python.org/t/thread+name/123456",
|
||||
"https://discuss.python.org/t/thread+name/123456#",
|
||||
"https://discuss.python.org/t/thread+name/123456/#",
|
||||
"https://discuss.python.org/t/thread+name/123456/#anchor",
|
||||
"https://discuss.python.org/t/thread+name/",
|
||||
"https://discuss.python.org/t/thread+name",
|
||||
"https://discuss.python.org/t/thread-name/123abc",
|
||||
"https://discuss.python.org/t/thread-name/123abc/",
|
||||
"https://discuss.python.org/t/thread-name/123456/123abc",
|
||||
"https://discuss.python.org/t/thread-name/123456/123abc/",
|
||||
"https://discuss.python.org/t/123/456/789",
|
||||
"https://discuss.python.org/t/123/456/789/",
|
||||
"https://discuss.python.org/t/#/",
|
||||
"https://discuss.python.org/t/#",
|
||||
"https://mail.python.org/pipermail/list+name/0000-Month/0123456.html",
|
||||
"https://mail.python.org/pipermail/list-name/YYYY-Month/0123456.html",
|
||||
"https://mail.python.org/pipermail/list-name/0123456/0123456.html",
|
||||
"https://mail.python.org/pipermail/list-name/0000-Month/0123456",
|
||||
"https://mail.python.org/pipermail/list-name/0000-Month/0123456/",
|
||||
"https://mail.python.org/pipermail/list-name/0000-Month/",
|
||||
"https://mail.python.org/pipermail/list-name/0000-Month",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/thread/abcXYZ123#anchor",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/thread/abcXYZ123/#anchor",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/message/abcXYZ123",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/message/abcXYZ123/",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/message/abcXYZ123#anchor",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/message/abcXYZ123/#anchor",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/spam/abcXYZ123",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/spam/abcXYZ123/",
|
||||
],
|
||||
)
|
||||
def test_thread_checker_invalid(thread_url: str):
|
||||
warnings = [
|
||||
warning for (_, warning) in check_peps._thread(1, thread_url, "<Prefix>")
|
||||
]
|
||||
assert warnings == ["<Prefix> must be a valid thread URL"], warnings
|
||||
|
||||
|
||||
@pytest.mark.parametrize(
|
||||
"thread_url",
|
||||
[
|
||||
"https://mail.python.org/archives/list/list-name@python.org/thread/abcXYZ123",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/thread/abcXYZ123/",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/message/abcXYZ123",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/message/abcXYZ123/",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/message/abcXYZ123#Anchor",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/message/abcXYZ123/#Anchor",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/message/abcXYZ123#Anchor123",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/message/abcXYZ123/#Anchor123",
|
||||
],
|
||||
)
|
||||
def test_thread_checker_valid_allow_message(thread_url: str):
|
||||
warnings = [
|
||||
warning
|
||||
for (_, warning) in check_peps._thread(
|
||||
1, thread_url, "<Prefix>", allow_message=True
|
||||
)
|
||||
]
|
||||
assert warnings == [], warnings
|
||||
|
||||
|
||||
@pytest.mark.parametrize(
|
||||
"thread_url",
|
||||
[
|
||||
"https://mail.python.org/archives/list/list-name@python.org/thread",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/message",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/thread/",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/message/",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/thread/abcXYZ123#anchor",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/thread/abcXYZ123/#anchor",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/message/#abcXYZ123",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/message/#abcXYZ123/",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/spam/abcXYZ123",
|
||||
"https://mail.python.org/archives/list/list-name@python.org/spam/abcXYZ123/",
|
||||
],
|
||||
)
|
||||
def test_thread_checker_invalid_allow_message(thread_url: str):
|
||||
warnings = [
|
||||
warning
|
||||
for (_, warning) in check_peps._thread(
|
||||
1, thread_url, "<Prefix>", allow_message=True
|
||||
)
|
||||
]
|
||||
    assert warnings == ["<Prefix> must be a valid thread URL"], warnings


@pytest.mark.parametrize(
    "thread_url",
    [
        "list-name@python.org",
        "distutils-sig@python.org",
        "csv@python.org",
        "python-3000@python.org",
        "ipaddr-py-dev@googlegroups.com",
        "python-tulip@googlegroups.com",
        "https://discuss.python.org/t/thread-name/123456",
        "https://discuss.python.org/t/thread-name/123456/",
        "https://discuss.python.org/t/thread_name/123456",
        "https://discuss.python.org/t/thread_name/123456/",
        "https://discuss.python.org/t/123456/",
        "https://discuss.python.org/t/123456",
    ],
)
def test_thread_checker_valid_discussions_to(thread_url: str):
    warnings = [
        warning
        for (_, warning) in check_peps._thread(
            1, thread_url, "<Prefix>", discussions_to=True
        )
    ]
    assert warnings == [], warnings


@pytest.mark.parametrize(
    "thread_url",
    [
        "https://discuss.python.org/t/thread-name/123456/000",
        "https://discuss.python.org/t/thread-name/123456/000/",
        "https://discuss.python.org/t/thread_name/123456/000",
        "https://discuss.python.org/t/thread_name/123456/000/",
        "https://discuss.python.org/t/123456/000/",
        "https://discuss.python.org/t/12345656/000",
        "https://discuss.python.org/t/thread-name",
        "https://discuss.python.org/t/thread_name",
        "https://discuss.python.org/t/thread+name",
    ],
)
def test_thread_checker_invalid_discussions_to(thread_url: str):
    warnings = [
        warning
        for (_, warning) in check_peps._thread(
            1, thread_url, "<Prefix>", discussions_to=True
        )
    ]
    assert warnings == ["<Prefix> must be a valid thread URL"], warnings


def test_thread_checker_allow_message_discussions_to():
    with pytest.raises(ValueError, match="cannot both be True"):
        list(
            check_peps._thread(
                1, "", "<Prefix>", allow_message=True, discussions_to=True
            )
        )
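For reference, the rule these valid/invalid cases encode can be sketched as a small standalone validator. This is a hypothetical re-implementation, not the actual `check_peps._thread` code: mailing-list addresses and `discuss.python.org` topic links (an optional slug plus the numeric topic ID, with no trailing post number) pass; everything else fails.

```python
import re

# Topic link: optional slug (which must contain at least one non-digit
# character) followed by the numeric topic ID. A trailing post number makes
# it a message link, which is not a valid Discussions-To target.
THREAD_URL = re.compile(
    r"https://discuss\.python\.org/t/"
    r"(?:[\w\-]*[^\d/][\w\-]*/)?"  # optional slug segment
    r"\d+/?$"                      # topic ID, optional trailing slash
)
MAILING_LIST = re.compile(r"[\w.\-]+@(?:python\.org|googlegroups\.com)$")


def is_valid_discussions_to(url: str) -> bool:
    """Hypothetical check mirroring the test cases above."""
    return bool(MAILING_LIST.fullmatch(url) or THREAD_URL.match(url))
```

For example, `is_valid_discussions_to("https://discuss.python.org/t/123456")` accepts a bare topic ID, while the message link `".../t/thread-name/123456/000"` is rejected.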
@@ -1,27 +1,29 @@
-from pathlib import Path
+import datetime as dt
 
 from pep_sphinx_extensions.pep_processor.transforms import pep_footer
 
+from ...conftest import PEP_ROOT
+
 
 def test_add_source_link():
-    out = pep_footer._add_source_link(Path("pep-0008.txt"))
+    out = pep_footer._add_source_link(PEP_ROOT / "pep-0008.rst")
 
-    assert "https://github.com/python/peps/blob/main/pep-0008.txt" in str(out)
+    assert "https://github.com/python/peps/blob/main/peps/pep-0008.rst" in str(out)
 
 
 def test_add_commit_history_info():
-    out = pep_footer._add_commit_history_info(Path("pep-0008.txt"))
+    out = pep_footer._add_commit_history_info(PEP_ROOT / "pep-0008.rst")
 
     assert str(out).startswith(
         "<paragraph>Last modified: "
-        '<reference refuri="https://github.com/python/peps/commits/main/pep-0008.txt">'
+        '<reference refuri="https://github.com/python/peps/commits/main/pep-0008.rst">'
     )
     # A variable timestamp comes next, don't test that
     assert str(out).endswith("</reference></paragraph>")
 
 
 def test_add_commit_history_info_invalid():
-    out = pep_footer._add_commit_history_info(Path("pep-not-found.txt"))
+    out = pep_footer._add_commit_history_info(PEP_ROOT / "pep-not-found.rst")
 
     assert str(out) == "<paragraph/>"
@@ -31,4 +33,4 @@ def test_get_last_modified_timestamps():
 
     assert len(out) >= 585
     # Should be a Unix timestamp and at least this
-    assert out["pep-0008.txt"] >= 1643124055
+    assert dt.datetime.fromisoformat(out["pep-0008"]).timestamp() >= 1643124055
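The rewritten assertion above compares an ISO-8601 string via `datetime.fromisoformat(...).timestamp()` rather than a raw Unix timestamp. A minimal illustration of the conversion (the timestamp value here is illustrative, chosen to match the threshold in the test):

```python
import datetime as dt

# An offset-aware ISO-8601 string converts to exact epoch seconds,
# independent of the local timezone of the machine running the test.
moment = dt.datetime.fromisoformat("2022-01-25T15:20:55+00:00")
epoch = moment.timestamp()
```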
@@ -18,7 +18,7 @@ from pep_sphinx_extensions.pep_zero_generator.constants import (
 
 
 @pytest.mark.parametrize(
-    "test_input, expected",
+    ("test_input", "expected"),
     [
         ("my-mailing-list@example.com", "my-mailing-list@example.com"),
         ("python-tulip@googlegroups.com", "https://groups.google.com/g/python-tulip"),
@@ -37,7 +37,7 @@ def test_generate_list_url(test_input, expected):
 
 
 @pytest.mark.parametrize(
-    "test_input, expected",
+    ("test_input", "expected"),
     [
         (
             "https://mail.python.org/pipermail/python-3000/2006-November/004190.html",
@@ -72,7 +72,7 @@ def test_process_pretty_url(test_input, expected):
 
 
 @pytest.mark.parametrize(
-    "test_input, expected",
+    ("test_input", "expected"),
     [
         (
             "https://example.com/",
@@ -94,7 +94,7 @@ def test_process_pretty_url_invalid(test_input, expected):
 
 
 @pytest.mark.parametrize(
-    "test_input, expected",
+    ("test_input", "expected"),
     [
         (
             "https://mail.python.org/pipermail/python-3000/2006-November/004190.html",
@@ -129,7 +129,7 @@ def test_make_link_pretty(test_input, expected):
 
 
 @pytest.mark.parametrize(
-    "test_input, expected",
+    ("test_input", "expected"),
     [
         (STATUS_ACCEPTED, "Normative proposal accepted for implementation"),
         (STATUS_ACTIVE, "Currently valid informational guidance, or an in-use process"),
@@ -155,7 +155,7 @@ def test_abbreviate_status_unknown():
 
 
 @pytest.mark.parametrize(
-    "test_input, expected",
+    ("test_input", "expected"),
     [
         (
             TYPE_INFO,
@@ -5,7 +5,7 @@ from pep_sphinx_extensions.pep_processor.transforms import pep_zero
 
 
 @pytest.mark.parametrize(
-    "test_input, expected",
+    ("test_input", "expected"),
     [
         (
            nodes.reference(
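The hunks above repeatedly change `"test_input, expected"` to `("test_input", "expected")`. Both spellings are accepted by `@pytest.mark.parametrize`: a comma-separated string is split into argument names, while a sequence is used as-is (the tuple form is the style many linters prefer). A rough sketch of the string handling, as an assumption about pytest's documented behavior rather than its exact code:

```python
# A comma-separated argnames string is split on commas and stripped of
# whitespace, yielding the same names the tuple form states explicitly.
def split_argnames(argnames):
    if isinstance(argnames, str):
        return [name.strip() for name in argnames.split(",") if name.strip()]
    return list(argnames)
```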
@@ -1,69 +0,0 @@
-import pytest
-
-from pep_sphinx_extensions.pep_zero_generator import author
-from pep_sphinx_extensions.tests.utils import AUTHORS_OVERRIDES
-
-
-@pytest.mark.parametrize(
-    "test_input, expected",
-    [
-        (
-            ("First Last", "first@example.com"),
-            author.Author(
-                last_first="Last, First", nick="Last", email="first@example.com"
-            ),
-        ),
-        (
-            ("Guido van Rossum", "guido@example.com"),
-            author.Author(
-                last_first="van Rossum, Guido (GvR)",
-                nick="GvR",
-                email="guido@example.com",
-            ),
-        ),
-        (
-            ("Hugo van Kemenade", "hugo@example.com"),
-            author.Author(
-                last_first="van Kemenade, Hugo",
-                nick="van Kemenade",
-                email="hugo@example.com",
-            ),
-        ),
-        (
-            ("Eric N. Vander Weele", "eric@example.com"),
-            author.Author(
-                last_first="Vander Weele, Eric N.",
-                nick="Vander Weele",
-                email="eric@example.com",
-            ),
-        ),
-        (
-            ("Mariatta", "mariatta@example.com"),
-            author.Author(
-                last_first="Mariatta", nick="Mariatta", email="mariatta@example.com"
-            ),
-        ),
-        (
-            ("First Last Jr.", "first@example.com"),
-            author.Author(
-                last_first="Last, First, Jr.", nick="Last", email="first@example.com"
-            ),
-        ),
-        pytest.param(
-            ("First Last", "first at example.com"),
-            author.Author(
-                last_first="Last, First", nick="Last", email="first@example.com"
-            ),
-            marks=pytest.mark.xfail,
-        ),
-    ],
-)
-def test_parse_author_email(test_input, expected):
-    out = author.parse_author_email(test_input, AUTHORS_OVERRIDES)
-
-    assert out == expected
-
-
-def test_parse_author_email_empty_name():
-    with pytest.raises(ValueError, match="Name is empty!"):
-        author.parse_author_email(("", "user@example.com"), AUTHORS_OVERRIDES)
@@ -1,9 +1,6 @@
-from pathlib import Path
-
 import pytest
 
 from pep_sphinx_extensions.pep_zero_generator import parser
-from pep_sphinx_extensions.pep_zero_generator.author import Author
 from pep_sphinx_extensions.pep_zero_generator.constants import (
     STATUS_ACCEPTED,
     STATUS_ACTIVE,
@@ -18,84 +15,100 @@ from pep_sphinx_extensions.pep_zero_generator.constants import (
     TYPE_PROCESS,
     TYPE_STANDARDS,
 )
-from pep_sphinx_extensions.pep_zero_generator.errors import PEPError
-from pep_sphinx_extensions.tests.utils import AUTHORS_OVERRIDES
+from pep_sphinx_extensions.pep_zero_generator.parser import _Author
+
+from ..conftest import PEP_ROOT
 
 
 def test_pep_repr():
-    pep8 = parser.PEP(Path("pep-0008.txt"))
+    pep8 = parser.PEP(PEP_ROOT / "pep-0008.rst")
 
     assert repr(pep8) == "<PEP 0008 - Style Guide for Python Code>"
 
 
 def test_pep_less_than():
-    pep8 = parser.PEP(Path("pep-0008.txt"))
-    pep3333 = parser.PEP(Path("pep-3333.txt"))
+    pep8 = parser.PEP(PEP_ROOT / "pep-0008.rst")
+    pep3333 = parser.PEP(PEP_ROOT / "pep-3333.rst")
 
     assert pep8 < pep3333
 
 
 def test_pep_equal():
-    pep_a = parser.PEP(Path("pep-0008.txt"))
-    pep_b = parser.PEP(Path("pep-0008.txt"))
+    pep_a = parser.PEP(PEP_ROOT / "pep-0008.rst")
+    pep_b = parser.PEP(PEP_ROOT / "pep-0008.rst")
 
     assert pep_a == pep_b
 
 
-def test_pep_details(monkeypatch):
-    pep8 = parser.PEP(Path("pep-0008.txt"))
+@pytest.mark.parametrize(
+    ("test_input", "expected"),
+    [
+        (
+            "pep-0008.rst",
+            {
+                "authors": "Guido van Rossum, Barry Warsaw, Alyssa Coghlan",
+                "number": 8,
+                "shorthand": ":abbr:`PA (Process, Active)`",
+                "title": "Style Guide for Python Code",
+                "python_version": "",
+            },
+        ),
+        (
+            "pep-0719.rst",
+            {
+                "authors": "Thomas Wouters",
+                "number": 719,
+                "shorthand": ":abbr:`IA (Informational, Active)`",
+                "title": "Python 3.13 Release Schedule",
+                "python_version": "3.13",
+            },
+        ),
+    ],
+)
+def test_pep_details(test_input, expected):
+    pep = parser.PEP(PEP_ROOT / test_input)
 
-    assert pep8.details == {
-        "authors": "GvR, Warsaw, Coghlan",
-        "number": 8,
-        "shorthand": ":abbr:`PA (Process, Active)`",
-        "title": "Style Guide for Python Code",
-    }
+    assert pep.details == expected
 
 
 @pytest.mark.parametrize(
-    "test_input, expected",
+    ("test_input", "expected"),
     [
         (
             "First Last <user@example.com>",
-            [Author(last_first="Last, First", nick="Last", email="user@example.com")],
+            [_Author(full_name="First Last", email="user@example.com")],
         ),
-        (
-            "First Last",
-            [Author(last_first="Last, First", nick="Last", email="")],
-        ),
         (
             "user@example.com (First Last)",
-            [Author(last_first="Last, First", nick="Last", email="user@example.com")],
+            [_Author(full_name="First Last", email="")],
         ),
         pytest.param(
             "First Last <user at example.com>",
-            [Author(last_first="Last, First", nick="Last", email="user@example.com")],
+            [_Author(full_name="First Last", email="user@example.com")],
            marks=pytest.mark.xfail,
         ),
         pytest.param(
             " , First Last,",
             {"First Last": ""},
             marks=pytest.mark.xfail(raises=ValueError),
         ),
     ],
 )
 def test_parse_authors(test_input, expected):
-    # Arrange
-    dummy_object = parser.PEP(Path("pep-0160.txt"))
-
-    # Act
-    out = parser._parse_authors(dummy_object, test_input, AUTHORS_OVERRIDES)
+    out = parser._parse_author(test_input)
 
-    # Assert
     assert out == expected
 
 
 def test_parse_authors_invalid():
-
-    pep = parser.PEP(Path("pep-0008.txt"))
-
-    with pytest.raises(PEPError, match="no authors found"):
-        parser._parse_authors(pep, "", AUTHORS_OVERRIDES)
+    with pytest.raises(ValueError, match="Name is empty!"):
+        assert parser._parse_author("")
 
 
 @pytest.mark.parametrize(
-    "test_type, test_status, expected",
+    ("test_type", "test_status", "expected"),
     [
         (TYPE_INFO, STATUS_DRAFT, ":abbr:`I (Informational, Draft)`"),
         (TYPE_INFO, STATUS_ACTIVE, ":abbr:`IA (Informational, Active)`"),
@@ -113,7 +126,7 @@ def test_parse_authors_invalid():
 )
 def test_abbreviate_type_status(test_type, test_status, expected):
     # set up dummy PEP object and monkeypatch attributes
-    pep = parser.PEP(Path("pep-0008.txt"))
+    pep = parser.PEP(PEP_ROOT / "pep-0008.rst")
     pep.pep_type = test_type
     pep.status = test_status
 
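Several of these test modules now import `PEP_ROOT` from a shared `conftest.py` instead of building relative `Path("pep-0008.txt")` objects. A minimal sketch of what such a conftest might define (the actual file may differ; the directory layout here is an assumption):

```python
from pathlib import Path

# Hypothetical conftest.py contents: resolve the repository's peps/
# directory relative to this file, so tests can locate real PEP sources
# regardless of the working directory pytest is launched from.
PEP_ROOT = Path(__file__).resolve().parent.parent / "peps"
```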
@@ -1,10 +1,10 @@
-from pathlib import Path
-
 from pep_sphinx_extensions.pep_zero_generator import parser, pep_index_generator
 
+from ..conftest import PEP_ROOT
+
 
 def test_create_pep_json():
-    peps = [parser.PEP(Path("pep-0008.txt"))]
+    peps = [parser.PEP(PEP_ROOT / "pep-0008.rst")]
 
     out = pep_index_generator.create_pep_json(peps)
 
@@ -30,18 +30,18 @@ def test_pep_zero_writer_emit_title():
 
 
 @pytest.mark.parametrize(
-    "test_input, expected",
+    ("test_input", "expected"),
     [
         (
             "pep-9000.rst",
             {
-                "Fussyreverend, Francis": "one@example.com",
-                "Soulfulcommodore, Javier": "two@example.com",
+                "Francis Fussyreverend": "one@example.com",
+                "Javier Soulfulcommodore": "two@example.com",
             },
         ),
         (
             "pep-9001.rst",
-            {"Fussyreverend, Francis": "", "Soulfulcommodore, Javier": ""},
+            {"Francis Fussyreverend": "", "Javier Soulfulcommodore": ""},
         ),
     ],
 )
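The expected author keys above change from surname-first form ("Fussyreverend, Francis") to plain full names. The normalization being dropped can be sketched roughly as follows; this is a hypothetical helper for illustration, not the removed implementation (which also handled suffixes, nicknames, and multi-word surnames via overrides):

```python
def surname_first(full_name: str) -> str:
    """Rough 'Last, First' form previously used as the author key."""
    parts = full_name.split()
    if len(parts) < 2:
        return full_name  # mononyms such as "Mariatta" stay unchanged
    return f"{parts[-1]}, {' '.join(parts[:-1])}"
```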
@@ -0,0 +1,23 @@
+PEP:9002
+Title: Nobody expects the example PEP!
+Author: Cardinal Ximénez <Cardinal.Ximenez@spanish.inquisition>,
+        Cardinal Biggles
+        Cardinal Fang
+Version: 4.0
+Python-Version: 4.0
+Sponsor:
+Sponsor:
+Horse-Guards: Parade
+Created: 1-Jan-1989
+BDFL-Delegate: Barry!
+Status: Draught
+Topic: Inquisiting, Governance, Governance, packaging
+Content-Type: video/quicktime
+Requires: 0020,1,2,3, 7, 8
+Discussions-To: MR ALBERT SPIM, I,OOO,OO8 LONDON ROAD, OXFORD
+Post-History: `2-Feb-2000 <FLIGHT LT. & PREBENDARY ETHEL MORRIS; THE DIMPLES; THAXTED; NR BUENOS AIRES>`__
+              `3-Mar-2001 <The Royal Frog Trampling Institute; 16 Rayners Lane; London>`__
+Resolution:
+
+
+https://peps.python.org/pep-9002.html
@@ -1,6 +0,0 @@
-AUTHORS_OVERRIDES = {
-    "Guido van Rossum": {
-        "Surname First": "van Rossum, Guido (GvR)",
-        "Name Reference": "GvR",
-    },
-}
@@ -3,10 +3,12 @@
 
 """Configuration for building PEPs using Sphinx."""
 
+import os
 from pathlib import Path
 import sys
 
-sys.path.append(str(Path("pep_sphinx_extensions").absolute()))
+_ROOT = Path(__file__).resolve().parent.parent
+sys.path.append(os.fspath(_ROOT))
 
 # -- Project information -----------------------------------------------------
 
@@ -25,7 +27,6 @@ extensions = [
 # The file extensions of source files. Sphinx uses these suffixes as sources.
 source_suffix = {
     ".rst": "pep",
-    ".txt": "pep",
 }
 
 # List of patterns (relative to source dir) to ignore when looking for source files.
@@ -34,7 +35,6 @@ include_patterns = [
     "contents.rst",
     # PEP files
     "pep-????.rst",
-    "pep-????.txt",
     # PEP ancillary files
     "pep-????/*.rst",
     # Documentation
@@ -45,6 +45,9 @@ exclude_patterns = [
     "pep-0012/pep-NNNN.rst",
 ]
 
+# Warn on missing references
+nitpicky = True
+
 # Intersphinx configuration
 intersphinx_mapping = {
     'python': ('https://docs.python.org/3/', None),
@@ -57,11 +60,13 @@ intersphinx_disabled_reftypes = []
 
 # -- Options for HTML output -------------------------------------------------
 
+_PSE_PATH = _ROOT / "pep_sphinx_extensions"
+
 # HTML output settings
 html_math_renderer = "maths_to_html"  # Maths rendering
 
 # Theme settings
-html_theme_path = ["pep_sphinx_extensions"]
+html_theme_path = [os.fspath(_PSE_PATH)]
 html_theme = "pep_theme"  # The actual theme directory (child of html_theme_path)
 html_use_index = False  # Disable index (we use PEP 0)
 html_style = ""  # must be defined here or in theme.conf, but is unused
@@ -69,4 +74,4 @@ html_permalinks = False  # handled in the PEPContents transform
 html_baseurl = "https://peps.python.org"  # to create the CNAME file
 gettext_auto_build = False  # speed-ups
 
-templates_path = ["pep_sphinx_extensions/pep_theme/templates"]  # Theme template relative paths from `confdir`
+templates_path = [os.fspath(_PSE_PATH / "pep_theme" / "templates")]  # Theme template relative paths from `confdir`
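The conf.py changes above replace bare relative strings with absolute paths derived from `__file__`, stringified with `os.fspath` for Sphinx settings that expect `str`. A minimal sketch of the pattern (the directory names here are illustrative):

```python
import os
from pathlib import Path

# Anchor paths to this file's location rather than the process working
# directory, then convert to str for consumers that reject path objects.
_ROOT = Path(__file__).resolve().parent
theme_path = os.fspath(_ROOT / "pep_theme" / "templates")
```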
@@ -14,6 +14,5 @@ This is an internal Sphinx page; please go to the :doc:`PEP Index <pep-0000>`.
    :glob:
    :caption: PEP Table of Contents (needed for Sphinx):
 
-   docs/*
    pep-*
    topic/*
@@ -1,9 +1,8 @@
 PEP: 1
 Title: PEP Purpose and Guidelines
-Author: Barry Warsaw, Jeremy Hylton, David Goodger, Nick Coghlan
+Author: Barry Warsaw, Jeremy Hylton, David Goodger, Alyssa Coghlan
 Status: Active
 Type: Process
-Content-Type: text/x-rst
 Created: 13-Jun-2000
 Post-History: 21-Mar-2001, 29-Jul-2002, 03-May-2003, 05-May-2012,
               07-Apr-2013
@@ -136,8 +135,8 @@ forums, and attempts to build community consensus around the idea. The PEP
 champion (a.k.a. Author) should first attempt to ascertain whether the idea is
 PEP-able. Posting to the `Ideas category`_ of the `Python Discourse`_ is usually
 the best way to go about this, unless a more specialized venue is appropriate,
-such as `Typing-SIG`_ for static typing or the `Packaging category`_ of the
-Python Discourse for packaging issues.
+such as the `Typing category`_ (for static typing ideas)
+or `Packaging category`_ (for packaging ideas) on the Python Discourse.
 
 Vetting an idea publicly before going as far as writing a PEP is meant
 to save the potential author time. Many ideas have been brought
@@ -207,7 +206,7 @@ The standard PEP workflow is:
 It also provides a complete introduction to reST markup that is used
 in PEPs. Approval criteria are:
 
-* It sound and complete. The ideas must make technical sense. The
+* It is sound and complete. The ideas must make technical sense. The
   editors do not consider whether they seem likely to be accepted.
 * The title accurately describes the content.
 * The PEP's language (spelling, grammar, sentence structure, etc.)
@@ -284,8 +283,9 @@ The `PEPs category`_ of the `Python Discourse`_
 is the preferred choice for most new PEPs,
 whereas historically the `Python-Dev`_ mailing list was commonly used.
 Some specialized topics have specific venues, such as
-`Typing-SIG`_ for typing PEPs or the `Packaging category`_ on the Python
-Discourse for packaging PEPs. If the PEP authors are unsure of the best venue,
+the `Typing category`_ and the `Packaging category`_ on the Python
+Discourse for typing and packaging PEPs, respectively.
+If the PEP authors are unsure of the best venue,
 the PEP Sponsor and PEP editors can advise them accordingly.
 
 If a PEP undergoes a significant re-write or other major, substantive
@@ -296,7 +296,7 @@ pointing to this new thread.
 
 If it is not chosen as the discussion venue,
 a brief announcement post should be made to the `PEPs category`_
-with at least a link to the rendered PEP and the `Discussions-To` thread
+with at least a link to the rendered PEP and the ``Discussions-To`` thread
 when the draft PEP is committed to the repository
 and if a major-enough change is made to trigger a new thread.
 
@@ -609,7 +609,6 @@ optional and are described below. All other headers are required.
               Withdrawn | Final | Superseded>
   Type: <Standards Track | Informational | Process>
 * Topic: <Governance | Packaging | Release | Typing>
-* Content-Type: text/x-rst
 * Requires: <pep numbers>
   Created: <date created on, in dd-mmm-yyyy format>
 * Python-Version: <version number>
@@ -648,7 +647,7 @@ out.
 The PEP-Delegate field is used to record the individual appointed by the
 Steering Council to make the final decision on whether or not to approve or
 reject a PEP. (The delegate's email address is currently omitted due to a
-limitation in the email address masking for reStructuredText PEPs)
+limitation in the email address masking for reStructuredText PEPs.)
 
 *Note: The Resolution header is required for Standards Track PEPs
 only. It contains a URL that should point to an email message or
@@ -667,11 +666,6 @@ The optional Topic header lists the special topic, if any,
 the PEP belongs under.
 See the :ref:`topic-index` for the existing topics.
 
-The format of a PEP is specified with a Content-Type header.
-All PEPs must use reStructuredText (see :pep:`12`),
-and have a value of ``text/x-rst``, the default.
-Previously, plaintext PEPs used ``text/plain`` (see :pep:`9`).
-
 The Created header records the date that the PEP was assigned a
 number, while Post-History is used to record the dates of and corresponding
 URLs to the Discussions-To threads for the PEP, with the former as the
@@ -859,7 +853,7 @@ Footnotes
 
 .. _PEPs category: https://discuss.python.org/c/peps/
 
-.. _Typing-SIG: https://mail.python.org/mailman3/lists/typing-sig.python.org/
+.. _Typing category: https://discuss.python.org/c/typing/
 
 .. _Packaging category: https://discuss.python.org/c/packaging/
 
@@ -877,13 +871,3 @@ Copyright
 
 This document is placed in the public domain or under the
 CC0-1.0-Universal license, whichever is more permissive.
-
-
-..
-   Local Variables:
-   mode: indented-text
-   indent-tabs-mode: nil
-   sentence-end-double-space: t
-   fill-column: 70
-   coding: utf-8
-   End:
(binary image file changed: 27 KiB before, 27 KiB after)
@@ -45,7 +45,7 @@ Prohibitions
 
 Bug fix releases are required to adhere to the following restrictions:
 
-1. There must be zero syntax changes. All `.pyc` and `.pyo` files must
+1. There must be zero syntax changes. All ``.pyc`` and ``.pyo`` files must
    work (no regeneration needed) with all bugfix releases forked off
    from a major release.
 
@@ -4,7 +4,7 @@ Version: $Revision$
 Last-Modified: $Date$
 Author: Guido van Rossum <guido@python.org>,
         Barry Warsaw <barry@python.org>,
-        Nick Coghlan <ncoghlan@gmail.com>
+        Alyssa Coghlan <ncoghlan@gmail.com>
 Status: Active
 Type: Process
 Content-Type: text/x-rst
@@ -8,7 +8,7 @@ Content-Type: text/x-rst
 Created: 07-Jul-2002
 Post-History: `18-Aug-2007 <https://mail.python.org/archives/list/python-dev@python.org/thread/DSSGXU5LBCMKYMZBRVB6RF3YAB6ST5AV/>`__,
               `14-May-2014 <https://mail.python.org/archives/list/python-dev@python.org/thread/T7WTUJ6TD3IGYGWV3M4PHJWNLM2WPZAW/>`__,
-              `20-Feb-2015 <https://mail.python.org/archives/list/python-dev@python.org/thread/OEQHRR2COYZDL6LZ42RBZOMIUB32WI34/#L3K7IKGVT4ND45SKAJPJ3Q2ADVK5KP52>`__,
+              `20-Feb-2015 <https://mail.python.org/archives/list/python-dev@python.org/thread/OEQHRR2COYZDL6LZ42RBZOMIUB32WI34/>`__,
               `10-Mar-2022 <https://mail.python.org/archives/list/python-committers@python.org/thread/K757345KX6W5ZLTWYBUXOXQTJJTL7GW5/>`__,
 
@@ -5,7 +5,6 @@ Author: David Goodger <goodger@python.org>,
         Brett Cannon <brett@python.org>
 Status: Active
 Type: Process
-Content-Type: text/x-rst
 Created: 05-Aug-2002
 Post-History: `30-Aug-2002 <https://mail.python.org/archives/list/python-dev@python.org/thread/KX3AS7QAY26QH3WIUAEOCCNXQ4V2TGGV/>`__
 
@@ -174,7 +173,6 @@ your PEP):
     Status: Draft
     Type: [Standards Track | Informational | Process]
     Topic: *[Governance | Packaging | Release | Typing]
-    Content-Type: text/x-rst
     Requires: *[NNN]
     Created: [DD-MMM-YYYY]
     Python-Version: *[M.N]
@@ -7,7 +7,6 @@ Discussions-To: <REQUIRED: URL of current canonical discussion thread>
 Status: <REQUIRED: Draft | Active | Accepted | Provisional | Deferred | Rejected | Withdrawn | Final | Superseded>
 Type: <REQUIRED: Standards Track | Informational | Process>
 Topic: <Governance | Packaging | Release | Typing>
-Content-Type: text/x-rst
 Requires: <pep numbers>
 Created: <date created on, in dd-mmm-yyyy format>
 Python-Version: <version number>
@@ -20,12 +20,12 @@ purring little creatures up, and ride them into town, with some of their
 buddies firmly attached to your bare back, anchored by newly sharpened
 claws. At least they're cute, you remind yourself.
 
-Actually, no that's a slight exaggeration <wink>. The Python release
+Actually, no, that's a slight exaggeration 😉 The Python release
 process has steadily improved over the years and now, with the help of our
 amazing community, is really not too difficult. This PEP attempts to
-collect, in one place, all the steps needed to make a Python release. It
-is organized as a recipe and you can actually print this out and check
-items off as you complete them.
+collect, in one place, all the steps needed to make a Python release.
+Most of the steps are now automated or guided by automation, so manually
+following this list is no longer necessary.
 
 Things You'll Need
 ==================
@@ -41,25 +41,23 @@ Here's a hopefully-complete list.
 
 * A bunch of software:
 
-  * "release.py", the Python release manager's friend. It's in the
-    python/release-tools repo on GitHub. It doesn't pip install
-    or have any sort of install process--you'll have to put it on
-    your path yourself, or just run it with a relative path, or
-    whatever.
+  * A checkout of the `python/release-tools
+    <https://github.com/python/release-tools/>`_ repo.
+    It contains a ``requirements.txt`` file that you need to install
+    dependencies from first. Afterwards, you can fire up scripts in the
+    repo, covered later in this PEP.
 
-  * "blurb", the Misc/NEWS management tool. The release process
-    currently uses three blurb subcommands:
-    release, merge, and export. Installable via pip3.
-
-  * "virtualenv". The release script installs Sphinx in a virtualenv
-    when building the docs (for 2.7 and 3.5+).
+  * ``blurb``, the
+    `Misc/NEWS <https://github.com/python/cpython/tree/main/Misc/NEWS.d>`_
+    management tool. You can pip install it.
 
 * A fairly complete installation of a recent TeX distribution,
   such as texlive. You need that for building the PDF docs.
 
-* Access to ``downloads.nyc1.psf.io``, the server that hosts download files,
-  and ``docs.nyc1.psf.io``, the server that hosts the documentation.
-  You'll be uploading files directly here.
+* Access to servers where you will upload files:
+
+  * ``downloads.nyc1.psf.io``, the server that hosts download files; and
+  * ``docs.nyc1.psf.io``, the server that hosts the documentation.
 
 * Administrator access to ``https://github.com/python/cpython``.
 
@@ -89,7 +87,8 @@ Types of Releases
 There are several types of releases you will need to make. These include:
 
 * ``alpha``
-* ``beta``
+* ``begin beta``, also known as ``beta 1``, also known as ``new branch``
+* ``beta 2+``
 * ``release candidate 1``
 * ``release candidate 2+``
 * ``final``
@@ -105,8 +104,8 @@ organization of the cpython git repository, the *main* branch is always
 the target for new features. At some point in the release cycle of the
 next feature release, a **new branch** release is made which creates a
 new separate branch for stabilization and later maintenance of the
-current in-progress feature release (x.y.0) and the *main* branch is modified
-to build a new version (which will eventually be released as x.y+1.0).
+current in-progress feature release (``3.n.0``) and the *main* branch is modified
+to build a new version (which will eventually be released as ``3.n+1.0``).
 While the **new branch** release step could occur at one of several points
 in the release cycle, current practice is for it to occur at feature code
 cutoff for the release which is scheduled for the first beta release.
@@ -114,7 +113,7 @@ cutoff for the release which is scheduled for the first beta release.
 In the descriptions that follow, steps specific to release types are
 labeled accordingly, for now, **new branch** and **final**.
 
-How to Make A Release
+How To Make A Release
 =====================
 
 Here are the steps taken to make a Python release. Some steps are more
@@ -129,12 +128,10 @@ release. The roles and their current experts are:
   - Thomas Wouters <thomas@python.org> (NL)
   - Pablo Galindo Salgado <pablogsal@python.org> (UK)
   - Łukasz Langa <lukasz@python.org> (PL)
-  - Ned Deily <nad@python.org> (US)
 
 * WE = Windows - Steve Dower <steve.dower@python.org>
 * ME = Mac - Ned Deily <nad@python.org> (US)
 * DE = Docs - Julien Palard <julien@python.org> (Central Europe)
-* IE = Idle Expert - Terry Reedy <tjreedy@udel.edu> (US)
 
 .. note:: It is highly recommended that the RM contact the Experts the day
    before the release. Because the world is round and everyone lives
@@ -146,49 +143,42 @@ release. The roles and their current experts are:
    In rare cases where the expert for Windows or Mac is MIA, you may add
    a message "(Platform) binaries will be provided shortly" and proceed.
 
-XXX: We should include a dependency graph to illustrate the steps that can
-be taken in parallel, or those that depend on other steps.
-
 As much as possible, the release steps are automated and guided by the
 release script, which is available in a separate repository:
 
    https://github.com/python/release-tools
 
 We use the following conventions in the examples below. Where a release
-number is given, it is of the form ``X.Y.ZaN``, e.g. 3.3.0a3 for Python 3.3.0
+number is given, it is of the form ``3.X.YaN``, e.g. 3.13.0a3 for Python 3.13.0
 alpha 3, where "a" == alpha, "b" == beta, "rc" == release candidate.
 
-Release tags are named ``vX.Y.ZaN``. The branch name for minor release
-maintenance branches is ``X.Y``.
+Release tags are named ``v3.X.YaN``. The branch name for minor release
+maintenance branches is ``3.X``.
 
 This helps by performing several automatic editing steps, and guides you
 to perform some manual editing steps.
 
-- Log into irc.libera.chat and join the #python-dev channel.
+- Log into Discord and join the Python Core Devs server. Ask Thomas
+  or Łukasz for an invite.
 
   You probably need to coordinate with other people around the world.
-  This IRC channel is where we've arranged to meet.
+  This communication channel is where we've arranged to meet.
 
 - Check to see if there are any showstopper bugs.
 
-  Go to https://bugs.python.org and look for any open bugs that can block
-  this release. You're looking at the Priority of the open bugs for the
-  release you're making; here are the relevant definitions:
+  Go to https://github.com/python/cpython/issues and look for any open
+  bugs that can block this release. You're looking at two relevant labels:
 
-  release blocker
+  release-blocker
     Stops the release dead in its tracks. You may not
     make any release with any open release blocker bugs.
 
-  deferred blocker
+  deferred-blocker
     Doesn't block this release, but it will block a
     future release. You may not make a final or
     candidate release with any open deferred blocker
     bugs.
 
-  critical
-    Important bugs that should be fixed, but which does not block
-    a release.
-
   Review the release blockers and either resolve them, bump them down to
   deferred, or stop the release and ask for community assistance. If
   you're making a final or candidate release, do the same with any open
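The ``3.X.YaN`` convention described above can be captured with a small parser. This is a sketch for illustration only, not the release-tools implementation; the function name and return shape are assumptions:

```python
import re

# 3.X.YaN: major 3, minor X, micro Y, then an optional stage a/b/rc plus
# serial N -- e.g. "3.13.0a3", "3.4.3rc1", or a final release "3.6.0".
VERSION = re.compile(r"^3\.(\d+)\.(\d+)(?:(a|b|rc)(\d+))?$")


def parse_version(tag: str):
    """Split a version string or a v-prefixed release tag into parts."""
    m = VERSION.match(tag.lstrip("v"))  # release tags are prefixed with "v"
    if m is None:
        raise ValueError(f"not a valid version: {tag!r}")
    minor, micro, stage, serial = m.groups()
    return int(minor), int(micro), stage or "final", int(serial or 0)
```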
@@ -212,7 +202,7 @@ to perform some manual editing steps.
   within it (called the "release clone" from now on). You can use the same
   GitHub fork you use for cpython development. Using the standard setup
   recommended in the Python Developer's Guide, your fork would be referred
-  to as `origin` and the standard cpython repo as `upstream`. You will
+  to as ``origin`` and the standard cpython repo as ``upstream``. You will
   use the branch on your fork to do the release engineering work, including
   tagging the release, and you will use it to share with the other experts
   for making the binaries.
@@ -244,22 +234,22 @@ to perform some manual editing steps.
 - Consider running ``autoconf`` using the currently accepted standard version
   in case ``configure`` or other autoconf-generated files were last
   committed with a newer or older version and may contain spurious or
-  harmful differences. Currently, autoconf 2.69 is our de facto standard.
+  harmful differences. Currently, autoconf 2.71 is our de facto standard.
   if there are differences, commit them.
 
 - Make sure the ``SOURCE_URI`` in ``Doc/tools/extensions/pyspecific.py``
-  points to the right branch in the git repository (``main`` or ``X.Y``).
+  points to the right branch in the git repository (``main`` or ``3.X``).
   For a **new branch** release, change the branch in the file from *main*
-  to the new release branch you are about to create (``X.Y``).
+  to the new release branch you are about to create (``3.X``).
 
 - Bump version numbers via the release script::
 
-     $ .../release-tools/release.py --bump X.Y.ZaN
+     $ .../release-tools/release.py --bump 3.X.YaN
 
-  Reminder: X, Y, Z, and N should be integers.
+  Reminder: X, Y, and N should be integers.
   a should be one of "a", "b", or "rc" (e.g. "3.4.3rc1").
   For **final** releases omit the aN ("3.4.3"). For the first
-  release of a new version Z should be 0 ("3.6.0").
+  release of a new version Y should be 0 ("3.6.0").
 
   This automates updating various release numbers, but you will have to
   modify a few files manually. If your $EDITOR environment variable is
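The version-string convention in the reminder above (integer X, Y, and N; a level of "a", "b", or "rc"; final releases omit the suffix entirely) can be sketched as a small validator. This is a hypothetical helper for illustration only, not part of release-tools:

```python
import re

# Hypothetical helper illustrating the 3.X.YaN convention described above:
# X, Y, N are integers; the level is "a", "b", or "rc"; final releases
# omit the "aN" suffix entirely (e.g. "3.4.3").
VERSION_RE = re.compile(r"^(\d+)\.(\d+)\.(\d+)(?:(a|b|rc)(\d+))?$")

def parse_version(tag: str):
    m = VERSION_RE.match(tag)
    if m is None:
        raise ValueError(f"not a valid release tag: {tag!r}")
    major, minor, micro = int(m.group(1)), int(m.group(2)), int(m.group(3))
    level, serial = m.group(4), m.group(5)
    return (major, minor, micro, level, int(serial) if serial else None)
```

For example, ``parse_version("3.4.3rc1")`` yields ``(3, 4, 3, "rc", 1)``, while a final release like ``"3.6.0"`` yields ``(3, 6, 0, None, None)``.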
@@ -285,11 +275,8 @@ to perform some manual editing steps.
   Properties). This isn't a C include file, it's a Windows
   "resource file" include file.
 
-- Check with the IE (if there is one <wink>) to be sure that
-  Lib/idlelib/NEWS.txt has been similarly updated.
-
 - For a **final** major release, edit the first paragraph of
-  Doc/whatsnew/X.Y.rst to include the actual release date; e.g. "Python
+  Doc/whatsnew/3.X.rst to include the actual release date; e.g. "Python
   2.5 was released on August 1, 2003." There's no need to edit this for
   alpha or beta releases.
@@ -298,22 +285,14 @@ to perform some manual editing steps.
   You should not see any files. I.e. you better not have any uncommitted
   changes in your working directory.
 
-- Tag the release for X.Y.ZaN::
+- Tag the release for 3.X.YaN::
 
-     $ .../release-tools/release.py --tag X.Y.ZaN
+     $ .../release-tools/release.py --tag 3.X.YaN
 
-  This executes a `git tag` command with the `-s` option so that the
+  This executes a ``git tag`` command with the ``-s`` option so that the
   release tag in the repo is signed with your gpg key. When prompted
   choose the private key you use for signing release tarballs etc.
 
-- For a **new branch** release, add it to the ``VERSIONS`` list of
-  `docsbuild scripts`_, so that the new maintenance branch is now
-  ``pre-release`` and add the new ``in development`` version.
-
-- For a **final** major release, update the ``VERSIONS`` list of
-  `docsbuild scripts`_: the release branch must be changed from
-  ``pre-release`` to ``stable``.
-
-- For **begin security-only mode** and **end-of-life** releases, review the
-  two files and update the versions accordingly in all active branches.
@@ -321,11 +300,11 @@ to perform some manual editing steps.
   the source gzip and xz tarballs,
   documentation tar and zip files, and gpg signature files::
 
-     $ .../release-tools/release.py --export X.Y.ZaN
+     $ .../release-tools/release.py --export 3.X.YaN
 
   This can take a while for **final** releases, and it will leave all the
-  tarballs and signatures in a subdirectory called ``X.Y.ZaN/src``, and the
-  built docs in ``X.Y.ZaN/docs`` (for **final** releases).
+  tarballs and signatures in a subdirectory called ``3.X.YaN/src``, and the
+  built docs in ``3.X.YaN/docs`` (for **final** releases).
 
   Note that the script will sign your release with Sigstore. Please use
   your **@python.org** email address for this. See here for more information:
@@ -368,20 +347,16 @@ to perform some manual editing steps.
 
 - Notify the experts that they can start building binaries.
 
-- STOP STOP STOP STOP STOP STOP STOP STOP
+.. warning::
 
-  At this point you must receive the "green light" from other experts in
-  order to create the release. There are things you can do while you wait
+   **STOP**: at this point you must receive the "green light" from other experts
+   in order to create the release. There are things you can do while you wait
    though, so keep reading until you hit the next STOP.
 
 - The WE generates and publishes the Windows files using the Azure
   Pipelines build scripts in ``.azure-pipelines/windows-release/``,
   currently set up at https://dev.azure.com/Python/cpython/_build?definitionId=21.
 
   Note that this build requires a separate VM containing the code signing
   certificate. This VM is managed by the WE to ensure only official releases
   have access to the certificate.
 
   The build process runs in multiple stages, with each stage's output being
   available as a downloadable artifact. The stages are:
@@ -420,7 +395,7 @@ to perform some manual editing steps.
   over there. Our policy is that every Python version gets its own
   directory, but each directory contains all releases of that version.
 
-  - On downloads.nyc1.psf.io, cd /srv/www.python.org/ftp/python/X.Y.Z
+  - On downloads.nyc1.psf.io, ``cd /srv/www.python.org/ftp/python/3.X.Y``
     creating it if necessary. Make sure it is owned by group 'downloads'
     and group-writable.
@@ -434,33 +409,28 @@ to perform some manual editing steps.
 - Use ``gpg --verify`` to make sure they got uploaded intact.
 
 - If this is a **final** or rc release: Move the doc zips and tarballs to
-  ``/srv/www.python.org/ftp/python/doc/X.Y.Z[rcA]``, creating the directory
+  ``/srv/www.python.org/ftp/python/doc/3.X.Y[rcA]``, creating the directory
   if necessary, and adapt the "current" symlink in ``.../doc`` to point to
   that directory. Note though that if you're releasing a maintenance
   release for an older version, don't change the current link.
 
 - If this is a **final** or rc release (even a maintenance release), also
-  unpack the HTML docs to ``/srv/docs.python.org/release/X.Y.Z[rcA]`` on
+  unpack the HTML docs to ``/srv/docs.python.org/release/3.X.Y[rcA]`` on
   docs.nyc1.psf.io. Make sure the files are in group ``docs`` and are
-  group-writeable. If it is a release of a security-fix-only version,
-  tell the DE to start a build (``security-fixes`` and ``EOL`` version
-  are not built daily).
+  group-writeable.
 
 - Let the DE check if the docs are built and work all right.
 
 - If this is a **final** major release: Tell the DE to adapt redirects for
   docs.python.org/X.Y in the nginx config for docs.python.org.
 
 - Note both the documentation and downloads are behind a caching CDN. If
   you change archives after downloading them through the website, you'll
   need to purge the stale data in the CDN like this::
 
-     $ curl -X PURGE https://www.python.org/ftp/python/2.7.5/Python-2.7.5.tar.xz
+     $ curl -X PURGE https://www.python.org/ftp/python/3.12.0/Python-3.12.0.tar.xz
 
   You should always purge the cache of the directory listing as people
   use that to browse the release files::
 
-     $ curl -X PURGE https://www.python.org/ftp/python/2.7.5/
+     $ curl -X PURGE https://www.python.org/ftp/python/3.12.0/
 
 - For the extra paranoid, do a completely clean test of the release.
   This includes downloading the tarball from www.python.org.
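The two purge requests above follow one pattern per release: the tarball itself and its directory listing. A small sketch (a hypothetical helper, assuming release files live under ``/ftp/python/<version>/``) that builds both URLs for a given version:

```python
# Hypothetical helper: build the two CDN purge URLs shown above
# (the release tarball and its directory listing) for a given version.
BASE = "https://www.python.org/ftp/python"

def purge_urls(version: str) -> list[str]:
    return [
        f"{BASE}/{version}/Python-{version}.tar.xz",  # the tarball
        f"{BASE}/{version}/",                         # the directory listing
    ]
```

Each URL would then be issued as a ``curl -X PURGE`` request, exactly as in the steps above.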
@@ -475,18 +445,20 @@ to perform some manual editing steps.
      To ensure that the regression test suite passes. If not, you
      screwed up somewhere!
 
-- STOP STOP STOP STOP STOP STOP STOP STOP
+.. warning::
 
-  - Have you gotten the green light from the WE?
+   **STOP** and confirm:
 
-  - Have you gotten the green light from the ME?
+   - Have you gotten the green light from the WE?
 
-  - Have you gotten the green light from the DE?
+   - Have you gotten the green light from the ME?
+
+   - Have you gotten the green light from the DE?
 
 If green, it's time to merge the release engineering branch back into
 the main repo.
 
-- In order to push your changes to Github, you'll have to temporarily
+- In order to push your changes to GitHub, you'll have to temporarily
   disable branch protection for administrators. Go to the
   ``Settings | Branches`` page:
@@ -511,18 +483,18 @@ the main repo.
 
      # 2. Else, for all other releases, checkout the
      #    appropriate release branch.
-     $ git checkout X.Y
+     $ git checkout 3.X
 
      # Fetch the newly created and signed tag from your clone repo
-     $ git fetch --tags git@github.com:your-github-id/cpython.git vX.Y.ZaN
+     $ git fetch --tags git@github.com:your-github-id/cpython.git v3.X.YaN
      # Merge the temporary release engineering branch back into
-     $ git merge --no-squash vX.Y.ZaN
+     $ git merge --no-squash v3.X.YaN
      $ git commit -m 'Merge release engineering branch'
 
 - If this is a **new branch** release, i.e. first beta,
   now create the new release branch::
 
-     $ git checkout -b X.Y
+     $ git checkout -b 3.X
 
   Do any steps needed to setup the new release branch, including:
@@ -532,13 +504,13 @@ the main repo.
 - For *all* releases, do the guided post-release steps with the
   release script.::
 
-     $ .../release-tools/release.py --done X.Y.ZaN
+     $ .../release-tools/release.py --done 3.X.YaN
 
 - For a **final** or **release candidate 2+** release, you may need to
   do some post-merge cleanup. Check the top-level ``README.rst``
   and ``include/patchlevel.h`` files to ensure they now reflect
   the desired post-release values for on-going development.
-  The patchlevel should be the release tag with a `+`.
+  The patchlevel should be the release tag with a ``+``.
   Also, if you cherry-picked changes from the standard release
   branch into the release engineering branch for this release,
   you will now need to manual remove each blurb entry from
@@ -546,8 +518,8 @@ the main repo.
   into the release you are working on since that blurb entry
   is now captured in the merged x.y.z.rst file for the new
   release. Otherwise, the blurb entry will appear twice in
-  the `changelog.html` file, once under `Python next` and again
-  under `x.y.z`.
+  the ``changelog.html`` file, once under ``Python next`` and again
+  under ``x.y.z``.
 
 - Review and commit these changes::
@@ -621,27 +593,27 @@ the main repo.
      $ git push --tags git@github.com:python/cpython.git main
 
      # For a **new branch** release, i.e. first beta:
-     $ git push --dry-run --tags git@github.com:python/cpython.git X.Y
+     $ git push --dry-run --tags git@github.com:python/cpython.git 3.X
      $ git push --dry-run --tags git@github.com:python/cpython.git main
      # If it looks OK, take the plunge. There's no going back!
-     $ git push --tags git@github.com:python/cpython.git X.Y
+     $ git push --tags git@github.com:python/cpython.git 3.X
      $ git push --tags git@github.com:python/cpython.git main
 
      # For all other releases:
-     $ git push --dry-run --tags git@github.com:python/cpython.git X.Y
+     $ git push --dry-run --tags git@github.com:python/cpython.git 3.X
      # If it looks OK, take the plunge. There's no going back!
-     $ git push --tags git@github.com:python/cpython.git X.Y
+     $ git push --tags git@github.com:python/cpython.git 3.X
 
 - If this is a **new branch** release, add a ``Branch protection rule``
-  for the newly created branch (X.Y). Look at the values for the previous
-  release branch (X.Y-1) and use them as a template.
+  for the newly created branch (3.X). Look at the values for the previous
+  release branch (3.X-1) and use them as a template.
   https://github.com/python/cpython/settings/branches/
 
-  Also, add a ``needs backport to X.Y`` label to the Github repo.
+  Also, add a ``needs backport to 3.X`` label to the GitHub repo.
   https://github.com/python/cpython/labels
 
 - You can now re-enable enforcement of branch settings against administrators
-  on Github. Go back to the ``Settings | Branch`` page:
+  on GitHub. Go back to the ``Settings | Branch`` page:
 
   https://github.com/python/cpython/settings/branches/
@@ -692,10 +664,8 @@ with RevSys.)
      Keep a copy in your home directory on dl-files and
      keep it fresh.
 
-     If new types of files are added to the release
-     (e.g. the web-based installers or redistributable zip
-     files added to Python 3.5) someone will need to update
-     add-to-pydotorg.py so it recognizes these new files.
+     If new types of files are added to the release, someone will need to
+     update add-to-pydotorg.py so it recognizes these new files.
      (It's best to update add-to-pydotorg.py when file types
      are removed, too.)
@@ -712,22 +682,22 @@ with RevSys.)
 - If this is a **final** release:
 
   - Add the new version to the *Python Documentation by Version*
-    page `https://www.python.org/doc/versions/` and
+    page ``https://www.python.org/doc/versions/`` and
     remove the current version from any 'in development' section.
 
-  - For X.Y.Z, edit all the previous X.Y releases' page(s) to
+  - For 3.X.Y, edit all the previous X.Y releases' page(s) to
     point to the new release. This includes the content field of the
-    `Downloads -> Releases` entry for the release::
+    ``Downloads -> Releases`` entry for the release::
 
-       Note: Python x.y.m has been superseded by
-       `Python x.y.n </downloads/release/python-xyn/>`_.
+       Note: Python 3.x.(y-1) has been superseded by
+       `Python 3.x.y </downloads/release/python-3xy/>`_.
 
     And, for those releases having separate release page entries
     (phasing these out?), update those pages as well,
-    e.g. `download/releases/x.y.z`::
+    e.g. ``download/releases/3.x.y``::
 
-       Note: Python x.y.m has been superseded by
-       `Python x.y.n </download/releases/x.y.n/>`_.
+       Note: Python 3.x.(y-1) has been superseded by
+       `Python 3.x.y </download/releases/3.x.y/>`_.
 
 - Update the "Current Pre-release Testing Versions web page".
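The "superseded by" note above is purely mechanical, so it can be generated. A minimal sketch, assuming the ``/downloads/release/python-3xy/`` slug convention shown in the template (``superseded_note`` is a hypothetical helper):

```python
# Hypothetical helper: render the "superseded by" note shown above for a
# given final release, e.g. 3.12.1 supersedes 3.12.0.
def superseded_note(major: int, minor: int, micro: int) -> str:
    old = f"{major}.{minor}.{micro - 1}"
    new = f"{major}.{minor}.{micro}"
    slug = f"python-{major}{minor}{micro}"  # e.g. "python-3121"
    return (
        f"Note: Python {old} has been superseded by\n"
        f"`Python {new} </downloads/release/{slug}/>`_."
    )
```

For 3.12.1 this produces the note pointing the 3.12.0 page at ``/downloads/release/python-3121/``.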
@@ -739,12 +709,11 @@ with RevSys.)
   Every time you make a release, one way or another you'll
   have to update this page:
 
-  - If you're releasing a version before *x.y.0*,
-    or *x.y.z release candidate N,*
+  - If you're releasing a version before *3.x.0*,
     you should add it to this page, removing the previous pre-release
-    of version *x.y* as needed.
+    of version *3.x* as needed.
 
-  - If you're releasing *x.y.z final*, you need to remove the pre-release
+  - If you're releasing *3.x.0 final*, you need to remove the pre-release
     version from this page.
 
   This is in the "Pages" category on the Django-based website, and finding
@@ -764,15 +733,12 @@ with RevSys.)
   should go. And yes you should be able to click on the link above then
   press the shiny, exciting "Edit this page" button.
 
-- Other steps (other update for new web site)??
-
-- Write the announcement for the mailing lists. This is the
+- Write the announcement on https://discuss.python.org/. This is the
   fuzzy bit because not much can be automated. You can use an earlier
   announcement as a template, but edit it for content!
 
-- Once the announcement is ready, send it to the following
-  addresses:
+- Once the announcement is up on Discourse, send an equivalent to the
+  following mailing lists:
 
      python-list@python.org
      python-announce@python.org
@@ -783,29 +749,19 @@ with RevSys.)
   To add a new entry, go to
   `your Blogger home page, here. <https://www.blogger.com/home>`_
 
-- Send email to python-committers informing them that the release has been
-  published and a reminder about any relevant changes in policy
-  based on the phase of the release cycle. In particular,
-  if this is a **new branch** release, remind everyone that the
-  new release branch exists and that they need to start
-  considering whether to backport to it when merging changes to
-  main.
-
-- Update any release PEPs (e.g. 361) with the release dates.
+- Update any release PEPs (e.g. 719) with the release dates.
 
-- Update the tracker at https://bugs.python.org:
+- Update the labels on https://github.com/python/cpython/issues:
 
-  - Flip all the deferred blocker issues back to release blocker
+  - Flip all the deferred-blocker issues back to release-blocker
     for the next release.
 
-  - Add version X.Y+1 as when version X.Y enters alpha.
+  - Add version 3.X+1 as when version 3.X enters alpha.
 
-  - Change non-doc RFEs to version X.Y+1 when version X.Y enters beta.
+  - Change non-doc feature requests to version 3.X+1 when version 3.X
+    enters beta.
 
-  - Add ``X.Yregression`` keyword (https://bugs.python.org/keyword)
-    when version X.Y enters beta.
-
-  - Update 'behavior' issues from versions that your release make
+  - Update issues from versions that your release makes
     unsupported to the next supported version.
 
   - Review open issues, as this might find lurking showstopper bugs,
@@ -817,13 +773,8 @@ with RevSys.)
   pieces of the development infrastructure are updated for the new branch.
   These include:
 
-  - Update the issue tracker for the new branch.
-
-    * Add the new version to the versions list (contact the tracker
-      admins?).
-
-    * Add a `regressions keyword <https://bugs.python.org/keyword>`_
-      for the release
+  - Update the issue tracker for the new branch: add the new version to
+    the versions list.
 
   - Update the devguide to reflect the new branches and versions.
@@ -831,12 +782,10 @@ with RevSys.)
     `downloads page <https://www.python.org/downloads/>`_.
     (See https://github.com/python/pythondotorg/issues/1302)
 
-  - Ensure buildbots are defined for the new branch (contact zware).
+  - Ensure buildbots are defined for the new branch (contact Łukasz
+    or Zach Ware).
 
-  - Ensure the daily docs build scripts are updated to include
-    the new branch (contact DE).
-
-  - Ensure the various Github bots are updated, as needed, for the
+  - Ensure the various GitHub bots are updated, as needed, for the
     new branch, in particular, make sure backporting to the new
     branch works (contact core-workflow team)
     https://github.com/python/core-workflow/issues
@@ -883,16 +832,6 @@ else does them. Some of those tasks include:
 - Optionally making a final release to publish any remaining unreleased
   changes.
 
-- Update the ``VERSIONS`` list of `docsbuild scripts`_: change the
-  version state to ``EOL``.
-
-- On the docs download server (docs.nyc1.psf.io), ensure the top-level
-  symlink points to the upload of unpacked html docs from final release::
-
-     cd /srv/docs.python.org
-     ls -l 3.3
-     lrwxrwxrwx 1 nad docs 13 Sep  6 21:38 3.3 -> release/3.3.7
-
 - Freeze the state of the release branch by creating a tag of its current HEAD
   and then deleting the branch from the cpython repo. The current HEAD should
   be at or beyond the final security release for the branch::
@@ -904,12 +843,12 @@ else does them. Some of those tasks include:
 - If all looks good, delete the branch. This may require the assistance of
   someone with repo administrator privileges::
 
-     git push upstream --delete 3.3  # or perform from Github Settings page
+     git push upstream --delete 3.3  # or perform from GitHub Settings page
 
 - Remove the release from the list of "Active Python Releases" on the Downloads
   page. To do this, log in to the admin page for python.org, navigate to Boxes,
-  and edit the `downloads-active-releases` entry. Simply strip out the relevant
-  paragraph of HTML for your release. (You'll probably have to do the `curl -X PURGE`
+  and edit the ``downloads-active-releases`` entry. Simply strip out the relevant
+  paragraph of HTML for your release. (You'll probably have to do the ``curl -X PURGE``
   trick to purge the cache if you want to confirm you made the change correctly.)
 
 - Add retired notice to each release page on python.org for the retired branch.
@@ -923,31 +862,20 @@ else does them. Some of those tasks include:
   list (https://devguide.python.org/devcycle/#end-of-life-branches) and update
   or remove references to the branch elsewhere in the devguide.
 
-- Retire the release from the bugs.python.org issue tracker. Tasks include:
+- Retire the release from the issue tracker. Tasks include:
 
-  * remove branch from tracker list of versions
+  * remove version label from list of versions
 
-  * remove any release-release keywords (3.3regressions)
+  * remove the "needs backport to" label for the retired version
 
   * review and dispose of open issues marked for this branch
 
-  Note, with the likely future migration of bug tracking from the current
-  Roundup bugs.python.org to Github issues and with the impending end-of-life
-  of Python 2.7, it probably makes sense to avoid unnecessary churn for
-  currently and about-to-be retired 3.x branches by deferring any major
-  wholesale changes to existing issues until the migration process is
-  clarified.
-
-  In practice, you're probably not going to do this yourself, you're going
-  to ask one of the bpo maintainers to do it for you (e.g. Ezio Melotti,
-  Zachary Ware.)
-
 - Announce the branch retirement in the usual places:
 
-  * mailing lists (python-committers, python-dev, python-list, python-announcements)
+  * discuss.python.org
+
+  * mailing lists (python-dev, python-list, python-announcements)
 
   * Python Dev blog
 
 - Enjoy your retirement and bask in the glow of a job well done!
@@ -956,12 +884,12 @@ else does them. Some of those tasks include:
 Windows Notes
 =============
 
+NOTE, have Steve Dower review; probably obsolete.
+
 Windows has a MSI installer, various flavors of Windows have
 "special limitations", and the Windows installer also packs
-precompiled "foreign" binaries (Tcl/Tk, expat, etc). So Windows
-testing is tiresome but very necessary.
+precompiled "foreign" binaries (Tcl/Tk, expat, etc).
+
+The installer is tested as part of the Azure Pipeline. In the past,
+those steps were performed manually. We're keeping this for posterity.
 
 Concurrent with uploading the installer, the WE installs Python
 from it twice: once into the default directory suggested by the
@@ -994,9 +922,6 @@ Copyright
 This document has been placed in the public domain.
 
 
-.. _docsbuild scripts:
-   https://github.com/python/docsbuild-scripts/blob/main/build_docs.py
-
 ..
   Local Variables:
   mode: indented-text
@@ -46,8 +46,8 @@ Lockstep For-Loops
 Lockstep for-loops are non-nested iterations over two or more
 sequences, such that at each pass through the loop, one element from
 each sequence is taken to compose the target. This behavior can
-already be accomplished in Python through the use of the map() built-
-in function::
+already be accomplished in Python through the use of the map() built-in
+function::
 
     >>> a = (1, 2, 3)
     >>> b = (4, 5, 6)
@@ -185,8 +185,8 @@ Implementation Strategy
 =======================
 
 The implementation of weak references will include a list of
-reference containers that must be cleared for each weakly-
-referencable object. If the reference is from a weak dictionary,
+reference containers that must be cleared for each weakly-referencable
+object. If the reference is from a weak dictionary,
 the dictionary entry is cleared first. Then, any associated
 callback is called with the object passed as a parameter. Once
 all callbacks have been called, the object is finalized and
@@ -12,9 +12,9 @@ Post-History:
 Abstract
 ========
 
-This PEP proposes a redesign and re-implementation of the multi-
-dimensional array module, Numeric, to make it easier to add new
-features and functionality to the module. Aspects of Numeric 2
+This PEP proposes a redesign and re-implementation of the
+multi-dimensional array module, Numeric, to make it easier to add
+new features and functionality to the module. Aspects of Numeric 2
 that will receive special attention are efficient access to arrays
 exceeding a gigabyte in size and composed of inhomogeneous data
 structures or records. The proposed design uses four Python
@@ -128,8 +128,8 @@ Some planned features are:
    automatically handle alignment and representational issues
    when data is accessed or operated on. There are two
    approaches to implementing records; as either a derived array
-   class or a special array type, depending on your point-of-
-   view. We defer this discussion to the Open Issues section.
+   class or a special array type, depending on your point-of-view.
+   We defer this discussion to the Open Issues section.
 
 
 2. Additional array types
@@ -265,8 +265,8 @@ The design of Numeric 2 has four primary classes:
    _ufunc.compute(slice, data, func, swap, conv)
 
 The 'func' argument is a CFuncObject, while the 'swap' and 'conv'
-arguments are lists of CFuncObjects for those arrays needing pre-
-or post-processing, otherwise None is used. The data argument is
+arguments are lists of CFuncObjects for those arrays needing pre- or
+post-processing, otherwise None is used. The data argument is
 a list of buffer objects, and the slice argument gives the number
 of iterations for each dimension along with the buffer offset and
 step size for each array and each dimension.