Merge remote-tracking branch 'upstream/main' into feature/pep-0440-packaging-regex-update

Hugo van Kemenade, 2023-09-20 21:50:46 +03:00
commit f6ef4ae2a5
710 changed files with 5559 additions and 2176 deletions

.gitattributes (4 changes)

@@ -3,3 +3,7 @@
 *.png binary
 *.pptx binary
 *.odp binary
+
+# Instruct linguist not to ignore the PEPs
+# https://github.com/github-linguist/linguist/blob/master/docs/overrides.md
+peps/*.rst text linguist-detectable

.github/CODEOWNERS (1311 changes)

File diff suppressed because it is too large


@@ -10,7 +10,7 @@ If your PEP is not Standards Track, remove the corresponding section.
 ## Basic requirements (all PEP Types)
 * [ ] Read and followed [PEP 1](https://peps.python.org/1) & [PEP 12](https://peps.python.org/12)
-* [ ] File created from the [latest PEP template](https://github.com/python/peps/blob/main/pep-0012/pep-NNNN.rst?plain=1)
+* [ ] File created from the [latest PEP template](https://github.com/python/peps/blob/main/peps/pep-0012/pep-NNNN.rst?plain=1)
 * [ ] PEP has next available number, & set in filename (``pep-NNNN.rst``), PR title (``PEP 123: <Title of PEP>``) and ``PEP`` header
 * [ ] Title clearly, accurately and concisely describes the content in 79 characters or less
 * [ ] Core dev/PEP editor listed as ``Author`` or ``Sponsor``, and formally confirmed their approval


@@ -14,6 +14,7 @@ concurrency:
 env:
   FORCE_COLOR: 1
+  RUFF_FORMAT: github
 jobs:
   pre-commit:
@@ -21,7 +22,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
       - name: Set up Python 3
         uses: actions/setup-python@v4
         with:
@@ -35,3 +36,17 @@ jobs:
         uses: pre-commit/action@v3.0.0
         with:
           extra_args: --all-files --hook-stage manual codespell || true
+
+  check-peps:
+    name: Run check-peps
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v3
+      - name: Set up Python 3
+        uses: actions/setup-python@v4
+        with:
+          python-version: "3"
+      - name: Run check-peps
+        run: python check-peps.py --detailed


@@ -30,7 +30,7 @@ jobs:
     steps:
       - name: Checkout
-        uses: actions/checkout@v3
+        uses: actions/checkout@v4
         with:
           fetch-depth: 0  # fetch all history so that last modified date-times are accurate


@@ -40,7 +40,7 @@ jobs:
           - "ubuntu-latest"
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
       - name: Set up Python ${{ matrix.python-version }}
         uses: actions/setup-python@v4
         with:

.gitignore (28 changes)

@@ -1,18 +1,24 @@
-coverage.xml
-pep-0000.txt
+# PEPs
 pep-0000.rst
-pep-????.html
 peps.rss
-topic
+/build
+
+# Bytecode
 __pycache__
-*.pyc
+*.py[co]
-*.pyo
+
+# Editors
 *~
-*env
+.idea
-.coverage
-.tox
 .vscode
 *.swp
-/build
-/package
+
+# Tests
+coverage.xml
-/topic
+.coverage
+.tox
+
+# Virtual environments
+*env
 /venv


@@ -43,7 +43,7 @@ repos:
       name: "Check YAML"
   - repo: https://github.com/psf/black
-    rev: 22.12.0
+    rev: 23.7.0
     hooks:
       - id: black
         name: "Format with Black"
@@ -52,22 +52,23 @@
           - '--target-version=py310'
         files: 'pep_sphinx_extensions/tests/.*'
-  - repo: https://github.com/PyCQA/isort
+  - repo: https://github.com/astral-sh/ruff-pre-commit
-    rev: 5.12.0
+    rev: v0.0.287
     hooks:
-      - id: isort
+      - id: ruff
-        name: "Sort imports with isort"
+        name: "Lint with Ruff"
-        args: ['--profile=black', '--atomic']
+        args:
-        files: 'pep_sphinx_extensions/tests/.*'
+          - '--exit-non-zero-on-fix'
+        files: '^pep_sphinx_extensions/tests/'
   - repo: https://github.com/tox-dev/tox-ini-fmt
-    rev: 0.6.1
+    rev: 1.3.1
     hooks:
       - id: tox-ini-fmt
         name: "Format tox.ini"
   - repo: https://github.com/sphinx-contrib/sphinx-lint
-    rev: v0.6.7
+    rev: v0.6.8
     hooks:
       - id: sphinx-lint
         name: "Sphinx lint"
@@ -79,20 +80,16 @@ repos:
     hooks:
       - id: rst-backticks
         name: "Check RST: No single backticks"
-        files: '^pep-\d\.txt|\.rst$'
-        types: [text]
       - id: rst-inline-touching-normal
         name: "Check RST: No backticks touching text"
-        files: '^pep-\d+\.txt|\.rst$'
-        types: [text]
       - id: rst-directive-colons
         name: "Check RST: 2 colons after directives"
-        files: '^pep-\d+\.txt|\.rst$'
-        types: [text]
   # Manual codespell check
   - repo: https://github.com/codespell-project/codespell
-    rev: v2.2.2
+    rev: v2.2.5
     hooks:
       - id: codespell
         name: "Check for common misspellings in text files"
@@ -101,152 +98,134 @@ repos:
   # Local checks for PEP headers and more
   - repo: local
     hooks:
-      - id: check-no-tabs
-        name: "Check tabs not used in PEPs"
-        language: pygrep
-        entry: '\t'
-        files: '^pep-\d+\.(rst|txt)$'
-        types: [text]
+      # # Hook to run "check-peps.py"
+      # - id: "check-peps"
+      #   name: "Check PEPs for metadata and content enforcement"
+      #   entry: "python check-peps.py"
+      #   language: "system"
+      #   files: "^pep-\d{4}\.(rst|txt)$"
+      #   require_serial: true
       - id: check-required-headers
         name: "PEPs must have all required headers"
         language: pygrep
         entry: '(?-m:^PEP:(?=[\s\S]*\nTitle:)(?=[\s\S]*\nAuthor:)(?=[\s\S]*\nStatus:)(?=[\s\S]*\nType:)(?=[\s\S]*\nContent-Type:)(?=[\s\S]*\nCreated:))'
         args: ['--negate', '--multiline']
-        files: '^pep-\d+\.(rst|txt)$'
+        files: '^peps/pep-\d+\.rst$'
-        types: [text]
       - id: check-header-order
         name: "PEP header order must follow PEP 12"
         language: pygrep
         entry: '^PEP:[^\n]+\nTitle:[^\n]+\n(Version:[^\n]+\n)?(Last-Modified:[^\n]+\n)?Author:[^\n]+\n( +\S[^\n]+\n)*(Sponsor:[^\n]+\n)?((PEP|BDFL)-Delegate:[^\n]*\n)?(Discussions-To:[^\n]*\n)?Status:[^\n]+\nType:[^\n]+\n(Topic:[^\n]+\n)?Content-Type:[^\n]+\n(Requires:[^\n]+\n)?Created:[^\n]+\n(Python-Version:[^\n]*\n)?(Post-History:[^\n]*\n( +\S[^\n]*\n)*)?(Replaces:[^\n]+\n)?(Superseded-By:[^\n]+\n)?(Resolution:[^\n]*\n)?\n'
         args: ['--negate', '--multiline']
-        files: '^pep-\d+\.(rst|txt)$'
+        files: '^peps/pep-\d+\.rst$'
-        types: [text]
       - id: validate-pep-number
         name: "'PEP' header must be a number 1-9999"
         language: pygrep
         entry: '(?-m:^PEP:(?:(?! +(0|[1-9][0-9]{0,3})\n)))'
         args: ['--multiline']
-        files: '^pep-\d+\.(rst|txt)$'
+        files: '^peps/pep-\d+\.rst$'
-        types: [text]
       - id: validate-title
         name: "'Title' must be 1-79 characters"
         language: pygrep
         entry: '(?<=\n)Title:(?:(?! +\S.{1,78}\n(?=[A-Z])))'
         args: ['--multiline']
-        files: '^pep-\d+\.(rst|txt)$'
+        files: '^peps/pep-\d+\.rst$'
-        exclude: '^pep-(0499)\.(rst|txt)$'
+        exclude: '^peps/pep-(0499)\.rst$'
-        types: [text]
       - id: validate-author
         name: "'Author' must be list of 'Name <email@example.com>, ...'"
         language: pygrep
         entry: '(?<=\n)Author:(?:(?!((( +|\n {1,8})[^!#$%&()*+,/:;<=>?@\[\\\]\^_`{|}~]+( <[\w!#$%&''*+\-/=?^_{|}~.]+(@| at )[\w\-.]+\.[A-Za-z0-9]+>)?)(,|(?=\n[^ ])))+\n(?=[A-Z])))'
-        args: [--multiline]
+        args: ["--multiline"]
-        files: '^pep-\d+\.(rst|txt)$'
+        files: '^peps/pep-\d+\.rst$'
-        types: [text]
       - id: validate-sponsor
         name: "'Sponsor' must have format 'Name <email@example.com>'"
         language: pygrep
         entry: '^Sponsor:(?: (?! *[^!#$%&()*+,/:;<=>?@\[\\\]\^_`{|}~]+( <[\w!#$%&''*+\-/=?^_{|}~.]+(@| at )[\w\-.]+\.[A-Za-z0-9]+>)?$))'
-        files: '^pep-\d+\.(rst|txt)$'
+        files: '^peps/pep-\d+\.rst$'
-        types: [text]
       - id: validate-delegate
         name: "'Delegate' must have format 'Name <email@example.com>'"
         language: pygrep
         entry: '^(PEP|BDFL)-Delegate: (?:(?! *[^!#$%&()*+,/:;<=>?@\[\\\]\^_`{|}~]+( <[\w!#$%&''*+\-/=?^_{|}~.]+(@| at )[\w\-.]+\.[A-Za-z0-9]+>)?$))'
-        files: '^pep-\d+\.(rst|txt)$'
+        files: '^peps/pep-\d+\.rst$'
-        exclude: '^pep-(0451)\.(rst|txt)$'
+        exclude: '^peps/pep-(0451)\.rst$'
-        types: [text]
       - id: validate-discussions-to
         name: "'Discussions-To' must be a thread URL"
         language: pygrep
         entry: '^Discussions-To: (?:(?!([\w\-]+@(python\.org|googlegroups\.com))|https://((discuss\.python\.org/t/([\w\-]+/)?\d+/?)|(mail\.python\.org/pipermail/[\w\-]+/\d{4}-[A-Za-z]+/[A-Za-z0-9]+\.html)|(mail\.python\.org/archives/list/[\w\-]+@python\.org/thread/[A-Za-z0-9]+/?))$))'
-        files: '^pep-\d+\.(rst|txt)$'
+        files: '^peps/pep-\d+\.rst$'
-        types: [text]
       - id: validate-status
         name: "'Status' must be a valid PEP status"
         language: pygrep
         entry: '^Status:(?:(?! +(Draft|Withdrawn|Rejected|Accepted|Final|Active|Provisional|Deferred|Superseded|April Fool!)$))'
-        files: '^pep-\d+\.(rst|txt)$'
+        files: '^peps/pep-\d+\.rst$'
-        types: [text]
       - id: validate-type
         name: "'Type' must be a valid PEP type"
         language: pygrep
         entry: '^Type:(?:(?! +(Standards Track|Informational|Process)$))'
-        files: '^pep-\d+\.(rst|txt)$'
+        files: '^peps/pep-\d+\.rst$'
-        types: [text]
       - id: validate-topic
         name: "'Topic' must be for a valid sub-index"
         language: pygrep
         entry: '^Topic:(?:(?! +(Governance|Packaging|Typing|Release)(, (Governance|Packaging|Typing|Release))*$))'
-        files: '^pep-\d+\.(rst|txt)$'
+        files: '^peps/pep-\d+\.rst$'
-        types: [text]
       - id: validate-content-type
         name: "'Content-Type' must be 'text/x-rst'"
         language: pygrep
         entry: '^Content-Type:(?:(?! +text/x-rst$))'
-        files: '^pep-\d+\.(rst|txt)$'
+        files: '^peps/pep-\d+\.rst$'
-        types: [text]
       - id: validate-pep-references
         name: "`Requires`/`Replaces`/`Superseded-By` must be 'NNN' PEP IDs"
         language: pygrep
         entry: '^(Requires|Replaces|Superseded-By):(?:(?! *( (0|[1-9][0-9]{0,3})(,|$))+$))'
-        files: '^pep-\d+\.(rst|txt)$'
+        files: '^peps/pep-\d+\.rst$'
-        types: [text]
       - id: validate-created
         name: "'Created' must be a 'DD-mmm-YYYY' date"
         language: pygrep
         entry: '^Created:(?:(?! +([0-2][0-9]|(3[01]))-(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)-(199[0-9]|20[0-9][0-9])$))'
-        files: '^pep-\d+\.(rst|txt)$'
+        files: '^peps/pep-\d+\.rst$'
-        types: [text]
       - id: validate-python-version
         name: "'Python-Version' must be a 'X.Y[.Z]` version"
         language: pygrep
         entry: '^Python-Version:(?:(?! *( [1-9]\.([0-9][0-9]?|x)(\.[1-9][0-9]?)?(,|$))+$))'
-        files: '^pep-\d+\.(rst|txt)$'
+        files: '^peps/pep-\d+\.rst$'
-        types: [text]
       - id: validate-post-history
         name: "'Post-History' must be '`DD-mmm-YYYY <Thread URL>`__, ...'"
         language: pygrep
         entry: '(?<=\n)Post-History:(?:(?! ?\n|((( +|\n {1,14})(([0-2][0-9]|(3[01]))-(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)-(199[0-9]|20[0-9][0-9])|`([0-2][0-9]|(3[01]))-(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)-(199[0-9]|20[0-9][0-9]) <https://((discuss\.python\.org/t/([\w\-]+/)?\d+(?:/\d+/|/?))|(mail\.python\.org/pipermail/[\w\-]+/\d{4}-[A-Za-z]+/[A-Za-z0-9]+\.html)|(mail\.python\.org/archives/list/[\w\-]+@python\.org/thread/[A-Za-z0-9]+/?(#[A-Za-z0-9]+)?))>`__)(,|(?=\n[^ ])))+\n(?=[A-Z\n]))))'
         args: [--multiline]
-        files: '^pep-\d+\.(rst|txt)$'
+        files: '^peps/pep-\d+\.rst$'
-        types: [text]
       - id: validate-resolution
         name: "'Resolution' must be a direct thread/message URL"
         language: pygrep
         entry: '(?<!\n\n)(?<=\n)Resolution: (?:(?!https://((discuss\.python\.org/t/([\w\-]+/)?\d+(/\d+)?/?)|(mail\.python\.org/pipermail/[\w\-]+/\d{4}-[A-Za-z]+/[A-Za-z0-9]+\.html)|(mail\.python\.org/archives/list/[\w\-]+@python\.org/(message|thread)/[A-Za-z0-9]+/?(#[A-Za-z0-9]+)?))\n))'
         args: ['--multiline']
-        files: '^pep-\d+\.(rst|txt)$'
+        files: '^peps/pep-\d+\.rst$'
-        types: [text]
       - id: check-direct-pep-links
         name: "Check that PEPs aren't linked directly"
         language: pygrep
         entry: '(dev/peps|peps\.python\.org)/pep-\d+'
-        files: '^pep-\d+\.(rst|txt)$'
+        files: '^peps/pep-\d+\.rst$'
-        exclude: '^pep-(0009|0287|0676|0684|8001)\.(rst|txt)$'
+        exclude: '^peps/pep-(0009|0287|0676|0684|8001)\.rst$'
-        types: [text]
       - id: check-direct-rfc-links
         name: "Check that RFCs aren't linked directly"
         language: pygrep
         entry: '(rfc-editor\.org|ietf\.org)/[\.\-_\?\&\#\w/]*[Rr][Ff][Cc][\-_]?\d+'
-        files: '\.(rst|txt)$'
+        types: ['rst']
-        types: [text]
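These pygrep hooks all follow one convention: pre-commit's pygrep language flags any line the pattern matches, so each entry is written with a negative lookahead that matches only *invalid* lines. A minimal Python sketch of the validate-status entry (pygrep patterns are ordinary Python regexes):

```python
import re

# The "validate-status" entry above, as a plain Python regex. It matches
# (i.e. flags) a "Status:" line only when the value is NOT one of the
# allowed statuses, thanks to the negative lookahead (?! ...).
STATUS = re.compile(
    r"^Status:(?:(?! +(Draft|Withdrawn|Rejected|Accepted|Final|Active"
    r"|Provisional|Deferred|Superseded|April Fool!)$))"
)

assert STATUS.match("Status: Draft") is None        # valid -> not flagged
assert STATUS.match("Status: Pending") is not None  # unknown status -> flagged
assert STATUS.match("Status:Draft") is not None     # missing space -> flagged
```

The same inverted-match trick explains the `--negate` argument on the header checks: there the pattern describes a *valid* file, and pygrep is told to report files that fail to match.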

.ruff.toml (new file, 15 lines)

@@ -0,0 +1,15 @@
ignore = [
"E501", # Line too long
]
select = [
"E", # pycodestyle errors
"F", # pyflakes
"I", # isort
"PT", # flake8-pytest-style
"W", # pycodestyle warnings
]
show-source = true
target-version = "py39"


@@ -1,13 +0,0 @@
-Overridden Name,Surname First,Name Reference
-The Python core team and community,"The Python core team and community",python-dev
-Erik De Bonte,"De Bonte, Erik",De Bonte
-Greg Ewing,"Ewing, Gregory",Ewing
-Guido van Rossum,"van Rossum, Guido (GvR)",GvR
-Inada Naoki,"Inada, Naoki",Inada
-Jim Jewett,"Jewett, Jim J.",Jewett
-Just van Rossum,"van Rossum, Just (JvR)",JvR
-Martin v. Löwis,"von Löwis, Martin",von Löwis
-Nathaniel Smith,"Smith, Nathaniel J.",Smith
-P.J. Eby,"Eby, Phillip J.",Eby
-Germán Méndez Bravo,"Méndez Bravo, Germán",Méndez Bravo
-Amethyst Reese,"Reese, Amethyst",Amethyst


@@ -12,7 +12,7 @@ OUTPUT_DIR = build
 SPHINXERRORHANDLING = -W --keep-going -w sphinx-warnings.txt
 ALLSPHINXOPTS = -b $(BUILDER) -j $(JOBS) \
-		$(SPHINXOPTS) $(SPHINXERRORHANDLING) . $(OUTPUT_DIR) $(SOURCES)
+		$(SPHINXOPTS) $(SPHINXERRORHANDLING) peps $(OUTPUT_DIR) $(SOURCES)

 ## html to render PEPs to "pep-NNNN.html" files
 .PHONY: html


@@ -5,6 +5,7 @@
 """Build script for Sphinx documentation"""
 import argparse
+import os
 from pathlib import Path

 from sphinx.application import Sphinx
@@ -27,15 +28,6 @@ def create_parser():
         help='Render PEPs to "index.html" files within "pep-NNNN" directories. '
              'Cannot be used with "-f" or "-l".')

-    # flags / options
-    parser.add_argument("-w", "--fail-on-warning", action="store_true",
-                        help="Fail the Sphinx build on any warning.")
-    parser.add_argument("-n", "--nitpicky", action="store_true",
-                        help="Run Sphinx in 'nitpicky' mode, "
-                             "warning on every missing reference target.")
-    parser.add_argument("-j", "--jobs", type=int, default=1,
-                        help="How many parallel jobs to run (if supported). "
-                             "Integer, default 1.")
     parser.add_argument(
         "-o",
         "--output-dir",
@@ -61,33 +53,23 @@ def create_index_file(html_root: Path, builder: str) -> None:
 if __name__ == "__main__":
     args = create_parser()

-    root_directory = Path(".").absolute()
+    root_directory = Path(__file__).resolve().parent
-    source_directory = root_directory
+    source_directory = root_directory / "peps"
     build_directory = root_directory / args.output_dir
-    doctree_directory = build_directory / ".doctrees"

     # builder configuration
-    if args.builder is not None:
+    sphinx_builder = args.builder or "html"
-        sphinx_builder = args.builder
-    else:
-        # default builder
-        sphinx_builder = "html"

-    # other configuration
-    config_overrides = {}
-    if args.nitpicky:
-        config_overrides["nitpicky"] = True

     app = Sphinx(
         source_directory,
         confdir=source_directory,
-        outdir=build_directory,
+        outdir=build_directory / sphinx_builder,
-        doctreedir=doctree_directory,
+        doctreedir=build_directory / "doctrees",
         buildername=sphinx_builder,
-        confoverrides=config_overrides,
+        warningiserror=True,
-        warningiserror=args.fail_on_warning,
+        parallel=os.cpu_count() or 1,
-        parallel=args.jobs,
         tags=["internal_builder"],
+        keep_going=True,
     )
     app.build()
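Two of the simplifications in this build script lean on `or` returning its first truthy operand: the builder default collapses an if/else, and `os.cpu_count() or 1` guards against `cpu_count()` returning `None` on platforms where the count is undetermined. A small sketch (the `pick_builder` helper is hypothetical, for illustration only):

```python
import os

def pick_builder(builder=None):
    # Equivalent to the replaced if/else: use the given builder, else "html".
    return builder or "html"

assert pick_builder() == "html"
assert pick_builder("dirhtml") == "dirhtml"

# os.cpu_count() may return None; "or 1" guarantees Sphinx always receives
# a positive parallelism value.
parallel = os.cpu_count() or 1
assert isinstance(parallel, int) and parallel >= 1
```

One subtlety of the `or` form: unlike the original `is not None` test, it also falls back to `"html"` if an empty string were ever passed as the builder name, which is the desired behaviour here.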

check-peps.py (new executable file, 605 lines)

@@ -0,0 +1,605 @@
#!/usr/bin/env python3
# This file is placed in the public domain or under the
# CC0-1.0-Universal license, whichever is more permissive.
"""check-peps: Check PEPs for common mistakes.
Usage: check-peps [-d | --detailed] <PEP files...>
Only the PEPs specified are checked.
If none are specified, all PEPs are checked.
Use "--detailed" to show the contents of lines where errors were found.
"""
from __future__ import annotations
import datetime as dt
import re
import sys
from pathlib import Path
TYPE_CHECKING = False
if TYPE_CHECKING:
from collections.abc import Iterable, Iterator, KeysView, Sequence
from typing import TypeAlias
# (line number, warning message)
Message: TypeAlias = tuple[int, str]
MessageIterator: TypeAlias = Iterator[Message]
# get the directory with the PEP sources
ROOT_DIR = Path(__file__).resolve().parent
PEP_ROOT = ROOT_DIR / "peps"
# See PEP 12 for the order
# Note we retain "BDFL-Delegate"
ALL_HEADERS = (
"PEP",
"Title",
"Version",
"Last-Modified",
"Author",
"Sponsor",
"BDFL-Delegate", "PEP-Delegate",
"Discussions-To",
"Status",
"Type",
"Topic",
"Content-Type",
"Requires",
"Created",
"Python-Version",
"Post-History",
"Replaces",
"Superseded-By",
"Resolution",
)
REQUIRED_HEADERS = frozenset({"PEP", "Title", "Author", "Status", "Type", "Created"})
# See PEP 1 for the full list
ALL_STATUSES = frozenset({
"Accepted",
"Active",
"April Fool!",
"Deferred",
"Draft",
"Final",
"Provisional",
"Rejected",
"Superseded",
"Withdrawn",
})
# PEPs that are allowed to link directly to PEPs
SKIP_DIRECT_PEP_LINK_CHECK = frozenset({"0009", "0287", "0676", "0684", "8001"})
DEFAULT_FLAGS = re.ASCII | re.IGNORECASE # Insensitive latin
# any sequence of letters or '-', followed by a single ':' and a space or end of line
HEADER_PATTERN = re.compile(r"^([a-z\-]+):(?: |$)", DEFAULT_FLAGS)
# any sequence of unicode letters or legal special characters
NAME_PATTERN = re.compile(r"(?:[^\W\d_]|[ ',\-.])+(?: |$)")
# any sequence of ASCII letters, digits, or legal special characters
EMAIL_LOCAL_PART_PATTERN = re.compile(r"[\w!#$%&'*+\-/=?^{|}~.]+", DEFAULT_FLAGS)
DISCOURSE_THREAD_PATTERN = re.compile(r"([\w\-]+/)?\d+", DEFAULT_FLAGS)
DISCOURSE_POST_PATTERN = re.compile(r"([\w\-]+/)?\d+(/\d+)?", DEFAULT_FLAGS)
MAILMAN_2_PATTERN = re.compile(r"[\w\-]+/\d{4}-[a-z]+/\d+\.html", DEFAULT_FLAGS)
MAILMAN_3_THREAD_PATTERN = re.compile(r"[\w\-]+@python\.org/thread/[a-z0-9]+/?", DEFAULT_FLAGS)
MAILMAN_3_MESSAGE_PATTERN = re.compile(r"[\w\-]+@python\.org/message/[a-z0-9]+/?(#[a-z0-9]+)?", DEFAULT_FLAGS)
# Controlled by the "--detailed" flag
DETAILED_ERRORS = False
def check(filenames: Sequence[str] = (), /) -> int:
"""The main entry-point."""
if filenames:
filenames = map(Path, filenames)
else:
filenames = PEP_ROOT.glob("pep-????.rst")
if (count := sum(map(check_file, filenames))) > 0:
s = "s" * (count != 1)
print(f"check-peps failed: {count} error{s}", file=sys.stderr)
return 1
return 0
def check_file(filename: Path, /) -> int:
filename = filename.resolve()
try:
content = filename.read_text(encoding="utf-8")
except FileNotFoundError:
return _output_error(filename, [""], [(0, "Could not read PEP!")])
else:
lines = content.splitlines()
return _output_error(filename, lines, check_peps(filename, lines))
def check_peps(filename: Path, lines: Sequence[str], /) -> MessageIterator:
yield from check_headers(lines)
for line_num, line in enumerate(lines, start=1):
if filename.stem.removeprefix("pep-") in SKIP_DIRECT_PEP_LINK_CHECK:
continue
yield from check_direct_links(line_num, line.lstrip())
def check_headers(lines: Sequence[str], /) -> MessageIterator:
yield from _validate_pep_number(next(iter(lines), ""))
found_headers = {}
line_num = 0
for line_num, line in enumerate(lines, start=1):
if line.strip() == "":
headers_end_line_num = line_num
break
if match := HEADER_PATTERN.match(line):
header = match[1]
if header in ALL_HEADERS:
if header not in found_headers:
found_headers[match[1]] = line_num
else:
yield line_num, f"Must not have duplicate header: {header}"
else:
yield line_num, f"Must not have invalid header: {header}"
else:
headers_end_line_num = line_num
yield from _validate_required_headers(found_headers.keys())
shifted_line_nums = list(found_headers.values())[1:]
for i, (header, line_num) in enumerate(found_headers.items()):
start = line_num - 1
end = headers_end_line_num - 1
if i < len(found_headers) - 1:
end = shifted_line_nums[i] - 1
remainder = "\n".join(lines[start:end]).removeprefix(f"{header}:")
if remainder != "":
if remainder[0] not in {" ", "\n"}:
yield line_num, f"Headers must have a space after the colon: {header}"
remainder = remainder.lstrip()
yield from _validate_header(header, line_num, remainder)
def _validate_header(header: str, line_num: int, content: str) -> MessageIterator:
if header == "Title":
yield from _validate_title(line_num, content)
elif header == "Author":
yield from _validate_author(line_num, content)
elif header == "Sponsor":
yield from _validate_sponsor(line_num, content)
elif header in {"BDFL-Delegate", "PEP-Delegate"}:
yield from _validate_delegate(line_num, content)
elif header == "Discussions-To":
yield from _validate_discussions_to(line_num, content)
elif header == "Status":
yield from _validate_status(line_num, content)
elif header == "Type":
yield from _validate_type(line_num, content)
elif header == "Topic":
yield from _validate_topic(line_num, content)
elif header == "Content-Type":
yield from _validate_content_type(line_num, content)
elif header in {"Requires", "Replaces", "Superseded-By"}:
yield from _validate_pep_references(line_num, content)
elif header == "Created":
yield from _validate_created(line_num, content)
elif header == "Python-Version":
yield from _validate_python_version(line_num, content)
elif header == "Post-History":
yield from _validate_post_history(line_num, content)
elif header == "Resolution":
yield from _validate_resolution(line_num, content)
def check_direct_links(line_num: int, line: str) -> MessageIterator:
"""Check that PEPs and RFCs aren't linked directly"""
line = line.lower()
if "dev/peps/pep-" in line or "peps.python.org/pep-" in line:
yield line_num, "Use the :pep:`NNN` role to refer to PEPs"
if "rfc-editor.org/rfc/" in line or "ietf.org/doc/html/rfc" in line:
yield line_num, "Use the :rfc:`NNN` role to refer to RFCs"
def _output_error(filename: Path, lines: Sequence[str], errors: Iterable[Message]) -> int:
relative_filename = filename.relative_to(ROOT_DIR)
err_count = 0
for line_num, msg in errors:
err_count += 1
print(f"{relative_filename}:{line_num}: {msg}")
if not DETAILED_ERRORS:
continue
line = lines[line_num - 1]
print(" |")
print(f"{line_num: >4} | '{line}'")
print(" |")
return err_count
###########################
# PEP Header Validators #
###########################
def _validate_required_headers(found_headers: KeysView[str]) -> MessageIterator:
"""PEPs must have all required headers, in the PEP 12 order"""
if missing := REQUIRED_HEADERS.difference(found_headers):
for missing_header in sorted(missing, key=ALL_HEADERS.index):
yield 1, f"Must have required header: {missing_header}"
ordered_headers = sorted(found_headers, key=ALL_HEADERS.index)
if list(found_headers) != ordered_headers:
order_str = ", ".join(ordered_headers)
yield 1, "Headers must be in PEP 12 order. Correct order: " + order_str
def _validate_pep_number(line: str) -> MessageIterator:
"""'PEP' header must be a number 1-9999"""
if not line.startswith("PEP: "):
yield 1, "PEP must begin with the 'PEP:' header"
return
pep_number = line.removeprefix("PEP: ").lstrip()
yield from _pep_num(1, pep_number, "'PEP:' header")
def _validate_title(line_num: int, line: str) -> MessageIterator:
"""'Title' must be 1-79 characters"""
if len(line) == 0:
yield line_num, "PEP must have a title"
elif len(line) > 79:
yield line_num, "PEP title must be less than 80 characters"
def _validate_author(line_num: int, body: str) -> MessageIterator:
"""'Author' must be list of 'Name <email@example.com>, …'"""
lines = body.split("\n")
for offset, line in enumerate(lines):
if offset >= 1 and line[:9].isspace():
# Checks for:
# Author: Alice
# Bob
# ^^^^
# Note that len("Author: ") == 8
yield line_num + offset, "Author line must not be over-indented"
if offset < len(lines) - 1:
if not line.endswith(","):
yield line_num + offset, "Author continuation lines must end with a comma"
for part in line.removesuffix(",").split(", "):
yield from _email(line_num + offset, part, "Author")
def _validate_sponsor(line_num: int, line: str) -> MessageIterator:
"""'Sponsor' must have format 'Name <email@example.com>'"""
yield from _email(line_num, line, "Sponsor")
def _validate_delegate(line_num: int, line: str) -> MessageIterator:
"""'Delegate' must have format 'Name <email@example.com>'"""
if line == "":
return
# PEP 451
if ", " in line:
for part in line.removesuffix(",").split(", "):
yield from _email(line_num, part, "Delegate")
return
yield from _email(line_num, line, "Delegate")
def _validate_discussions_to(line_num: int, line: str) -> MessageIterator:
"""'Discussions-To' must be a thread URL"""
yield from _thread(line_num, line, "Discussions-To", discussions_to=True)
if line.startswith("https://"):
return
for suffix in "@python.org", "@googlegroups.com":
if line.endswith(suffix):
remainder = line.removesuffix(suffix)
if re.fullmatch(r"[\w\-]+", remainder) is None:
yield line_num, "Discussions-To must be a valid mailing list"
return
yield line_num, "Discussions-To must be a valid thread URL or mailing list"
def _validate_status(line_num: int, line: str) -> MessageIterator:
"""'Status' must be a valid PEP status"""
if line not in ALL_STATUSES:
yield line_num, "Status must be a valid PEP status"
def _validate_type(line_num: int, line: str) -> MessageIterator:
"""'Type' must be a valid PEP type"""
if line not in {"Standards Track", "Informational", "Process"}:
yield line_num, "Type must be a valid PEP type"


def _validate_topic(line_num: int, line: str) -> MessageIterator:
"""'Topic' must be for a valid sub-index"""
topics = line.split(", ")
unique_topics = set(topics)
if len(topics) > len(unique_topics):
yield line_num, "Topic must not contain duplicates"
if unique_topics - {"Governance", "Packaging", "Typing", "Release"}:
if not all(map(str.istitle, unique_topics)):
yield line_num, "Topic must be properly capitalised (Title Case)"
if unique_topics - {"governance", "packaging", "typing", "release"}:
yield line_num, "Topic must be for a valid sub-index"
if sorted(topics) != topics:
yield line_num, "Topic must be sorted lexicographically"


def _validate_content_type(line_num: int, line: str) -> MessageIterator:
"""'Content-Type' must be 'text/x-rst'"""
if line != "text/x-rst":
yield line_num, "Content-Type must be 'text/x-rst'"


def _validate_pep_references(line_num: int, line: str) -> MessageIterator:
"""`Requires`/`Replaces`/`Superseded-By` must be 'NNN' PEP IDs"""
line = line.removesuffix(",").rstrip()
if line.count(", ") != line.count(","):
yield line_num, "PEP references must be separated by comma-spaces (', ')"
return
references = line.split(", ")
for reference in references:
yield from _pep_num(line_num, reference, "PEP reference")


def _validate_created(line_num: int, line: str) -> MessageIterator:
"""'Created' must be a 'DD-mmm-YYYY' date"""
yield from _date(line_num, line, "Created")


def _validate_python_version(line_num: int, line: str) -> MessageIterator:
"""'Python-Version' must be an ``X.Y[.Z]`` version"""
versions = line.split(", ")
for version in versions:
if version.count(".") not in {1, 2}:
yield line_num, f"Python-Version must have two or three segments: {version}"
continue
try:
major, minor, micro = version.split(".", 2)
except ValueError:
major, minor = version.split(".", 1)
micro = ""
if major not in "123":
yield line_num, f"Python-Version major part must be 1, 2, or 3: {version}"
if not _is_digits(minor) and minor != "x":
yield line_num, f"Python-Version minor part must be numeric: {version}"
elif minor != "0" and minor[0] == "0":
yield line_num, f"Python-Version minor part must not have leading zeros: {version}"
if micro == "":
return
if minor == "x":
yield line_num, f"Python-Version micro part must be empty if minor part is 'x': {version}"
elif micro[0] == "0":
yield line_num, f"Python-Version micro part must not have leading zeros: {version}"
elif not _is_digits(micro):
yield line_num, f"Python-Version micro part must be numeric: {version}"


def _validate_post_history(line_num: int, body: str) -> MessageIterator:
"""'Post-History' must be '`DD-mmm-YYYY <Thread URL>`__, …'"""
if body == "":
return
for offset, line in enumerate(body.removesuffix(",").split("\n"), start=line_num):
for post in line.removesuffix(",").strip().split(", "):
if not post.startswith("`") and not post.endswith(">`__"):
yield from _date(offset, post, "Post-History")
else:
post_date, post_url = post[1:-4].split(" <")
yield from _date(offset, post_date, "Post-History")
yield from _thread(offset, post_url, "Post-History")


def _validate_resolution(line_num: int, line: str) -> MessageIterator:
"""'Resolution' must be a direct thread/message URL"""
yield from _thread(line_num, line, "Resolution", allow_message=True)


########################
#  Validation Helpers  #
########################


def _pep_num(line_num: int, pep_number: str, prefix: str) -> MessageIterator:
if pep_number == "":
yield line_num, f"{prefix} must not be blank: {pep_number!r}"
return
if pep_number.startswith("0") and pep_number != "0":
yield line_num, f"{prefix} must not contain leading zeros: {pep_number!r}"
if not _is_digits(pep_number):
yield line_num, f"{prefix} must be numeric: {pep_number!r}"
elif not 0 <= int(pep_number) <= 9999:
yield line_num, f"{prefix} must be between 0 and 9999: {pep_number!r}"


def _is_digits(string: str) -> bool:
"""Match a string of ASCII digits ([0-9]+)."""
return string.isascii() and string.isdigit()
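The `isascii()` guard matters because `str.isdigit()` alone is true for many non-ASCII digit characters, which are not valid in PEP numbers or version fields. A small demonstration (the `is_ascii_digits` name is hypothetical):

```python
def is_ascii_digits(string: str) -> bool:
    """Match a string of ASCII digits ([0-9]+)."""
    return string.isascii() and string.isdigit()


# "٤" (ARABIC-INDIC DIGIT FOUR, U+0664) satisfies str.isdigit() ...
print("٤".isdigit())          # True
# ... but is rejected once the ASCII guard is applied.
print(is_ascii_digits("٤"))   # False
print(is_ascii_digits("42"))  # True
```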


def _email(line_num: int, author_email: str, prefix: str) -> MessageIterator:
author_email = author_email.strip()
if author_email.count("<") > 1:
msg = f"{prefix} entries must not contain multiple '<': {author_email!r}"
yield line_num, msg
if author_email.count(">") > 1:
msg = f"{prefix} entries must not contain multiple '>': {author_email!r}"
yield line_num, msg
if author_email.count("@") > 1:
msg = f"{prefix} entries must not contain multiple '@': {author_email!r}"
yield line_num, msg
author = author_email.split("<", 1)[0].rstrip()
if NAME_PATTERN.fullmatch(author) is None:
msg = f"{prefix} entries must begin with a valid 'Name': {author_email!r}"
yield line_num, msg
return
email_text = author_email.removeprefix(author)
if not email_text:
# Does not have the optional email part
return
if not email_text.startswith(" <") or not email_text.endswith(">"):
msg = f"{prefix} entries must be formatted as 'Name <email@example.com>': {author_email!r}"
yield line_num, msg
email_text = email_text.removeprefix(" <").removesuffix(">")
if "@" in email_text:
local, domain = email_text.rsplit("@", 1)
elif " at " in email_text:
local, domain = email_text.rsplit(" at ", 1)
else:
yield line_num, f"{prefix} entries must contain a valid email address: {author_email!r}"
return
if EMAIL_LOCAL_PART_PATTERN.fullmatch(local) is None or _invalid_domain(domain):
yield line_num, f"{prefix} entries must contain a valid email address: {author_email!r}"


def _invalid_domain(domain_part: str) -> bool:
*labels, root = domain_part.split(".")
for label in labels:
if not label.replace("-", "").isalnum():
return True
return not root.isalnum() or not root.isascii()
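`_invalid_domain` splits the domain on dots, permits hyphens inside every label except the last, and requires the final label (the TLD) to be ASCII-alphanumeric. A standalone sketch of the same logic:

```python
def invalid_domain(domain_part: str) -> bool:
    """Return True if a domain fails the simple label checks."""
    *labels, root = domain_part.split(".")
    for label in labels:
        # Hyphens are allowed inside a label; anything else must be
        # alphanumeric (an empty label also fails here).
        if not label.replace("-", "").isalnum():
            return True
    # The final label may not contain hyphens or non-ASCII text.
    return not root.isalnum() or not root.isascii()
```

For example, `python.org` and `mail-archive.example.com` pass, while `bad_label.example.com` (underscore) and `example.co-m` (hyphen in the TLD) are rejected.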


def _thread(line_num: int, url: str, prefix: str, *, allow_message: bool = False, discussions_to: bool = False) -> MessageIterator:
if allow_message and discussions_to:
msg = "allow_message and discussions_to cannot both be True"
raise ValueError(msg)
msg = f"{prefix} must be a valid thread URL"
if not url.startswith("https://"):
if not discussions_to:
yield line_num, msg
return
if url.startswith("https://discuss.python.org/t/"):
remainder = url.removeprefix("https://discuss.python.org/t/").removesuffix("/")
# Discussions-To links must be the thread itself, not a post
if discussions_to:
# The equivalent pattern is similar to '([\w\-]+/)?\d+',
# but the topic name must contain a non-numeric character
# We use ``str.rpartition`` as the topic name is optional
topic_name, _, topic_id = remainder.rpartition("/")
if topic_name == '' and _is_digits(topic_id):
return
topic_name = topic_name.replace("-", "0").replace("_", "0")
# the topic name must not be entirely numeric
valid_topic_name = not _is_digits(topic_name) and topic_name.isalnum()
if valid_topic_name and _is_digits(topic_id):
return
else:
# The equivalent pattern is similar to '([\w\-]+/)?\d+(/\d+)?',
# but the topic name must contain a non-numeric character
if remainder.count("/") == 2:
# When there are three parts, the URL must be "topic-name/topic-id/post-id".
topic_name, topic_id, post_id = remainder.rsplit("/", 2)
topic_name = topic_name.replace("-", "0").replace("_", "0")
valid_topic_name = not _is_digits(topic_name) and topic_name.isalnum()
if valid_topic_name and _is_digits(topic_id) and _is_digits(post_id):
# the topic name must not be entirely numeric
return
elif remainder.count("/") == 1:
# When there are only two parts, there's an ambiguity between
# "topic-name/topic-id" and "topic-id/post-id".
# We disambiguate by checking if the LHS is a valid name and
# the RHS is a valid topic ID (for the former),
# and then if both the LHS and RHS are valid IDs (for the latter).
left, right = remainder.rsplit("/")
left = left.replace("-", "0").replace("_", "0")
# the topic name must not be entirely numeric
left_is_name = not _is_digits(left) and left.isalnum()
if left_is_name and _is_digits(right):
return
elif _is_digits(left) and _is_digits(right):
return
else:
# When there's only one part, it must be a valid topic ID.
if _is_digits(remainder):
return
if url.startswith("https://mail.python.org/pipermail/"):
remainder = url.removeprefix("https://mail.python.org/pipermail/")
if MAILMAN_2_PATTERN.fullmatch(remainder) is not None:
return
if url.startswith("https://mail.python.org/archives/list/"):
remainder = url.removeprefix("https://mail.python.org/archives/list/")
if allow_message and MAILMAN_3_MESSAGE_PATTERN.fullmatch(remainder) is not None:
return
if MAILMAN_3_THREAD_PATTERN.fullmatch(remainder) is not None:
return
yield line_num, msg
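The recurring `topic_name.replace("-", "0").replace("_", "0")` trick normalises the separators Discourse allows onto a digit, so a single `isalnum()` plus `_is_digits` pair can test "made of word characters, but not entirely numeric". A standalone sketch (the `is_valid_topic_name` name is hypothetical):

```python
def is_valid_topic_name(name: str) -> bool:
    """Accept Discourse topic slugs that are not purely numeric."""
    # Map the allowed separators onto a digit, so isalnum() rejects any
    # other punctuation or whitespace ...
    normalised = name.replace("-", "0").replace("_", "0")
    # ... then require at least one non-digit character overall, so a
    # bare topic ID is not mistaken for a topic name.
    all_digits = normalised.isascii() and normalised.isdigit()
    return normalised.isalnum() and not all_digits


print(is_valid_topic_name("pep-701-discussion"))  # True
print(is_valid_topic_name("123456"))              # False: looks like an ID
print(is_valid_topic_name("bad topic"))           # False: space
```

This is why the two-part case above must disambiguate: `pep-701/1234` reads as "topic-name/topic-id", while `1234/5` can only be "topic-id/post-id".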


def _date(line_num: int, date_str: str, prefix: str) -> MessageIterator:
try:
parsed_date = dt.datetime.strptime(date_str, "%d-%b-%Y")
except ValueError:
yield line_num, f"{prefix} must be a 'DD-mmm-YYYY' date: {date_str!r}"
return
else:
if date_str[1] == "-": # Date must be zero-padded
yield line_num, f"{prefix} must be a 'DD-mmm-YYYY' date: {date_str!r}"
return
if parsed_date.year < 1990:
yield line_num, f"{prefix} must not be before Python was invented: {date_str!r}"
if parsed_date > (dt.datetime.now() + dt.timedelta(days=14)):
yield line_num, f"{prefix} must not be in the future: {date_str!r}"


if __name__ == "__main__":
if {"-h", "--help", "-?"}.intersection(sys.argv[1:]):
print(__doc__, file=sys.stderr)
raise SystemExit(0)
files = {}
for arg in sys.argv[1:]:
if not arg.startswith("-"):
files[arg] = None
elif arg in {"-d", "--detailed"}:
DETAILED_ERRORS = True
else:
print(f"Unknown option: {arg!r}", file=sys.stderr)
raise SystemExit(1)
raise SystemExit(check(files))

View File

@ -1,5 +1,4 @@
.. :author: Adam Turner
Author: Adam Turner
Building PEPs Locally Building PEPs Locally
@ -10,8 +9,8 @@ This can also be used to check that the PEP is valid reStructuredText before
submission to the PEP editors. submission to the PEP editors.
The rest of this document assumes you are working from a local clone of the The rest of this document assumes you are working from a local clone of the
`PEPs repository <https://github.com/python/peps>`__, with `PEPs repository <https://github.com/python/peps>`__,
**Python 3.9 or later** installed. with **Python 3.9 or later** installed.
Render PEPs locally Render PEPs locally
@ -51,11 +50,6 @@ Render PEPs locally
(venv) PS> python build.py (venv) PS> python build.py
.. note::
There may be a series of warnings about unreferenced citations or labels.
Whilst these are valid warnings, they do not impact the build process.
4. Navigate to the ``build`` directory of your PEPs repo to find the HTML pages. 4. Navigate to the ``build`` directory of your PEPs repo to find the HTML pages.
PEP 0 provides a formatted index, and may be a useful reference. PEP 0 provides a formatted index, and may be a useful reference.
@ -87,28 +81,8 @@ Check the validity of links within PEP sources (runs the `Sphinx linkchecker
.. code-block:: shell .. code-block:: shell
python build.py --check-links python build.py --check-links
make check-links make check-links
Stricter rendering
''''''''''''''''''
Run in `nit-picky <https://www.sphinx-doc.org/en/master/usage/configuration.html#confval-nitpicky>`__
mode.
This generates warnings for all missing references.
.. code-block:: shell
python build.py --nitpicky
Fail the build on any warning.
As of January 2022, there are around 250 warnings when building the PEPs.
.. code-block:: shell
python build.py --fail-on-warning
make fail-warning
``build.py`` usage ``build.py`` usage
@ -118,4 +92,4 @@ For details on the command-line options to the ``build.py`` script, run:
.. code-block:: shell .. code-block:: shell
python build.py --help python build.py --help

View File

@ -1,6 +1,6 @@
.. :author: Adam Turner
Author: Adam Turner
..
We can't use :pep:`N` references in this document, as they use links relative We can't use :pep:`N` references in this document, as they use links relative
to the current file, which doesn't work in a subdirectory like this one. to the current file, which doesn't work in a subdirectory like this one.
@ -9,7 +9,7 @@ An Overview of the PEP Rendering System
======================================= =======================================
This document provides an overview of the PEP rendering system, as a companion This document provides an overview of the PEP rendering system, as a companion
to :doc:`PEP 676 <../pep-0676>`. to `PEP 676 <https://peps.python.org/pep-0676/>`__.
1. Configuration 1. Configuration
@ -17,14 +17,14 @@ to :doc:`PEP 676 <../pep-0676>`.
Configuration is stored in three files: Configuration is stored in three files:
- ``conf.py`` contains the majority of the Sphinx configuration - ``peps/conf.py`` contains the majority of the Sphinx configuration
- ``contents.rst`` creates the Sphinx-mandated table of contents directive - ``peps/contents.rst`` contains the compulsory table of contents directive
- ``pep_sphinx_extensions/pep_theme/theme.conf`` sets the Pygments themes - ``pep_sphinx_extensions/pep_theme/theme.conf`` sets the Pygments themes
The configuration: The configuration:
- registers the custom Sphinx extension - registers the custom Sphinx extension
- sets both ``.txt`` and ``.rst`` suffixes to be parsed as PEPs - sets the ``.rst`` suffix to be parsed as PEPs
- tells Sphinx which source files to use - tells Sphinx which source files to use
- registers the PEP theme, maths renderer, and template - registers the PEP theme, maths renderer, and template
- disables some default settings that are covered in the extension - disables some default settings that are covered in the extension
@ -35,7 +35,7 @@ The configuration:
---------------- ----------------
``build.py`` manages the rendering process. ``build.py`` manages the rendering process.
Usage is covered in :doc:`build`. Usage is covered in `Building PEPs Locally <./build.rst>`_.
3. Extension 3. Extension
@ -110,7 +110,8 @@ This overrides the built-in ``:pep:`` role to return the correct URL.
3.4.2 ``PEPHeaders`` transform 3.4.2 ``PEPHeaders`` transform
****************************** ******************************
PEPs start with a set of :rfc:`2822` headers, per :doc:`PEP 1 <../pep-0001>`. PEPs start with a set of :rfc:`2822` headers,
per `PEP 1 <https://peps.python.org/pep-0001/>`__.
This transform validates that the required headers are present and of the This transform validates that the required headers are present and of the
correct data type, and removes headers not for display. correct data type, and removes headers not for display.
It must run before the ``PEPTitle`` transform. It must run before the ``PEPTitle`` transform.
@ -122,7 +123,7 @@ It must run before the ``PEPTitle`` transform.
We generate the title node from the parsed title in the PEP headers, and make We generate the title node from the parsed title in the PEP headers, and make
all nodes in the document children of the new title node. all nodes in the document children of the new title node.
This transform must also handle parsing reStructuredText markup within PEP This transform must also handle parsing reStructuredText markup within PEP
titles, such as :doc:`PEP 604 <../pep-0604>`. titles, such as `PEP 604 <https://peps.python.org/pep-0604/>`__.
3.4.4 ``PEPContents`` transform 3.4.4 ``PEPContents`` transform
@ -216,12 +217,9 @@ parse and validate that metadata.
After collecting and validating all the PEP data, the index itself is created in After collecting and validating all the PEP data, the index itself is created in
three steps: three steps:
1. Output the header text 1. Output the header text
2. Output the category and numerical indices 2. Output the category and numerical indices
3. Output the author index 3. Output the author index
The ``AUTHOR_OVERRIDES.csv`` file can be used to override an author's name in
the PEP 0 output.
We then add the newly created PEP 0 file to two Sphinx variables so that it will We then add the newly created PEP 0 file to two Sphinx variables so that it will
be processed as a normal source document. be processed as a normal source document.

View File

@ -28,7 +28,7 @@ def _update_config_for_builder(app: Sphinx) -> None:
app.env.document_ids = {} # For PEPReferenceRoleTitleText app.env.document_ids = {} # For PEPReferenceRoleTitleText
app.env.settings["builder"] = app.builder.name app.env.settings["builder"] = app.builder.name
if app.builder.name == "dirhtml": if app.builder.name == "dirhtml":
app.env.settings["pep_url"] = "pep-{:0>4}" app.env.settings["pep_url"] = "pep-{:0>4}/"
app.connect("build-finished", _post_build) # Post-build tasks app.connect("build-finished", _post_build) # Post-build tasks

View File

@ -17,9 +17,6 @@ RSS_DESCRIPTION = (
"and some meta-information like release procedure and schedules." "and some meta-information like release procedure and schedules."
) )
# get the directory with the PEP sources
PEP_ROOT = Path(__file__).parent
def _format_rfc_2822(datetime: dt.datetime) -> str: def _format_rfc_2822(datetime: dt.datetime) -> str:
datetime = datetime.replace(tzinfo=dt.timezone.utc) datetime = datetime.replace(tzinfo=dt.timezone.utc)

View File

@ -1,5 +1,3 @@
from pathlib import Path
from docutils import nodes from docutils import nodes
from docutils.frontend import OptionParser from docutils.frontend import OptionParser
from sphinx.builders.html import StandaloneHTMLBuilder from sphinx.builders.html import StandaloneHTMLBuilder
@ -22,6 +20,7 @@ class FileBuilder(StandaloneHTMLBuilder):
self.docwriter = HTMLWriter(self) self.docwriter = HTMLWriter(self)
_opt_parser = OptionParser([self.docwriter], defaults=self.env.settings, read_config_files=True) _opt_parser = OptionParser([self.docwriter], defaults=self.env.settings, read_config_files=True)
self.docsettings = _opt_parser.get_default_values() self.docsettings = _opt_parser.get_default_values()
self._orig_css_files = self._orig_js_files = []
def get_doc_context(self, docname: str, body: str, _metatags: str) -> dict: def get_doc_context(self, docname: str, body: str, _metatags: str) -> dict:
"""Collect items for the template context of a page.""" """Collect items for the template context of a page."""
@ -30,10 +29,6 @@ class FileBuilder(StandaloneHTMLBuilder):
except KeyError: except KeyError:
title = "" title = ""
# source filename
file_is_rst = Path(self.env.srcdir, docname + ".rst").exists()
source_name = f"{docname}.rst" if file_is_rst else f"{docname}.txt"
# local table of contents # local table of contents
toc_tree = self.env.tocs[docname].deepcopy() toc_tree = self.env.tocs[docname].deepcopy()
if len(toc_tree) and len(toc_tree[0]) > 1: if len(toc_tree) and len(toc_tree[0]) > 1:
@ -45,7 +40,7 @@ class FileBuilder(StandaloneHTMLBuilder):
else: else:
toc = "" # PEPs with no sections -- 9, 210 toc = "" # PEPs with no sections -- 9, 210
return {"title": title, "sourcename": source_name, "toc": toc, "body": body} return {"title": title, "toc": toc, "body": body}
class DirectoryBuilder(FileBuilder): class DirectoryBuilder(FileBuilder):

View File

@ -5,7 +5,6 @@ from __future__ import annotations
from docutils import nodes from docutils import nodes
from docutils.parsers import rst from docutils.parsers import rst
PYPA_SPEC_BASE_URL = "https://packaging.python.org/en/latest/specifications/" PYPA_SPEC_BASE_URL = "https://packaging.python.org/en/latest/specifications/"

View File

@ -1,4 +1,4 @@
import datetime as dt import time
from pathlib import Path from pathlib import Path
import subprocess import subprocess
@ -23,7 +23,7 @@ class PEPFooter(transforms.Transform):
def apply(self) -> None: def apply(self) -> None:
pep_source_path = Path(self.document["source"]) pep_source_path = Path(self.document["source"])
if not pep_source_path.match("pep-*"): if not pep_source_path.match("pep-????.???"):
return # not a PEP file, exit early return # not a PEP file, exit early
# Iterate through sections from the end of the document # Iterate through sections from the end of the document
@ -54,7 +54,7 @@ class PEPFooter(transforms.Transform):
def _add_source_link(pep_source_path: Path) -> nodes.paragraph: def _add_source_link(pep_source_path: Path) -> nodes.paragraph:
"""Add link to source text on VCS (GitHub)""" """Add link to source text on VCS (GitHub)"""
source_link = f"https://github.com/python/peps/blob/main/{pep_source_path.name}" source_link = f"https://github.com/python/peps/blob/main/peps/{pep_source_path.name}"
link_node = nodes.reference("", source_link, refuri=source_link) link_node = nodes.reference("", source_link, refuri=source_link)
return nodes.paragraph("", "Source: ", link_node) return nodes.paragraph("", "Source: ", link_node)
@ -62,12 +62,10 @@ def _add_source_link(pep_source_path: Path) -> nodes.paragraph:
def _add_commit_history_info(pep_source_path: Path) -> nodes.paragraph: def _add_commit_history_info(pep_source_path: Path) -> nodes.paragraph:
"""Use local git history to find last modified date.""" """Use local git history to find last modified date."""
try: try:
since_epoch = LAST_MODIFIED_TIMES[pep_source_path.name] iso_time = _LAST_MODIFIED_TIMES[pep_source_path.stem]
except KeyError: except KeyError:
return nodes.paragraph() return nodes.paragraph()
epoch_dt = dt.datetime.fromtimestamp(since_epoch, dt.timezone.utc)
iso_time = epoch_dt.isoformat(sep=" ")
commit_link = f"https://github.com/python/peps/commits/main/{pep_source_path.name}" commit_link = f"https://github.com/python/peps/commits/main/{pep_source_path.name}"
link_node = nodes.reference("", f"{iso_time} GMT", refuri=commit_link) link_node = nodes.reference("", f"{iso_time} GMT", refuri=commit_link)
return nodes.paragraph("", "Last modified: ", link_node) return nodes.paragraph("", "Last modified: ", link_node)
@ -75,29 +73,36 @@ def _add_commit_history_info(pep_source_path: Path) -> nodes.paragraph:
def _get_last_modified_timestamps(): def _get_last_modified_timestamps():
# get timestamps and changed files from all commits (without paging results) # get timestamps and changed files from all commits (without paging results)
args = ["git", "--no-pager", "log", "--format=#%at", "--name-only"] args = ("git", "--no-pager", "log", "--format=#%at", "--name-only")
with subprocess.Popen(args, stdout=subprocess.PIPE) as process: ret = subprocess.run(args, stdout=subprocess.PIPE, text=True, encoding="utf-8")
all_modified = process.stdout.read().decode("utf-8") if ret.returncode: # non-zero return code
process.stdout.close() return {}
if process.wait(): # non-zero return code all_modified = ret.stdout
return {}
# remove "peps/" prefix from file names
all_modified = all_modified.replace("\npeps/", "\n")
# set up the dictionary with the *current* files # set up the dictionary with the *current* files
last_modified = {path.name: 0 for path in Path().glob("pep-*") if path.suffix in {".txt", ".rst"}} peps_dir = Path(__file__, "..", "..", "..", "..", "peps").resolve()
last_modified = {path.stem: "" for path in peps_dir.glob("pep-????.rst")}
# iterate through newest to oldest, updating per file timestamps # iterate through newest to oldest, updating per file timestamps
change_sets = all_modified.removeprefix("#").split("#") change_sets = all_modified.removeprefix("#").split("#")
for change_set in change_sets: for change_set in change_sets:
timestamp, files = change_set.split("\n", 1) timestamp, files = change_set.split("\n", 1)
for file in files.strip().split("\n"): for file in files.strip().split("\n"):
if file.startswith("pep-") and file[-3:] in {"txt", "rst"}: if not file.startswith("pep-") or not file.endswith((".rst", ".txt")):
if last_modified.get(file) == 0: continue # not a PEP
try: file = file[:-4]
last_modified[file] = float(timestamp) if last_modified.get(file) != "":
except ValueError: continue # most recent modified date already found
pass # if float conversion fails try:
y, m, d, hh, mm, ss, *_ = time.gmtime(float(timestamp))
except ValueError:
continue # if float conversion fails
last_modified[file] = f"{y:04}-{m:02}-{d:02} {hh:02}:{mm:02}:{ss:02}"
return last_modified return last_modified
LAST_MODIFIED_TIMES = _get_last_modified_timestamps() _LAST_MODIFIED_TIMES = _get_last_modified_timestamps()

View File

@ -230,6 +230,22 @@ table th + th,
table td + td { table td + td {
border-left: 1px solid var(--colour-background-accent-strong); border-left: 1px solid var(--colour-background-accent-strong);
} }
/* Common column widths for PEP status tables */
table.pep-zero-table tr td:nth-child(1) {
width: 5.5%;
}
table.pep-zero-table tr td:nth-child(2) {
width: 6.5%;
}
table.pep-zero-table tr td:nth-child(3),
table.pep-zero-table tr td:nth-child(4){
width: 44%;
}
/* Authors & Sponsors table */
#authors-owners table td,
#authors-owners table th {
width: 50%;
}
/* Breadcrumbs rules */ /* Breadcrumbs rules */
section#pep-page-section > header { section#pep-page-section > header {

View File

@ -43,8 +43,8 @@
<h2>Contents</h2> <h2>Contents</h2>
{{ toc }} {{ toc }}
<br> <br>
{%- if not (sourcename.startswith("pep-0000") or sourcename.startswith("topic")) %} {%- if not pagename.startswith(("pep-0000", "topic")) %}
<a id="source" href="https://github.com/python/peps/blob/main/{{sourcename}}">Page Source (GitHub)</a> <a id="source" href="https://github.com/python/peps/blob/main/peps/{{pagename}}.rst">Page Source (GitHub)</a>
{%- endif %} {%- endif %}
</nav> </nav>
</section> </section>

View File

@ -1,89 +0,0 @@
from __future__ import annotations
from typing import NamedTuple
class _Name(NamedTuple):
mononym: str = None
forename: str = None
surname: str = None
suffix: str = None
class Author(NamedTuple):
"""Represent PEP authors."""
last_first: str # The author's name in Surname, Forename, Suffix order.
nick: str # Author's nickname for PEP tables. Defaults to surname.
email: str # The author's email address.
def parse_author_email(author_email_tuple: tuple[str, str], authors_overrides: dict[str, dict[str, str]]) -> Author:
"""Parse the name and email address of an author."""
name, email = author_email_tuple
_first_last = name.strip()
email = email.lower()
if _first_last in authors_overrides:
name_dict = authors_overrides[_first_last]
last_first = name_dict["Surname First"]
nick = name_dict["Name Reference"]
return Author(last_first, nick, email)
name_parts = _parse_name(_first_last)
if name_parts.mononym is not None:
return Author(name_parts.mononym, name_parts.mononym, email)
if name_parts.suffix:
last_first = f"{name_parts.surname}, {name_parts.forename}, {name_parts.suffix}"
return Author(last_first, name_parts.surname, email)
last_first = f"{name_parts.surname}, {name_parts.forename}"
return Author(last_first, name_parts.surname, email)
def _parse_name(full_name: str) -> _Name:
"""Decompose a full name into parts.
If a mononym (e.g, 'Aahz') then return the full name. If there are
suffixes in the name (e.g. ', Jr.' or 'II'), then find and extract
them. If there is a middle initial followed by a full stop, then
combine the following words into a surname (e.g. N. Vander Weele). If
there is a leading, lowercase portion to the last name (e.g. 'van' or
'von') then include it in the surname.
"""
possible_suffixes = {"Jr", "Jr.", "II", "III"}
pre_suffix, _, raw_suffix = full_name.partition(",")
name_parts = pre_suffix.strip().split(" ")
num_parts = len(name_parts)
suffix = raw_suffix.strip()
if name_parts == [""]:
raise ValueError("Name is empty!")
elif num_parts == 1:
return _Name(mononym=name_parts[0], suffix=suffix)
elif num_parts == 2:
return _Name(forename=name_parts[0].strip(), surname=name_parts[1], suffix=suffix)
# handles rogue uncaught suffixes
if name_parts[-1] in possible_suffixes:
suffix = f"{name_parts.pop(-1)} {suffix}".strip()
# handles von, van, v. etc.
if name_parts[-2].islower():
forename = " ".join(name_parts[:-2]).strip()
surname = " ".join(name_parts[-2:])
return _Name(forename=forename, surname=surname, suffix=suffix)
# handles double surnames after a middle initial (e.g. N. Vander Weele)
elif any(s.endswith(".") for s in name_parts):
split_position = [i for i, x in enumerate(name_parts) if x.endswith(".")][-1] + 1
forename = " ".join(name_parts[:split_position]).strip()
surname = " ".join(name_parts[split_position:])
return _Name(forename=forename, surname=surname, suffix=suffix)
# default to using the last item as the surname
else:
forename = " ".join(name_parts[:-1]).strip()
return _Name(forename=forename, surname=name_parts[-1], suffix=suffix)

View File

@ -2,13 +2,10 @@
from __future__ import annotations from __future__ import annotations
import csv import dataclasses
from email.parser import HeaderParser from email.parser import HeaderParser
from pathlib import Path from pathlib import Path
import re
from typing import TYPE_CHECKING
from pep_sphinx_extensions.pep_zero_generator.author import parse_author_email
from pep_sphinx_extensions.pep_zero_generator.constants import ACTIVE_ALLOWED from pep_sphinx_extensions.pep_zero_generator.constants import ACTIVE_ALLOWED
from pep_sphinx_extensions.pep_zero_generator.constants import HIDE_STATUS from pep_sphinx_extensions.pep_zero_generator.constants import HIDE_STATUS
from pep_sphinx_extensions.pep_zero_generator.constants import SPECIAL_STATUSES from pep_sphinx_extensions.pep_zero_generator.constants import SPECIAL_STATUSES
@ -19,16 +16,12 @@ from pep_sphinx_extensions.pep_zero_generator.constants import TYPE_STANDARDS
from pep_sphinx_extensions.pep_zero_generator.constants import TYPE_VALUES from pep_sphinx_extensions.pep_zero_generator.constants import TYPE_VALUES
from pep_sphinx_extensions.pep_zero_generator.errors import PEPError from pep_sphinx_extensions.pep_zero_generator.errors import PEPError
if TYPE_CHECKING:
from pep_sphinx_extensions.pep_zero_generator.author import Author
@dataclasses.dataclass(order=True, frozen=True)
# AUTHOR_OVERRIDES.csv is an exception file for PEP 0 name parsing class _Author:
AUTHOR_OVERRIDES: dict[str, dict[str, str]] = {} """Represent PEP authors."""
with open("AUTHOR_OVERRIDES.csv", "r", encoding="utf-8") as f: full_name: str # The author's name.
for line in csv.DictReader(f): email: str # The author's email address.
full_name = line.pop("Overridden Name")
AUTHOR_OVERRIDES[full_name] = line
class PEP: class PEP:
@ -97,7 +90,9 @@ class PEP:
         self.status: str = status

         # Parse PEP authors
-        self.authors: list[Author] = _parse_authors(self, metadata["Author"], AUTHOR_OVERRIDES)
+        self.authors: list[_Author] = _parse_author(metadata["Author"])
+        if not self.authors:
+            raise _raise_pep_error(self, "no authors found", pep_num=True)

         # Topic (for sub-indices)
         _topic = metadata.get("Topic", "").lower().split(",")
@@ -144,7 +139,7 @@ class PEP:
             # a tooltip representing the type and status
             "shorthand": self.shorthand,
             # the author list as a comma-separated with only last names
-            "authors": ", ".join(author.nick for author in self.authors),
+            "authors": ", ".join(author.full_name for author in self.authors),
         }

     @property
@@ -153,7 +148,7 @@ class PEP:
         return {
             "number": self.number,
             "title": self.title,
-            "authors": ", ".join(author.nick for author in self.authors),
+            "authors": ", ".join(author.full_name for author in self.authors),
             "discussions_to": self.discussions_to,
             "status": self.status,
             "type": self.pep_type,
@@ -175,41 +170,27 @@ def _raise_pep_error(pep: PEP, msg: str, pep_num: bool = False) -> None:
     raise PEPError(msg, pep.filename)


-def _parse_authors(pep: PEP, author_header: str, authors_overrides: dict) -> list[Author]:
-    """Parse Author header line"""
-    authors_and_emails = _parse_author(author_header)
-    if not authors_and_emails:
-        raise _raise_pep_error(pep, "no authors found", pep_num=True)
-    return [parse_author_email(author_tuple, authors_overrides) for author_tuple in authors_and_emails]
+jr_placeholder = ",Jr"


-author_angled = re.compile(r"(?P<author>.+?) <(?P<email>.+?)>(,\s*)?")
-author_paren = re.compile(r"(?P<email>.+?) \((?P<author>.+?)\)(,\s*)?")
-author_simple = re.compile(r"(?P<author>[^,]+)(,\s*)?")
-
-
-def _parse_author(data: str) -> list[tuple[str, str]]:
+def _parse_author(data: str) -> list[_Author]:
     """Return a list of author names and emails."""

     author_list = []
-    for regex in (author_angled, author_paren, author_simple):
-        for match in regex.finditer(data):
-            # Watch out for suffixes like 'Jr.' when they are comma-separated
-            # from the name and thus cause issues when *all* names are only
-            # separated by commas.
-            match_dict = match.groupdict()
-            author = match_dict["author"]
-            if not author.partition(" ")[1] and author.endswith("."):
-                prev_author = author_list.pop()
-                author = ", ".join([prev_author, author])
-            if "email" not in match_dict:
-                email = ""
-            else:
-                email = match_dict["email"]
-            author_list.append((author, email))
-
-        # If authors were found then stop searching as only expect one
-        # style of author citation.
-        if author_list:
-            break
+    data = (data.replace("\n", " ")
+                .replace(", Jr", jr_placeholder)
+                .rstrip().removesuffix(","))
+    for author_email in data.split(", "):
+        if ' <' in author_email:
+            author, email = author_email.removesuffix(">").split(" <")
+        else:
+            author, email = author_email, ""
+        author = author.strip()
+        if author == "":
+            raise ValueError("Name is empty!")
+        author = author.replace(jr_placeholder, ", Jr")
+        email = email.lower()
+        author_list.append(_Author(author, email))
     return author_list
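For reference, the new split-on-`", "` parsing strategy can be exercised standalone. The sketch below mirrors the rewritten `_parse_author`, but substitutes a plain `namedtuple` for the project's `_Author` type (an assumption; the real type is defined elsewhere in the package):

```python
from collections import namedtuple

# Stand-in for the project's _Author type (assumed to be a (full_name, email) pair).
_Author = namedtuple("_Author", ["full_name", "email"])

jr_placeholder = ",Jr"


def parse_author(data: str) -> list[_Author]:
    """Mirror of the rewritten _parse_author: split on ', ' instead of regexes."""
    author_list = []
    # Protect ', Jr' suffixes so splitting on ', ' keeps them with the name.
    data = (data.replace("\n", " ")
                .replace(", Jr", jr_placeholder)
                .rstrip().removesuffix(","))
    for author_email in data.split(", "):
        if " <" in author_email:
            author, email = author_email.removesuffix(">").split(" <")
        else:
            author, email = author_email, ""
        author = author.strip()
        if author == "":
            raise ValueError("Name is empty!")
        author = author.replace(jr_placeholder, ", Jr")
        email = email.lower()
        author_list.append(_Author(author, email))
    return author_list
```

For example, `parse_author("Guido van Rossum <Guido@python.org>,\n Barry Warsaw")` yields two entries, with the email lower-cased and an empty string where no email was given.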


@@ -18,22 +18,22 @@ to allow it to be processed as normal.
 from __future__ import annotations

 import json
+import os
 from pathlib import Path
 from typing import TYPE_CHECKING

-from pep_sphinx_extensions.pep_zero_generator.constants import SUBINDICES_BY_TOPIC
 from pep_sphinx_extensions.pep_zero_generator import parser
 from pep_sphinx_extensions.pep_zero_generator import subindices
 from pep_sphinx_extensions.pep_zero_generator import writer
+from pep_sphinx_extensions.pep_zero_generator.constants import SUBINDICES_BY_TOPIC

 if TYPE_CHECKING:
     from sphinx.application import Sphinx
     from sphinx.environment import BuildEnvironment


-def _parse_peps() -> list[parser.PEP]:
+def _parse_peps(path: Path) -> list[parser.PEP]:
     # Read from root directory
-    path = Path(".")
     peps: list[parser.PEP] = []

     for file_path in path.iterdir():
@@ -41,7 +41,7 @@ def _parse_peps() -> list[parser.PEP]:
             continue  # Skip directories etc.
         if file_path.match("pep-0000*"):
             continue  # Skip pre-existing PEP 0 files
-        if file_path.match("pep-????.???") and file_path.suffix in {".txt", ".rst"}:
+        if file_path.match("pep-????.rst"):
             pep = parser.PEP(path.joinpath(file_path).absolute())
             peps.append(pep)
@@ -52,8 +52,16 @@ def create_pep_json(peps: list[parser.PEP]) -> str:
     return json.dumps({pep.number: pep.full_details for pep in peps}, indent=1)


+def write_peps_json(peps: list[parser.PEP], path: Path) -> None:
+    # Create peps.json
+    json_peps = create_pep_json(peps)
+    Path(path, "peps.json").write_text(json_peps, encoding="utf-8")
+    os.makedirs(os.path.join(path, "api"), exist_ok=True)
+    Path(path, "api", "peps.json").write_text(json_peps, encoding="utf-8")
+
+
 def create_pep_zero(app: Sphinx, env: BuildEnvironment, docnames: list[str]) -> None:
-    peps = _parse_peps()
+    peps = _parse_peps(Path(app.srcdir))
     pep0_text = writer.PEPZeroWriter().write_pep0(peps, builder=env.settings["builder"])
     pep0_path = subindices.update_sphinx("pep-0000", pep0_text, docnames, env)
@@ -61,7 +69,4 @@ def create_pep_zero(app: Sphinx, env: BuildEnvironment, docnames: list[str]) -> None:
     subindices.generate_subindices(SUBINDICES_BY_TOPIC, peps, docnames, env)

-    # Create peps.json
-    json_path = Path(app.outdir, "api", "peps.json").resolve()
-    json_path.parent.mkdir(exist_ok=True)
-    json_path.write_text(create_pep_json(peps), encoding="utf-8")
+    write_peps_json(peps, Path(app.outdir))


@@ -2,6 +2,7 @@
 from __future__ import annotations

+import os
 from pathlib import Path
 from typing import TYPE_CHECKING
@@ -14,8 +15,7 @@ if TYPE_CHECKING:

 def update_sphinx(filename: str, text: str, docnames: list[str], env: BuildEnvironment) -> Path:
-    file_path = Path(f"{filename}.rst").resolve()
-    file_path.parent.mkdir(parents=True, exist_ok=True)
+    file_path = Path(env.srcdir, f"{filename}.rst")
     file_path.write_text(text, encoding="utf-8")

     # Add to files for builder
@@ -32,6 +32,9 @@ def generate_subindices(
     docnames: list[str],
     env: BuildEnvironment,
 ) -> None:
+    # create topic directory
+    os.makedirs(os.path.join(env.srcdir, "topic"), exist_ok=True)
+
     # Create sub index page
     generate_topic_contents(docnames, env)


@@ -2,14 +2,11 @@
 from __future__ import annotations

-import datetime as dt
 from typing import TYPE_CHECKING
 import unicodedata

-from pep_sphinx_extensions.pep_processor.transforms.pep_headers import (
-    ABBREVIATED_STATUSES,
-    ABBREVIATED_TYPES,
-)
+from pep_sphinx_extensions.pep_processor.transforms.pep_headers import ABBREVIATED_STATUSES
+from pep_sphinx_extensions.pep_processor.transforms.pep_headers import ABBREVIATED_TYPES
 from pep_sphinx_extensions.pep_zero_generator.constants import DEAD_STATUSES
 from pep_sphinx_extensions.pep_zero_generator.constants import STATUS_ACCEPTED
 from pep_sphinx_extensions.pep_zero_generator.constants import STATUS_ACTIVE
@@ -29,11 +26,10 @@ from pep_sphinx_extensions.pep_zero_generator.errors import PEPError
 if TYPE_CHECKING:
     from pep_sphinx_extensions.pep_zero_generator.parser import PEP

-HEADER = f"""\
+HEADER = """\
 PEP: 0
 Title: Index of Python Enhancement Proposals (PEPs)
-Last-Modified: {dt.date.today()}
-Author: python-dev <python-dev@python.org>
+Author: The PEP Editors
 Status: Active
 Type: Informational
 Content-Type: text/x-rst
@@ -149,7 +145,7 @@ class PEPZeroWriter:
             target = (
                 f"topic/{subindex}.html"
                 if builder == "html"
-                else f"../topic/{subindex}"
+                else f"../topic/{subindex}/"
             )
             self.emit_text(f"* `{subindex.title()} PEPs <{target}>`_")
             self.emit_newline()
@@ -241,7 +237,7 @@ class PEPZeroWriter:
         self.emit_newline()
         self.emit_newline()

-        pep0_string = "\n".join([str(s) for s in self.output])
+        pep0_string = "\n".join(map(str, self.output))
         return pep0_string
@@ -297,22 +293,22 @@ def _verify_email_addresses(peps: list[PEP]) -> dict[str, str]:
     for pep in peps:
         for author in pep.authors:
             # If this is the first time we have come across an author, add them.
-            if author.last_first not in authors_dict:
-                authors_dict[author.last_first] = set()
+            if author.full_name not in authors_dict:
+                authors_dict[author.full_name] = set()
             # If the new email is an empty string, move on.
             if not author.email:
                 continue
             # If the email has not been seen, add it to the list.
-            authors_dict[author.last_first].add(author.email)
+            authors_dict[author.full_name].add(author.email)

     valid_authors_dict: dict[str, str] = {}
     too_many_emails: list[tuple[str, set[str]]] = []
-    for last_first, emails in authors_dict.items():
+    for full_name, emails in authors_dict.items():
         if len(emails) > 1:
-            too_many_emails.append((last_first, emails))
+            too_many_emails.append((full_name, emails))
         else:
-            valid_authors_dict[last_first] = next(iter(emails), "")
+            valid_authors_dict[full_name] = next(iter(emails), "")

     if too_many_emails:
         err_output = []
         for author, emails in too_many_emails:


@@ -0,0 +1,12 @@
import importlib.util
import sys
from pathlib import Path
_ROOT_PATH = Path(__file__, "..", "..", "..").resolve()
PEP_ROOT = _ROOT_PATH / "peps"
# Import "check-peps.py" as "check_peps"
CHECK_PEPS_PATH = _ROOT_PATH / "check-peps.py"
spec = importlib.util.spec_from_file_location("check_peps", CHECK_PEPS_PATH)
sys.modules["check_peps"] = check_peps = importlib.util.module_from_spec(spec)
spec.loader.exec_module(check_peps)


@@ -0,0 +1,105 @@
import datetime as dt
import check_peps # NoQA: inserted into sys.modules in conftest.py
import pytest
@pytest.mark.parametrize(
"line",
[
# valid entries
"01-Jan-2000",
"29-Feb-2016",
"31-Dec-2000",
"01-Apr-2003",
"01-Apr-2007",
"01-Apr-2009",
"01-Jan-1990",
],
)
def test_validate_created(line: str):
warnings = [warning for (_, warning) in check_peps._validate_created(1, line)]
assert warnings == [], warnings
@pytest.mark.parametrize(
"date_str",
[
# valid entries
"01-Jan-2000",
"29-Feb-2016",
"31-Dec-2000",
"01-Apr-2003",
"01-Apr-2007",
"01-Apr-2009",
"01-Jan-1990",
],
)
def test_date_checker_valid(date_str: str):
warnings = [warning for (_, warning) in check_peps._date(1, date_str, "<Prefix>")]
assert warnings == [], warnings
@pytest.mark.parametrize(
"date_str",
[
# malformed
"2000-01-01",
"01 January 2000",
"1 Jan 2000",
"1-Jan-2000",
"1-January-2000",
"Jan-1-2000",
"January 1 2000",
"January 01 2000",
"01/01/2000",
"01/Jan/2000", # 🇬🇧, 🇦🇺, 🇨🇦, 🇳🇿, 🇮🇪 , ...
"Jan/01/2000", # 🇺🇸
"1st January 2000",
"The First day of January in the year of Our Lord Two Thousand",
"Jan, 1, 2000",
"2000-Jan-1",
"2000-Jan-01",
"2000-January-1",
"2000-January-01",
"00 Jan 2000",
"00-Jan-2000",
],
)
def test_date_checker_malformed(date_str: str):
warnings = [warning for (_, warning) in check_peps._date(1, date_str, "<Prefix>")]
expected = f"<Prefix> must be a 'DD-mmm-YYYY' date: {date_str!r}"
assert warnings == [expected], warnings
@pytest.mark.parametrize(
"date_str",
[
# too early
"31-Dec-1989",
"01-Apr-1916",
"01-Jan-0020",
"01-Jan-0023",
],
)
def test_date_checker_too_early(date_str: str):
warnings = [warning for (_, warning) in check_peps._date(1, date_str, "<Prefix>")]
expected = f"<Prefix> must not be before Python was invented: {date_str!r}"
assert warnings == [expected], warnings
@pytest.mark.parametrize(
"date_str",
[
# the future
"31-Dec-2999",
"01-Jan-2042",
"01-Jan-2100",
(dt.datetime.now() + dt.timedelta(days=15)).strftime("%d-%b-%Y"),
(dt.datetime.now() + dt.timedelta(days=100)).strftime("%d-%b-%Y"),
],
)
def test_date_checker_too_late(date_str: str):
warnings = [warning for (_, warning) in check_peps._date(1, date_str, "<Prefix>")]
expected = f"<Prefix> must not be in the future: {date_str!r}"
assert warnings == [expected], warnings
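The behaviour these parametrised cases pin down can be collapsed into a small checker. This is a reconstruction inferred from the test expectations, not the actual `check_peps._date` source; in particular the 14-day grace period for future dates is an assumption (the cases only show that today+15 days is rejected):

```python
import datetime as dt


def check_date(line_num: int, date_str: str, prefix: str):
    """Yield (line_num, warning) pairs for a 'DD-mmm-YYYY' PEP date value."""
    try:
        day, month, year = date_str.split("-")
        # Enforce zero padding: '1-Jan-2000' and '2000-Jan-01' are rejected
        # even though strptime would accept an unpadded day.
        if len(day) != 2 or len(month) != 3 or len(year) != 4:
            raise ValueError
        parsed = dt.datetime.strptime(date_str, "%d-%b-%Y").date()
    except ValueError:
        yield line_num, f"{prefix} must be a 'DD-mmm-YYYY' date: {date_str!r}"
        return
    if parsed < dt.date(1990, 1, 1):
        yield line_num, f"{prefix} must not be before Python was invented: {date_str!r}"
    if parsed > dt.date.today() + dt.timedelta(days=14):
        yield line_num, f"{prefix} must not be in the future: {date_str!r}"
```

Note that `%b` month parsing is locale-dependent; the tests assume English month abbreviations.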


@@ -0,0 +1,30 @@
import check_peps # NoQA: inserted into sys.modules in conftest.py
import pytest
@pytest.mark.parametrize(
"line",
[
"http://www.python.org/dev/peps/pep-0000/",
"https://www.python.org/dev/peps/pep-0000/",
"http://peps.python.org/pep-0000/",
"https://peps.python.org/pep-0000/",
],
)
def test_check_direct_links_pep(line: str):
warnings = [warning for (_, warning) in check_peps.check_direct_links(1, line)]
assert warnings == ["Use the :pep:`NNN` role to refer to PEPs"], warnings
@pytest.mark.parametrize(
"line",
[
"http://www.rfc-editor.org/rfc/rfc2324",
"https://www.rfc-editor.org/rfc/rfc2324",
"http://datatracker.ietf.org/doc/html/rfc2324",
"https://datatracker.ietf.org/doc/html/rfc2324",
],
)
def test_check_direct_links_rfc(line: str):
warnings = [warning for (_, warning) in check_peps.check_direct_links(1, line)]
assert warnings == ["Use the :rfc:`NNN` role to refer to RFCs"], warnings
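A checker consistent with these cases can be sketched with two regular expressions. The patterns below are reconstructed from the URLs exercised above and are an assumption, not the exact `check_peps.check_direct_links` implementation:

```python
import re

# Direct links that should instead use the Sphinx :pep:/:rfc: roles.
PEP_LINK = re.compile(r"(?:python\.org/dev/peps|peps\.python\.org)/pep-\d+")
RFC_LINK = re.compile(r"(?:rfc-editor\.org/rfc/|datatracker\.ietf\.org/doc/html/)rfc\d+")


def link_warnings(line_num: int, line: str):
    """Yield (line_num, warning) pairs for direct PEP/RFC links on a line."""
    if PEP_LINK.search(line):
        yield line_num, "Use the :pep:`NNN` role to refer to PEPs"
    if RFC_LINK.search(line):
        yield line_num, "Use the :rfc:`NNN` role to refer to RFCs"
```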


@@ -0,0 +1,238 @@
import check_peps # NoQA: inserted into sys.modules in conftest.py
import pytest
@pytest.mark.parametrize(
"line",
[
"Alice",
"Alice,",
"Alice, Bob, Charlie",
"Alice,\nBob,\nCharlie",
"Alice,\n Bob,\n Charlie",
"Alice,\n Bob,\n Charlie",
"Cardinal Ximénez",
"Alice <alice@domain.example>",
"Cardinal Ximénez <Cardinal.Ximenez@spanish.inquisition>",
],
ids=repr, # the default calls str and renders newlines.
)
def test_validate_author(line: str):
warnings = [warning for (_, warning) in check_peps._validate_author(1, line)]
assert warnings == [], warnings
@pytest.mark.parametrize(
"line",
[
"Alice,\n Bob,\n Charlie",
"Alice,\n Bob,\n Charlie",
"Alice,\n Bob,\n Charlie",
"Alice,\n Bob",
],
ids=repr, # the default calls str and renders newlines.
)
def test_validate_author_over_indented(line: str):
warnings = [warning for (_, warning) in check_peps._validate_author(1, line)]
assert {*warnings} == {"Author line must not be over-indented"}, warnings
@pytest.mark.parametrize(
"line",
[
"Cardinal Ximénez\nCardinal Biggles\nCardinal Fang",
"Cardinal Ximénez,\nCardinal Biggles\nCardinal Fang",
"Cardinal Ximénez\nCardinal Biggles,\nCardinal Fang",
],
ids=repr, # the default calls str and renders newlines.
)
def test_validate_author_continuation(line: str):
warnings = [warning for (_, warning) in check_peps._validate_author(1, line)]
assert {*warnings} == {"Author continuation lines must end with a comma"}, warnings
@pytest.mark.parametrize(
"line",
[
"Alice",
"Cardinal Ximénez",
"Alice <alice@domain.example>",
"Cardinal Ximénez <Cardinal.Ximenez@spanish.inquisition>",
],
)
def test_validate_sponsor(line: str):
warnings = [warning for (_, warning) in check_peps._validate_sponsor(1, line)]
assert warnings == [], warnings
@pytest.mark.parametrize(
"line",
[
"",
"Alice, Bob, Charlie",
"Alice, Bob, Charlie,",
"Alice <alice@domain.example>",
"Cardinal Ximénez <Cardinal.Ximenez@spanish.inquisition>",
],
)
def test_validate_delegate(line: str):
warnings = [warning for (_, warning) in check_peps._validate_delegate(1, line)]
assert warnings == [], warnings
@pytest.mark.parametrize(
("email", "expected_warnings"),
[
# ... entries must not contain multiple '...'
("Cardinal Ximénez <<", {"multiple <"}),
("Cardinal Ximénez <<<", {"multiple <"}),
("Cardinal Ximénez >>", {"multiple >"}),
("Cardinal Ximénez >>>", {"multiple >"}),
("Cardinal Ximénez <<<>>>", {"multiple <", "multiple >"}),
("Cardinal Ximénez @@", {"multiple @"}),
("Cardinal Ximénez <<@@@>", {"multiple <", "multiple @"}),
("Cardinal Ximénez <@@@>>", {"multiple >", "multiple @"}),
("Cardinal Ximénez <<@@>>", {"multiple <", "multiple >", "multiple @"}),
# valid names
("Cardinal Ximénez", set()),
(" Cardinal Ximénez", set()),
("\t\tCardinal Ximénez", set()),
("Cardinal Ximénez ", set()),
("Cardinal Ximénez\t\t", set()),
("Cardinal O'Ximénez", set()),
("Cardinal Ximénez, Inquisitor", set()),
("Cardinal Ximénez-Biggles", set()),
("Cardinal Ximénez-Biggles, Inquisitor", set()),
("Cardinal T. S. I. Ximénez", set()),
# ... entries must have a valid 'Name'
("Cardinal_Ximénez", {"valid name"}),
("Cardinal Ximénez 3", {"valid name"}),
("~ Cardinal Ximénez ~", {"valid name"}),
("Cardinal Ximénez!", {"valid name"}),
("@Cardinal Ximénez", {"valid name"}),
("Cardinal_Ximénez <>", {"valid name"}),
("Cardinal Ximénez 3 <>", {"valid name"}),
("~ Cardinal Ximénez ~ <>", {"valid name"}),
("Cardinal Ximénez! <>", {"valid name"}),
("@Cardinal Ximénez <>", {"valid name"}),
# ... entries must be formatted as 'Name <email@example.com>'
("Cardinal Ximénez<>", {"name <email>"}),
("Cardinal Ximénez<", {"name <email>"}),
("Cardinal Ximénez <", {"name <email>"}),
("Cardinal Ximénez <", {"name <email>"}),
("Cardinal Ximénez <>", {"name <email>"}),
# ... entries must contain a valid email address (missing)
("Cardinal Ximénez <>", {"valid email"}),
("Cardinal Ximénez <> ", {"valid email"}),
("Cardinal Ximénez <@> ", {"valid email"}),
("Cardinal Ximénez <at> ", {"valid email"}),
("Cardinal Ximénez < at > ", {"valid email"}),
# ... entries must contain a valid email address (local)
("Cardinal Ximénez <Cardinal.Ximénez@spanish.inquisition>", {"valid email"}),
("Cardinal Ximénez <Cardinal.Ximénez at spanish.inquisition>", {"valid email"}),
("Cardinal Ximénez <Cardinal.Ximenez AT spanish.inquisition>", {"valid email"}),
("Cardinal Ximénez <Cardinal.Ximenez @spanish.inquisition> ", {"valid email"}),
("Cardinal Ximénez <Cardinal Ximenez@spanish.inquisition> ", {"valid email"}),
("Cardinal Ximénez < Cardinal Ximenez @spanish.inquisition> ", {"valid email"}),
("Cardinal Ximénez <(Cardinal.Ximenez)@spanish.inquisition>", {"valid email"}),
("Cardinal Ximénez <Cardinal,Ximenez@spanish.inquisition>", {"valid email"}),
("Cardinal Ximénez <Cardinal:Ximenez@spanish.inquisition>", {"valid email"}),
("Cardinal Ximénez <Cardinal;Ximenez@spanish.inquisition>", {"valid email"}),
(
"Cardinal Ximénez <Cardinal><Ximenez@spanish.inquisition>",
{"multiple <", "multiple >", "valid email"},
),
(
"Cardinal Ximénez <Cardinal@Ximenez@spanish.inquisition>",
{"multiple @", "valid email"},
),
(r"Cardinal Ximénez <Cardinal\Ximenez@spanish.inquisition>", {"valid email"}),
("Cardinal Ximénez <[Cardinal.Ximenez]@spanish.inquisition>", {"valid email"}),
('Cardinal Ximénez <"Cardinal"Ximenez"@spanish.inquisition>', {"valid email"}),
("Cardinal Ximenez <Cardinal;Ximenez@spanish.inquisition>", {"valid email"}),
("Cardinal Ximénez <Cardinal£Ximénez@spanish.inquisition>", {"valid email"}),
("Cardinal Ximénez <Cardinal§Ximenez@spanish.inquisition>", {"valid email"}),
# ... entries must contain a valid email address (domain)
(
"Cardinal Ximénez <Cardinal.Ximenez@spanish+american.inquisition>",
{"valid email"},
),
("Cardinal Ximénez <Cardinal.Ximenez@spani$h.inquisition>", {"valid email"}),
("Cardinal Ximénez <Cardinal.Ximenez@spanish.inquisitioñ>", {"valid email"}),
(
"Cardinal Ximénez <Cardinal.Ximenez@th£.spanish.inquisition>",
{"valid email"},
),
# valid name-emails
("Cardinal Ximénez <Cardinal.Ximenez@spanish.inquisition>", set()),
("Cardinal Ximénez <Cardinal.Ximenez at spanish.inquisition>", set()),
("Cardinal Ximénez <Cardinal_Ximenez@spanish.inquisition>", set()),
("Cardinal Ximénez <Cardinal-Ximenez@spanish.inquisition>", set()),
("Cardinal Ximénez <Cardinal!Ximenez@spanish.inquisition>", set()),
("Cardinal Ximénez <Cardinal#Ximenez@spanish.inquisition>", set()),
("Cardinal Ximénez <Cardinal$Ximenez@spanish.inquisition>", set()),
("Cardinal Ximénez <Cardinal%Ximenez@spanish.inquisition>", set()),
("Cardinal Ximénez <Cardinal&Ximenez@spanish.inquisition>", set()),
("Cardinal Ximénez <Cardinal'Ximenez@spanish.inquisition>", set()),
("Cardinal Ximénez <Cardinal*Ximenez@spanish.inquisition>", set()),
("Cardinal Ximénez <Cardinal+Ximenez@spanish.inquisition>", set()),
("Cardinal Ximénez <Cardinal/Ximenez@spanish.inquisition>", set()),
("Cardinal Ximénez <Cardinal=Ximenez@spanish.inquisition>", set()),
("Cardinal Ximénez <Cardinal?Ximenez@spanish.inquisition>", set()),
("Cardinal Ximénez <Cardinal^Ximenez@spanish.inquisition>", set()),
("Cardinal Ximénez <{Cardinal.Ximenez}@spanish.inquisition>", set()),
("Cardinal Ximénez <Cardinal|Ximenez@spanish.inquisition>", set()),
("Cardinal Ximénez <Cardinal~Ximenez@spanish.inquisition>", set()),
("Cardinal Ximénez <Cardinal.Ximenez@español.inquisition>", set()),
("Cardinal Ximénez <Cardinal.Ximenez at español.inquisition>", set()),
("Cardinal Ximénez <Cardinal.Ximenez@spanish-american.inquisition>", set()),
],
# call str() on each parameterised value in the test ID.
ids=str,
)
def test_email_checker(email: str, expected_warnings: set):
warnings = [warning for (_, warning) in check_peps._email(1, email, "<Prefix>")]
found_warnings = set()
email = email.strip()
if "multiple <" in expected_warnings:
found_warnings.add("multiple <")
expected = f"<Prefix> entries must not contain multiple '<': {email!r}"
matching = [w for w in warnings if w == expected]
assert matching == [expected], warnings
if "multiple >" in expected_warnings:
found_warnings.add("multiple >")
expected = f"<Prefix> entries must not contain multiple '>': {email!r}"
matching = [w for w in warnings if w == expected]
assert matching == [expected], warnings
if "multiple @" in expected_warnings:
found_warnings.add("multiple @")
expected = f"<Prefix> entries must not contain multiple '@': {email!r}"
matching = [w for w in warnings if w == expected]
assert matching == [expected], warnings
if "valid name" in expected_warnings:
found_warnings.add("valid name")
expected = f"<Prefix> entries must begin with a valid 'Name': {email!r}"
matching = [w for w in warnings if w == expected]
assert matching == [expected], warnings
if "name <email>" in expected_warnings:
found_warnings.add("name <email>")
expected = f"<Prefix> entries must be formatted as 'Name <email@example.com>': {email!r}"
matching = [w for w in warnings if w == expected]
assert matching == [expected], warnings
if "valid email" in expected_warnings:
found_warnings.add("valid email")
expected = f"<Prefix> entries must contain a valid email address: {email!r}"
matching = [w for w in warnings if w == expected]
assert matching == [expected], warnings
if expected_warnings == set():
assert warnings == [], warnings
assert found_warnings == expected_warnings


@@ -0,0 +1,408 @@
import check_peps # NoQA: inserted into sys.modules in conftest.py
import pytest
@pytest.mark.parametrize(
("test_input", "expected"),
[
# capitalisation
("Header:", "Header"),
("header:", "header"),
("hEADER:", "hEADER"),
("hEaDeR:", "hEaDeR"),
# trailing spaces
("Header: ", "Header"),
("Header: ", "Header"),
("Header: \t", "Header"),
# trailing content
("Header: Text", "Header"),
("Header: 123", "Header"),
("Header: !", "Header"),
# separators
("Hyphenated-Header:", "Hyphenated-Header"),
],
)
def test_header_pattern(test_input, expected):
assert check_peps.HEADER_PATTERN.match(test_input)[1] == expected
@pytest.mark.parametrize(
"test_input",
[
# trailing content
"Header:Text",
"Header:123",
"Header:!",
# colon position
"Header",
"Header : ",
"Header :",
"SemiColonHeader;",
# separators
"Underscored_Header:",
"Spaced Header:",
"Plus+Header:",
],
)
def test_header_pattern_no_match(test_input):
assert check_peps.HEADER_PATTERN.match(test_input) is None
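The matching and non-matching cases together characterise `HEADER_PATTERN`. A pattern satisfying all of them can be reconstructed as follows; this is an inferred sketch, not necessarily the exact regex in check-peps:

```python
import re

# A header name is a letter followed by letters/hyphens, then a colon
# immediately followed by a space or end-of-line; group 1 is the name.
HEADER_PATTERN = re.compile(r"^([a-zA-Z][a-zA-Z-]*):(?: |$)")
```

Underscores, plus signs, spaces inside the name, and a missing space after the colon all fail to match, exactly as the negative cases above require.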
def test_validate_required_headers():
found_headers = dict.fromkeys(
("PEP", "Title", "Author", "Status", "Type", "Created")
)
warnings = [
warning for (_, warning) in check_peps._validate_required_headers(found_headers)
]
assert warnings == [], warnings
def test_validate_required_headers_missing():
found_headers = dict.fromkeys(("PEP", "Title", "Author", "Type"))
warnings = [
warning for (_, warning) in check_peps._validate_required_headers(found_headers)
]
assert warnings == [
"Must have required header: Status",
"Must have required header: Created",
], warnings
def test_validate_required_headers_order():
found_headers = dict.fromkeys(
("PEP", "Title", "Sponsor", "Author", "Type", "Status", "Replaces", "Created")
)
warnings = [
warning for (_, warning) in check_peps._validate_required_headers(found_headers)
]
assert warnings == [
"Headers must be in PEP 12 order. Correct order: PEP, Title, Author, Sponsor, Status, Type, Created, Replaces"
], warnings
@pytest.mark.parametrize(
"line",
[
"!",
"The Zen of Python",
"A title that is exactly 79 characters long, but shorter than 80 characters long",
],
)
def test_validate_title(line: str):
warnings = [warning for (_, warning) in check_peps._validate_title(1, line)]
assert warnings == [], warnings
def test_validate_title_blank():
    warnings = [warning for (_, warning) in check_peps._validate_title(1, "")]
    assert warnings == ["PEP must have a title"], warnings
def test_validate_title_too_long():
    warnings = [warning for (_, warning) in check_peps._validate_title(1, "-" * 80)]
    assert warnings == ["PEP title must be less than 80 characters"], warnings
@pytest.mark.parametrize(
"line",
[
"Accepted",
"Active",
"April Fool!",
"Deferred",
"Draft",
"Final",
"Provisional",
"Rejected",
"Superseded",
"Withdrawn",
],
)
def test_validate_status_valid(line: str):
warnings = [warning for (_, warning) in check_peps._validate_status(1, line)]
assert warnings == [], warnings
@pytest.mark.parametrize(
"line",
[
"Standards Track",
"Informational",
"Process",
"accepted",
"active",
"april fool!",
"deferred",
"draft",
"final",
"provisional",
"rejected",
"superseded",
"withdrawn",
],
)
def test_validate_status_invalid(line: str):
warnings = [warning for (_, warning) in check_peps._validate_status(1, line)]
assert warnings == ["Status must be a valid PEP status"], warnings
@pytest.mark.parametrize(
"line",
[
"Standards Track",
"Informational",
"Process",
],
)
def test_validate_type_valid(line: str):
warnings = [warning for (_, warning) in check_peps._validate_type(1, line)]
assert warnings == [], warnings
@pytest.mark.parametrize(
"line",
[
"standards track",
"informational",
"process",
"Accepted",
"Active",
"April Fool!",
"Deferred",
"Draft",
"Final",
"Provisional",
"Rejected",
"Superseded",
"Withdrawn",
],
)
def test_validate_type_invalid(line: str):
warnings = [warning for (_, warning) in check_peps._validate_type(1, line)]
assert warnings == ["Type must be a valid PEP type"], warnings
@pytest.mark.parametrize(
("line", "expected_warnings"),
[
# valid entries
("Governance", set()),
("Packaging", set()),
("Typing", set()),
("Release", set()),
("Governance, Packaging", set()),
("Packaging, Typing", set()),
# duplicates
("Governance, Governance", {"duplicates"}),
("Release, Release", {"duplicates"}),
("Packaging, Packaging", {"duplicates"}),
("Spam, Spam", {"duplicates", "valid"}),
("lobster, lobster", {"duplicates", "capitalisation", "valid"}),
("governance, governance", {"duplicates", "capitalisation"}),
# capitalisation
("governance", {"capitalisation"}),
("packaging", {"capitalisation"}),
("typing", {"capitalisation"}),
("release", {"capitalisation"}),
("Governance, release", {"capitalisation"}),
# validity
("Spam", {"valid"}),
("lobster", {"capitalisation", "valid"}),
# sorted
("Packaging, Governance", {"sorted"}),
("Typing, Release", {"sorted"}),
("Release, Governance", {"sorted"}),
("spam, packaging", {"capitalisation", "valid", "sorted"}),
],
# call str() on each parameterised value in the test ID.
ids=str,
)
def test_validate_topic(line: str, expected_warnings: set):
warnings = [warning for (_, warning) in check_peps._validate_topic(1, line)]
found_warnings = set()
if "duplicates" in expected_warnings:
found_warnings.add("duplicates")
expected = "Topic must not contain duplicates"
matching = [w for w in warnings if w == expected]
assert matching == [expected], warnings
if "capitalisation" in expected_warnings:
found_warnings.add("capitalisation")
expected = "Topic must be properly capitalised (Title Case)"
matching = [w for w in warnings if w == expected]
assert matching == [expected], warnings
if "valid" in expected_warnings:
found_warnings.add("valid")
expected = "Topic must be for a valid sub-index"
matching = [w for w in warnings if w == expected]
assert matching == [expected], warnings
if "sorted" in expected_warnings:
found_warnings.add("sorted")
expected = "Topic must be sorted lexicographically"
matching = [w for w in warnings if w == expected]
assert matching == [expected], warnings
if expected_warnings == set():
assert warnings == [], warnings
assert found_warnings == expected_warnings
def test_validate_content_type_valid():
warnings = [
warning for (_, warning) in check_peps._validate_content_type(1, "text/x-rst")
]
assert warnings == [], warnings
@pytest.mark.parametrize(
"line",
[
"text/plain",
"text/markdown",
"text/csv",
"text/rtf",
"text/javascript",
"text/html",
"text/xml",
],
)
def test_validate_content_type_invalid(line: str):
warnings = [warning for (_, warning) in check_peps._validate_content_type(1, line)]
assert warnings == ["Content-Type must be 'text/x-rst'"], warnings
@pytest.mark.parametrize(
"line",
[
"0, 1, 8, 12, 20,",
"101, 801,",
"3099, 9999",
],
)
def test_validate_pep_references(line: str):
warnings = [
warning for (_, warning) in check_peps._validate_pep_references(1, line)
]
assert warnings == [], warnings
@pytest.mark.parametrize(
"line",
[
"0,1,8, 12, 20,",
"101,801,",
"3099, 9998,9999",
],
)
def test_validate_pep_references_separators(line: str):
warnings = [
warning for (_, warning) in check_peps._validate_pep_references(1, line)
]
assert warnings == [
"PEP references must be separated by comma-spaces (', ')"
], warnings
@pytest.mark.parametrize(
("line", "expected_warnings"),
[
# valid entries
("1.0, 2.4, 2.7, 2.8, 3.0, 3.1, 3.4, 3.7, 3.11, 3.14", set()),
("2.x", set()),
("3.x", set()),
("3.0.1", set()),
# segments
("", {"segments"}),
("1", {"segments"}),
("1.2.3.4", {"segments"}),
# major
("0.0", {"major"}),
("4.0", {"major"}),
("9.0", {"major"}),
# minor number
("3.a", {"minor numeric"}),
("3.spam", {"minor numeric"}),
("3.0+", {"minor numeric"}),
("3.0-9", {"minor numeric"}),
("9.Z", {"major", "minor numeric"}),
# minor leading zero
("3.01", {"minor zero"}),
("0.00", {"major", "minor zero"}),
# micro empty
("3.x.1", {"micro empty"}),
("9.x.1", {"major", "micro empty"}),
# micro leading zero
("3.3.0", {"micro zero"}),
("3.3.00", {"micro zero"}),
("3.3.01", {"micro zero"}),
("3.0.0", {"micro zero"}),
("3.00.0", {"minor zero", "micro zero"}),
("0.00.0", {"major", "minor zero", "micro zero"}),
# micro number
("3.0.a", {"micro numeric"}),
("0.3.a", {"major", "micro numeric"}),
],
# call str() on each parameterised value in the test ID.
ids=str,
)
def test_validate_python_version(line: str, expected_warnings: set):
warnings = [
warning for (_, warning) in check_peps._validate_python_version(1, line)
]
found_warnings = set()
if "segments" in expected_warnings:
found_warnings.add("segments")
expected = f"Python-Version must have two or three segments: {line}"
matching = [w for w in warnings if w == expected]
assert matching == [expected], warnings
if "major" in expected_warnings:
found_warnings.add("major")
expected = f"Python-Version major part must be 1, 2, or 3: {line}"
matching = [w for w in warnings if w == expected]
assert matching == [expected], warnings
if "minor numeric" in expected_warnings:
found_warnings.add("minor numeric")
expected = f"Python-Version minor part must be numeric: {line}"
matching = [w for w in warnings if w == expected]
assert matching == [expected], warnings
if "minor zero" in expected_warnings:
found_warnings.add("minor zero")
expected = f"Python-Version minor part must not have leading zeros: {line}"
matching = [w for w in warnings if w == expected]
assert matching == [expected], warnings
if "micro empty" in expected_warnings:
found_warnings.add("micro empty")
expected = (
f"Python-Version micro part must be empty if minor part is 'x': {line}"
)
matching = [w for w in warnings if w == expected]
assert matching == [expected], warnings
if "micro zero" in expected_warnings:
found_warnings.add("micro zero")
expected = f"Python-Version micro part must not have leading zeros: {line}"
matching = [w for w in warnings if w == expected]
assert matching == [expected], warnings
if "micro numeric" in expected_warnings:
found_warnings.add("micro numeric")
expected = f"Python-Version micro part must be numeric: {line}"
matching = [w for w in warnings if w == expected]
assert matching == [expected], warnings
if expected_warnings == set():
assert warnings == [], warnings
assert found_warnings == expected_warnings
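The rules this test exercises (two or three segments, major 1/2/3, numeric parts, no leading zeros, empty micro after an ``x`` minor) can be sketched as a standalone checker. This is an illustrative simplification, not the actual ``check_peps._validate_python_version`` implementation:

```python
def validate_python_version(line: str) -> list[str]:
    """Simplified sketch of the Python-Version header rules tested above."""
    warnings = []
    for version in line.split(","):
        version = version.strip()
        parts = version.split(".")
        if len(parts) not in {2, 3}:
            warnings.append(f"must have two or three segments: {version}")
            continue
        major, minor = parts[0], parts[1]
        micro = parts[2] if len(parts) == 3 else ""
        if major not in {"1", "2", "3"}:
            warnings.append(f"major part must be 1, 2, or 3: {version}")
        if minor != "x" and not minor.isdigit():
            warnings.append(f"minor part must be numeric: {version}")
        elif minor != "0" and minor.startswith("0"):
            warnings.append(f"minor part must not have leading zeros: {version}")
        if minor == "x" and micro:
            warnings.append(f"micro part must be empty if minor part is 'x': {version}")
        elif micro.startswith("0"):
            # note: a bare "0" micro also trips this, matching the "3.3.0" case above
            warnings.append(f"micro part must not have leading zeros: {version}")
        elif micro and not micro.isdigit():
            warnings.append(f"micro part must be numeric: {version}")
    return warnings
```

Note how the ``"3.3.0"`` parametrized case is covered: the leading-zero rule fires on any micro part starting with ``0``, including ``0`` itself.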

View File

@ -0,0 +1,48 @@
from pathlib import Path
import check_peps # NoQA: inserted into sys.modules in conftest.py
PEP_9002 = Path(__file__).parent.parent / "peps" / "pep-9002.rst"
def test_with_fake_pep():
content = PEP_9002.read_text(encoding="utf-8").splitlines()
warnings = list(check_peps.check_peps(PEP_9002, content))
assert warnings == [
(1, "PEP must begin with the 'PEP:' header"),
(9, "Must not have duplicate header: Sponsor "),
(10, "Must not have invalid header: Horse-Guards"),
(1, "Must have required header: PEP"),
(1, "Must have required header: Type"),
(
1,
"Headers must be in PEP 12 order. Correct order: Title, Version, "
"Author, Sponsor, BDFL-Delegate, Discussions-To, Status, Topic, "
"Content-Type, Requires, Created, Python-Version, Post-History, "
"Resolution",
),
(4, "Author continuation lines must end with a comma"),
(5, "Author line must not be over-indented"),
(7, "Python-Version major part must be 1, 2, or 3: 4.0"),
(
8,
"Sponsor entries must begin with a valid 'Name': "
r"'Sponsor:\nHorse-Guards: Parade'",
),
(11, "Created must be a 'DD-mmm-YYYY' date: '1-Jan-1989'"),
(12, "Delegate entries must begin with a valid 'Name': 'Barry!'"),
(13, "Status must be a valid PEP status"),
(14, "Topic must not contain duplicates"),
(14, "Topic must be properly capitalised (Title Case)"),
(14, "Topic must be for a valid sub-index"),
(14, "Topic must be sorted lexicographically"),
(15, "Content-Type must be 'text/x-rst'"),
(16, "PEP references must be separated by comma-spaces (', ')"),
(17, "Discussions-To must be a valid thread URL or mailing list"),
(18, "Post-History must be a 'DD-mmm-YYYY' date: '2-Feb-2000'"),
(18, "Post-History must be a valid thread URL"),
(19, "Post-History must be a 'DD-mmm-YYYY' date: '3-Mar-2001'"),
(19, "Post-History must be a valid thread URL"),
(20, "Resolution must be a valid thread URL"),
(23, "Use the :pep:`NNN` role to refer to PEPs"),
]

View File

@ -0,0 +1,108 @@
import check_peps # NoQA: inserted into sys.modules in conftest.py
import pytest
@pytest.mark.parametrize(
"line",
[
"PEP: 0",
"PEP: 12",
],
)
def test_validate_pep_number(line: str):
warnings = [warning for (_, warning) in check_peps._validate_pep_number(line)]
assert warnings == [], warnings
@pytest.mark.parametrize(
"line",
[
"0",
"PEP:12",
"PEP 0",
"PEP 12",
"PEP:0",
],
)
def test_validate_pep_number_invalid_header(line: str):
warnings = [warning for (_, warning) in check_peps._validate_pep_number(line)]
assert warnings == ["PEP must begin with the 'PEP:' header"], warnings
@pytest.mark.parametrize(
("pep_number", "expected_warnings"),
[
# valid entries
("0", set()),
("1", set()),
("12", set()),
("20", set()),
("101", set()),
("801", set()),
("3099", set()),
("9999", set()),
# empty
("", {"not blank"}),
# leading zeros
("01", {"leading zeros"}),
("001", {"leading zeros"}),
("0001", {"leading zeros"}),
("00001", {"leading zeros"}),
# non-numeric
("a", {"non-numeric"}),
("123abc", {"non-numeric"}),
("0123A", {"leading zeros", "non-numeric"}),
("", {"non-numeric"}),
("10", {"non-numeric"}),
("999", {"non-numeric"}),
("𝟎", {"non-numeric"}),
("𝟘", {"non-numeric"}),
("𝟏𝟚", {"non-numeric"}),
("𝟸𝟬", {"non-numeric"}),
("-1", {"non-numeric"}),
("+1", {"non-numeric"}),
# out of bounds
("10000", {"range"}),
("54321", {"range"}),
("99999", {"range"}),
("32768", {"range"}),
],
# call str() on each parameterised value in the test ID.
ids=str,
)
def test_pep_num_checker(pep_number: str, expected_warnings: set):
warnings = [
warning for (_, warning) in check_peps._pep_num(1, pep_number, "<Prefix>")
]
found_warnings = set()
pep_number = pep_number.strip()
if "not blank" in expected_warnings:
found_warnings.add("not blank")
expected = f"<Prefix> must not be blank: {pep_number!r}"
matching = [w for w in warnings if w == expected]
assert matching == [expected], warnings
if "leading zeros" in expected_warnings:
found_warnings.add("leading zeros")
expected = f"<Prefix> must not contain leading zeros: {pep_number!r}"
matching = [w for w in warnings if w == expected]
assert matching == [expected], warnings
if "non-numeric" in expected_warnings:
found_warnings.add("non-numeric")
expected = f"<Prefix> must be numeric: {pep_number!r}"
matching = [w for w in warnings if w == expected]
assert matching == [expected], warnings
if "range" in expected_warnings:
found_warnings.add("range")
expected = f"<Prefix> must be between 0 and 9999: {pep_number!r}"
matching = [w for w in warnings if w == expected]
assert matching == [expected], warnings
if expected_warnings == set():
assert warnings == [], warnings
assert found_warnings == expected_warnings
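The four warning classes above (blank, leading zeros, non-numeric, out of range) suggest a checker along these lines. A simplified sketch for illustration, not the actual ``check_peps._pep_num`` code:

```python
def check_pep_number(value: str, prefix: str = "PEP") -> list[str]:
    """Simplified sketch of the PEP-number rules exercised above."""
    warnings = []
    value = value.strip()
    if not value:
        warnings.append(f"{prefix} must not be blank: {value!r}")
        return warnings
    if value.startswith("0") and value != "0":
        warnings.append(f"{prefix} must not contain leading zeros: {value!r}")
    if not (value.isascii() and value.isdigit()):
        # str.isdigit() alone accepts non-ASCII digits such as "𝟘" and "𝟸𝟬",
        # so the ASCII check is what makes those parametrized cases non-numeric
        warnings.append(f"{prefix} must be numeric: {value!r}")
    elif not 0 <= int(value) <= 9999:
        warnings.append(f"{prefix} must be between 0 and 9999: {value!r}")
    return warnings
```

The ``isascii()`` guard is the interesting bit: it is what distinguishes the mathematical-digit cases from plain non-numeric strings in the test matrix.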

View File

@ -0,0 +1,305 @@
import check_peps # NoQA: inserted into sys.modules in conftest.py
import pytest
@pytest.mark.parametrize(
"line",
[
"list-name@python.org",
"distutils-sig@python.org",
"csv@python.org",
"python-3000@python.org",
"ipaddr-py-dev@googlegroups.com",
"python-tulip@googlegroups.com",
"https://discuss.python.org/t/thread-name/123456",
"https://discuss.python.org/t/thread-name/123456/",
"https://discuss.python.org/t/thread_name/123456",
"https://discuss.python.org/t/thread_name/123456/",
"https://discuss.python.org/t/123456/",
"https://discuss.python.org/t/123456",
],
)
def test_validate_discussions_to_valid(line: str):
warnings = [
warning for (_, warning) in check_peps._validate_discussions_to(1, line)
]
assert warnings == [], warnings
@pytest.mark.parametrize(
"line",
[
"$pecial+chars@python.org",
"a-discussions-to-list!@googlegroups.com",
],
)
def test_validate_discussions_to_list_name(line: str):
warnings = [
warning for (_, warning) in check_peps._validate_discussions_to(1, line)
]
assert warnings == ["Discussions-To must be a valid mailing list"], warnings
@pytest.mark.parametrize(
"line",
[
"list-name@python.org.uk",
"distutils-sig@mail-server.example",
],
)
def test_validate_discussions_to_invalid_list_domain(line: str):
warnings = [
warning for (_, warning) in check_peps._validate_discussions_to(1, line)
]
assert warnings == [
"Discussions-To must be a valid thread URL or mailing list"
], warnings
@pytest.mark.parametrize(
"body",
[
"",
(
"01-Jan-2001, 02-Feb-2002,\n "
"03-Mar-2003, 04-Apr-2004,\n "
"05-May-2005,"
),
(
"`01-Jan-2000 <https://mail.python.org/pipermail/list-name/0000-Month/0123456.html>`__,\n "
"`11-Mar-2005 <https://mail.python.org/archives/list/list-name@python.org/thread/abcdef0123456789/>`__,\n "
"`21-May-2010 <https://discuss.python.org/t/thread-name/123456/654321>`__,\n "
"`31-Jul-2015 <https://discuss.python.org/t/123456>`__,"
),
"01-Jan-2001, `02-Feb-2002 <https://discuss.python.org/t/123456>`__,\n03-Mar-2003",
],
)
def test_validate_post_history_valid(body: str):
warnings = [warning for (_, warning) in check_peps._validate_post_history(1, body)]
assert warnings == [], warnings
@pytest.mark.parametrize(
"line",
[
"https://mail.python.org/archives/list/list-name@python.org/thread/abcXYZ123",
"https://mail.python.org/archives/list/list-name@python.org/thread/abcXYZ123/",
"https://mail.python.org/archives/list/list-name@python.org/message/abcXYZ123",
"https://mail.python.org/archives/list/list-name@python.org/message/abcXYZ123/",
"https://mail.python.org/archives/list/list-name@python.org/message/abcXYZ123#Anchor",
"https://mail.python.org/archives/list/list-name@python.org/message/abcXYZ123/#Anchor",
"https://mail.python.org/archives/list/list-name@python.org/message/abcXYZ123#Anchor123",
"https://mail.python.org/archives/list/list-name@python.org/message/abcXYZ123/#Anchor123",
],
)
def test_validate_resolution_valid(line: str):
warnings = [warning for (_, warning) in check_peps._validate_resolution(1, line)]
assert warnings == [], warnings
@pytest.mark.parametrize(
"line",
[
"https://mail.python.org/archives/list/list-name@python.org/thread",
"https://mail.python.org/archives/list/list-name@python.org/message",
"https://mail.python.org/archives/list/list-name@python.org/thread/",
"https://mail.python.org/archives/list/list-name@python.org/message/",
"https://mail.python.org/archives/list/list-name@python.org/thread/abcXYZ123#anchor",
"https://mail.python.org/archives/list/list-name@python.org/thread/abcXYZ123/#anchor",
"https://mail.python.org/archives/list/list-name@python.org/message/#abcXYZ123",
"https://mail.python.org/archives/list/list-name@python.org/message/#abcXYZ123/",
"https://mail.python.org/archives/list/list-name@python.org/spam/abcXYZ123",
"https://mail.python.org/archives/list/list-name@python.org/spam/abcXYZ123/",
],
)
def test_validate_resolution_invalid(line: str):
warnings = [warning for (_, warning) in check_peps._validate_resolution(1, line)]
assert warnings == ["Resolution must be a valid thread URL"], warnings
@pytest.mark.parametrize(
"thread_url",
[
"https://discuss.python.org/t/thread-name/123456",
"https://discuss.python.org/t/thread-name/123456/",
"https://discuss.python.org/t/thread_name/123456",
"https://discuss.python.org/t/thread_name/123456/",
"https://discuss.python.org/t/thread-name/123456/654321/",
"https://discuss.python.org/t/thread-name/123456/654321",
"https://discuss.python.org/t/123456",
"https://discuss.python.org/t/123456/",
"https://discuss.python.org/t/123456/654321/",
"https://discuss.python.org/t/123456/654321",
"https://discuss.python.org/t/1",
"https://discuss.python.org/t/1/",
"https://mail.python.org/pipermail/list-name/0000-Month/0123456.html",
"https://mail.python.org/archives/list/list-name@python.org/thread/abcXYZ123",
"https://mail.python.org/archives/list/list-name@python.org/thread/abcXYZ123/",
],
)
def test_thread_checker_valid(thread_url: str):
warnings = [
warning for (_, warning) in check_peps._thread(1, thread_url, "<Prefix>")
]
assert warnings == [], warnings
@pytest.mark.parametrize(
"thread_url",
[
"http://link.example",
"list-name@python.org",
"distutils-sig@python.org",
"csv@python.org",
"python-3000@python.org",
"ipaddr-py-dev@googlegroups.com",
"python-tulip@googlegroups.com",
"https://link.example",
"https://discuss.python.org",
"https://discuss.python.org/",
"https://discuss.python.org/c/category",
"https://discuss.python.org/t/thread_name/123456//",
"https://discuss.python.org/t/thread+name/123456",
"https://discuss.python.org/t/thread+name/123456#",
"https://discuss.python.org/t/thread+name/123456/#",
"https://discuss.python.org/t/thread+name/123456/#anchor",
"https://discuss.python.org/t/thread+name/",
"https://discuss.python.org/t/thread+name",
"https://discuss.python.org/t/thread-name/123abc",
"https://discuss.python.org/t/thread-name/123abc/",
"https://discuss.python.org/t/thread-name/123456/123abc",
"https://discuss.python.org/t/thread-name/123456/123abc/",
"https://discuss.python.org/t/123/456/789",
"https://discuss.python.org/t/123/456/789/",
"https://discuss.python.org/t/#/",
"https://discuss.python.org/t/#",
"https://mail.python.org/pipermail/list+name/0000-Month/0123456.html",
"https://mail.python.org/pipermail/list-name/YYYY-Month/0123456.html",
"https://mail.python.org/pipermail/list-name/0123456/0123456.html",
"https://mail.python.org/pipermail/list-name/0000-Month/0123456",
"https://mail.python.org/pipermail/list-name/0000-Month/0123456/",
"https://mail.python.org/pipermail/list-name/0000-Month/",
"https://mail.python.org/pipermail/list-name/0000-Month",
"https://mail.python.org/archives/list/list-name@python.org/thread/abcXYZ123#anchor",
"https://mail.python.org/archives/list/list-name@python.org/thread/abcXYZ123/#anchor",
"https://mail.python.org/archives/list/list-name@python.org/message/abcXYZ123",
"https://mail.python.org/archives/list/list-name@python.org/message/abcXYZ123/",
"https://mail.python.org/archives/list/list-name@python.org/message/abcXYZ123#anchor",
"https://mail.python.org/archives/list/list-name@python.org/message/abcXYZ123/#anchor",
"https://mail.python.org/archives/list/list-name@python.org/spam/abcXYZ123",
"https://mail.python.org/archives/list/list-name@python.org/spam/abcXYZ123/",
],
)
def test_thread_checker_invalid(thread_url: str):
warnings = [
warning for (_, warning) in check_peps._thread(1, thread_url, "<Prefix>")
]
assert warnings == ["<Prefix> must be a valid thread URL"], warnings
@pytest.mark.parametrize(
"thread_url",
[
"https://mail.python.org/archives/list/list-name@python.org/thread/abcXYZ123",
"https://mail.python.org/archives/list/list-name@python.org/thread/abcXYZ123/",
"https://mail.python.org/archives/list/list-name@python.org/message/abcXYZ123",
"https://mail.python.org/archives/list/list-name@python.org/message/abcXYZ123/",
"https://mail.python.org/archives/list/list-name@python.org/message/abcXYZ123#Anchor",
"https://mail.python.org/archives/list/list-name@python.org/message/abcXYZ123/#Anchor",
"https://mail.python.org/archives/list/list-name@python.org/message/abcXYZ123#Anchor123",
"https://mail.python.org/archives/list/list-name@python.org/message/abcXYZ123/#Anchor123",
],
)
def test_thread_checker_valid_allow_message(thread_url: str):
warnings = [
warning
for (_, warning) in check_peps._thread(
1, thread_url, "<Prefix>", allow_message=True
)
]
assert warnings == [], warnings
@pytest.mark.parametrize(
"thread_url",
[
"https://mail.python.org/archives/list/list-name@python.org/thread",
"https://mail.python.org/archives/list/list-name@python.org/message",
"https://mail.python.org/archives/list/list-name@python.org/thread/",
"https://mail.python.org/archives/list/list-name@python.org/message/",
"https://mail.python.org/archives/list/list-name@python.org/thread/abcXYZ123#anchor",
"https://mail.python.org/archives/list/list-name@python.org/thread/abcXYZ123/#anchor",
"https://mail.python.org/archives/list/list-name@python.org/message/#abcXYZ123",
"https://mail.python.org/archives/list/list-name@python.org/message/#abcXYZ123/",
"https://mail.python.org/archives/list/list-name@python.org/spam/abcXYZ123",
"https://mail.python.org/archives/list/list-name@python.org/spam/abcXYZ123/",
],
)
def test_thread_checker_invalid_allow_message(thread_url: str):
warnings = [
warning
for (_, warning) in check_peps._thread(
1, thread_url, "<Prefix>", allow_message=True
)
]
assert warnings == ["<Prefix> must be a valid thread URL"], warnings
@pytest.mark.parametrize(
"thread_url",
[
"list-name@python.org",
"distutils-sig@python.org",
"csv@python.org",
"python-3000@python.org",
"ipaddr-py-dev@googlegroups.com",
"python-tulip@googlegroups.com",
"https://discuss.python.org/t/thread-name/123456",
"https://discuss.python.org/t/thread-name/123456/",
"https://discuss.python.org/t/thread_name/123456",
"https://discuss.python.org/t/thread_name/123456/",
"https://discuss.python.org/t/123456/",
"https://discuss.python.org/t/123456",
],
)
def test_thread_checker_valid_discussions_to(thread_url: str):
warnings = [
warning
for (_, warning) in check_peps._thread(
1, thread_url, "<Prefix>", discussions_to=True
)
]
assert warnings == [], warnings
@pytest.mark.parametrize(
"thread_url",
[
"https://discuss.python.org/t/thread-name/123456/000",
"https://discuss.python.org/t/thread-name/123456/000/",
"https://discuss.python.org/t/thread_name/123456/000",
"https://discuss.python.org/t/thread_name/123456/000/",
"https://discuss.python.org/t/123456/000/",
"https://discuss.python.org/t/12345656/000",
"https://discuss.python.org/t/thread-name",
"https://discuss.python.org/t/thread_name",
"https://discuss.python.org/t/thread+name",
],
)
def test_thread_checker_invalid_discussions_to(thread_url: str):
warnings = [
warning
for (_, warning) in check_peps._thread(
1, thread_url, "<Prefix>", discussions_to=True
)
]
assert warnings == ["<Prefix> must be a valid thread URL"], warnings
def test_thread_checker_allow_message_discussions_to():
with pytest.raises(ValueError, match="cannot both be True"):
list(
check_peps._thread(
1, "", "<Prefix>", allow_message=True, discussions_to=True
)
)
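The accepted and rejected URLs above imply patterns roughly like the following. These regexes are inferred from the test cases as an illustration; they are not the ones ``check-peps`` actually uses, and they deliberately cover only the thread forms (not the ``message/`` or ``discussions_to`` variants):

```python
import re

# Hypothetical patterns inferred from the valid/invalid URLs above.
DISCOURSE_THREAD = re.compile(
    r"https://discuss\.python\.org/t/"
    r"([\w\-]+/)?"   # optional thread slug (no "+" or other punctuation)
    r"\d+"           # topic ID
    r"(/\d+)?/?$"    # optional post number, optional single trailing slash
)
MAILMAN3_THREAD = re.compile(
    r"https://mail\.python\.org/archives/list/"
    r"[\w\-]+@python\.org/thread/[A-Za-z0-9]+/?$"  # no "#anchor" allowed
)
PIPERMAIL = re.compile(
    r"https://mail\.python\.org/pipermail/"
    r"[\w\-]+/\d{4}-\w+/\d+\.html$"  # YYYY-Month archive path
)

def is_thread_url(url: str) -> bool:
    return bool(
        DISCOURSE_THREAD.match(url)
        or MAILMAN3_THREAD.match(url)
        or PIPERMAIL.match(url)
    )
```

For example, ``thread+name`` fails because ``+`` is outside ``[\w\-]``, and ``YYYY-Month`` fails the ``\d{4}`` year check, matching the invalid cases in the parametrized lists.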

View File

@ -1,27 +1,29 @@
-from pathlib import Path
+import datetime as dt

 from pep_sphinx_extensions.pep_processor.transforms import pep_footer

+from ...conftest import PEP_ROOT
+

 def test_add_source_link():
-    out = pep_footer._add_source_link(Path("pep-0008.txt"))
-    assert "https://github.com/python/peps/blob/main/pep-0008.txt" in str(out)
+    out = pep_footer._add_source_link(PEP_ROOT / "pep-0008.rst")
+    assert "https://github.com/python/peps/blob/main/peps/pep-0008.rst" in str(out)


 def test_add_commit_history_info():
-    out = pep_footer._add_commit_history_info(Path("pep-0008.txt"))
+    out = pep_footer._add_commit_history_info(PEP_ROOT / "pep-0008.rst")
     assert str(out).startswith(
         "<paragraph>Last modified: "
-        '<reference refuri="https://github.com/python/peps/commits/main/pep-0008.txt">'
+        '<reference refuri="https://github.com/python/peps/commits/main/pep-0008.rst">'
     )
     # A variable timestamp comes next, don't test that
     assert str(out).endswith("</reference></paragraph>")


 def test_add_commit_history_info_invalid():
-    out = pep_footer._add_commit_history_info(Path("pep-not-found.txt"))
+    out = pep_footer._add_commit_history_info(PEP_ROOT / "pep-not-found.rst")
     assert str(out) == "<paragraph/>"
@ -31,4 +33,4 @@ def test_get_last_modified_timestamps():
     assert len(out) >= 585
     # Should be a Unix timestamp and at least this
-    assert out["pep-0008.txt"] >= 1643124055
+    assert dt.datetime.fromisoformat(out["pep-0008"]).timestamp() >= 1643124055

View File

@ -18,7 +18,7 @@ from pep_sphinx_extensions.pep_zero_generator.constants import (
 @pytest.mark.parametrize(
-    "test_input, expected",
+    ("test_input", "expected"),
     [
         ("my-mailing-list@example.com", "my-mailing-list@example.com"),
         ("python-tulip@googlegroups.com", "https://groups.google.com/g/python-tulip"),
@ -37,7 +37,7 @@ def test_generate_list_url(test_input, expected):
 @pytest.mark.parametrize(
-    "test_input, expected",
+    ("test_input", "expected"),
     [
         (
             "https://mail.python.org/pipermail/python-3000/2006-November/004190.html",
@ -72,7 +72,7 @@ def test_process_pretty_url(test_input, expected):
 @pytest.mark.parametrize(
-    "test_input, expected",
+    ("test_input", "expected"),
     [
         (
             "https://example.com/",
@ -94,7 +94,7 @@ def test_process_pretty_url_invalid(test_input, expected):
 @pytest.mark.parametrize(
-    "test_input, expected",
+    ("test_input", "expected"),
     [
         (
             "https://mail.python.org/pipermail/python-3000/2006-November/004190.html",
@ -129,7 +129,7 @@ def test_make_link_pretty(test_input, expected):
 @pytest.mark.parametrize(
-    "test_input, expected",
+    ("test_input", "expected"),
     [
         (STATUS_ACCEPTED, "Normative proposal accepted for implementation"),
         (STATUS_ACTIVE, "Currently valid informational guidance, or an in-use process"),
@ -155,7 +155,7 @@ def test_abbreviate_status_unknown():
 @pytest.mark.parametrize(
-    "test_input, expected",
+    ("test_input", "expected"),
     [
         (
             TYPE_INFO,

View File

@ -5,7 +5,7 @@ from pep_sphinx_extensions.pep_processor.transforms import pep_zero
 @pytest.mark.parametrize(
-    "test_input, expected",
+    ("test_input", "expected"),
     [
         (
             nodes.reference(

View File

@ -1,69 +0,0 @@
import pytest
from pep_sphinx_extensions.pep_zero_generator import author
from pep_sphinx_extensions.tests.utils import AUTHORS_OVERRIDES
@pytest.mark.parametrize(
"test_input, expected",
[
(
("First Last", "first@example.com"),
author.Author(
last_first="Last, First", nick="Last", email="first@example.com"
),
),
(
("Guido van Rossum", "guido@example.com"),
author.Author(
last_first="van Rossum, Guido (GvR)",
nick="GvR",
email="guido@example.com",
),
),
(
("Hugo van Kemenade", "hugo@example.com"),
author.Author(
last_first="van Kemenade, Hugo",
nick="van Kemenade",
email="hugo@example.com",
),
),
(
("Eric N. Vander Weele", "eric@example.com"),
author.Author(
last_first="Vander Weele, Eric N.",
nick="Vander Weele",
email="eric@example.com",
),
),
(
("Mariatta", "mariatta@example.com"),
author.Author(
last_first="Mariatta", nick="Mariatta", email="mariatta@example.com"
),
),
(
("First Last Jr.", "first@example.com"),
author.Author(
last_first="Last, First, Jr.", nick="Last", email="first@example.com"
),
),
pytest.param(
("First Last", "first at example.com"),
author.Author(
last_first="Last, First", nick="Last", email="first@example.com"
),
marks=pytest.mark.xfail,
),
],
)
def test_parse_author_email(test_input, expected):
out = author.parse_author_email(test_input, AUTHORS_OVERRIDES)
assert out == expected
def test_parse_author_email_empty_name():
with pytest.raises(ValueError, match="Name is empty!"):
author.parse_author_email(("", "user@example.com"), AUTHORS_OVERRIDES)

View File

@ -1,9 +1,6 @@
-from pathlib import Path
-
 import pytest

 from pep_sphinx_extensions.pep_zero_generator import parser
-from pep_sphinx_extensions.pep_zero_generator.author import Author
 from pep_sphinx_extensions.pep_zero_generator.constants import (
     STATUS_ACCEPTED,
     STATUS_ACTIVE,
@ -18,35 +15,36 @@ from pep_sphinx_extensions.pep_zero_generator.constants import (
     TYPE_PROCESS,
     TYPE_STANDARDS,
 )
-from pep_sphinx_extensions.pep_zero_generator.errors import PEPError
-from pep_sphinx_extensions.tests.utils import AUTHORS_OVERRIDES
+from pep_sphinx_extensions.pep_zero_generator.parser import _Author
+
+from ..conftest import PEP_ROOT


 def test_pep_repr():
-    pep8 = parser.PEP(Path("pep-0008.txt"))
+    pep8 = parser.PEP(PEP_ROOT / "pep-0008.rst")

     assert repr(pep8) == "<PEP 0008 - Style Guide for Python Code>"


 def test_pep_less_than():
-    pep8 = parser.PEP(Path("pep-0008.txt"))
-    pep3333 = parser.PEP(Path("pep-3333.txt"))
+    pep8 = parser.PEP(PEP_ROOT / "pep-0008.rst")
+    pep3333 = parser.PEP(PEP_ROOT / "pep-3333.rst")

     assert pep8 < pep3333


 def test_pep_equal():
-    pep_a = parser.PEP(Path("pep-0008.txt"))
-    pep_b = parser.PEP(Path("pep-0008.txt"))
+    pep_a = parser.PEP(PEP_ROOT / "pep-0008.rst")
+    pep_b = parser.PEP(PEP_ROOT / "pep-0008.rst")

     assert pep_a == pep_b


 def test_pep_details(monkeypatch):
-    pep8 = parser.PEP(Path("pep-0008.txt"))
+    pep8 = parser.PEP(PEP_ROOT / "pep-0008.rst")

     assert pep8.details == {
-        "authors": "GvR, Warsaw, Coghlan",
+        "authors": "Guido van Rossum, Barry Warsaw, Nick Coghlan",
         "number": 8,
         "shorthand": ":abbr:`PA (Process, Active)`",
         "title": "Style Guide for Python Code",
@ -54,48 +52,43 @@ def test_pep_details(monkeypatch):
 @pytest.mark.parametrize(
-    "test_input, expected",
+    ("test_input", "expected"),
     [
         (
             "First Last <user@example.com>",
-            [Author(last_first="Last, First", nick="Last", email="user@example.com")],
+            [_Author(full_name="First Last", email="user@example.com")],
         ),
         (
             "First Last",
-            [Author(last_first="Last, First", nick="Last", email="")],
-        ),
-        (
-            "user@example.com (First Last)",
-            [Author(last_first="Last, First", nick="Last", email="user@example.com")],
+            [_Author(full_name="First Last", email="")],
         ),
         pytest.param(
             "First Last <user at example.com>",
-            [Author(last_first="Last, First", nick="Last", email="user@example.com")],
+            [_Author(full_name="First Last", email="user@example.com")],
             marks=pytest.mark.xfail,
         ),
-        pytest.param(
-            " , First Last,",
-            {"First Last": ""},
-            marks=pytest.mark.xfail(raises=ValueError),
-        ),
     ],
 )
 def test_parse_authors(test_input, expected):
-    # Arrange
-    dummy_object = parser.PEP(Path("pep-0160.txt"))
-
     # Act
-    out = parser._parse_authors(dummy_object, test_input, AUTHORS_OVERRIDES)
+    out = parser._parse_author(test_input)

     # Assert
     assert out == expected


 def test_parse_authors_invalid():
-    pep = parser.PEP(Path("pep-0008.txt"))
-    with pytest.raises(PEPError, match="no authors found"):
-        parser._parse_authors(pep, "", AUTHORS_OVERRIDES)
+    with pytest.raises(ValueError, match="Name is empty!"):
+        assert parser._parse_author("")


 @pytest.mark.parametrize(
-    "test_type, test_status, expected",
+    ("test_type", "test_status", "expected"),
     [
         (TYPE_INFO, STATUS_DRAFT, ":abbr:`I (Informational, Draft)`"),
         (TYPE_INFO, STATUS_ACTIVE, ":abbr:`IA (Informational, Active)`"),
@ -113,7 +106,7 @@ def test_parse_authors_invalid():
 )
 def test_abbreviate_type_status(test_type, test_status, expected):
     # set up dummy PEP object and monkeypatch attributes
-    pep = parser.PEP(Path("pep-0008.txt"))
+    pep = parser.PEP(PEP_ROOT / "pep-0008.rst")
     pep.pep_type = test_type
     pep.status = test_status

View File

@ -1,10 +1,10 @@
-from pathlib import Path
-
 from pep_sphinx_extensions.pep_zero_generator import parser, pep_index_generator

+from ..conftest import PEP_ROOT
+

 def test_create_pep_json():
-    peps = [parser.PEP(Path("pep-0008.txt"))]
+    peps = [parser.PEP(PEP_ROOT / "pep-0008.rst")]

     out = pep_index_generator.create_pep_json(peps)

View File

@ -30,18 +30,18 @@ def test_pep_zero_writer_emit_title():
 @pytest.mark.parametrize(
-    "test_input, expected",
+    ("test_input", "expected"),
     [
         (
             "pep-9000.rst",
             {
-                "Fussyreverend, Francis": "one@example.com",
-                "Soulfulcommodore, Javier": "two@example.com",
+                "Francis Fussyreverend": "one@example.com",
+                "Javier Soulfulcommodore": "two@example.com",
             },
         ),
         (
             "pep-9001.rst",
-            {"Fussyreverend, Francis": "", "Soulfulcommodore, Javier": ""},
+            {"Francis Fussyreverend": "", "Javier Soulfulcommodore": ""},
         ),
     ],
 )

View File

@ -0,0 +1,23 @@
PEP:9002
Title: Nobody expects the example PEP!
Author: Cardinal Ximénez <Cardinal.Ximenez@spanish.inquisition>,
Cardinal Biggles
Cardinal Fang
Version: 4.0
Python-Version: 4.0
Sponsor:
Sponsor:
Horse-Guards: Parade
Created: 1-Jan-1989
BDFL-Delegate: Barry!
Status: Draught
Topic: Inquisiting, Governance, Governance, packaging
Content-Type: video/quicktime
Requires: 0020,1,2,3, 7, 8
Discussions-To: MR ALBERT SPIM, I,OOO,OO8 LONDON ROAD, OXFORD
Post-History: `2-Feb-2000 <FLIGHT LT. & PREBENDARY ETHEL MORRIS; THE DIMPLES; THAXTED; NR BUENOS AIRES>`__
`3-Mar-2001 <The Royal Frog Trampling Institute; 16 Rayners Lane; London>`__
Resolution:
https://peps.python.org/pep-9002.html

View File

@ -1,6 +0,0 @@
AUTHORS_OVERRIDES = {
"Guido van Rossum": {
"Surname First": "van Rossum, Guido (GvR)",
"Name Reference": "GvR",
},
}

View File

@ -3,10 +3,12 @@
"""Configuration for building PEPs using Sphinx.""" """Configuration for building PEPs using Sphinx."""
import os
from pathlib import Path from pathlib import Path
import sys import sys
sys.path.append(str(Path(".").absolute())) _ROOT = Path(__file__).resolve().parent.parent
sys.path.append(os.fspath(_ROOT))
# -- Project information ----------------------------------------------------- # -- Project information -----------------------------------------------------
@ -25,7 +27,6 @@ extensions = [
# The file extensions of source files. Sphinx uses these suffixes as sources. # The file extensions of source files. Sphinx uses these suffixes as sources.
source_suffix = { source_suffix = {
".rst": "pep", ".rst": "pep",
".txt": "pep",
} }
# List of patterns (relative to source dir) to ignore when looking for source files. # List of patterns (relative to source dir) to ignore when looking for source files.
@ -34,7 +35,6 @@ include_patterns = [
"contents.rst", "contents.rst",
# PEP files # PEP files
"pep-????.rst", "pep-????.rst",
"pep-????.txt",
# PEP ancillary files # PEP ancillary files
"pep-????/*.rst", "pep-????/*.rst",
# Documentation # Documentation
@ -60,11 +60,13 @@ intersphinx_disabled_reftypes = []
# -- Options for HTML output ------------------------------------------------- # -- Options for HTML output -------------------------------------------------
_PSE_PATH = _ROOT / "pep_sphinx_extensions"
# HTML output settings # HTML output settings
html_math_renderer = "maths_to_html" # Maths rendering html_math_renderer = "maths_to_html" # Maths rendering
# Theme settings # Theme settings
html_theme_path = ["pep_sphinx_extensions"] html_theme_path = [os.fspath(_PSE_PATH)]
html_theme = "pep_theme" # The actual theme directory (child of html_theme_path) html_theme = "pep_theme" # The actual theme directory (child of html_theme_path)
html_use_index = False # Disable index (we use PEP 0) html_use_index = False # Disable index (we use PEP 0)
html_style = "" # must be defined here or in theme.conf, but is unused html_style = "" # must be defined here or in theme.conf, but is unused
@ -72,4 +74,4 @@ html_permalinks = False # handled in the PEPContents transform
html_baseurl = "https://peps.python.org" # to create the CNAME file html_baseurl = "https://peps.python.org" # to create the CNAME file
gettext_auto_build = False # speed-ups gettext_auto_build = False # speed-ups
templates_path = ["pep_sphinx_extensions/pep_theme/templates"] # Theme template relative paths from `confdir` templates_path = [os.fspath(_PSE_PATH / "pep_theme" / "templates")] # Theme template relative paths from `confdir`

View File

@ -14,6 +14,5 @@ This is an internal Sphinx page; please go to the :doc:`PEP Index <pep-0000>`.
    :glob:
    :caption: PEP Table of Contents (needed for Sphinx):

-   docs/*
    pep-*
    topic/*

View File

@ -207,7 +207,7 @@ The standard PEP workflow is:
 It also provides a complete introduction to reST markup that is used
 in PEPs. Approval criteria are:

-* It sound and complete. The ideas must make technical sense. The
+* It is sound and complete. The ideas must make technical sense. The
   editors do not consider whether they seem likely to be accepted.
 * The title accurately describes the content.
 * The PEP's language (spelling, grammar, sentence structure, etc.)
@ -296,7 +296,7 @@ pointing to this new thread.
 If it is not chosen as the discussion venue,
 a brief announcement post should be made to the `PEPs category`_
-with at least a link to the rendered PEP and the `Discussions-To` thread
+with at least a link to the rendered PEP and the ``Discussions-To`` thread
 when the draft PEP is committed to the repository
 and if a major-enough change is made to trigger a new thread.

View File

(binary image file changed; 27 KiB both before and after)

View File

@@ -45,7 +45,7 @@ Prohibitions
 Bug fix releases are required to adhere to the following restrictions:
-1. There must be zero syntax changes. All `.pyc` and `.pyo` files must
+1. There must be zero syntax changes. All ``.pyc`` and ``.pyo`` files must
 work (no regeneration needed) with all bugfix releases forked off
 from a major release.

View File

@@ -8,7 +8,7 @@ Content-Type: text/x-rst
 Created: 07-Jul-2002
 Post-History: `18-Aug-2007 <https://mail.python.org/archives/list/python-dev@python.org/thread/DSSGXU5LBCMKYMZBRVB6RF3YAB6ST5AV/>`__,
 `14-May-2014 <https://mail.python.org/archives/list/python-dev@python.org/thread/T7WTUJ6TD3IGYGWV3M4PHJWNLM2WPZAW/>`__,
-`20-Feb-2015 <https://mail.python.org/archives/list/python-dev@python.org/thread/OEQHRR2COYZDL6LZ42RBZOMIUB32WI34/#L3K7IKGVT4ND45SKAJPJ3Q2ADVK5KP52>`__,
+`20-Feb-2015 <https://mail.python.org/archives/list/python-dev@python.org/thread/OEQHRR2COYZDL6LZ42RBZOMIUB32WI34/>`__,
 `10-Mar-2022 <https://mail.python.org/archives/list/python-committers@python.org/thread/K757345KX6W5ZLTWYBUXOXQTJJTL7GW5/>`__,

View File

@@ -212,7 +212,7 @@ to perform some manual editing steps.
 within it (called the "release clone" from now on). You can use the same
 GitHub fork you use for cpython development. Using the standard setup
 recommended in the Python Developer's Guide, your fork would be referred
-to as `origin` and the standard cpython repo as `upstream`. You will
+to as ``origin`` and the standard cpython repo as ``upstream``. You will
 use the branch on your fork to do the release engineering work, including
 tagging the release, and you will use it to share with the other experts
 for making the binaries.
@@ -302,7 +302,7 @@ to perform some manual editing steps.
 $ .../release-tools/release.py --tag X.Y.ZaN
-This executes a `git tag` command with the `-s` option so that the
+This executes a ``git tag`` command with the ``-s`` option so that the
 release tag in the repo is signed with your gpg key. When prompted
 choose the private key you use for signing release tarballs etc.
@@ -538,7 +538,7 @@ the main repo.
 do some post-merge cleanup. Check the top-level ``README.rst``
 and ``include/patchlevel.h`` files to ensure they now reflect
 the desired post-release values for on-going development.
-The patchlevel should be the release tag with a `+`.
+The patchlevel should be the release tag with a ``+``.
 Also, if you cherry-picked changes from the standard release
 branch into the release engineering branch for this release,
 you will now need to manual remove each blurb entry from
@@ -546,8 +546,8 @@ the main repo.
 into the release you are working on since that blurb entry
 is now captured in the merged x.y.z.rst file for the new
 release. Otherwise, the blurb entry will appear twice in
-the `changelog.html` file, once under `Python next` and again
-under `x.y.z`.
+the ``changelog.html`` file, once under ``Python next`` and again
+under ``x.y.z``.
 - Review and commit these changes::
@@ -712,19 +712,19 @@ with RevSys.)
 - If this is a **final** release:
 - Add the new version to the *Python Documentation by Version*
-page `https://www.python.org/doc/versions/` and
+page ``https://www.python.org/doc/versions/`` and
 remove the current version from any 'in development' section.
 - For X.Y.Z, edit all the previous X.Y releases' page(s) to
 point to the new release. This includes the content field of the
-`Downloads -> Releases` entry for the release::
+``Downloads -> Releases`` entry for the release::
 Note: Python x.y.m has been superseded by
 `Python x.y.n </downloads/release/python-xyn/>`_.
 And, for those releases having separate release page entries
 (phasing these out?), update those pages as well,
-e.g. `download/releases/x.y.z`::
+e.g. ``download/releases/x.y.z``::
 Note: Python x.y.m has been superseded by
 `Python x.y.n </download/releases/x.y.n/>`_.
@@ -908,8 +908,8 @@ else does them. Some of those tasks include:
 - Remove the release from the list of "Active Python Releases" on the Downloads
 page. To do this, log in to the admin page for python.org, navigate to Boxes,
-and edit the `downloads-active-releases` entry. Simply strip out the relevant
-paragraph of HTML for your release. (You'll probably have to do the `curl -X PURGE`
+and edit the ``downloads-active-releases`` entry. Simply strip out the relevant
+paragraph of HTML for your release. (You'll probably have to do the ``curl -X PURGE``
 trick to purge the cache if you want to confirm you made the change correctly.)
 - Add retired notice to each release page on python.org for the retired branch.

View File

@@ -46,8 +46,8 @@ Lockstep For-Loops
 Lockstep for-loops are non-nested iterations over two or more
 sequences, such that at each pass through the loop, one element from
 each sequence is taken to compose the target. This behavior can
-already be accomplished in Python through the use of the map() built-
-in function::
+already be accomplished in Python through the use of the map() built-in
+function::
 >>> a = (1, 2, 3)
 >>> b = (4, 5, 6)
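The PEP 201 hunk above only rewraps the sentence about lockstep iteration via `map()`. For context, here is a small sketch of the behavior being described, using the modern `zip()` spelling (in the Python 1.x era, `map(None, a, b)` performed the same transposition):

```python
a = (1, 2, 3)
b = (4, 5, 6)

# One element from each sequence per pass -- the "lockstep" behavior.
pairs = [(x, y) for x, y in zip(a, b)]
print(pairs)  # [(1, 4), (2, 5), (3, 6)]
```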

View File

@@ -185,8 +185,8 @@ Implementation Strategy
 =======================
 The implementation of weak references will include a list of
-reference containers that must be cleared for each weakly-
-referencable object. If the reference is from a weak dictionary,
+reference containers that must be cleared for each weakly-referencable
+object. If the reference is from a weak dictionary,
 the dictionary entry is cleared first. Then, any associated
 callback is called with the object passed as a parameter. Once
 all callbacks have been called, the object is finalized and
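The PEP 205 hunk above rewraps the description of clearing weak references and invoking callbacks; that order of events can be observed directly in current CPython. A sketch (relying on CPython's immediate refcount-based collection):

```python
import weakref

class Node:
    pass

obj = Node()
events = []

# The callback fires after the weak reference has been cleared, receiving
# the now-dead reference object as its single argument.
ref = weakref.ref(obj, lambda r: events.append(r() is None))

assert ref() is obj  # still alive: dereferencing yields the object
del obj  # CPython collects as soon as the last strong reference goes away
print(events)  # [True] -- the reference was already cleared inside the callback
```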

View File

@@ -12,9 +12,9 @@ Post-History:
 Abstract
 ========
-This PEP proposes a redesign and re-implementation of the multi-
-dimensional array module, Numeric, to make it easier to add new
-features and functionality to the module. Aspects of Numeric 2
+This PEP proposes a redesign and re-implementation of the
+multi-dimensional array module, Numeric, to make it easier to add
+new features and functionality to the module. Aspects of Numeric 2
 that will receive special attention are efficient access to arrays
 exceeding a gigabyte in size and composed of inhomogeneous data
 structures or records. The proposed design uses four Python
@@ -128,8 +128,8 @@ Some planned features are:
 automatically handle alignment and representational issues
 when data is accessed or operated on. There are two
 approaches to implementing records; as either a derived array
-class or a special array type, depending on your point-of-
-view. We defer this discussion to the Open Issues section.
+class or a special array type, depending on your point-of-view.
+We defer this discussion to the Open Issues section.
 2. Additional array types
@@ -265,8 +265,8 @@ The design of Numeric 2 has four primary classes:
 _ufunc.compute(slice, data, func, swap, conv)
 The 'func' argument is a CFuncObject, while the 'swap' and 'conv'
-arguments are lists of CFuncObjects for those arrays needing pre-
-or post-processing, otherwise None is used. The data argument is
+arguments are lists of CFuncObjects for those arrays needing pre- or
+post-processing, otherwise None is used. The data argument is
 a list of buffer objects, and the slice argument gives the number
 of iterations for each dimension along with the buffer offset and
 step size for each array and each dimension.

Some files were not shown because too many files have changed in this diff.