Miscellaneous fixes and formatting enhancements. (#238)

Commit 7ca8985b8f by Serhiy Storchaka, 2017-04-05 19:14:26 +03:00 (committed via GitHub)
Parent: 7d932d9588
53 changed files with 529 additions and 524 deletions


@@ -30,7 +30,7 @@ Rationale
=========
List comprehensions provide a more concise way to create lists in situations
-where map() and filter() and/or nested loops would currently be used.
+where ``map()`` and ``filter()`` and/or nested loops would currently be used.
Examples


@@ -50,11 +50,11 @@ above construct.
For these instances, and others where a range of numbers is
desired, Python provides the ``range`` builtin function, which
creates a list of numbers. The ``range`` function takes three
-arguments, ``start``, ``end`` and ``step``. ``start`` and ``step`` are
+arguments, *start*, *end* and *step*. *start* and *step* are
optional, and default to 0 and 1, respectively.
The ``range`` function creates a list of numbers, starting at
-``start``, with a step of ``step``, up to, but not including ``end``, so
+*start*, with a step of *step*, up to, but not including *end*, so
that ``range(10)`` produces a list that has exactly 10 items, the
numbers 0 through 9.
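The semantics quoted above are easy to check; a small sketch (note that in today's Python, ``range()`` returns a lazy sequence rather than a list, so ``list()`` is applied to materialize it):

```python
# Illustrative sketch of the range() semantics described above.
assert list(range(10)) == [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
assert list(range(2, 10)) == [2, 3, 4, 5, 6, 7, 8, 9]  # start defaults to 0 when omitted
assert list(range(2, 10, 3)) == [2, 5, 8]              # step=3; end is excluded
```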
@@ -109,19 +109,19 @@ the original sequence. This is done using a "range notation"::
['c', 'd']
This range notation consists of zero, one or two indices separated
-by a colon. The first index is the ``start`` index, the second the
-``end``. When either is left out, they default to respectively the
+by a colon. The first index is the *start* index, the second the
+*end*. When either is left out, they default to respectively the
start and the end of the sequence.
There is also an extended range notation, which incorporates
-``step`` as well. Though this notation is not currently supported
+*step* as well. Though this notation is not currently supported
by most builtin types, if it were, it would work as follows::
>>> l[1:4:2]
['b', 'd']
The third "argument" to the slice syntax is exactly the same as
-the ``step`` argument to ``range()``. The underlying mechanisms of the
+the *step* argument to ``range()``. The underlying mechanisms of the
standard, and these extended slices, are sufficiently different
and inconsistent that many classes and extensions outside of
mathematical packages do not implement support for the extended
@@ -160,9 +160,9 @@ range literals::
[5, 4, 3, 2]
There is one minor difference between range literals and the slice
-syntax: though it is possible to omit all of ``start``, ``end`` and
-``step`` in slices, it does not make sense to omit ``end`` in range
-literals. In slices, ``end`` would default to the end of the list,
+syntax: though it is possible to omit all of *start*, *end* and
+*step* in slices, it does not make sense to omit *end* in range
+literals. In slices, *end* would default to the end of the list,
but this has no meaning in range literals.
@@ -178,7 +178,7 @@ The use of a new bytecode is necessary to be able to build ranges
based on other calculations, whose outcome is not known at compile
time.
-The code introduces two new functions to listobject.c, which are
+The code introduces two new functions to ``listobject.c``, which are
currently hovering between private functions and full-fledged API
calls.
@@ -189,8 +189,8 @@ returning NULL if an error occurs. Its prototype is::
``PyList_GetLenOfRange()`` is a helper function used to determine the
length of a range. Previously, it was a static function in
-bltinmodule.c, but is now necessary in both listobject.c and
-bltinmodule.c (for ``xrange``). It is made non-static solely to avoid
+``bltinmodule.c``, but is now necessary in both ``listobject.c`` and
+``bltinmodule.c`` (for ``xrange``). It is made non-static solely to avoid
code duplication. Its prototype is::
long PyList_GetLenOfRange(long start, long end, long step)
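The arithmetic this helper performs can be sketched in Python (a re-derivation for illustration only, not the C source; the function name is kept merely as a mnemonic):

```python
def len_of_range(start, end, step):
    """Number of elements in range(start, end, step), assuming step != 0.

    Sketch of the arithmetic that a helper like PyList_GetLenOfRange
    would perform; the real function is written in C.
    """
    if step > 0:
        lo, hi = start, end
    else:
        # A negative step counts downward; normalize to a positive step.
        lo, hi = end, start
        step = -step
    if lo >= hi:
        return 0
    return (hi - lo - 1) // step + 1

assert len_of_range(0, 10, 1) == 10
assert len_of_range(5, 1, -1) == len(list(range(5, 1, -1)))
```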
@@ -199,7 +199,7 @@ code duplication. Its prototype is::
Open issues
===========
-- One possible solution to the discrepancy of requiring the ``end``
+- One possible solution to the discrepancy of requiring the *end*
argument in range literals is to allow the range syntax to
create a "generator", rather than a list, such as the ``xrange``
builtin function does. However, a generator would not be a


@@ -28,10 +28,10 @@ start in many projects.
However, the standard library modules aren't always the best
choices for a job. Some library modules were quick hacks
-(e.g. calendar, commands), some were designed poorly and are now
-near-impossible to fix (cgi), and some have been rendered obsolete
-by other, more complete modules (binascii offers the same features
-as the binhex, uu, base64 modules). This PEP describes a list of
+(e.g. ``calendar``, ``commands``), some were designed poorly and are now
+near-impossible to fix (``cgi``), and some have been rendered obsolete
+by other, more complete modules (``binascii`` offers the same features
+as the ``binhex``, ``uu``, ``base64`` modules). This PEP describes a list of
third-party modules that make Python more competitive for various
application domains, forming the Python Advanced Library.


@@ -64,8 +64,8 @@ operators for matrix solution and other operations, Prof. James
Rawlings replied [3]_:
I DON'T think it's a must have, and I do a lot of matrix
-inversion. I cannot remember if its A\b or b\A so I always
-write inv(A)*b instead. I recommend dropping \.
+inversion. I cannot remember if its ``A\b`` or ``b\A`` so I always
+write ``inv(A)*b`` instead. I recommend dropping ``\``.
Based on this discussion, and feedback from students at the US
national laboratories and elsewhere, we recommended adding only


@@ -58,7 +58,7 @@ or with ``unicode()`` if it is a Unicode string.
- an expression enclosed in square brackets, or
- an argument list enclosed in parentheses
(This is exactly the pattern expressed in the Python grammar
-by "``NAME`` trailer*", using the definitions in Grammar/Grammar.)
+by "``NAME trailer*``", using the definitions in ``Grammar/Grammar``.)
2. Any complete Python expression enclosed in curly braces.
@@ -69,7 +69,7 @@ Examples
========
Here is an example of an interactive session exhibiting the
-expected behaviour of this feature::
+expected behaviour of this feature. ::
>>> a, b = 5, 6
>>> print $'a = $a, b = $b'
@@ -127,18 +127,18 @@ Implementation
==============
The ``Itpl`` module at [1]_ provides a
-prototype of this feature. It uses the tokenize module to find
+prototype of this feature. It uses the ``tokenize`` module to find
the end of an expression to be interpolated, then calls ``eval()``
on the expression each time a value is needed. In the prototype,
the expression is parsed and compiled again each time it is
evaluated.
As an optimization, interpolated strings could be compiled
-directly into the corresponding bytecode; that is::
+directly into the corresponding bytecode; that is, ::
$'a = $a, b = $b'
-could be compiled as though it were the expression::
+could be compiled as though it were the expression ::
('a = ' + str(a) + ', b = ' + str(b))
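The ``Itpl``-style find-and-``eval()`` scheme described above can be sketched with a hypothetical helper (``interp()`` is illustrative, not the module's actual API, and handles only simple ``$name`` forms):

```python
import re

def interp(template, namespace):
    """Replace each $name in template with str() of its value.

    A minimal sketch of the interpolation semantics: locate each
    $name expression and eval() it against the given namespace.
    """
    return re.sub(r"\$(\w+)",
                  lambda m: str(eval(m.group(1), {}, namespace)),
                  template)

a, b = 5, 6
assert interp("a = $a, b = $b", {"a": a, "b": b}) == "a = 5, b = 6"
```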


@@ -40,8 +40,8 @@ not...)
Programmers are often told that they can implement sets as
dictionaries with "don't care" values. Items can be added to
these "sets" by assigning the "don't care" value to them;
-membership can be tested using "dict.has_key"; and items can be
-deleted using "del". However, the other main operations on sets
+membership can be tested using ``dict.has_key``; and items can be
+deleted using ``del``. However, the other main operations on sets
(union, intersection, and difference) are not directly supported
by this representation, since their meaning is ambiguous for
dictionaries containing key/value pairs.
@@ -65,22 +65,22 @@ will step through the elements of S in arbitrary order, while::
set(x**2 for x in S)
will produce a set containing the squares of all elements in S,
-Membership will be tested using "in" and "not in", and basic set
+Membership will be tested using ``in`` and ``not in``, and basic set
operations will be implemented by a mixture of overloaded
operators:
-========= =============================
-\| union
-& intersection
-^ symmetric difference
-\- asymmetric difference
-== != equality and inequality tests
-< <= >= > subset and superset tests
-========= =============================
+============= =============================
+``|`` union
+``&`` intersection
+``^`` symmetric difference
+``-`` asymmetric difference
+``== !=`` equality and inequality tests
+``< <= >= >`` subset and superset tests
+============= =============================
and methods:
-================== ============================================
+================== =============================================
``S.add(x)`` Add "x" to the set.
``S.update(s)`` Add all elements of sequence "s" to the set.
@@ -93,8 +93,8 @@ and methods:
do nothing if it is not.
``S.pop()`` Remove and return an arbitrary element,
-raising a LookupError if the element is not
-present.
+raising a ``LookupError`` if the element is
+not present.
``S.clear()`` Remove all elements from this set.
@@ -103,7 +103,7 @@ and methods:
``s.issuperset()`` Check for a superset relationship.
``s.issubset()`` Check for a subset relationship.
-================== ============================================
+================== =============================================
and two new built-in conversion functions:
@@ -117,14 +117,14 @@ and two new built-in conversion functions:
Notes:
-1. We propose using the bitwise operators "\|\&" for intersection
-and union. While "+" for union would be intuitive, "\*" for
+1. We propose using the bitwise operators "``|&``" for intersection
+and union. While "``+``" for union would be intuitive, "``*``" for
intersection is not (very few of the people asked guessed what
it did correctly).
-2. We considered using "+" to add elements to a set, rather than
-"add". However, Guido van Rossum pointed out that "+" is
-symmetric for other built-in types (although "\*" is not). Use
+2. We considered using "``+``" to add elements to a set, rather than
+"add". However, Guido van Rossum pointed out that "``+``" is
+symmetric for other built-in types (although "``*``" is not). Use
of "add" will also avoid confusion between that operation and
set union.
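The operators and methods tabulated above behave, in the built-in ``set`` type that eventually grew out of this proposal, essentially as described; a quick sketch:

```python
S, T = {1, 2, 3}, {3, 4}

assert S | T == {1, 2, 3, 4}   # union
assert S & T == {3}            # intersection
assert S ^ T == {1, 2, 4}      # symmetric difference
assert S - T == {1, 2}         # asymmetric difference
assert {1, 2} <= S             # subset test

S.add(4)                       # "add", not "+", per note 2 above
S.discard(99)                  # absent element: no error
assert 4 in S and 99 not in S
```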
@@ -132,8 +132,8 @@ Notes:
Set Notation
============
-The PEP originally proposed {1,2,3} as the set notation and {-} for
-the empty set. Experience with Python 2.3's sets.py showed that
+The PEP originally proposed ``{1,2,3}`` as the set notation and ``{-}`` for
+the empty set. Experience with Python 2.3's ``sets.py`` showed that
the notation was not necessary. Also, there was some risk of making
dictionaries less instantly recognizable.
@@ -156,7 +156,7 @@ types were introduced in Python 2.4. The improvements are:
* Better hash algorithm for frozensets
* More compact pickle format (storing only an element list
instead of a dictionary of key:value pairs where the value
-is always True).
+is always ``True``).
* Use a ``__reduce__`` function so that deep copying is automatic.
* The BaseSet concept was eliminated.
* The ``union_update()`` method became just ``update()``.
@@ -196,9 +196,9 @@ to be immutable, this would preclude sets of sets (which are
widely used in graph algorithms and other applications).
Earlier drafts of PEP 218 had only a single set type, but the
-sets.py implementation in Python 2.3 has two, Set and
+``sets.py`` implementation in Python 2.3 has two, Set and
ImmutableSet. For Python 2.4, the new built-in types were named
-set and frozenset which are slightly less cumbersome.
+``set`` and ``frozenset`` which are slightly less cumbersome.
There are two classes implemented in the "sets" module. Instances
of the Set class can be modified by the addition or removal of


@@ -55,17 +55,17 @@ more advanced parsers/tokenizers, however, this should not be a
problem.
A slightly special case exists for importing sub-modules. The
-statement::
+statement ::
import os.path
stores the module ``os`` locally as ``os``, so that the imported
-submodule ``path`` is accessible as ``os.path``. As a result::
+submodule ``path`` is accessible as ``os.path``. As a result, ::
import os.path as p
stores ``os.path``, not ``os``, in ``p``. This makes it effectively the
-same as::
+same as ::
from os import path as p
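This equivalence is easy to confirm in a current interpreter:

```python
# "import os.path as p" binds the submodule itself, so it behaves
# like "from os import path as p".
import os.path as p
from os import path as q

assert p is q                 # both names refer to the same module object
assert p.join("a", "b")       # the submodule's API is directly reachable
```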


@@ -42,11 +42,11 @@ And even if we did, that would mean creating yet another object
with its ``__init__`` call and associated overhead.
cgi.py: Currently, query data with no ``=`` are ignored. Even if
-keep_blank_values is set, queries like ``...?value=&...`` are
+``keep_blank_values`` is set, queries like ``...?value=&...`` are
returned with blank values but queries like ``...?value&...`` are
completely lost. It would be great if such data were made
available through the ``FieldStorage`` interface, either as entries
-with None as values, or in a separate list.
+with ``None`` as values, or in a separate list.
Utility function: build a query string from a list of 2-tuples


@@ -25,12 +25,12 @@ to existing code.
Syntax
======
-The syntax of ``\x`` escapes, in all flavors of non-raw strings, becomes::
+The syntax of ``\x`` escapes, in all flavors of non-raw strings, becomes ::
\xhh
where h is a hex digit (0-9, a-f, A-F). The exact syntax in 1.5.2 is
-not clearly specified in the Reference Manual; it says::
+not clearly specified in the Reference Manual; it says ::
\xhh...
@@ -44,11 +44,11 @@ whether the Reference Manual intended either of the 1-digit or
Semantics
=========
-In an 8-bit non-raw string::
+In an 8-bit non-raw string, ::
\xij
-expands to the character::
+expands to the character ::
chr(int(ij, 16))
@@ -59,7 +59,7 @@ In a Unicode string,
\xij
-acts the same as::
+acts the same as ::
\u00ij
@@ -67,7 +67,7 @@ i.e. it expands to the obvious Latin-1 character from the initial
segment of the Unicode space.
An ``\x`` not followed by at least two hex digits is a compile-time error,
-specifically ``ValueError`` in 8-bit strings, and UnicodeError (a subclass
+specifically ``ValueError`` in 8-bit strings, and ``UnicodeError`` (a subclass
of ``ValueError``) in Unicode strings. Note that if an ``\x`` is followed by
more than two hex digits, only the first two are "consumed". In 1.6
and before all but the *last* two were silently ignored.
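The exactly-two-hex-digits rule described above can be checked directly (note that the short-escape error surfaces as a ``SyntaxError`` in current interpreters rather than the exception classes named by this PEP, hence the broad ``except``):

```python
# The \x escape rules described above, checked in a current interpreter:
assert "\x41" == "A"               # exactly two hex digits are consumed
assert "\x415" == "A5"             # a third digit is ordinary text
assert chr(int("41", 16)) == "A"   # the expansion rule chr(int(ij, 16))

# An \x with fewer than two hex digits is a compile-time error; eval()
# of a string literal lets the failure be observed at runtime.
try:
    eval(r'"\x4"')
    raise AssertionError("expected an error for a short \\x escape")
except (SyntaxError, ValueError):
    pass
```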
@@ -126,7 +126,7 @@ than 2 hex digits following -- it's clearly more Pythonic to insist on
When Unicode strings were introduced to Python, ``\x`` was generalized so
as to ignore all but the last *four* hex digits in Unicode strings.
-This caused a technical difficulty for the new regular expression engine::
+This caused a technical difficulty for the new regular expression engine:
SRE tries very hard to allow mixing 8-bit and Unicode patterns and
strings in intuitive ways, and it no longer had any way to guess what,
for example, ``r"\x123456"`` should mean as a pattern: is it asking to match
@@ -192,10 +192,10 @@ Believed to be none. The candidates for breakage would mostly be
parsing tools, but the author knows of none that worry about the
internal structure of Python strings beyond the approximation "when
there's a backslash, swallow the next character". Tim Peters checked
-python-mode.el, the std tokenize.py and pyclbr.py, and the IDLE syntax
+``python-mode.el``, the std ``tokenize.py`` and ``pyclbr.py``, and the IDLE syntax
coloring subsystem, and believes there's no need to change any of
-them. Tools like tabnanny.py and checkappend.py inherit their immunity
-from tokenize.py.
+them. Tools like ``tabnanny.py`` and ``checkappend.py`` inherit their immunity
+from ``tokenize.py``.
Reference Implementation


@@ -191,7 +191,7 @@ Early comments on the PEP from Guido:
2. I don't like the access method either (``__doc_<attrname>__``).
-The author's reply
+The author's reply:
::


@@ -25,7 +25,7 @@ between numerical types are requested, coercions happen. While
the C rationale for the numerical model is that it is very similar
to what happens at the hardware level, that rationale does not
apply to Python. So, while it is acceptable to C programmers that
-2/3 == 0, it is surprising to many Python programmers.
+``2/3 == 0``, it is surprising to many Python programmers.
NOTE: in the light of recent discussions in the newsgroup, the
motivation in this PEP (and details) need to be extended.
@@ -76,7 +76,7 @@ then any answer might be wrong.
(But not horribly wrong: it's close to the truth.)
Now, there is two thing the models promises for the field operations
-(+, -, /, \*):
+(``+``, ``-``, ``/``, ``*``):
- If both operands satisfy ``isexact()``, the result satisfies
``isexact()``.
@@ -86,7 +86,7 @@ Now, there is two thing the models promises for the field operations
One consequence of these two rules is that all exact calcutions
are done as (complex) rationals: since the field laws must hold,
-then::
+then ::
(a/b)*b == a


@@ -13,12 +13,12 @@ Post-History:
Introduction
============
-The Modules/Setup mechanism has some flaws:
+The ``Modules/Setup`` mechanism has some flaws:
-* People have to remember to uncomment bits of Modules/Setup in
+* People have to remember to uncomment bits of ``Modules/Setup`` in
order to get all the possible modules.
-* Moving Setup to a new version of Python is tedious; new modules
+* Moving ``Setup`` to a new version of Python is tedious; new modules
have been added, so you can't just copy the older version, but
have to reconcile the two versions.
@@ -34,24 +34,24 @@ Use the Distutils to build the modules that come with Python.
The changes can be broken up into several pieces:
1. The Distutils needs some Python modules to be able to build
-modules. Currently I believe the minimal list is posix, _sre,
-and string.
+modules. Currently I believe the minimal list is ``posix``, ``_sre``,
+and ``string``.
These modules will have to be built before the Distutils can be
-used, so they'll simply be hardwired into Modules/Makefile and
+used, so they'll simply be hardwired into ``Modules/Makefile`` and
be automatically built.
2. A top-level setup.py script will be written that checks the
libraries installed on the system and compiles as many modules
as possible.
-3. Modules/Setup will be kept and settings in it will override
-setup.py's usual behavior, so you can disable a module known
+3. ``Modules/Setup`` will be kept and settings in it will override
+``setup.py``'s usual behavior, so you can disable a module known
to be buggy, or specify particular compilation or linker flags.
-However, in the common case where setup.py works correctly,
-everything in Setup will remain commented out. The other
-Setup.* become unnecessary, since nothing will be generating
-Setup automatically.
+However, in the common case where ``setup.py`` works correctly,
+everything in ``Setup`` will remain commented out. The other
+``Setup.*`` become unnecessary, since nothing will be generating
+``Setup`` automatically.
The patch was checked in for Python 2.1, and has been subsequently
modified.
@@ -73,26 +73,26 @@ The patch makes the following changes:
* Makes some required changes to distutils/sysconfig (these will
be checked in separately)
-* In the top-level Makefile.in, the "sharedmods" target simply
-runs "./python setup.py build", and "sharedinstall" runs
-"./python setup.py install". The "clobber" target also deletes
-the build/ subdirectory where Distutils puts its output.
+* In the top-level ``Makefile.in``, the "sharedmods" target simply
+runs ``"./python setup.py build"``, and "sharedinstall" runs
+``"./python setup.py install"``. The "clobber" target also deletes
+the ``build/`` subdirectory where Distutils puts its output.
-* Modules/Setup.config.in only contains entries for the gc and thread
-modules; the readline, curses, and db modules are removed because
-it's now setup.py's job to handle them.
+* ``Modules/Setup.config.in`` only contains entries for the ``gc`` and ``thread``
+modules; the ``readline``, ``curses``, and ``db`` modules are removed because
+it's now ``setup.py``'s job to handle them.
-* Modules/Setup.dist now contains entries for only 3 modules --
-_sre, posix, and strop.
+* ``Modules/Setup.dist`` now contains entries for only 3 modules --
+``_sre``, ``posix``, and ``strop``.
-* The configure script builds setup.cfg from setup.cfg.in. This
+* The ``configure`` script builds ``setup.cfg`` from ``setup.cfg.in``. This
is needed for two reasons: to make building in subdirectories
work, and to get the configured installation prefix.
-* Adds setup.py to the top directory of the source tree. setup.py
+* Adds ``setup.py`` to the top directory of the source tree. ``setup.py``
is the largest piece of the puzzle, though not the most
-complicated. setup.py contains a subclass of the BuildExt
-class, and extends it with a detect_modules() method that does
+complicated. ``setup.py`` contains a subclass of the ``BuildExt``
+class, and extends it with a ``detect_modules()`` method that does
the work of figuring out when modules can be compiled, and adding
them to the 'exts' list.
@@ -110,7 +110,7 @@ binary?
[Answer: building a Python binary with the Distutils should be
feasible, though no one has implemented it yet. This should be
done someday, but isn't a pressing priority as messing around with
-the top-level Makefile.pre.in is good enough.]
+the top-level ``Makefile.pre.in`` is good enough.]
Copyright


@@ -24,16 +24,16 @@ modules.
Interactive use
===============
-Simply typing "help" describes the help function (through ``repr()``
+Simply typing ``help`` describes the help function (through ``repr()``
overloading).
-"help" can also be used as a function.
+``help`` can also be used as a function.
-The function takes the following forms of input::
+The function takes the following forms of input:
-help( "string" ) -- built-in topic or global
-help( <ob> ) -- docstring from object or type
-help( "doc:filename" ) -- filename from Python documentation
+* ``help( "string" )`` -- built-in topic or global
+* ``help( <ob> )`` -- docstring from object or type
+* ``help( "doc:filename" )`` -- filename from Python documentation
If you ask for a global, it can be a fully-qualified name, such as::
@@ -43,7 +43,7 @@ You can also use the facility from a command-line::
python --help if
-In either situation, the output does paging similar to the "more"
+In either situation, the output does paging similar to the ``more``
command.
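The facility sketched in this PEP eventually materialized as the ``help()`` builtin backed by the standard ``pydoc`` module; a rough modern approximation of the string-returning layer (not the PEP's proposed ``onlinehelp`` API):

```python
# pydoc.render_doc returns documentation text instead of paging it,
# roughly the gethelp()-style layer this PEP describes.
import pydoc
import keyword

text = pydoc.render_doc(str, renderer=pydoc.plaintext)
assert "str" in text

# help("keywords") prints the keyword topic; the underlying data is
# also importable directly:
assert "if" in keyword.kwlist
```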
@@ -60,7 +60,7 @@ module::
onlinehelp.gethelp(object_or_string) -> string
It should also be possible to override the help display function
-by assigning to ``onlinehelp``.displayhelp(object_or_string).
+by assigning to ``onlinehelp.displayhelp(object_or_string)``.
The module should be able to extract module information from
either the HTML or LaTeX versions of the Python documentation.
@@ -71,35 +71,35 @@ in "special" syntaxes like structured text, HTML and LaTeX and
decode them appropriately.
A prototype implementation is available with the Python source
-distribution as nondist/sandbox/doctools/``onlinehelp``.py.
+distribution as ``nondist/sandbox/doctools/onlinehelp.py``.
Built-in Topics
===============
-help( "intro" ) - What is Python? Read this first!
+* ``help( "intro" )`` -- What is Python? Read this first!
-help( "keywords" ) - What are the keywords?
+* ``help( "keywords" )`` -- What are the keywords?
-help( "syntax" ) - What is the overall syntax?
+* ``help( "syntax" )`` -- What is the overall syntax?
-help( "operators" ) - What operators are available?
+* ``help( "operators" )`` -- What operators are available?
-help( "builtins" ) - What functions, types, etc. are built-in?
+* ``help( "builtins" )`` -- What functions, types, etc. are built-in?
-help( "modules" ) - What modules are in the standard library?
+* ``help( "modules" )`` -- What modules are in the standard library?
-help( "copyright" ) - Who owns Python?
+* ``help( "copyright" )`` -- Who owns Python?
-help( "moreinfo" ) - Where is there more information?
+* ``help( "moreinfo" )`` -- Where is there more information?
-help( "changes" ) - What changed in Python 2.0?
+* ``help( "changes" )`` -- What changed in Python 2.0?
-help( "extensions" ) - What extensions are installed?
+* ``help( "extensions" )`` -- What extensions are installed?
-help( "faq" ) - What questions are frequently asked?
+* ``help( "faq" )`` -- What questions are frequently asked?
-help( "ack" ) - Who has done work on Python lately?
+* ``help( "ack" )`` -- Who has done work on Python lately?
Security Issues


@@ -39,7 +39,7 @@ case-sensitive matches::
+-------------------+------------------+
In the upper left box, if you create "fiLe" it's stored as "fiLe",
-and only ``open("fiLe")`` will open it ``(open("file")`` will not, nor
+and only ``open("fiLe")`` will open it (``open("file")`` will not, nor
will the 14 other variations on that theme).
In the lower right box, if you create "fiLe", there's no telling
@@ -70,7 +70,7 @@ the current rules for import on Windows:
1. Despite that the filesystem is case-insensitive, Python insists
on a case-sensitive match. But not in the way the upper left
box works: if you have two files, ``FiLe.py`` and ``file.py`` on
-``sys.path``, and do::
+``sys.path``, and do ::
import file
@@ -120,7 +120,7 @@ The proposed new semantics for the lower left box:
A. If the ``PYTHONCASEOK`` environment variable exists, same as
before: silently accept the first case-insensitive match of any
-kind; raise ImportError if none found.
+kind; raise ``ImportError`` if none found.
B. Else search ``sys.path`` for the first case-sensitive match; raise
``ImportError`` if none found.


@@ -100,13 +100,13 @@ Open Issues
- Since ``2 == 2/1`` and maybe ``str(2/1) == '2'``, it reduces surprises
where objects seem equal but behave differently.
-- / can be freely used for integer division when I *know* that
+- ``/`` can be freely used for integer division when I *know* that
there is no remainder (if I am wrong and there is a remainder,
there will probably be some exception later).
Arguments against:
-- When I use the result of / as a sequence index, it's usually
+- When I use the result of ``/`` as a sequence index, it's usually
an error which should not be hidden by making the program
working for some data, since it will break for other data.


@@ -39,14 +39,14 @@ math classes. Making the "obvious" non-integer type one with more
predictable semantics will surprise new programmers less than
using floating point numbers. As quite a few posts on c.l.py and
on tutor@python.org have shown, people often get bit by strange
-semantics of floating point numbers: for example, round(0.98, 2)
+semantics of floating point numbers: for example, ``round(0.98, 2)``
still gives 0.97999999999999998.
Proposal
========
-Literals conforming to the regular expression '\d*.\d*' will be
+Literals conforming to the regular expression ``'\d*.\d*'`` will be
rational numbers.
@@ -76,11 +76,11 @@ Common Objections
Rationals are slow and memory intensive!
(Relax, I'm not taking floats away, I'm just adding two more characters.
-1e0 will still be a float)
+``1e0`` will still be a float)
Rationals must present themselves as a decimal float or they will be
-horrible for users expecting decimals (i.e. ``str(.5)`` should return '.5' and
-not '1/2'). This means that many rationals must be truncated at some
+horrible for users expecting decimals (i.e. ``str(.5)`` should return ``'.5'`` and
+not ``'1/2'``). This means that many rationals must be truncated at some
point, which gives us a new loss of precision.
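Python never grew rational literals, but the later ``fractions`` module illustrates both the exactness benefit and the ``str()`` objection raised above:

```python
from fractions import Fraction

half = Fraction(1, 2)
assert str(half) == "1/2"      # not ".5": exactly the display objection above
assert float(half) == 0.5      # explicit conversion, exact for this value

# Exact decimal input, unlike the round(0.98, 2) surprise quoted earlier:
assert Fraction(98, 100) == Fraction("0.98")
```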


@@ -45,7 +45,7 @@ The upload will be made to the host "www.python.org" on port
will consist of the following fields:
- ``distribution`` -- The file containing the module software (for
-example, a .tar.gz or .zip file).
+example, a ``.tar.gz`` or ``.zip`` file).
- ``distmd5sum`` -- The MD5 hash of the uploaded distribution,
encoded in ASCII representing the hexadecimal representation
@@ -83,7 +83,7 @@ Return Data
===========
The status of the upload will be reported using HTTP non-standard
-("X-\*)" headers. The ``X-Swalow-Status`` header may have the following
+(``X-*``) headers. The ``X-Swalow-Status`` header may have the following
values:
- ``SUCCESS`` -- Indicates that the upload has succeeded.


@@ -139,22 +139,22 @@ such look-ahead is not available in the Python tokenizer.
Questions and Answers
=====================
-Q: It looks like this PEP was written to allow definition of source
+**Q:** It looks like this PEP was written to allow definition of source
code character sets. Is that true?
-A: No. Even though the directive facility can be extended to
+**A:** No. Even though the directive facility can be extended to
allow source code encodings, no specific directive is proposed.
-Q: Then why was this PEP written at all?
+**Q:** Then why was this PEP written at all?
-A: It acts as a counter-proposal to [3]_, which proposes to
+**A:** It acts as a counter-proposal to [3]_, which proposes to
overload the import statement with a new meaning. This PEP
allows to solve the problem in a more general way.
-Q: But isn't mixing source encodings and language changes like
+**Q:** But isn't mixing source encodings and language changes like
mixing apples and oranges?
-A: Perhaps. To address the difference, the predefined
+**A:** Perhaps. To address the difference, the predefined
"transitional" directive has been defined.


@@ -44,11 +44,11 @@ something terminated by ``db``. Existing examples are: ``oracledb``,
``informixdb``, and ``pg95db``. These modules should export several
names:
-modulename(connection_string)
+``modulename(connection_string)``
Constructor for creating a connection to the database.
Returns a Connection Object.
-error
+``error``
Exception raised for errors from the database module.
@@ -57,24 +57,24 @@ Connection Objects
Connection Objects should respond to the following methods:
-close()
+``close()``
Close the connection now (rather than whenever ``__del__`` is
called). The connection will be unusable from this point
forward; an exception will be raised if any operation is
attempted with the connection.
-commit()
+``commit()``
Commit any pending transaction to the database.
-rollback()
+``rollback()``
Roll the database back to the start of any pending
transaction.
-cursor()
+``cursor()``
Return a new Cursor Object. An exception may be thrown if
the database does not support a cursor concept.
-callproc([params])
+``callproc([params])``
(Note: this method is not well-defined yet.) Call a
stored database procedure with the given (optional)
parameters. Returns the result of the stored procedure.
@@ -97,7 +97,7 @@ the context of a fetch operation.
Cursor Objects should respond to the following methods and
attributes:
-arraysize
+``arraysize``
This read/write attribute specifies the number of rows to
fetch at a time with ``fetchmany()``. This value is also used
when inserting multiple rows at a time (passing a
@@ -110,11 +110,11 @@ arraysize
``fetchmany()`` method, but are free to interact with the
database a single row at a time.
-description
+``description``
This read-only attribute is a tuple of 7-tuples. Each
7-tuple contains information describing each result
column: (name, type_code, display_size, internal_size,
-precision, scale, null_ok). This attribute will be None
+precision, scale, null_ok). This attribute will be ``None``
for operations that do not return rows or if the cursor
has not had an operation invoked via the ``execute()`` method
yet.
@@ -126,13 +126,13 @@ description
items of the 7-tuple will always be present; the others
may be database specific.
-close()
+``close()``
Close the cursor now (rather than whenever ``__del__`` is
called). The cursor will be unusable from this point
forward; an exception will be raised if any operation is
attempted with the cursor.
-execute(operation [,params])
+``execute(operation [,params])``
Execute (prepare) a database operation (query or command).
Parameters may be provided (as a sequence
(e.g. tuple/list)) and will be bound to variables in the
@@ -160,26 +160,26 @@ execute(operation [,params])
Using SQL terminology, these are the possible result
values from the ``execute()`` method:
-- If the statement is DDL (e.g. CREATE TABLE), then 1 is
+- If the statement is DDL (e.g. ``CREATE TABLE``), then 1 is
returned.
-- If the statement is DML (e.g. UPDATE or INSERT), then the
+- If the statement is DML (e.g. ``UPDATE`` or ``INSERT``), then the
number of rows affected is returned (0 or a positive
integer).
-- If the statement is DQL (e.g. SELECT), None is returned,
+- If the statement is DQL (e.g. ``SELECT``), ``None`` is returned,
indicating that the statement is not really complete until
you use one of the 'fetch' methods.
-fetchone()
+``fetchone()``
Fetch the next row of a query result, returning a single
tuple.
-fetchmany([size])
+``fetchmany([size])``
Fetch the next set of rows of a query result, returning as
a list of tuples. An empty list is returned when no more
rows are available. The number of rows to fetch is
-specified by the parameter. If it is None, then the
+specified by the parameter. If it is ``None``, then the
cursor's arraysize determines the number of rows to be
fetched.
@ -189,12 +189,12 @@ fetchmany([size])
parameter is used, then it is best for it to retain the
same value from one ``fetchmany()`` call to the next.
fetchall()
``fetchall()``
Fetch all rows of a query result, returning as a list of
tuples. Note that the cursor's arraysize attribute can
affect the performance of this operation.
setinputsizes(sizes)
``setinputsizes(sizes)``
(Note: this method is not well-defined yet.) This can be
used before a call to ``execute()`` to predefine memory
areas for the operation's parameters. sizes is specified
@ -202,7 +202,7 @@ setinputsizes(sizes)
should be a Type object that corresponds to the input that
will be used, or it should be an integer specifying the
maximum length of a string parameter. If the item is
'None', then no predefined memory area will be reserved
``None``, then no predefined memory area will be reserved
for that column (this is useful to avoid predefined areas
for large inputs).
@ -214,12 +214,12 @@ setinputsizes(sizes)
Implementations are free to do nothing and users are free
to not use it.
setoutputsize(size [,col])
``setoutputsize(size [,col])``
(Note: this method is not well-defined yet.)
Set a column buffer size for fetches of large columns
(e.g. LONG). The column is specified as an index into the
result tuple. Using a column of None will set the default
result tuple. Using a column of ``None`` will set the default
size for all large columns in the cursor.
This method would be used before the ``execute()`` method is
@ -236,57 +236,57 @@ DBI Helper Objects
Many databases need to have the input in a particular format for
binding to an operation's input parameters. For example, if an
input is destined for a DATE column, then it must be bound to the
input is destined for a ``DATE`` column, then it must be bound to the
database in a particular string format. Similar problems exist
for "Row ID" columns or large binary items (e.g. blobs or RAW
for "Row ID" columns or large binary items (e.g. blobs or ``RAW``
columns). This presents problems for Python since the parameters
to the ``execute()`` method are untyped. When the database module
sees a Python string object, it doesn't know if it should be bound
as a simple CHAR column, as a raw binary item, or as a DATE.
as a simple CHAR column, as a raw binary item, or as a ``DATE``.
To overcome this problem, the 'dbi' module was created. This
module specifies some basic database interface types for working
with databases. There are two classes: 'dbiDate' and 'dbiRaw'.
These are simple container classes that wrap up a value. When
passed to the database modules, the module can then detect that
the input parameter is intended as a DATE or a RAW. For symmetry,
the database modules will return DATE and RAW columns as instances
the input parameter is intended as a ``DATE`` or a ``RAW``. For symmetry,
the database modules will return ``DATE`` and ``RAW`` columns as instances
of these classes.
A Cursor Object's 'description' attribute returns information
about each of the result columns of a query. The 'type_code is
defined to be one of five types exported by this module: 'STRING',
'RAW', 'NUMBER', 'DATE', or 'ROWID'.
about each of the result columns of a query. The 'type_code' is
defined to be one of five types exported by this module: ``STRING``,
``RAW``, ``NUMBER``, ``DATE``, or ``ROWID``.
The module exports the following names:
dbiDate(value)
``dbiDate(value)``
This function constructs a 'dbiDate' instance that holds a
date value. The value should be specified as an integer
number of seconds since the "epoch" (e.g. ``time.time()``).
dbiRaw(value)
``dbiRaw(value)``
This function constructs a 'dbiRaw' instance that holds a
raw (binary) value. The value should be specified as a
Python string.
STRING
``STRING``
This object is used to describe columns in a database that
are string-based (e.g. CHAR).
RAW
``RAW``
This object is used to describe (large) binary columns in
a database (e.g. LONG RAW, blobs).
NUMBER
``NUMBER``
This object is used to describe numeric columns in a
database.
DATE
``DATE``
This object is used to describe date columns in a
database.
ROWID
``ROWID``
This object is used to describe the "Row ID" column in a
database.
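The cursor interface sketched here evolved into DB-API 2.0; as a minimal illustration of the ``execute()``/``fetch*()``/``description`` workflow described above (assuming the stdlib ``sqlite3`` module, a later DB-API 2.0 implementation, not part of this spec):

```python
import sqlite3

# In-memory database; sqlite3 implements the DB-API 2.0 descendant
# of the interface described above.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (name TEXT, n INTEGER)")
cur.executemany("INSERT INTO t VALUES (?, ?)", [("a", 1), ("b", 2), ("c", 3)])

cur.execute("SELECT name, n FROM t ORDER BY n")
print(cur.description[0][0])  # first column name: 'name'
print(cur.fetchone())         # ('a', 1)
print(cur.fetchmany(2))       # [('b', 2), ('c', 3)]
conn.close()
```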

View File

@ -25,7 +25,7 @@ The ``xrange()`` function has one idiomatic use::
for i in xrange(...): ...
However, the xrange() object has a bunch of rarely used behaviors
However, the ``xrange()`` object has a bunch of rarely used behaviors
that attempt to make it more sequence-like. These are so rarely
used that historically they have had serious bugs (e.g. off-by-one
errors) that went undetected for several releases.
@ -38,18 +38,18 @@ reduce maintenance and code size.
Proposed Solution
=================
I propose to strip the `xrange()` object to the bare minimum. The
only retained sequence behaviors are x[i], len(x), and repr(x).
In particular, these behaviors will be dropped::
I propose to strip the ``xrange()`` object to the bare minimum. The
only retained sequence behaviors are ``x[i]``, ``len(x)``, and ``repr(x)``.
In particular, these behaviors will be dropped:
x[i:j] (slicing)
x*n, n*x (sequence-repeat)
cmp(x1, x2) (comparisons)
i in x (containment test)
x.tolist() method
x.start, x.stop, x.step attributes
* ``x[i:j]`` (slicing)
* ``x*n``, ``n*x`` (sequence-repeat)
* ``cmp(x1, x2)`` (comparisons)
* ``i in x`` (containment test)
* ``x.tolist()`` method
* ``x.start``, ``x.stop``, ``x.step`` attributes
I also propose to change the signature of the `PyRange_New()` C API
I also propose to change the signature of the ``PyRange_New()`` C API
to remove the 4th argument (the repetition count).
By implementing a custom iterator type, we could speed up the
@ -60,8 +60,8 @@ does just fine).
Scope
=====
This PEP affects the `xrange()` built-in function and the
`PyRange_New()` C API.
This PEP affects the ``xrange()`` built-in function and the
``PyRange_New()`` C API.
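For reference, the three behaviors the proposal retains can be sketched with modern Python 3's ``range`` (the successor to ``xrange``, which as it turned out later regained slicing and containment as well):

```python
r = range(10)

# The retained sequence behaviors: indexing, len(), and repr().
print(r[3])     # 3
print(len(r))   # 10
print(repr(r))  # range(0, 10)
```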
Risks

View File

@ -14,11 +14,11 @@ Post-History: 27-Jun-2001
Abstract
========
Python 2.1 unicode characters can have ordinals only up to 2**16 -1.
Python 2.1 unicode characters can have ordinals only up to ``2**16 - 1``.
This range corresponds to a range in Unicode known as the Basic
Multilingual Plane. There are now characters in Unicode that live
on other "planes". The largest addressable character in Unicode
has the ordinal 17 * 2**16 - 1 (0x10ffff). For readability, we
has the ordinal ``17 * 2**16 - 1`` (``0x10ffff``). For readability, we
will call this TOPCHAR and call characters in this range "wide
characters".
@ -74,26 +74,26 @@ user, Python 2.2 will allow the 4-byte implementation as a
build-time option. Users can choose whether they care about
wide characters or prefer to preserve memory.
The 4-byte option is called ``wide Py_UNICODE``. The 2-byte option
is called ``narrow Py_UNICODE``.
The 4-byte option is called "wide ``Py_UNICODE``". The 2-byte option
is called "narrow ``Py_UNICODE``".
Most things will behave identically in the wide and narrow worlds.
* unichr(i) for 0 <= i < 2**16 (0x10000) always returns a
* ``unichr(i)`` for 0 <= i < ``2**16`` (``0x10000``) always returns a
length-one string.
* unichr(i) for 2**16 <= i <= TOPCHAR will return a
* ``unichr(i)`` for ``2**16`` <= i <= TOPCHAR will return a
length-one string on wide Python builds. On narrow builds it will
raise ``ValueError``.
ISSUE
**ISSUE**
Python currently allows ``\U`` literals that cannot be
represented as a single Python character. It generates two
Python characters known as a "surrogate pair". Should this
be disallowed on future narrow Python builds?
Pro:
**Pro:**
Python already allows the construction of a surrogate pair
for a large unicode literal character escape sequence.
@ -103,7 +103,7 @@ Most things will behave identically in the wide and narrow worlds.
is basically a short-form way of invoking the unicode-escape
codec.
Con:
**Con:**
Surrogates could be easily created this way but the user
still needs to be careful about slicing, indexing, printing
@ -111,7 +111,7 @@ Most things will behave identically in the wide and narrow worlds.
literals should not support surrogates.
ISSUE
**ISSUE**
Should Python allow the construction of characters that do
not correspond to Unicode code points? Unassigned Unicode
@ -120,14 +120,14 @@ Most things will behave identically in the wide and narrow worlds.
guaranteed never to be used by Unicode. Should we allow access
to them anyhow?
Pro:
**Pro:**
If a Python user thinks they know what they're doing why
should we try to prevent them from violating the Unicode
spec? After all, we don't stop 8-bit strings from
containing non-ASCII characters.
Con:
**Con:**
Codecs and other Unicode-consuming code will have to be
careful of these characters which are disallowed by the
@ -137,16 +137,16 @@ Most things will behave identically in the wide and narrow worlds.
* There is an integer value in the sys module that describes the
largest ordinal for a character in a Unicode string on the current
interpreter. ``sys.maxunicode`` is 2**16-1 (0xffff) on narrow builds
interpreter. ``sys.maxunicode`` is ``2**16-1`` (``0xffff``) on narrow builds
of Python and TOPCHAR on wide builds.
ISSUE:
**ISSUE:**
Should there be distinct constants for accessing
TOPCHAR and the real upper bound for the domain of
unichr (if they differ)? There has also been a
suggestion of sys.unicodewidth which can take the
values 'wide' and 'narrow'.
``unichr`` (if they differ)? There has also been a
suggestion of ``sys.unicodewidth`` which can take the
values ``'wide'`` and ``'narrow'``.
* every Python Unicode character represents exactly one Unicode code
point (i.e. Python Unicode Character = Abstract Unicode character).
@ -162,19 +162,19 @@ Most things will behave identically in the wide and narrow worlds.
and encode 32-bit code points as surrogate pairs on narrow Python
builds.
ISSUE
**ISSUE**
Should there be a way to tell codecs not to generate
surrogates and instead treat wide characters as
errors?
Pro:
**Pro:**
I might want to write code that works only with
fixed-width characters and does not have to worry about
surrogates.
Con:
**Con:**
No clear proposal of how to communicate this to codecs.
@ -198,14 +198,14 @@ use.
There is a new configure option:
===================== ==========================================
--enable-unicode=ucs2 configures a narrow Py_UNICODE, and uses
===================== ============================================
--enable-unicode=ucs2 configures a narrow ``Py_UNICODE``, and uses
wchar_t if it fits
--enable-unicode=ucs4 configures a wide Py_UNICODE, and uses
--enable-unicode=ucs4 configures a wide ``Py_UNICODE``, and uses
wchar_t if it fits
--enable-unicode same as "=ucs2"
--disable-unicode entirely remove the Unicode functionality.
===================== ==========================================
===================== ============================================
It is also proposed that one day ``--enable-unicode`` will just
default to the width of your platform's ``wchar_t``.
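Modern CPython (3.3+) moved past this build-time choice to a flexible string representation in which every build behaves like a wide build; the constants discussed above can be checked directly:

```python
import sys

# TOPCHAR, the largest Unicode code point, is now always the maximum.
print(hex(sys.maxunicode))  # 0x10ffff

# chr() of a supplementary-plane character is a length-one string
# (no surrogate pairs on any modern build).
ch = chr(0x10FFFF)
print(len(ch))  # 1
```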

View File

@ -53,7 +53,7 @@ recognize any of the bits set in the supplied flags.
The flags supplied will be bitwise-"or"ed with the flags that
would be set anyway, unless the new fifth optional argument is a
non-zero intger, in which case the flags supplied will be exactly
non-zero integer, in which case the flags supplied will be exactly
the set used.
The above-mentioned flags are not currently exposed to Python. I
@ -67,7 +67,7 @@ write code such as::
__future__.generators.compiler_flag)
A recent change means that these same bits can be used to tell if
a code object was compiled with a given feature; for instance::
a code object was compiled with a given feature; for instance ::
codeob.co_flags & __future__.generators.compiler_flag
@ -81,7 +81,7 @@ options supported by the running interpreter.
I also propose adding a pair of classes to the standard library
module codeop.
One - Compile - will sport a ``__call__`` method which will act much
One - ``Compile`` - will sport a ``__call__`` method which will act much
like the builtin "compile" of 2.1 with the difference that after
it has compiled a ``__future__`` statement, it "remembers" it and
compiles all subsequent code with the ``__future__`` option in effect.
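The ``codeop.Compile`` class described here was implemented and remains in the standard library; a small sketch of the "remembering" behavior, using ``annotations`` (a future feature available since Python 3.7) as the example flag:

```python
import codeop
import __future__

compiler = codeop.Compile()
flag = __future__.annotations.compiler_flag

# Compiling a __future__ statement makes the compiler remember it...
compiler("from __future__ import annotations", "<input>", "exec")

# ...so subsequent code objects carry the feature's compiler flag.
code = compiler("pass", "<input>", "exec")
print(bool(code.co_flags & flag))  # True
```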

View File

@ -84,7 +84,7 @@ resulting in::
[('b', 23), ('d', 17), ('c', 5), ('a', 2), ('e', 1)]
which shows the list in by-value order, largest first. (In this
case, ``b`` was found to have the most occurrences.)
case, ``'b'`` was found to have the most occurrences.)
This works fine, but is "hard to use" in two aspects. First,
although this idiom is known to veteran Pythoneers, it is not at

View File

@ -14,7 +14,7 @@ Post-History:
Abstract
========
Much like the parser module exposes the Python parser, this PEP
Much like the ``parser`` module exposes the Python parser, this PEP
proposes that the parser generator used to create the Python
parser, ``pgen``, be exposed as a module in Python.
@ -25,8 +25,8 @@ Rationale
Through the course of Pythonic history, there have been numerous
discussions about the creation of a Python compiler [1]_. These
have resulted in several implementations of Python parsers, most
notably the parser module currently provided in the Python
standard library [2]_ and Jeremy Hylton's compiler module [3]_.
notably the ``parser`` module currently provided in the Python
standard library [2]_ and Jeremy Hylton's ``compiler`` module [3]_.
However, while multiple language changes have been proposed
[4]_ [5]_, experimentation with the Python syntax has lacked the
benefit of a Python binding to the actual parser generator used to
@ -75,7 +75,7 @@ The ``parseGrammarFile()`` function will read the file pointed to
by fileName and create an AST object. The AST nodes will
contain the nonterminal, numeric values of the parser
generator meta-grammar. The output AST will be an instance of
the AST extension class as provided by the parser module.
the AST extension class as provided by the ``parser`` module.
Syntax errors in the input file will cause the SyntaxError
exception to be raised.
@ -94,7 +94,7 @@ string for input, as opposed to the file name.
The ``buildParser()`` function will accept an AST object for input
and return a DFA (deterministic finite automaton) data
structure. The DFA data structure will be a C extension
class, much like the AST structure is provided in the parser
class, much like the AST structure is provided in the ``parser``
module. If the input AST does not conform to the nonterminal
codes defined for the ``pgen`` meta-grammar, ``buildParser()`` will
throw a ``ValueError`` exception.
@ -106,7 +106,7 @@ throw a ``ValueError`` exception.
The ``parseFile()`` function will essentially be a wrapper for the
``PyParser_ParseFile()`` C API function. The wrapper code will
accept the DFA C extension class, and the file name. An AST
instance that conforms to the lexical values in the token
instance that conforms to the lexical values in the ``token``
module and the nonterminal values contained in the DFA will be
output.
@ -150,18 +150,18 @@ A cunning plan has been devised to accomplish this enhancement:
1. Rename the ``pgen`` functions to conform to the CPython naming
standards. This action may involve adding some header files to
the Include subdirectory.
the ``Include`` subdirectory.
2. Move the ``pgen`` C modules in the Makefile.pre.in from unique ``pgen``
elements to the Python C library.
3. Make any needed changes to the parser module so the AST
3. Make any needed changes to the ``parser`` module so the AST
extension class understands that there are AST types it may not
understand. Cursory examination of the AST extension class
shows that it keeps track of whether the tree is a suite or an
expression.
3. Code an additional C module in the Modules directory. The C
3. Code an additional C module in the ``Modules`` directory. The C
extension module will implement the DFA extension class and the
functions outlined in the previous section.

View File

@ -14,13 +14,13 @@ Post-History:
Notice
======
This PEP is withdrawn by the author. He writes::
This PEP is withdrawn by the author. He writes:
Removing duplicate elements from a list is a common task, but
there are only two reasons I can see for making it a built-in.
The first is if it could be done much faster, which isn't the
case. The second is if it makes it significantly easier to
write code. The introduction of sets.py eliminates this
write code. The introduction of ``sets.py`` eliminates this
situation since creating a sequence without duplicates is just
a matter of choosing a different data structure: a set instead
of a list.
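A sketch of that point in modern Python, where both a ``set`` and an order-preserving ``dict``-based idiom make deduplication a one-liner:

```python
items = ["a", "b", "a", "c", "b"]

# Unordered deduplication with a set
print(sorted(set(items)))          # ['a', 'b', 'c']

# Order-preserving deduplication (dicts remember insertion order)
print(list(dict.fromkeys(items)))  # ['a', 'b', 'c']
```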
@ -64,16 +64,16 @@ Reference Implementation
========================
I've written the brute force version. It's about 20 lines of code
in listobject.c. Adding support for hash table and sorted
in ``listobject.c``. Adding support for hash table and sorted
duplicate removal would only take another hour or so.
References
==========
.. [1] http://groups.google.com/groups?as_q=duplicates&as_ugroup=comp.lang.python
.. [1] https://groups.google.com/forum/#!searchin/comp.lang.python/duplicates
.. [2] Tim Peters unique() entry in the Python cookbook::
.. [2] Tim Peters' ``unique()`` entry in the Python cookbook:
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/52560/index_txt

View File

@ -136,7 +136,7 @@ find its corresponding libraries even when there are multiple
Python versions on the same machine.
We add one name to ``sys.path``. On Unix, the directory is
``sys.prefix + "lib"``, and the file name is
``sys.prefix + "/lib"``, and the file name is
``"python%s%s.zip" % (sys.version[0], sys.version[2])``.
So for Python 2.2 and prefix ``/usr/local``, the path
``/usr/local/lib/python2.2/`` is already on ``sys.path``, and
@ -192,7 +192,7 @@ Custom Imports
==============
The logic demonstrates the ability to import using default searching
until a needed Python module (in this case, os) becomes available.
until a needed Python module (in this case, ``os``) becomes available.
This can be used to bootstrap custom importers. For example, if
"``importer()``" in ``__init__.py`` exists, then it could be used for imports.
The "``importer()``" can freely import os and other modules, and these
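The zip-import mechanism described above can be illustrated with a small sketch (the archive path and module name here are made up for the example):

```python
import os
import sys
import tempfile
import zipfile

# Build a tiny zip archive containing one module (hypothetical name).
tmpdir = tempfile.mkdtemp()
zpath = os.path.join(tmpdir, "lib.zip")
with zipfile.ZipFile(zpath, "w") as z:
    z.writestr("zipped_example.py", "VALUE = 42\n")

# Adding the archive to sys.path makes its contents importable.
sys.path.insert(0, zpath)
import zipped_example
print(zipped_example.VALUE)  # 42
```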

View File

@ -94,30 +94,37 @@ in-core list object first, which could be expensive.
Examples
========
::
>>> print {i : chr(65+i) for i in range(4)}
{0 : 'A', 1 : 'B', 2 : 'C', 3 : 'D'}
::
>>> print {i : chr(65+i) for i in range(4)}
{0 : 'A', 1 : 'B', 2 : 'C', 3 : 'D'}
>>> print {k : v for k, v in someDict.iteritems()} == someDict.copy()
1
>>> print {k : v for k, v in someDict.iteritems()} == someDict.copy()
1
::
>>> print {x.lower() : 1 for x in list_of_email_addrs}
{'barry@zope.com' : 1, 'barry@python.org' : 1, 'guido@python.org' : 1}
>>> print {x.lower() : 1 for x in list_of_email_addrs}
{'barry@zope.com' : 1, 'barry@python.org' : 1, 'guido@python.org' : 1}
>>> def invert(d):
... return {v : k for k, v in d.iteritems()}
...
>>> d = {0 : 'A', 1 : 'B', 2 : 'C', 3 : 'D'}
>>> print invert(d)
{'A' : 0, 'B' : 1, 'C' : 2, 'D' : 3}
::
>>> {(k, v): k+v for k in range(4) for v in range(4)}
... {(3, 3): 6, (3, 2): 5, (3, 1): 4, (0, 1): 1, (2, 1): 3,
(0, 2): 2, (3, 0): 3, (0, 3): 3, (1, 1): 2, (1, 0): 1,
(0, 0): 0, (1, 2): 3, (2, 0): 2, (1, 3): 4, (2, 2): 4, (
2, 3): 5}
>>> def invert(d):
... return {v : k for k, v in d.iteritems()}
...
>>> d = {0 : 'A', 1 : 'B', 2 : 'C', 3 : 'D'}
>>> print invert(d)
{'A' : 0, 'B' : 1, 'C' : 2, 'D' : 3}
::
>>> {(k, v): k+v for k in range(4) for v in range(4)}
... {(3, 3): 6, (3, 2): 5, (3, 1): 4, (0, 1): 1, (2, 1): 3,
(0, 2): 2, (3, 0): 3, (0, 3): 3, (1, 1): 2, (1, 0): 1,
(0, 0): 0, (1, 2): 3, (2, 0): 2, (1, 3): 4, (2, 2): 4, (
2, 3): 5}
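The examples above use Python 2's ``print`` statement and ``iteritems()``; an equivalent, runnable Python 3 sketch:

```python
d = {i: chr(65 + i) for i in range(4)}
print(d)  # {0: 'A', 1: 'B', 2: 'C', 3: 'D'}

# Inverting a dict with a comprehension (iteritems() became items())
inverted = {v: k for k, v in d.items()}
print(inverted["C"])  # 2
```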
Implementation

View File

@ -23,7 +23,7 @@ Rationale
=========
Python 2.2 on Win32 platforms converts Unicode file names passed
to open and to functions in the os module into the 'mbcs' encoding
to open and to functions in the ``os`` module into the 'mbcs' encoding
before passing the result to the operating system. This is often
successful in the common case where the script is operating with
the locale set to the same value as when the file was created.
@ -51,17 +51,17 @@ are made instead of the standard C library and posix calls.
The Python file object is extended to use a Unicode file name
argument directly rather than converting it. This affects the
file object constructor ``file(filename[, mode[, bufsize]])`` and also
the open function which is an alias of this constructor. When a
Unicode filename argument is used here then the name attribute of
the ``open`` function which is an alias of this constructor. When a
Unicode filename argument is used here then the ``name`` attribute of
the file object will be Unicode. The representation of a file
object, ``repr(f)`` will display Unicode file names as an escaped
string in a similar manner to the representation of Unicode
strings.
The posix module contains functions that take file or directory
The ``posix`` module contains functions that take file or directory
names: ``chdir``, ``listdir``, ``mkdir``, ``open``, ``remove``, ``rename``,
``rmdir``, ``stat``, and ``_getfullpathname``. These will use Unicode
arguments directly rather than converting them. For the rename function, this
arguments directly rather than converting them. For the ``rename`` function, this
behaviour is triggered when either of the arguments is Unicode and
the other argument converted to Unicode using the default
encoding.
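On modern Python 3 these conversions are handled transparently on all platforms; a small sketch (assuming a filesystem encoding, such as UTF-8, that can represent the name):

```python
import os
import tempfile

d = tempfile.mkdtemp()
path = os.path.join(d, "caf\u00e9.txt")  # non-ASCII file name

with open(path, "w", encoding="utf-8") as f:
    f.write("hello")

# listdir() round-trips the Unicode name unchanged.
print("caf\u00e9.txt" in os.listdir(d))  # True
os.remove(path)
```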

View File

@ -45,7 +45,7 @@ parameter can also be "U", meaning "open for input as a text file
with universal newline interpretation". Mode "rU" is also allowed,
for symmetry with "rb". Mode "U" cannot be
combined with other mode flags such as "+". Any line ending in the
input file will be seen as a '\n' in Python, so little other code has
input file will be seen as a ``'\n'`` in Python, so little other code has
to change to handle universal newlines.
Conversion of newlines happens in all calls that read data: ``read()``,
@ -57,7 +57,7 @@ newline convention, and so mode "wU" is also illegal.
A file object that has been opened in universal newline mode gets
a new attribute "newlines" which reflects the newline convention
used in the file. The value for this attribute is one of None (no
newline read yet), "\r", "\n", "\r\n" or a tuple containing all the
newline read yet), ``"\r"``, ``"\n"``, ``"\r\n"`` or a tuple containing all the
newline types seen.
@ -100,7 +100,7 @@ newlines attribute of the file object is not updated during the
``fread()`` or ``fgets()`` calls that are done direct from C.
A partial output implementation, where strings passed to ``fp.write()``
would be converted to use fp.newlines as their line terminator but
would be converted to use ``fp.newlines`` as their line terminator but
all other output would not is far too surprising, in my view.
Because there is no output support for universal newlines there is
@ -123,8 +123,8 @@ readline/readlines methods.
While universal newlines are automatically enabled for import they
are not for opening, where you have to specifically say open(...,
"U"). This is open to debate, but here are a few reasons for this
are not for opening, where you have to specifically say ``open(...,
"U")``. This is open to debate, but here are a few reasons for this
design:
- Compatibility. Programs which already do their own
@ -142,7 +142,7 @@ design:
had encountered Mac newlines? But what if you then later read a
Unix newline?
The newlines attribute is included so that programs that really
The ``newlines`` attribute is included so that programs that really
care about the newline convention, such as text editors, can
examine what was in a file. They can then save (a copy of) the
file with the same newline convention (or, in case of a file with
@ -158,7 +158,7 @@ replacements for ``fgets()`` and ``fread()`` as well it may be difficult
to decide whether or not the lock is held when the routine is
called. Moreover, the only danger is that if two threads read the
same ``FileObject`` at the same time an extraneous newline may be seen
or the "newlines" attribute may inadvertently be set to mixed. I
or the ``newlines`` attribute may inadvertently be set to mixed. I
would argue that if you read the same ``FileObject`` in two threads
simultaneously you are asking for trouble anyway.
@ -170,22 +170,22 @@ Universal newline support can be disabled during configure because it does
have a small performance penalty, and moreover the implementation has
not been tested on all conceivable platforms yet. It might also be silly
on some platforms (WinCE or Palm devices, for instance). If universal
newline support is not enabled then file objects do not have the "newlines"
newline support is not enabled then file objects do not have the ``newlines``
attribute, so testing whether the current Python has it can be done with a
simple::
if hasattr(open, 'newlines'):
print 'We have universal newline support'
Note that this test uses the ``open()`` function rather than the file
type so that it won't fail for versions of Python where the file
type was not available (the file type was added to the built-in
Note that this test uses the ``open()`` function rather than the ``file``
type so that it won't fail for versions of Python where the ``file``
type was not available (the ``file`` type was added to the built-in
namespace in the same release as the universal newline feature was
added).
Additionally, note that this test fails again on Python versions
>= 2.5, when ``open()`` was made a function again and is not synonymous
with the file type anymore.
with the ``file`` type anymore.
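In Python 3, universal newline mode became the default for text files, and the ``newlines`` attribute survives on text streams; a sketch of the behavior described above:

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

# Write a file mixing all three newline conventions.
with open(path, "wb") as f:
    f.write(b"one\r\ntwo\rthree\n")

# Text mode translates every convention to '\n' on input...
with open(path, "r") as f:
    data = f.read()
print(data.splitlines())  # ['one', 'two', 'three']

# ...and records which conventions were actually seen.
with open(path, "r") as f:
    f.read()
    print(sorted(f.newlines))  # ['\n', '\r', '\r\n']
os.remove(path)
```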
Reference Implementation

View File

@ -86,7 +86,7 @@ sequence in mid-stream with no loss of computation effort.
There are other PEPs which touch on related issues: integer
iterators, integer for-loops, and one for modifying the arguments
to range and xrange. The ``enumerate()`` proposal does not preclude
to ``range`` and ``xrange``. The ``enumerate()`` proposal does not preclude
the other proposals and it still meets an important need even if
those are adopted -- the need to count items in any iterable. The
other proposals give a means of producing an index but not the
@ -131,7 +131,7 @@ linear sequencing.
Note D: This function was originally proposed with optional start
and stop arguments. GvR pointed out that the function call
enumerate(seqn,4,6) had an alternate, plausible interpretation as
``enumerate(seqn,4,6)`` had an alternate, plausible interpretation as
a slice that would return the fourth and fifth elements of the
sequence. To avoid the ambiguity, the optional arguments were
dropped even though it meant losing flexibility as a loop counter.
@ -142,7 +142,7 @@ counting from one, as in::
Comments from GvR:
filter and map should die and be subsumed into list
``filter`` and ``map`` should die and be subsumed into list
comprehensions, not grow more variants. I'd rather introduce
built-ins that do iterator algebra (e.g. the iterzip that I've
often used as an example).
@ -185,7 +185,7 @@ Comments from the Community:
Author response:
Prior to these comments, four built-ins were proposed.
After the comments, ``xmap`` ``xfilter`` and ``xzip`` were withdrawn. The
After the comments, ``xmap``, ``xfilter`` and ``xzip`` were withdrawn. The
one that remains is vital for the language and is proposed by
itself. ``Indexed()`` is trivially easy to implement and can be
documented in minutes. More importantly, it is useful in
@ -194,7 +194,7 @@ Author response:
This proposal originally included another function ``iterzip()``.
That was subsequently implemented as the ``izip()`` function in
the itertools module.
the ``itertools`` module.
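``enumerate()`` shipped in Python 2.3 essentially as proposed (an optional *start* argument was added later, in 2.6); a quick sketch:

```python
seasons = ["spring", "summer", "fall"]

# enumerate() pairs a running index with each item of any iterable.
for i, name in enumerate(seasons):
    print(i, name)

# The later start argument shifts the counter without slicing ambiguity.
print(list(enumerate(seasons, start=1)))
# [(1, 'spring'), (2, 'summer'), (3, 'fall')]
```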
References

View File

@ -45,12 +45,12 @@ Barry Warsaw, Jeremy Hylton, Tim Peters
Completed features for 2.3
==========================
This list is not complete. See Doc/whatsnew/whatsnew23.tex in CVS
for more, and of course Misc/NEWS for the full list.
This list is not complete. See ``Doc/whatsnew/whatsnew23.tex`` in CVS
for more, and of course ``Misc/NEWS`` for the full list.
- Tk 8.4 update.
- The bool type and its constants, True and False (PEP 285).
- The ``bool`` type and its constants, ``True`` and ``False`` (PEP 285).
- ``PyMalloc`` was greatly enhanced and is enabled by default.
@ -70,10 +70,10 @@ for more, and of course Misc/NEWS for the full list.
- Timeout sockets. http://www.python.org/sf/555085
- Stage B0 of the int/long integration (PEP 237). This means
issuing a ``FutureWarning`` about situations where hex or oct
conversions or left shifts returns a different value for an int
than for a long with the same value. The semantics do *not*
- Stage B0 of the ``int``/``long`` integration (PEP 237). This means
issuing a ``FutureWarning`` about situations where ``hex`` or ``oct``
conversions or left shifts returns a different value for an ``int``
than for a ``long`` with the same value. The semantics do *not*
change in Python 2.3; that will happen in Python 2.4.
- Nuke ``SET_LINENO`` from all code objects (providing a different way
@ -81,10 +81,10 @@ for more, and of course Misc/NEWS for the full list.
http://www.python.org/sf/587993, now checked in. (Unfortunately
the ``pystone`` boost didn't happen. What happened?)
- Write a pymemcompat.h that people can bundle with their
- Write a ``pymemcompat.h`` that people can bundle with their
extensions and then use the 2.3 memory interface with all
Pythons in the range 1.5.2 to 2.3. (Michael Hudson checked in
Misc/pymemcompat.h.)
``Misc/pymemcompat.h``.)
- Add a new concept, "pending deprecation", with associated
warning ``PendingDeprecationWarning``. This warning is normally
@ -94,13 +94,13 @@ for more, and of course Misc/NEWS for the full list.
- Warn when an extension type's ``tp_compare`` returns anything except
-1, 0 or 1. http://www.python.org/sf/472523
- Warn for assignment to None (in various forms).
- Warn for assignment to ``None`` (in various forms).
- PEP 218 Adding a Built-In Set Object Type, Wilson
Alex Martelli contributed a new version of Greg Wilson's
prototype, and I've reworked that quite a bit. It's in the
standard library now as the module "sets", although some details
standard library now as the module ``sets``, although some details
may still change until the first beta release. (There are no
plans to make this a built-in type, for now.)
@ -127,7 +127,7 @@ for more, and of course Misc/NEWS for the full list.
- A standard ``datetime`` type. This started as a wiki:
http://www.zope.org/Members/fdrake/DateTimeWiki/FrontPage. A
prototype was coded in nondist/sandbox/datetime/. Tim Peters
prototype was coded in ``nondist/sandbox/datetime/``. Tim Peters
has finished the C implementation and checked it in.
- PEP 273 Import Modules from Zip Archives, Ahlstrom
@ -157,9 +157,9 @@ for more, and of course Misc/NEWS for the full list.
Heller did this work.)
- A new version of IDLE was imported from the IDLEfork project
(http://idlefork.sf.net). The code now lives in the idlelib
package in the standard library and the idle script is installed
by setup.py.
(http://idlefork.sf.net). The code now lives in the ``idlelib``
package in the standard library and the ``idle`` script is installed
by ``setup.py``.
Planned features for 2.3
@ -180,7 +180,7 @@ work on without hoping for completion by any particular date.
- Documentation: complete the documentation for new-style
classes.
- Look over the Demos/ directory and update where required (Andrew
- Look over the ``Demos/`` directory and update where required (Andrew
Kuchling has done a lot of this)
- New tests.
@ -219,8 +219,8 @@ Features that did not make it into Python 2.3
- A nicer API to open text files, replacing the ugly (in some
people's eyes) "U" mode flag. There's a proposal out there to
have a new built-in type textfile(filename, mode, encoding).
(Shouldn't it have a bufsize argument too?)
have a new built-in type ``textfile(filename, mode, encoding)``.
(Shouldn't it have a *bufsize* argument too?)
Ditto.
@ -240,7 +240,7 @@ Features that did not make it into Python 2.3
seems to have lost steam.
- For a class defined inside another class, the ``__name__`` should be
"outer.inner", and pickling should work. (SF 633930. I'm no
``"outer.inner"``, and pickling should work. (SF 633930. I'm no
longer certain this is easy or even right.)
- reST is going to be used a lot in Zope3. Maybe it could become
@ -253,19 +253,19 @@ Features that did not make it into Python 2.3
There seems insufficient interest in moving this further in an
organized fashion, and it's not particularly important.
- Provide alternatives for common uses of the types module;
- Provide alternatives for common uses of the ``types`` module;
Skip Montanaro has posted a proto-PEP for this idea:
http://mail.python.org/pipermail/python-dev/2002-May/024346.html
There hasn't been any progress on this, AFAICT.
- Use pending deprecation for the types and string modules. This
- Use pending deprecation for the ``types`` and ``string`` modules. This
requires providing alternatives for the parts that aren't
covered yet (e.g. ``string.whitespace`` and ``types.TracebackType``).
It seems we can't get consensus on this.
- Deprecate the buffer object.
- Deprecate the ``buffer`` object.
- http://mail.python.org/pipermail/python-dev/2002-July/026388.html
- http://mail.python.org/pipermail/python-dev/2002-July/026408.html
View File
@ -17,11 +17,11 @@ Abstract
This PEP proposes to simplify iteration over intervals of
integers, by extending the range of expressions allowed after a
"for" keyword to allow three-way comparisons such as::
"for" keyword to allow three-way comparisons such as ::
for lower <= var < upper:
in place of the current::
in place of the current ::
for item in list:
@ -64,12 +64,12 @@ to re-use Python's slice syntax for integer ranges, leading to a
terser syntax but not solving the readability problem of
multi-argument ``range()``. PEP 212 [2]_ (deferred) proposed several
syntaxes for directly converting a list to a sequence of integer
indices, in place of the current idiom::
indices, in place of the current idiom ::
range(len(list))
for such conversion, and PEP 281 [3]_ proposes to simplify the same
idiom by allowing it to be written as::
idiom by allowing it to be written as ::
range(list).
@ -103,16 +103,16 @@ for-loops::
for item in list
iterates over exactly those values of item that cause the
expression::
expression ::
item in list
to be true. Similarly, the new format::
to be true. Similarly, the new format ::
for lower <= var < upper:
would iterate over exactly those integer values of var that cause
the expression::
the expression ::
lower <= var < upper
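The semantics spelled out above (step +1 from the left bound, right bound excluded for ``less_comp``) were never adopted as syntax, but they map directly onto ``range()`` in current Python. A small illustrative sketch, not the PEP's implementation:

```python
def half_open(lower, upper):
    """Yield exactly the integers var for which lower <= var < upper."""
    # For less_comp "<" the PEP iterates from the left bound with step +1,
    # excluding the right bound -- which is precisely range(lower, upper).
    for var in range(lower, upper):
        yield var

print(list(half_open(2, 6)))  # [2, 3, 4, 5]
```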
@ -122,7 +122,7 @@ to be true.
Specification
=============
We propose to extend the syntax of a for statement, currently::
We propose to extend the syntax of a for statement, currently ::
for_stmt: "for" target_list "in" expression_list ":" suite
["else" ":" suite]
@ -137,7 +137,7 @@ as described below::
greater_comp: ">" | ">="
Similarly, we propose to extend the syntax of list comprehensions,
currently::
currently ::
list_for: "for" expression_list "in" testlist [list_iter]
@ -161,7 +161,7 @@ operations used. The iterator will begin with an integer equal or
near to the left bound, and then step through the remaining
integers with a step size of +1 or -1 if the comparison operation
is in the set described by less_comp or greater_comp respectively.
The execution will then proceed as if the expression had been::
The execution will then proceed as if the expression had been ::
for variable in iterator
@ -200,18 +200,18 @@ proposals on the Python list.
- The proposal does not allow increments other than 1 and -1.
More general arithmetic progressions would need to be created by
``range()`` or ``xrange()``, or by a list comprehension syntax such as::
``range()`` or ``xrange()``, or by a list comprehension syntax such as ::
[2*x for 0 <= x <= 100]
- The position of the loop variable in the middle of a three-way
comparison is not as apparent as the variable in the present::
comparison is not as apparent as the variable in the present ::
for item in list
syntax, leading to a possible loss of readability. We feel that
this loss is outweighed by the increase in readability from a
natural integer iteration syntax.
syntax, leading to a possible loss of readability. We feel that
this loss is outweighed by the increase in readability from a
natural integer iteration syntax.
- To some extent, this PEP addresses the same issues as PEP 276
[4]_. We feel that the two PEPs are not in conflict since PEP
View File
@ -45,13 +45,13 @@ is then to be released by the caller. This has two problems:
1. In case of failure, the application cannot know what memory to
release; most callers don't even know that they have the
responsibility to release that memory. Example for this are
the N converter (bug #416288 [1]_) and the es# converter (bug
the ``N`` converter (bug #416288 [1]_) and the ``es#`` converter (bug
#501716 [2]_).
2. Even for successful argument parsing, it is still inconvenient
for the caller to be responsible for releasing the memory. In
some cases, this is unnecessarily inefficient. For example,
the es converter copies the conversion result into memory, even
the ``es`` converter copies the conversion result into memory, even
though there already is a string object that has the right
contents.
@ -95,15 +95,15 @@ affected converters without using argument tuples is deprecated.
Affected converters
===================
The following converters will add fail memory and fail objects: N,
es, et, es#, et# (unless memory is passed into the converter)
The following converters will add fail memory and fail objects: ``N``,
``es``, ``et``, ``es#``, ``et#`` (unless memory is passed into the converter)
New converters
==============
To simplify Unicode conversion, the ``e*`` converters are duplicated
as ``E*`` converters (Es, Et, Es#, Et#). The usage of the ``E*``
as ``E*`` converters (``Es``, ``Et``, ``Es#``, ``Et#``). The usage of the ``E*``
converters is identical to that of the ``e*`` converters, except that
the application will not need to manage the resulting memory.
This will be implemented through registration of Ok objects with
View File
@ -95,7 +95,7 @@ only the access function needs to be added.
Specification for Generator Exception Passing
=============================================
Add a .throw(exception) method to the generator interface::
Add a ``.throw(exception)`` method to the generator interface::
def logger():
start = time.time()
@ -135,7 +135,7 @@ as ``throw()``.
Note B: To keep the ``throw()`` syntax simple only the instance
version of the raise syntax would be supported (no variants for
"raise string" or "raise class, instance").
"``raise string``" or "``raise class, instance``").
Calling ``g.throw(instance)`` would correspond to writing
``raise instance`` immediately after the most recent yield.
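A ``throw()`` method eventually did land on generators (via PEP 342, Python 2.5). A minimal modern-Python sketch of the "raise immediately after the most recent yield" behaviour described above; the ``averager`` generator is invented purely for illustration:

```python
def averager():
    total = count = 0
    try:
        while True:
            total += yield (total / count if count else 0.0)
            count += 1
    except ValueError:
        # g.throw(instance) resumes here, exactly as if ``raise instance``
        # had been executed immediately after the most recent yield.
        yield "reset"

g = averager()
next(g)                       # prime: run to the first yield
print(g.send(10))             # 10.0
print(g.throw(ValueError()))  # reset
```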
View File
@ -35,7 +35,7 @@ Python currently supports a string substitution syntax based on
C's ``printf()`` '``%``' formatting character [1]_. While quite rich,
``%``-formatting codes are also error prone, even for
experienced Python programmers. A common mistake is to leave off
the trailing format character, e.g. the '``s``' in ``'%(name)s'``.
the trailing format character, e.g. the '``s``' in ``"%(name)s"``.
In addition, the rules for what can follow a ``%`` sign are fairly
complex, while the usual application rarely needs such complexity.
@ -63,7 +63,7 @@ introduced with the ``$`` character. The following rules for
3. ``${identifier}`` is equivalent to ``$identifier``. It is required
when valid identifier characters follow the placeholder but are
not part of the placeholder, e.g. "${noun}ification".
not part of the placeholder, e.g. ``"${noun}ification"``.
If the ``$`` character appears at the end of the line, or is followed
by any other character than those described above, a ``ValueError``
View File
@ -42,7 +42,7 @@ Specification
The built-in ``divmod()`` function would be changed to accept multiple
divisors, changing its signature from ``divmod(dividend, divisor)`` to
``divmod(dividend, divisors)``. The dividend is divided by the last
``divmod(dividend, *divisors)``. The dividend is divided by the last
divisor, giving a quotient and a remainder. The quotient is then
divided by the second to last divisor, giving a new quotient and
remainder. This is repeated until all divisors have been used,
@ -82,7 +82,7 @@ This is tedious and easy to get wrong each time you need it.
If instead the ``divmod()`` built-in is changed according the proposal,
the code for converting seconds to weeks, days, hours, minutes and
seconds then become::
seconds then become ::
def secs_to_wdhms(seconds):
w, d, h, m, s = divmod(seconds, 7, 24, 60, 60)
@ -96,13 +96,13 @@ Other applications are:
- Astronomical angles (declination is measured in degrees, minutes
and seconds, right ascension is measured in hours, minutes and
seconds).
- Old British currency (1 pound = 20 shilling, 1 shilling = 12 pence)
- Old British currency (1 pound = 20 shilling, 1 shilling = 12 pence).
- Anglo-Saxon length units: 1 mile = 1760 yards, 1 yard = 3 feet,
1 foot = 12 inches.
- Anglo-Saxon weight units: 1 long ton = 160 stone, 1 stone = 14
pounds, 1 pound = 16 ounce, 1 ounce = 16 dram
pounds, 1 pound = 16 ounce, 1 ounce = 16 dram.
- British volumes: 1 gallon = 4 quart, 1 quart = 2 pint, 1 pint
= 20 fluid ounces
= 20 fluid ounces.
Rationale
@ -153,7 +153,7 @@ The inverse operation::
product = product * x + y
return product
could also be useful. However, writing::
could also be useful. However, writing ::
seconds = (((((w * 7) + d) * 24 + h) * 60 + m) * 60 + s)
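The rejected multi-divisor ``divmod()`` is easy to emulate with a plain helper; ``chained_divmod`` below is a hypothetical name invented for this sketch, not part of any library:

```python
def chained_divmod(dividend, *divisors):
    """Emulate the proposed divmod(dividend, *divisors).

    The dividend is divided by the last divisor first; each successive
    quotient is then divided by the next divisor to the left.
    """
    remainders = []
    quotient = dividend
    for d in reversed(divisors):
        quotient, r = divmod(quotient, d)
        remainders.append(r)
    remainders.append(quotient)
    return tuple(reversed(remainders))

# 1,000,000 seconds as weeks, days, hours, minutes, seconds:
print(chained_divmod(1000000, 7, 24, 60, 60))  # (1, 4, 13, 46, 40)
```

With a single divisor it degenerates to the built-in two-argument ``divmod()``.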
@ -195,7 +195,7 @@ References
==========
.. [1] Raymond Hettinger, "Propose rejection of PEP 303 -- Extend divmod() for
Multiple Divisors" http://mail.python.org/pipermail/python-dev/2003-January/032492.html
Multiple Divisors" https://mail.python.org/pipermail/python-dev/2005-June/054283.html
Copyright
View File
@ -20,7 +20,7 @@ Abstract
========
There's more to changing Python's grammar than editing
Grammar/Grammar and Python/compile.c. This PEP aims to be a
``Grammar/Grammar`` and ``Python/compile.c``. This PEP aims to be a
checklist of places that must also be fixed.
It is probably incomplete. If you see omissions, just add them if
@ -36,45 +36,45 @@ Rationale
People are getting this wrong all the time; it took well over a
year before someone noticed [2]_ that adding the floor division
operator (//) broke the parser module.
operator (``//``) broke the ``parser`` module.
Checklist
=========
- Grammar/Grammar: OK, you'd probably worked this one out :)
- ``Grammar/Grammar``: OK, you'd probably worked this one out :)
- Parser/Python.asdl may need changes to match the Grammar. Run
make to regenerate Include/Python-ast.h and
Python/Python-ast.c.
- ``Parser/Python.asdl`` may need changes to match the ``Grammar``. Run
``make`` to regenerate ``Include/Python-ast.h`` and
``Python/Python-ast.c``.
- Python/ast.c will need changes to create the AST objects
involved with the Grammar change. Lib/compiler/ast.py will
- ``Python/ast.c`` will need changes to create the AST objects
involved with the ``Grammar`` change. ``Lib/compiler/ast.py`` will
need matching changes to the pure-python AST objects.
- Parser/pgen needs to be rerun to regenerate Include/graminit.h
and Python/graminit.c. (make should handle this for you.)
- ``Parser/pgen`` needs to be rerun to regenerate ``Include/graminit.h``
and ``Python/graminit.c``. (make should handle this for you.)
- Python/symbtable.c: This handles the symbol collection pass
- ``Python/symbtable.c``: This handles the symbol collection pass
that happens immediately before the compilation pass.
- Python/compile.c: You will need to create or modify the
- ``Python/compile.c``: You will need to create or modify the
``compiler_*`` functions to generate opcodes for your productions.
- You may need to regenerate Lib/symbol.py and/or Lib/token.py
and/or Lib/keyword.py.
- You may need to regenerate ``Lib/symbol.py`` and/or ``Lib/token.py``
and/or ``Lib/keyword.py``.
- The parser module. Add some of your new syntax to test_parser,
bang on Modules/parsermodule.c until it passes.
- The ``parser`` module. Add some of your new syntax to ``test_parser``,
bang on ``Modules/parsermodule.c`` until it passes.
- Add some usage of your new syntax to test_grammar.py
- Add some usage of your new syntax to ``test_grammar.py``.
- The compiler package. A good test is to compile the standard
library and test suite with the compiler package and then check
- The ``compiler`` package. A good test is to compile the standard
library and test suite with the ``compiler`` package and then check
it runs. Note that this only needs to be done in Python 2.x.
- If you've gone so far as to change the token structure of
Python, then the Lib/tokenizer.py library module will need to
Python, then the ``Lib/tokenizer.py`` library module will need to
be changed.
- Certain changes may require tweaks to the library module
@ -83,7 +83,7 @@ Checklist
- Documentation must be written!
- After everything's been checked in, you're likely to see a new
change to Python/Python-ast.c. This is because this
change to ``Python/Python-ast.c``. This is because this
(generated) file contains the SVN version of the source from
which it was generated. There's no way to avoid this; you just
have to submit this file separately.
View File
@ -181,7 +181,7 @@ Alternative Ideas
=================
IEXEC: Holger Krekel -- generalised approach with XML-like syntax
(no URL found...)
(no URL found...).
Holger has much more far-reaching ideas about "execution monitors"
that are informed about details of control flow in the monitored
@ -210,7 +210,7 @@ complicated have something else to complain about. It's something
else to teach.
For the proposal to be useful, many file-like and lock-like
classes in the standard library and other code will have to have::
classes in the standard library and other code will have to have ::
__exit__ = close
View File
@ -111,14 +111,14 @@ Examples of Use
Implementation
==============
Implementation requires some tweaking of the Grammar/Grammar file
Implementation requires some tweaking of the ``Grammar/Grammar`` file
in the Python sources, and some adjustment of
Modules/parsermodule.c to make syntactic and pragmatic changes.
``Modules/parsermodule.c`` to make syntactic and pragmatic changes.
(Some grammar/parser guru is needed to make a full
implementation.)
Here are the changes needed to Grammar to allow implicit lambda::
Here are the changes needed to ``Grammar`` to allow implicit lambda::
varargslist: (fpdef ['=' imptest] ',')* ('*' NAME [',' '**'
NAME] | '**' NAME) | fpdef ['=' imptest] (',' fpdef ['='
View File
@ -102,7 +102,7 @@ belongs, and does not require code to be duplicated.
Syntax
======
The syntax of the while statement::
The syntax of the while statement ::
while_stmt : "while" expression ":" suite
["else" ":" suite]
@ -137,7 +137,7 @@ in regular while loops.
Future Statement
================
Because of the new keyword "do", the statement::
Because of the new keyword "do", the statement ::
from __future__ import do_while
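The ``do``..``while`` construct was never added, so the standard emulation remains the "loop and a half" idiom, sketched here with invented sample data:

```python
# Post-condition loop: the body always runs at least once, and the
# test happens at the bottom, where do..while would place it.
values = iter([3, 1, 0, 7])
seen = []
while True:
    v = next(values)
    seen.append(v)
    if v == 0:
        break

print(seen)  # [3, 1, 0]
```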
View File
@ -72,12 +72,12 @@ Completed features for 2.4
- Added a builtin called ``sorted()`` which may be used in expressions.
- The itertools module has two new functions, ``tee()`` and ``groupby()``.
- The ``itertools`` module has two new functions, ``tee()`` and ``groupby()``.
- Add a collections module with a ``deque()`` object.
- Add a ``collections`` module with a ``deque()`` object.
- Add two statistical/reduction functions, ``nlargest()`` and ``nsmallest()``
to the heapq module.
to the ``heapq`` module.
- Python's windows installer now uses MSI
@ -120,7 +120,7 @@ work on without hoping for completion by any particular date.
- Documentation: complete the documentation for new-style
classes.
- Look over the Demos/ directory and update where required (Andrew
- Look over the ``Demos/`` directory and update where required (Andrew
Kuchling has done a lot of this)
- New tests.
@ -152,8 +152,8 @@ Carryover features from Python 2.3
- A nicer API to open text files, replacing the ugly (in some
people's eyes) "U" mode flag. There's a proposal out there to
have a new built-in type textfile(filename, mode, encoding).
(Shouldn't it have a bufsize argument too?)
have a new built-in type ``textfile(filename, mode, encoding)``.
(Shouldn't it have a *bufsize* argument too?)
- New widgets for Tkinter???
@ -173,11 +173,11 @@ Carryover features from Python 2.3
There seems insufficient interest in moving this further in an
organized fashion, and it's not particularly important.
- Provide alternatives for common uses of the types module;
- Provide alternatives for common uses of the ``types`` module;
Skip Montanaro has posted a proto-PEP for this idea [5]_.
There hasn't been any progress on this, AFAICT.
- Use pending deprecation for the types and string modules. This
- Use pending deprecation for the ``types`` and ``string`` modules. This
requires providing alternatives for the parts that aren't
covered yet (e.g. ``string.whitespace`` and ``types.TracebackType``).
It seems we can't get consensus on this.
@ -190,7 +190,7 @@ Carryover features from Python 2.3
- PEP 269 Pgen Module for Python (Riehl)
(Some necessary changes are in; the pgen module itself needs to
(Some necessary changes are in; the ``pgen`` module itself needs to
mature more.)
- PEP 266 Optimizing Global Variable/Attribute Access (Montanaro)
View File
@ -64,7 +64,7 @@ could be passed in as argument, the same is not applicable for the
files opened depending on the contents of the index).
If we want timely release, we have to sacrifice the simplicity and
directness of the generator-only approach: (e.g.)::
directness of the generator-only approach: (e.g.) ::
class AllLines:
View File
@ -30,7 +30,7 @@ Pronouncement
=============
Guido believes that a verification tool has some value. If
someone wants to add it to Tools/scripts, no PEP is required.
someone wants to add it to ``Tools/scripts``, no PEP is required.
Such a tool may have value for validating the output from
"bytecodehacks" or from direct edits of PYC files. As security
View File
@ -89,7 +89,7 @@ logger.
So ``print`` and ``sys.std{out|err}.write`` statements should be
replaced with ``_log.{debug|info}``, and ``traceback.print_exception``
with ``_log.exception`` or sometimes ``_log.debug('...', exc_info=1)```.
with ``_log.exception`` or sometimes ``_log.debug('...', exc_info=1)``.
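A short sketch of the migration described above, using the standard ``logging`` module (the logger name and the failing operation are invented for illustration):

```python
import logging

_log = logging.getLogger("mymodule")
logging.basicConfig(level=logging.DEBUG)

try:
    1 / 0
except ZeroDivisionError:
    # Instead of traceback.print_exception(...):
    _log.exception("division failed")          # ERROR level, with traceback
    # or, at a lower severity:
    _log.debug("division failed", exc_info=1)  # same traceback, DEBUG level
```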
Module List
View File
@ -58,7 +58,7 @@ Completed features for 2.5
- PEP 341: Unified try-except/try-finally to try-except-finally
- PEP 342: Coroutines via Enhanced Generators
- PEP 343: The "with" Statement (still need updates in Doc/ref and for the
contextlib module)
``contextlib`` module)
- PEP 352: Required Superclass for Exceptions
- PEP 353: Using ``ssize_t`` as the index type
- PEP 357: Allowing Any Object to be Used for Slicing
@ -67,7 +67,7 @@ Completed features for 2.5
- AST-based compiler
- Access to C AST from Python through new _ast module
- Access to C AST from Python through new ``_ast`` module
- ``any()``/``all()`` builtin truth functions
@ -81,9 +81,9 @@ New standard library modules:
- ``ElementTree`` and ``cElementTree`` -- by Fredrik Lundh
- ``hashlib`` -- adds support for SHA-224, -256, -384, and -512
(replaces old md5 and sha modules)
(replaces old ``md5`` and ``sha`` modules)
- ``msilib`` -- for creating MSI files and bdist_msi in distutils.
- ``msilib`` -- for creating MSI files and ``bdist_msi`` in distutils.
- ``pysqlite``
@ -132,11 +132,11 @@ will require BDFL approval for inclusion in 2.5.
(Owner: ???)
MacOS: http://hcs.harvard.edu/~jrus/python/prettified-py-icons.png
- Check the various bits of code in Demo/ all still work, update or
- Check the various bits of code in ``Demo/`` all still work, update or
remove the ones that don't.
(Owner: Anthony)
- All modules in Modules/ should be updated to be ssize_t clean.
- All modules in ``Modules/`` should be updated to be ssize_t clean.
(Owner: Neal)
@ -152,7 +152,7 @@ Deferred until 2.6
- Remove the ``fpectl`` module?
- Make everything in Modules/ build cleanly with g++
- Make everything in ``Modules/`` build cleanly with g++
Open issues
View File
@ -125,14 +125,14 @@ Specification
raised when the Python integer or long was converted to ``Py_ssize_t``.
4) A new ``operator.index(obj)`` function will be added that calls
equivalent of obj.``__index__``() and raises an error if obj does not implement
equivalent of ``obj.__index__()`` and raises an error if obj does not implement
the special method.
Implementation Plan
===================
1) Add the ``nb_index`` slot in object.h and modify typeobject.c to
1) Add the ``nb_index`` slot in ``object.h`` and modify ``typeobject.c`` to
create the ``__index__`` method
2) Change the ``ISINT`` macro in ``ceval.c`` to ``ISINDEX`` and alter it to
@ -141,7 +141,7 @@ Implementation Plan
3) Change the ``_PyEval_SliceIndex`` function to accommodate objects
with the index slot defined.
4) Change all builtin objects (e.g. lists) that use the as_mapping
4) Change all builtin objects (e.g. lists) that use the ``as_mapping``
slots for subscript access and use a special-check for integers to
check for the slot as well.
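``__index__`` and ``operator.index()`` both shipped in Python 2.5; a small sketch of the behaviour the specification above describes (the ``Index7`` class is invented):

```python
import operator

class Index7:
    """Any object with __index__ can be used where Python needs an integer."""
    def __index__(self):
        return 7

seq = list(range(10))
print(seq[Index7()])             # 7  (subscripting accepts the object)
print(operator.index(Index7()))  # 7  (explicit conversion)
print("ab" * Index7())           # "ab" repeated seven times

# Objects without the slot are rejected with a TypeError:
try:
    operator.index(3.5)
except TypeError:
    print("floats do not implement __index__")
```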
@ -187,7 +187,7 @@ Why the name ``__index__``?
Some questions were raised regarding the name ``__index__`` when other
interpretations of the slot are possible. For example, the slot
can be used any time Python requires an integer internally (such
as in "mystring" \* 3). The name was suggested by Guido because
as in ``"mystring" * 3``). The name was suggested by Guido because
slicing syntax is the biggest reason for having such a slot and
in the end no better name emerged. See the discussion thread [1]_
for examples of names that were suggested such as "``__discrete__``" and
@ -209,7 +209,7 @@ For example, the initial implementation that returned ``Py_ssize_t`` for
``s = 'x' * (2**100)`` works but ``len(s)`` was clipped at 2147483647.
Several fixes were suggested but eventually it was decided that
``nb_index`` needed to return a Python Object similar to the ``nb_int``
and nb_long slots in order to handle overflow correctly.
and ``nb_long`` slots in order to handle overflow correctly.
Why can't ``__index__`` return any object with the ``nb_index`` method?
-----------------------------------------------------------------------
View File
@ -101,7 +101,7 @@ generate a new bytes object containing a bytes literal::
The object has a ``.decode()`` method equivalent to the ``.decode()``
method of the str object. The object has a classmethod ``.fromhex()``
that takes a string of characters from the set ``[0-9a-fA-F ]`` and
returns a bytes object (similar to binascii.unhexlify). For
returns a bytes object (similar to ``binascii.unhexlify``). For
example::
>>> bytes.fromhex('5c5350ff')
@ -110,7 +110,7 @@ example::
b'\\SP\xff'
The object has a ``.hex()`` method that does the reverse conversion
(similar to binascii.hexlify)::
(similar to ``binascii.hexlify``)::
>> bytes([92, 83, 80, 255]).hex()
'5c5350ff'
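Both methods exist in today's Python (``bytes.fromhex()`` since 2.6/3.0, ``bytes.hex()`` since 3.5), so the round-trip shown above can be checked directly:

```python
b = bytes.fromhex('5c5350ff')
print(b)        # b'\\SP\xff'
print(b.hex())  # 5c5350ff
print(bytes([92, 83, 80, 255]) == b)  # True

# fromhex ignores ASCII whitespace between byte pairs:
print(bytes.fromhex('5c 53 50 ff') == b)  # True
```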
@ -226,12 +226,12 @@ Open Issues
Frequently Asked Questions
==========================
Q: Why have the optional encoding argument when the encode method of
**Q:** Why have the optional encoding argument when the encode method of
Unicode objects does the same thing?
A: In the current version of Python, the encode method returns a str
**A:** In the current version of Python, the encode method returns a str
object and we cannot change that without breaking code. The
construct bytes(``s.encode(...)``) is expensive because it has to
construct ``bytes(s.encode(...))`` is expensive because it has to
copy the byte sequence multiple times. Also, Python generally
provides two ways of converting an object of type A into an
object of type B: ask an A instance to convert itself to a B, or
@ -242,10 +242,10 @@ have to use the latter approach; sometimes B can't know about A,
in which case you have to use the former.
Q: Why does bytes ignore the encoding argument if the initializer is
**Q:** Why does bytes ignore the encoding argument if the initializer is
a str? (This only applies to 2.6.)
A: There is no sane meaning that the encoding can have in that case.
**A:** There is no sane meaning that the encoding can have in that case.
str objects *are* byte arrays and they know nothing about the
encoding of character data they contain. We need to assume that
the programmer has provided a str object that already uses the
@ -255,11 +255,11 @@ the bytes then you need to first decode the string. For example::
bytes(s.decode(encoding1), encoding2)
Q: Why not have the encoding argument default to Latin-1 (or some
**Q:** Why not have the encoding argument default to Latin-1 (or some
other encoding that covers the entire byte range) rather than
ASCII?
A: The system default encoding for Python is ASCII. It seems least
**A:** The system default encoding for Python is ASCII. It seems least
confusing to use that default. Also, in Py3k, using Latin-1 as
the default might not be what users expect. For example, they
might prefer a Unicode encoding. Any default will not always
View File
@ -98,9 +98,9 @@ Completed features for 2.6
PEPs:
- 352: Raising a string exception now triggers a TypeError.
Attempting to catch a string exception raises DeprecationWarning.
BaseException.message has been deprecated. [pep352]_
- 352: Raising a string exception now triggers a ``TypeError``.
Attempting to catch a string exception raises ``DeprecationWarning``.
``BaseException.message`` has been deprecated. [pep352]_
- 358: The "bytes" Object [pep358]_
- 366: Main module explicit relative imports [pep366]_
- 370: Per user site-packages directory [pep370]_
@ -110,50 +110,50 @@ PEPs:
New modules in the standard library:
- json
- new enhanced turtle module
- ast
- ``json``
- new enhanced ``turtle`` module
- ``ast``
Deprecated modules and functions in the standard library:
- buildtools
- cfmfile
- commands.getstatus()
- macostools.touched()
- md5
- MimeWriter
- mimify
- popen2, os.popen[234]()
- posixfile
- sets
- sha
- ``buildtools``
- ``cfmfile``
- ``commands.getstatus()``
- ``macostools.touched()``
- ``md5``
- ``MimeWriter``
- ``mimify``
- ``popen2``, ``os.popen[234]()``
- ``posixfile``
- ``sets``
- ``sha``
Modules removed from the standard library:
- gopherlib
- rgbimg
- macfs
- ``gopherlib``
- ``rgbimg``
- ``macfs``
Warnings for features removed in Py3k:
- builtins: apply, callable, coerce, dict.has_key, execfile,
reduce, reload
- backticks and <>
- float args to xrange
- coerce and all its friends
- builtins: ``apply``, ``callable``, ``coerce``, ``dict.has_key``, ``execfile``,
``reduce``, ``reload``
- backticks and ``<>``
- float args to ``xrange``
- ``coerce`` and all its friends
- comparing by default comparison
- {}.has_key()
- file.xreadlines
- softspace removal for print() function
- ``{}.has_key()``
- ``file.xreadlines``
- softspace removal for ``print()`` function
- removal of modules because of PEP 4/3100/3108
Other major features:
- with/as will be keywords
- a __dir__() special method to control dir() was added [1]
- ``with``/``as`` will be keywords
- a ``__dir__()`` special method to control ``dir()`` was added [1]
- AtheOS support stopped.
- warnings module implemented in C
- compile() takes an AST and can convert to byte code
- ``warnings`` module implemented in C
- ``compile()`` takes an AST and can convert to byte code
Possible features for 2.6
@ -168,15 +168,15 @@ The following PEPs are being worked on for inclusion in 2.6: None.
Each non-trivial feature listed here that is not a PEP must be
discussed on python-dev. Other enhancements include:
- distutils replacement (requires a PEP)
- ``distutils`` replacement (requires a PEP)
New modules in the standard library:
- winerror
- ``winerror``
http://python.org/sf/1505257
(Patch rejected, module should be written in C)
- setuptools
- ``setuptools``
BDFL pronouncement for inclusion in 2.5:
http://mail.python.org/pipermail/python-dev/2006-April/063964.html
@ -186,69 +186,69 @@ http://mail.python.org/pipermail/python-dev/2006-April/064145.html
Modules to gain a DeprecationWarning (as specified for Python 2.6
or through negligence):
- rfc822
- mimetools
- multifile
- compiler package (or a Py3K warning instead?)
- ``rfc822``
- ``mimetools``
- ``multifile``
- ``compiler`` package (or a Py3K warning instead?)
- Convert Parser/\*.c to use the C warnings module rather than printf
- Convert ``Parser/*.c`` to use the C ``warnings`` module rather than ``printf``
- Add warnings for Py3k features removed:
* __getslice__/__setslice__/__delslice__
* ``__getslice__``/``__setslice__``/``__delslice__``
* float args to PyArgs_ParseTuple
* float args to ``PyArgs_ParseTuple``
* __cmp__?
* ``__cmp__``?
* other comparison changes?
* int division?
* All PendingDeprecationWarnings (e.g. exceptions)
* All ``PendingDeprecationWarnings`` (e.g. exceptions)
* using zip() result as a list
* using ``zip()`` result as a list
* the exec statement (use function syntax)
* the ``exec`` statement (use function syntax)
* function attributes that start with func_* (should use __*__)
* function attributes that start with ``func_*`` (should use ``__*__``)
* the L suffix for long literals
* the ``L`` suffix for long literals
* renaming of __nonzero__ to __bool__
* renaming of ``__nonzero__`` to ``__bool__``
* multiple inheritance with classic classes? (MRO might change)
* properties and classic classes? (instance attrs shadow property)
- use __bool__ method if available and there's no __nonzero__
- use ``__bool__`` method if available and there's no ``__nonzero__``
- Check the various bits of code in Demo/ and Tools/ all still work,
- Check the various bits of code in ``Demo/`` and ``Tools/`` all still work,
update or remove the ones that don't.
- All modules in Modules/ should be updated to be ssize_t clean.
- All modules in ``Modules/`` should be updated to be ``ssize_t`` clean.
- All of Python (including Modules/) should compile cleanly with g++
- All of Python (including ``Modules/``) should compile cleanly with g++
- Start removing deprecated features and generally moving towards Py3k
- Replace all old style tests (operate on import) with unittest or docttest
- Replace all old style tests (operate on import) with ``unittest`` or ``docttest``
- Add tests for all untested modules
- Document undocumented modules/features
- bdist_deb in distutils package
- ``bdist_deb`` in ``distutils`` package
http://mail.python.org/pipermail/python-dev/2006-February/060926.html
- bdist_egg in distutils package
- ``bdist_egg`` in ``distutils`` package
- pure python pgen module
- pure python ``pgen`` module
(Owner: Guido)
Deferral to 2.6:
http://mail.python.org/pipermail/python-dev/2006-April/064528.html
- Remove the fpectl module?
- Remove the ``fpectl`` module?
Deferred until 2.7
View File
@ -22,9 +22,9 @@ will do this with a command line switch. Programs that aren't
formatted the way the programmer wants things will raise
``IndentationError``.
- ``Python -TNone`` will refuse to run when there are any tabs.
- ``Python -Tn`` will refuse to run when tabs are not exactly n spaces
- ``Python -TOnly`` will refuse to run when blocks are indented by anything
- ``python -TNone`` will refuse to run when there are any tabs.
- ``python -Tn`` will refuse to run when tabs are not exactly ``n`` spaces
- ``python -TOnly`` will refuse to run when blocks are indented by anything
other than tabs
People who mix tabs and spaces, naturally, will find that their


@@ -113,10 +113,10 @@ before the evaluation of the class body. The ``__prepare__`` function
takes two positional arguments, and an arbitrary number of keyword
arguments. The two positional arguments are:
========= ====================================
``name`` the name of the class being created.
``bases`` the list of base classes.
========= ====================================
======= ====================================
*name* the name of the class being created.
*bases* the list of base classes.
======= ====================================
The interpreter always tests for the existence of ``__prepare__`` before
calling it; if it is not present, then a regular dictionary is used,
@@ -137,9 +137,9 @@ The example above illustrates how the arguments to 'class' are
interpreted. The class name is the first argument, followed by
an arbitrary length list of base classes. After the base classes,
there may be one or more keyword arguments, one of which can be
'metaclass'. Note that the 'metaclass' argument is not included
in kwargs, since it is filtered out by the normal parameter
assignment algorithm. (Note also that 'metaclass' is a keyword-
*metaclass*. Note that the *metaclass* argument is not included
in *kwargs*, since it is filtered out by the normal parameter
assignment algorithm. (Note also that *metaclass* is a keyword-
only argument as per PEP 3102 [6]_.)
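The ``__prepare__`` protocol described above can be sketched as follows. This is an illustrative example, not part of the commit's diff; ``OrderedMeta`` and ``Example`` are hypothetical names:

```python
class OrderedMeta(type):
    @classmethod
    def __prepare__(mcls, name, bases, **kwargs):
        # Called before the class body runs; the mapping returned here
        # becomes the namespace in which the class body executes.
        return {}

    def __new__(mcls, name, bases, namespace, **kwargs):
        # The populated namespace arrives here after the body has run.
        return super().__new__(mcls, name, bases, namespace)


class Example(metaclass=OrderedMeta):
    x = 1
    y = 2
```

Note that ``metaclass=OrderedMeta`` is consumed by the class machinery and, as the text above explains, never appears in the keyword arguments passed through to ``__prepare__`` or ``__new__``.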
Even though ``__prepare__`` is not required, the default metaclass


@@ -29,8 +29,8 @@ do this. This PEP proposes adding the keywords ``__module__``,
``__class__``, and ``__function__``.
Rationale for __module__
========================
Rationale for ``__module__``
============================
Many modules export various functions, classes, and other objects,
but will perform additional activities (such as running unit
@@ -73,8 +73,8 @@ currently being defined (executed). (But see open issues.)
...
Rationale for __class__
=======================
Rationale for ``__class__``
===========================
Class methods are passed the current instance; from this they can
determine ``self.__class__`` (or cls, for class methods).
@@ -112,7 +112,7 @@ of that PEP, but was separated out as an independent decision.
Note that ``__class__`` (or ``__this_class__``) is not quite the same as
the ``__thisclass__`` property on bound super objects. The existing
super.``__thisclass__`` property refers to the class from which the
``super.__thisclass__`` property refers to the class from which the
Method Resolution Order search begins. In the above class D, it
would refer to (the current reference of name) C.
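The existing behaviour of ``super.__thisclass__`` that the text contrasts with can be demonstrated directly (an illustrative sketch; classes ``C`` and ``D`` are hypothetical):

```python
class C:
    def start(self):
        # __thisclass__ names the class the MRO search begins from,
        # i.e. the class this method was defined in -- not type(self).
        return super().__thisclass__


class D(C):
    pass
```

Even on a ``D`` instance, the bound super object created inside ``C.start`` reports ``C`` as its ``__thisclass__``.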


@@ -32,7 +32,7 @@ Motivation
Several very good IP address modules for python already exist.
The truth is that all of them struggle with the balance between
adherence to Pythonic principles and the shorthand upon which
network engineers and administrators rely. ipaddress aims to
network engineers and administrators rely. ``ipaddress`` aims to
strike the right balance.
@@ -47,32 +47,32 @@ seeks to provide.
Background
==========
PEP 3144 and ipaddr have been up for inclusion before. The
PEP 3144 and ``ipaddr`` have been up for inclusion before. The
version of the library specified here is backwards incompatible
with the version on PyPI and the one which was discussed before.
In order to avoid confusing users of the current ipaddr, I've
renamed this version of the library "ipaddress".
In order to avoid confusing users of the current ``ipaddr``, I've
renamed this version of the library ``ipaddress``.
The main differences between ipaddr and ipaddress are:
* ipaddress \*Network classes are equivalent to the ipaddr \*Network
class counterparts with the strict flag set to True.
* ``ipaddress`` \*Network classes are equivalent to the ``ipaddr`` \*Network
class counterparts with the ``strict`` flag set to ``True``.
* ipaddress \*Interface classes are equivalent to the ipaddr
\*Network class counterparts with the strict flag set to False.
* ``ipaddress`` \*Interface classes are equivalent to the ``ipaddr``
\*Network class counterparts with the ``strict`` flag set to ``False``.
* The factory functions in ipaddress were renamed to disambiguate
* The factory functions in ``ipaddress`` were renamed to disambiguate
them from classes.
* A few attributes were renamed to disambiguate their purpose as
well. (eg. network, network_address)
well. (eg. ``network``, ``network_address``)
* A number of methods and functions which returned containers in ipaddr now
return iterators. This includes, subnets, address_exclude,
summarize_address_range and collapse_address_list.
* A number of methods and functions which returned containers in ``ipaddr`` now
return iterators. This includes ``subnets``, ``address_exclude``,
``summarize_address_range`` and ``collapse_address_list``.
Due to the backwards incompatible API changes between ipaddress and ipaddr,
Due to the backwards incompatible API changes between ``ipaddress`` and ``ipaddr``,
the proposal is to add the module using the new provisional API status:
* http://docs.python.org/dev/glossary.html#term-provisional-package
@@ -88,21 +88,21 @@ Relevant messages on python-dev:
Specification
=============
The ipaddr module defines a total of 6 new public classes, 3 for
The ``ipaddr`` module defines a total of 6 new public classes, 3 for
manipulating IPv4 objects and 3 for manipulating IPv6 objects.
The classes are as follows:
- IPv4Address/IPv6Address - These define individual addresses, for
- ``IPv4Address``/``IPv6Address`` - These define individual addresses, for
example the IPv4 address returned by an A record query for
www.google.com (74.125.224.84) or the IPv6 address returned by a
AAAA record query for ipv6.google.com (2001:4860:4001:801::1011).
- IPv4Network/IPv6Network - These define networks or groups of
- ``IPv4Network``/``IPv6Network`` - These define networks or groups of
addresses, for example the IPv4 network reserved for multicast use
(224.0.0.0/4) or the IPv6 network reserved for multicast
(ff00::/8, wow, that's big).
- IPv4Interface/IPv6Interface - These hybrid classes refer to an
- ``IPv4Interface``/``IPv6Interface`` - These hybrid classes refer to an
individual address on a given network. For example, the IPV4
address 192.0.2.1 on the network 192.0.2.0/24 could be referred to
as 192.0.2.1/24. Likewise, the IPv6 address 2001:DB8::1 on the
@@ -115,30 +115,30 @@ number of bits needed to represent them, whether or not they
belong to certain special IPv4 network ranges, etc. Similarly,
all IPv6 classes share characteristics and methods.
ipaddr makes extensive use of inheritance to avoid code
``ipaddr`` makes extensive use of inheritance to avoid code
duplication as much as possible. The parent classes are private,
but they are outlined here:
- _IPAddrBase - Provides methods common to all ipaddr objects.
- ``_IPAddrBase`` - Provides methods common to all ``ipaddr`` objects.
- _BaseAddress - Provides methods common to IPv4Address and
IPv6Address.
- ``_BaseAddress`` - Provides methods common to ``IPv4Address`` and
``IPv6Address``.
- _BaseInterface - Provides methods common to IPv4Interface and
IPv6Interface, as well as IPv4Network and IPv6Network (ipaddr
- ``_BaseInterface`` - Provides methods common to ``IPv4Interface`` and
``IPv6Interface``, as well as ``IPv4Network`` and ``IPv6Network`` (``ipaddr``
treats the Network classes as a special case of Interface).
- _BaseV4 - Provides methods and variables (eg, _max_prefixlen)
- ``_BaseV4`` - Provides methods and variables (eg, ``_max_prefixlen``)
common to all IPv4 classes.
- _BaseV6 - Provides methods and variables common to all IPv6 classes.
- ``_BaseV6`` - Provides methods and variables common to all IPv6 classes.
Comparisons between objects of differing IP versions results in a
``TypeError`` [1]_. Additionally, comparisons of objects with
different _Base parent classes results in a ``TypeError``. The effect
of the _Base parent class limitation is that IPv4Interface's can
be compared to IPv4Network's and IPv6Interface's can be compared
to IPv6Network's.
of the _Base parent class limitation is that ``IPv4Interface``'s can
be compared to ``IPv4Network``'s and ``IPv6Interface``'s can be compared
to ``IPv6Network``'s.
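The address/network/interface split and the mixed-version comparison rule described above can be exercised with the ``ipaddress`` module as it eventually shipped in the standard library (a sketch added for illustration, not part of the diff):

```python
import ipaddress

addr = ipaddress.IPv4Address("192.0.2.1")
net = ipaddress.IPv4Network("192.0.2.0/24")
iface = ipaddress.IPv4Interface("192.0.2.1/24")  # an address *on* a network

# Membership testing and the interface's derived attributes.
assert addr in net
assert iface.network == net
assert iface.ip == addr

# Mixed-version comparison raises TypeError rather than imposing
# a meaningless ordering, as Vint Cerf's note below argues.
try:
    addr < ipaddress.IPv6Address("2001:db8::1")
except TypeError:
    pass
```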
Reference Implementation
@@ -163,33 +163,31 @@ References
authority who can't be ignored. Full text of the email
follows:
"""
I have seen a substantial amount of traffic about IPv4 and
IPv6 comparisons and the general consensus is that these are
not comparable.
I have seen a substantial amount of traffic about IPv4 and
IPv6 comparisons and the general consensus is that these are
not comparable.
If we were to take a very simple minded view, we might treat
these as pure integers in which case there is an ordering but
not a useful one.
If we were to take a very simple minded view, we might treat
these as pure integers in which case there is an ordering but
not a useful one.
In the IPv4 world, "length" is important because we take
longest (most specific) address first for routing. Length is
determine by the mask, as you know.
In the IPv4 world, "length" is important because we take
longest (most specific) address first for routing. Length is
determine by the mask, as you know.
Assuming that the same style of argument works in IPv6, we
would have to conclude that treating an IPv6 value purely as
an integer for comparison with IPv4 would lead to some really
strange results.
Assuming that the same style of argument works in IPv6, we
would have to conclude that treating an IPv6 value purely as
an integer for comparison with IPv4 would lead to some really
strange results.
All of IPv4 space would lie in the host space of 0::0/96
prefix of IPv6. For any useful interpretation of IPv4, this is
a non-starter.
All of IPv4 space would lie in the host space of 0::0/96
prefix of IPv6. For any useful interpretation of IPv4, this is
a non-starter.
I think the only sensible conclusion is that IPv4 values and
IPv6 values should be treated as non-comparable.
I think the only sensible conclusion is that IPv4 values and
IPv6 values should be treated as non-comparable.
Vint
"""
Vint
Copyright


@@ -14,10 +14,10 @@ Post-History:
Abstract
========
In its present form, the subprocess.Popen implementation is prone to
In its present form, the ``subprocess.Popen`` implementation is prone to
dead-locking and blocking of the parent Python script while waiting on data
from the child process. This PEP proposes to make
subprocess.Popen more asynchronous to help alleviate these
``subprocess.Popen`` more asynchronous to help alleviate these
problems.
@@ -42,7 +42,7 @@ A search for "python asynchronous subprocess" will turn up numerous
accounts of people wanting to execute a child process and communicate with
it from time to time reading only the data that is available instead of
blocking to wait for the program to produce data [1]_ [2]_ [3]_. The current
behavior of the subprocess module is that when a user sends or receives
behavior of the ``subprocess`` module is that when a user sends or receives
data via the stdin, stderr and stdout file objects, dead locks are common
and documented [4]_ [5]_. While communicate can be used to alleviate some of
the buffering issues, it will still cause the parent process to block while
@@ -54,12 +54,12 @@ Rationale
=========
There is a documented need for asynchronous, non-blocking functionality in
subprocess.Popen [6]_ [7]_ [2]_ [3]_. Inclusion of the code would improve the
``subprocess.Popen`` [6]_ [7]_ [2]_ [3]_. Inclusion of the code would improve the
utility of the Python standard library that can be used on Unix based and
Windows builds of Python. Practically every I/O object in Python has a
file-like wrapper of some sort. Sockets already act as such and for
strings there is StringIO. Popen can be made to act like a file by simply
using the methods attached to the subprocess.Popen.stderr, stdout and
strings there is ``StringIO``. Popen can be made to act like a file by simply
using the methods attached to the ``subprocess.Popen.stderr``, stdout and
stdin file-like objects. But when using the read and write methods of
those options, you do not have the benefit of asynchronous I/O. In the
proposed solution the wrapper wraps the asynchronous methods to mimic a
@@ -74,36 +74,36 @@ changes including tests and documentation [9]_ as well as blog detailing
the problems I have come across in the development process [10]_.
I have been working on implementing non-blocking asynchronous I/O in the
subprocess.Popen module as well as a wrapper class for subprocess.Popen
``subprocess`` module as well as a wrapper class for ``subprocess.Popen``
that makes it so that an executed process can take the place of a file by
duplicating all of the methods and attributes that file objects have.
There are two base functions that have been added to the subprocess.Popen
class: Popen.send and Popen._recv, each with two separate implementations,
There are two base functions that have been added to the ``subprocess.Popen``
class: ``Popen.send`` and ``Popen._recv``, each with two separate implementations,
one for Windows and one for Unix-based systems. The Windows
implementation uses ctypes to access the functions needed to control pipes
in the kernel 32 DLL in an asynchronous manner. On Unix based systems,
the Python interface for file control serves the same purpose. The
different implementations of Popen.send and Popen._recv have identical
different implementations of ``Popen.send`` and ``Popen._recv`` have identical
arguments to make code that uses these functions work across multiple
platforms.
When calling the Popen._recv function, it requires the pipe name be
passed as an argument so there exists the Popen.recv function that passes
selects stdout as the pipe for Popen._recv by default. Popen.recv_err
selects stderr as the pipe by default. Popen.recv and Popen.recv_err
are much easier to read and understand than Popen._recv('stdout' ...) and
Popen._recv('stderr' ...) respectively.
When calling the ``Popen._recv`` function, it requires the pipe name be
passed as an argument, so there exists the ``Popen.recv`` function that
selects stdout as the pipe for ``Popen._recv`` by default. ``Popen.recv_err``
selects stderr as the pipe by default. ``Popen.recv`` and ``Popen.recv_err``
are much easier to read and understand than ``Popen._recv('stdout' ...)`` and
``Popen._recv('stderr' ...)`` respectively.
Since the Popen._recv function does not wait on data to be produced
before returning a value, it may return empty bytes. Popen.asyncread
Since the ``Popen._recv`` function does not wait on data to be produced
before returning a value, it may return empty bytes. ``Popen.asyncread``
handles this issue by returning all data read over a given time
interval.
The ``ProcessIOWrapper`` class uses the ``asyncread`` and ``asyncwrite`` functions to
allow a process to act like a file so that there are no blocking issues
that can arise from using the stdout and stdin file objects produced from
a subprocess.Popen call.
a ``subprocess.Popen`` call.
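The ``Popen.send``/``Popen._recv`` API proposed here was never merged; a common stand-in for the non-blocking read it describes is a drain thread feeding a queue that the parent can poll (an illustrative sketch, not the PEP's implementation):

```python
import queue
import subprocess
import sys
import threading


def drain(pipe, out_queue):
    # Read lines as the child produces them, so the parent never
    # blocks on pipe.read() and the pipe buffer cannot deadlock.
    for line in iter(pipe.readline, b""):
        out_queue.put(line)
    pipe.close()


proc = subprocess.Popen(
    [sys.executable, "-c", "print('ready')"],
    stdout=subprocess.PIPE,
)
lines = queue.Queue()
threading.Thread(target=drain, args=(proc.stdout, lines), daemon=True).start()

# The parent polls the queue (with a timeout) instead of blocking.
first = lines.get(timeout=5)
proc.wait()
```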
References