Fix lists-in-blockquotes in 0xxx PEPs. Ref: #26914

This commit is contained in:
Georg Brandl 2016-05-03 10:18:02 +02:00
parent 4d8ea1d0fe
commit af90430776
26 changed files with 918 additions and 868 deletions

View File

@@ -34,15 +34,15 @@ The only way to customize the import mechanism is currently to override the

built-in ``__import__`` function.  However, overriding ``__import__`` has many
problems.  To begin with:

* An ``__import__`` replacement needs to *fully* reimplement the entire
  import mechanism, or call the original ``__import__`` before or after the
  custom code.

* It has very complex semantics and responsibilities.

* ``__import__`` gets called even for modules that are already in
  ``sys.modules``, which is almost never what you want, unless you're writing
  some sort of monitoring tool.

The situation gets worse when you need to extend the import mechanism from C:
it's currently impossible, apart from hacking Python's ``import.c`` or
@@ -233,61 +233,61 @@ being available in ``sys.modules``.

The ``load_module()`` method has a few responsibilities that it must fulfill
*before* it runs any code:

* If there is an existing module object named 'fullname' in ``sys.modules``,
  the loader must use that existing module.  (Otherwise, the ``reload()``
  builtin will not work correctly.)  If a module named 'fullname' does not
  exist in ``sys.modules``, the loader must create a new module object and
  add it to ``sys.modules``.

  Note that the module object *must* be in ``sys.modules`` before the loader
  executes the module code.  This is crucial because the module code may
  (directly or indirectly) import itself; adding it to ``sys.modules``
  beforehand prevents unbounded recursion in the worst case and multiple
  loading in the best.

  If the load fails, the loader needs to remove any module it may have
  inserted into ``sys.modules``.  If the module was already in ``sys.modules``
  then the loader should leave it alone.

* The ``__file__`` attribute must be set.  This must be a string, but it may
  be a dummy value, for example "<frozen>".  The privilege of not having a
  ``__file__`` attribute at all is reserved for built-in modules.

* The ``__name__`` attribute must be set.  If one uses ``imp.new_module()``
  then the attribute is set automatically.

* If it's a package, the ``__path__`` variable must be set.  This must be a
  list, but may be empty if ``__path__`` has no further significance to the
  importer (more on this later).

* The ``__loader__`` attribute must be set to the loader object.  This is
  mostly for introspection and reloading, but can be used for
  importer-specific extras, for example getting data associated with an
  importer.

* The ``__package__`` attribute [8]_ must be set.

If the module is a Python module (as opposed to a built-in module or a
dynamically loaded extension), it should execute the module's code in the
module's global name space (``module.__dict__``).

Here is a minimal pattern for a ``load_module()`` method::

    # Consider using importlib.util.module_for_loader() to handle
    # most of these details for you.
    def load_module(self, fullname):
        code = self.get_code(fullname)
        ispkg = self.is_package(fullname)
        mod = sys.modules.setdefault(fullname, imp.new_module(fullname))
        mod.__file__ = "<%s>" % self.__class__.__name__
        mod.__loader__ = self
        if ispkg:
            mod.__path__ = []
            mod.__package__ = fullname
        else:
            mod.__package__ = fullname.rpartition('.')[0]
        exec(code, mod.__dict__)
        return mod
Specification part 2: Registering Hooks
@@ -326,8 +326,8 @@ rescan of ``sys.path_hooks``, it is possible to manually clear all or part of

Just like ``sys.path`` itself, the new ``sys`` variables must have specific
types:

* ``sys.meta_path`` and ``sys.path_hooks`` must be Python lists.
* ``sys.path_importer_cache`` must be a Python dict.

Modifying these variables in place is allowed, as is replacing them with new
objects.
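As a minimal sketch of what registering (and later removing) a hook looks like, the following adds a no-op finder to ``sys.meta_path``.  Note the PEP-era protocol used ``find_module()``; this sketch uses ``find_spec()``, which is what current Python consults:

```python
import sys

class NullFinder:
    """A meta path hook that declines every import.

    The PEP-era protocol used find_module(); modern Python looks for
    find_spec(), so this illustrative sketch implements the latter.
    """
    def find_spec(self, fullname, path, target=None):
        # Returning None tells the import machinery to try the next hook.
        return None

finder = NullFinder()
sys.meta_path.append(finder)      # modifying the list in place is allowed
try:
    import string                 # still resolved by the default machinery
finally:
    sys.meta_path.remove(finder)  # removing (or replacing) is equally fine
```

The hook itself is hypothetical; the point is only that the ``sys`` lists are plain lists that may be mutated or replaced.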
@@ -457,26 +457,26 @@ hook.

There are a number of possible ways to address this problem:

* "Don't do that".  If a package needs to locate data files via its
  ``__path__``, it is not suitable for loading via an import hook.  The
  package can still be located on a directory in ``sys.path``, as at present,
  so this should not be seen as a major issue.

* Locate data files from a standard location, rather than relative to the
  module file.  A relatively simple approach (which is supported by
  distutils) would be to locate data files based on ``sys.prefix`` (or
  ``sys.exec_prefix``).  For example, looking in
  ``os.path.join(sys.prefix, "data", package_name)``.

* Import hooks could offer a standard way of getting at data files relative
  to the module file.  The standard ``zipimport`` object provides a method
  ``get_data(name)`` which returns the content of the "file" called ``name``,
  as a string.  To allow modules to get at the importer object, ``zipimport``
  also adds an attribute ``__loader__`` to the module, containing the
  ``zipimport`` object used to load the module.  If such an approach is used,
  it is important that client code takes care not to break if the
  ``get_data()`` method is not available, so it is not clear that this
  approach offers a general answer to the problem.
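The defensive pattern the last option calls for can be sketched as a small helper.  The helper's name and signature are illustrative only, not part of any API; the only real interfaces used are the ``__loader__`` attribute and ``zipimport``-style ``get_data()``:

```python
def load_data(module, name):
    """Fetch data through a module's __loader__, guarding against loaders
    that lack get_data() -- the caveat noted in the text above.
    (Hypothetical helper; name and signature are illustrative.)
    """
    loader = getattr(module, "__loader__", None)
    get_data = getattr(loader, "get_data", None)
    if get_data is None:
        return None       # caller must cope with loaders without get_data()
    return get_data(name)
```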
It was suggested on python-dev that it would be useful to be able to receive a
list of available modules from an importer and/or a list of available data

View File

@@ -499,12 +499,12 @@ From float

The initial discussion on this item was what should
happen when passing floating point to the constructor:

1. ``Decimal(1.1) == Decimal('1.1')``

2. ``Decimal(1.1) ==
   Decimal('110000000000000008881784197001252...e-51')``

3. an exception is raised

Several people alleged that (1) is the better option here, because
it's what you expect when writing ``Decimal(1.1)``.  And quoting John
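For reference, the constructor in current Python (3.2 and later) takes option (2): a float converts to its exact binary value, so ``Decimal(1.1)`` is *not* the number the literal suggests.  This is easy to verify:

```python
from decimal import Decimal

# Option (2) is what CPython ended up shipping: the float 1.1 converts
# to its exact binary expansion, not to Decimal('1.1').
assert Decimal(1.1) != Decimal('1.1')
assert Decimal(1.1) == Decimal.from_float(1.1)   # same exact conversion
assert float(Decimal(1.1)) == 1.1                # round-trips to the float
```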
@@ -1180,14 +1180,14 @@ Context.

These are methods that return useful information from the Context:

- ``Etiny()``: Minimum exponent considering precision. ::

      >>> c.Emin
      -999999999
      >>> c.Etiny()
      -1000000007

- ``Etop()``: Maximum exponent considering precision. ::

      >>> c.Emax
      999999999
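Both values follow directly from ``Emin``/``Emax`` and the precision: ``Etiny() == Emin - prec + 1`` and ``Etop() == Emax - prec + 1``.  Assuming the doctest's context uses ``prec=9`` (which the ``Etiny()`` result implies), this can be checked:

```python
from decimal import Context

# A context matching the doctest above (prec=9 is inferred from the
# Etiny() result: -999999999 - 9 + 1 == -1000000007).
c = Context(prec=9, Emin=-999999999, Emax=999999999)
assert c.Etiny() == c.Emin - c.prec + 1 == -1000000007
assert c.Etop() == c.Emax - c.prec + 1 == 999999991
```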

View File

@@ -428,12 +428,12 @@ Important Files

+ Parser/

  - Python.asdl

    ASDL syntax file

  - asdl.py

    "An implementation of the Zephyr Abstract Syntax Definition
    Language."  Uses SPARK_ to parse the ASDL files.

  - asdl_c.py

    "Generate C code from an ASDL description."  Generates
@@ -444,86 +444,86 @@ Important Files

+ Python/

  - Python-ast.c

    Creates C structs corresponding to the ASDL types.  Also
    contains code for marshaling AST nodes (core ASDL types have
    marshaling code in asdl.c).  "File automatically generated by
    Parser/asdl_c.py".  This file must be committed separately
    after every grammar change is committed since the __version__
    value is set to the latest grammar change revision number.

  - asdl.c

    Contains code to handle the ASDL sequence type.  Also has code
    to handle marshalling the core ASDL types, such as number and
    identifier.  Used by Python-ast.c for marshaling AST nodes.

  - ast.c

    Converts Python's parse tree into the abstract syntax tree.

  - ceval.c

    Executes byte code (aka, eval loop).

  - compile.c

    Emits bytecode based on the AST.

  - symtable.c

    Generates a symbol table from AST.

  - pyarena.c

    Implementation of the arena memory manager.

  - import.c

    Home of the magic number (named ``MAGIC``) for bytecode versioning

+ Include/

  - Python-ast.h

    Contains the actual definitions of the C structs as generated by
    Python/Python-ast.c .
    "Automatically generated by Parser/asdl_c.py".

  - asdl.h

    Header for the corresponding Python/ast.c .

  - ast.h

    Declares PyAST_FromNode() external (from Python/ast.c).

  - code.h

    Header file for Objects/codeobject.c; contains definition of
    PyCodeObject.

  - symtable.h

    Header for Python/symtable.c .  struct symtable and
    PySTEntryObject are defined here.

  - pyarena.h

    Header file for the corresponding Python/pyarena.c .

  - opcode.h

    Master list of bytecode; if this file is modified you must modify
    several other files accordingly (see "`Introducing New Bytecode`_")

+ Objects/

  - codeobject.c

    Contains PyCodeObject-related code (originally in
    Python/compile.c).

+ Lib/

  - opcode.py

    One of the files that must be modified if Include/opcode.h is.

  - compiler/

    * pyassem.py

      One of the files that must be modified if Include/opcode.h is
      changed.

    * pycodegen.py

      One of the files that must be modified if Include/opcode.h is
      changed.
Known Compiler-related Experiments

View File

@@ -151,32 +151,32 @@ A Parameter object has the following public attributes and methods:

Describes how argument values are bound to the parameter.
Possible values:

* ``Parameter.POSITIONAL_ONLY`` - value must be supplied
  as a positional argument.

  Python has no explicit syntax for defining positional-only
  parameters, but many built-in and extension module functions
  (especially those that accept only one or two parameters)
  accept them.

* ``Parameter.POSITIONAL_OR_KEYWORD`` - value may be
  supplied as either a keyword or positional argument
  (this is the standard binding behaviour for functions
  implemented in Python.)

* ``Parameter.KEYWORD_ONLY`` - value must be supplied
  as a keyword argument.  Keyword only parameters are those
  which appear after a "*" or "\*args" entry in a Python
  function definition.

* ``Parameter.VAR_POSITIONAL`` - a tuple of positional
  arguments that aren't bound to any other parameter.
  This corresponds to a "\*args" parameter in a Python
  function definition.

* ``Parameter.VAR_KEYWORD`` - a dict of keyword arguments
  that aren't bound to any other parameter.  This corresponds
  to a "\*\*kwargs" parameter in a Python function definition.

Always use ``Parameter.*`` constants for setting and checking
value of the ``kind`` attribute.
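The kinds above can be observed directly on an ordinary function definition via ``inspect.signature()`` as it shipped:

```python
import inspect

def f(a, b=1, *args, c, **kwargs):
    pass

# Map each parameter name to its kind constant.
kinds = {name: p.kind for name, p in inspect.signature(f).parameters.items()}

assert kinds['a'] is inspect.Parameter.POSITIONAL_OR_KEYWORD
assert kinds['args'] is inspect.Parameter.VAR_POSITIONAL
assert kinds['c'] is inspect.Parameter.KEYWORD_ONLY      # appears after *args
assert kinds['kwargs'] is inspect.Parameter.VAR_KEYWORD
```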
@@ -271,39 +271,39 @@ a callable object.

The function implements the following algorithm:

- If the object is not callable - raise a TypeError

- If the object has a ``__signature__`` attribute and if it
  is not ``None`` - return it

- If it has a ``__wrapped__`` attribute, return
  ``signature(object.__wrapped__)``

- If the object is an instance of ``FunctionType``, construct
  and return a new ``Signature`` for it

- If the object is a bound method, construct and return a new ``Signature``
  object, with its first parameter (usually ``self`` or ``cls``)
  removed.  (``classmethod`` and ``staticmethod`` are supported
  too.  Since both are descriptors, the former returns a bound method,
  and the latter returns its wrapped function.)

- If the object is an instance of ``functools.partial``, construct
  a new ``Signature`` from its ``partial.func`` attribute, and
  account for already bound ``partial.args`` and ``partial.kwargs``

- If the object is a class or metaclass:

  - If the object's type has a ``__call__`` method defined in
    its MRO, return a Signature for it

  - If the object has a ``__new__`` method defined in its MRO,
    return a Signature object for it

  - If the object has a ``__init__`` method defined in its MRO,
    return a Signature object for it

- Return ``signature(object.__call__)``

Note that the ``Signature`` object is created in a lazy manner, and
is not automatically cached.  However, the user can manually cache a
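Two of the branches above, the bound-method and ``functools.partial`` cases, are easy to exercise with the shipped ``inspect`` module:

```python
import functools
import inspect

def f(a, b, c=3):
    pass

# A partial hides the arguments it has already bound:
p = functools.partial(f, 1)
assert list(inspect.signature(p).parameters) == ['b', 'c']

# A bound method drops its first parameter (self):
class C:
    def m(self, x):
        pass

assert list(inspect.signature(C().m).parameters) == ['x']
```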
@@ -323,13 +323,13 @@ The first PEP design had a provision for implicit caching of ``Signature``

objects in the ``inspect.signature()`` function.  However, this has the
following downsides:

* If the ``Signature`` object is cached then any changes to the function
  it describes will not be reflected in it.  However, if the caching is
  needed, it can always be done manually and explicitly

* It is better to reserve the ``__signature__`` attribute for the cases
  when there is a need to explicitly set to a ``Signature`` object that
  is different from the actual one
Some functions may not be introspectable

View File

@@ -206,23 +206,23 @@ object:

Open Issues
===========

- Should there be a command line switch and/or environment variable to
  disable all remappings?

- Should remappings occur recursively?

- Should we automatically parse package directories for .mv files when
  the package's __init__.py is loaded?  This would allow packages to
  easily include .mv files for their own remappings.  Compare what the
  email package currently has to do if we place its ``.mv`` file in
  the email package instead of in the oldlib package::

      # Expose old names
      import os, sys
      sys.stdlib_remapper.read_directory_mv_files(os.path.dirname(__file__))

  I think we should automatically read a package's directory for any
  ``.mv`` files it might contain.
Reference Implementation

View File

@@ -317,6 +317,7 @@ Copyright

This document has been placed in the public domain.

..
   Local Variables:
   mode: indented-text

@@ -325,4 +326,3 @@ This document has been placed in the public domain.

   fill-column: 70
   coding: utf-8
   End:

View File

@@ -85,35 +85,35 @@ value becomes the value of the ``yield from`` expression.

The full semantics of the ``yield from`` expression can be described
in terms of the generator protocol as follows:

* Any values that the iterator yields are passed directly to the
  caller.

* Any values sent to the delegating generator using ``send()`` are
  passed directly to the iterator.  If the sent value is None, the
  iterator's ``__next__()`` method is called.  If the sent value
  is not None, the iterator's ``send()`` method is called.  If the
  call raises StopIteration, the delegating generator is resumed.
  Any other exception is propagated to the delegating generator.

* Exceptions other than GeneratorExit thrown into the delegating
  generator are passed to the ``throw()`` method of the iterator.
  If the call raises StopIteration, the delegating generator is
  resumed.  Any other exception is propagated to the delegating
  generator.

* If a GeneratorExit exception is thrown into the delegating
  generator, or the ``close()`` method of the delegating generator
  is called, then the ``close()`` method of the iterator is called
  if it has one.  If this call results in an exception, it is
  propagated to the delegating generator.  Otherwise,
  GeneratorExit is raised in the delegating generator.

* The value of the ``yield from`` expression is the first argument
  to the ``StopIteration`` exception raised by the iterator when
  it terminates.

* ``return expr`` in a generator causes ``StopIteration(expr)`` to
  be raised upon exit from the generator.
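The first, second, and last rules above can be seen in a few lines: yielded values and ``send()`` pass straight through the delegating generator, and the subgenerator's ``return`` value becomes the value of the ``yield from`` expression:

```python
def inner():
    x = yield 'first'        # receives the value sent to the outer generator
    return x * 2             # becomes the value of the yield from expression

def outer():
    result = yield from inner()
    yield result

g = outer()
assert next(g) == 'first'    # inner's yield passes straight through
assert g.send(21) == 42      # send() is forwarded; the return value surfaces
```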
Enhancements to StopIteration
@@ -133,7 +133,7 @@ Python 3 syntax is used in this section.

    RESULT = yield from EXPR

is semantically equivalent to ::

    _i = iter(EXPR)
    try:

@@ -180,12 +180,12 @@ is semantically equivalent to ::

    return value

is semantically equivalent to ::

    raise StopIteration(value)

except that, as currently, the exception cannot be caught by
``except`` clauses within the returning generator.

3. The StopIteration exception behaves as though defined thusly::
@@ -469,6 +469,7 @@ Copyright

This document has been placed in the public domain.

..
   Local Variables:
   mode: indented-text

View File

@@ -117,16 +117,16 @@ No other change to the importing mechanism is made; searching modules

encountered.  In summary, the process of importing a package foo works like
this:

1. sys.path is searched for directories foo or foo.pyp, or a file foo.<ext>.
   If a file is found and no directory, it is treated as a module, and imported.

2. If a directory foo is found, a check is made whether it contains __init__.py.
   If so, the location of the __init__.py is remembered.  Otherwise, the directory
   is skipped.  Once an __init__.py is found, further directories called foo are
   skipped.

3. For both directories foo and foo.pyp, the directories are added to the package's
   __path__.

4. If an __init__ module was found, it is imported, with __path__
   being initialized to the path computed from all ``.pyp`` directories.

Impact on Import Hooks
----------------------

View File

@@ -110,10 +110,12 @@ The fields have the following interpretations:

- length: number of code points in the string (result of sq_length)
- interned: interned-state (SSTATE_*) as in 3.2
- kind: form of string

  + 00 => str is not initialized (data are in wstr)
  + 01 => 1 byte (Latin-1)
  + 10 => 2 byte (UCS-2)
  + 11 => 4 byte (UCS-4);

- compact: the object uses one of the compact representations
  (implies ready)
- ascii: the object uses the PyASCIIObject representation

@@ -189,9 +191,9 @@ PyUnicode_2BYTE_KIND (2), or PyUnicode_4BYTE_KIND (3). PyUnicode_DATA

gives the void pointer to the data.  Access to individual characters
should use PyUnicode_{READ|WRITE}[_CHAR]:

- PyUnicode_READ(kind, data, index)
- PyUnicode_WRITE(kind, data, index, value)
- PyUnicode_READ_CHAR(unicode, index)

All these macros assume that the string is in canonical form;
callers need to ensure this by calling PyUnicode_READY.

View File

@@ -77,72 +77,72 @@ Rationale

StreamReader and StreamWriter issues
''''''''''''''''''''''''''''''''''''

* StreamReader is unable to translate newlines.
* StreamWriter doesn't support "line buffering" (flush if the input
  text contains a newline).
* StreamReader classes of the CJK encodings (e.g. GB18030) only
  support UNIX newlines ('\\n').
* StreamReader and StreamWriter are stateful codecs but don't expose
  functions to control their state (getstate() or setstate()). Each
  codec has to handle corner cases, see `Appendix A`_.
* StreamReader and StreamWriter are very similar to IncrementalDecoder
  and IncrementalEncoder; some code is duplicated for stateful codecs
  (e.g. UTF-16).
* Each codec has to reimplement its own StreamReader and StreamWriter
  class, even if it's trivial (just call the encoder/decoder).
* codecs.open(filename, "r") creates an io.TextIOWrapper object.
* No codec implements an optimized method in StreamReader or
  StreamWriter based on the specificities of the codec.

Issues in the bug tracker:

* `Issue #5445 <http://bugs.python.org/issue5445>`_ (2009-03-08):
  codecs.StreamWriter.writelines problem when passed generator
* `Issue #7262: <http://bugs.python.org/issue7262>`_ (2009-11-04):
  codecs.open() + eol (windows)
* `Issue #8260 <http://bugs.python.org/issue8260>`_ (2010-03-29):
  When I use codecs.open(...) and f.readline() follow up by f.read()
  return bad result
* `Issue #8630 <http://bugs.python.org/issue8630>`_ (2010-05-05):
  Keepends param in codec readline(s)
* `Issue #10344 <http://bugs.python.org/issue10344>`_ (2010-11-06):
  codecs.readline doesn't care buffering
* `Issue #11461 <http://bugs.python.org/issue11461>`_ (2011-03-10):
  Reading UTF-16 with codecs.readline() breaks on surrogate pairs
* `Issue #12446 <http://bugs.python.org/issue12446>`_ (2011-06-30):
  StreamReader Readlines behavior odd
* `Issue #12508 <http://bugs.python.org/issue12508>`_ (2011-07-06):
  Codecs Anomaly
* `Issue #12512 <http://bugs.python.org/issue12512>`_ (2011-07-07):
  codecs: StreamWriter issues with stateful codecs after a seek or
  with append mode
* `Issue #12513 <http://bugs.python.org/issue12513>`_ (2011-07-07):
  codec.StreamReaderWriter: issues with interlaced read-write

TextIOWrapper features
''''''''''''''''''''''

* TextIOWrapper supports any kind of newline, including translating
  newlines (to UNIX newlines), to read and write.
* TextIOWrapper reuses codecs incremental encoders and decoders (no
  duplication of code).
* The io module (TextIOWrapper) is faster than the codecs module
  (StreamReader). It is implemented in C, whereas codecs is
  implemented in Python.
* TextIOWrapper has a readahead algorithm which speeds up small
  reads: read character by character or line by line (io is 10x
  through 25x faster than codecs on these operations).
* TextIOWrapper has a write buffer.
* TextIOWrapper.tell() is optimized.
* TextIOWrapper supports random access (read+write) using a single
  class, which permits optimizing interlaced read-write (but no such
  optimization is implemented).
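
The newline handling in the first bullet can be sketched with TextIOWrapper's *newline* argument, writing into an in-memory buffer:

```python
import io

buf = io.BytesIO()
# newline="\r\n" asks TextIOWrapper to translate "\n" on write;
# newline="" would disable translation entirely.
text = io.TextIOWrapper(buf, encoding="ascii", newline="\r\n")
text.write("a\nb\n")
text.flush()
print(buf.getvalue())  # b'a\r\nb\r\n'
```
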

TextIOWrapper issues
''''''''''''''''''''

* `Issue #12215 <http://bugs.python.org/issue12215>`_ (2011-05-30):
  TextIOWrapper: issues with interlaced read-write

Possible improvements of StreamReader and StreamWriter
''''''''''''''''''''''''''''''''''''''''''''''''''''''
@@ -233,29 +233,29 @@ Stateful codecs

Python supports the following stateful codecs:

* cp932
* cp949
* cp950
* euc_jis_2004
* euc_jisx0213
* euc_jp
* euc_kr
* gb18030
* gbk
* hz
* iso2022_jp
* iso2022_jp_1
* iso2022_jp_2
* iso2022_jp_2004
* iso2022_jp_3
* iso2022_jp_ext
* iso2022_kr
* shift_jis
* shift_jis_2004
* shift_jisx0213
* utf_8_sig
* utf_16
* utf_32
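
The statefulness of, for example, utf_8_sig shows up in its incremental encoder: the BOM is produced only by the first encode() call, and reset() re-arms it:

```python
import codecs

enc = codecs.getincrementalencoder("utf_8_sig")()
print(enc.encode("a"))  # b'\xef\xbb\xbfa' - BOM on the first call only
print(enc.encode("b"))  # b'b'

enc.reset()             # resetting the codec state re-arms the BOM
print(enc.encode("c"))  # b'\xef\xbb\xbfc'
```
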

Read and seek(0)
''''''''''''''''
@@ -312,13 +312,13 @@ writes a new BOM on the second write (`issue #12512

Links
=====

* `PEP 100: Python Unicode Integration
  <http://www.python.org/dev/peps/pep-0100/>`_
* `PEP 3116: New I/O <http://www.python.org/dev/peps/pep-3116/>`_
* `Issue #8796: Deprecate codecs.open()
  <http://bugs.python.org/issue8796>`_
* `[python-dev] Deprecate codecs.open() and StreamWriter/StreamReader
  <http://mail.python.org/pipermail/python-dev/2011-May/111591.html>`_

Copyright
@@ -176,11 +176,11 @@ statement, anonymous functions could still be incredibly useful. Consider how
many constructs Python has where one expression is responsible for the bulk of
the heavy lifting:

* comprehensions, generator expressions, map(), filter()
* key arguments to sorted(), min(), max()
* partial function application
* provision of callbacks (e.g. for weak references or asynchronous IO)
* array broadcast operations in NumPy
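
For illustration, each of these constructs delegates its real work to a single expression, often a lambda:

```python
words = ["pear", "fig", "banana"]

print(sorted(words, key=lambda w: len(w)))  # ['fig', 'pear', 'banana']
print(min(words, key=lambda w: len(w)))     # fig
print([len(w) for w in words])              # [4, 3, 6] - comprehension form
print(list(map(lambda w: len(w), words)))   # [4, 3, 6] - map() form
```
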

However, adopting Ruby's block syntax directly won't work for Python, since
the effectiveness of Ruby's blocks relies heavily on various conventions in
@@ -32,7 +32,7 @@ Un-release Schedule

The current un-schedule is:

- 2.8 final     Never

Official pronouncement
@@ -32,8 +32,8 @@ Python 2.3 introduced float timestamps to support sub-second resolutions.
os.stat() uses float timestamps by default since Python 2.5. Python 3.3
introduced functions supporting nanosecond resolutions:

* os module: futimens(), utimensat()
* time module: clock_gettime(), clock_getres(), monotonic(), wallclock()

os.stat() reads nanosecond timestamps but returns timestamps as float.
@@ -74,24 +74,24 @@ precision. The clock resolution can also be stored in a Decimal object.

Add an optional *timestamp* argument to:

* os module: fstat(), fstatat(), lstat(), stat() (st_atime,
  st_ctime and st_mtime fields of the stat structure),
  sched_rr_get_interval(), times(), wait3() and wait4()
* resource module: ru_utime and ru_stime fields of getrusage()
* signal module: getitimer(), setitimer()
* time module: clock(), clock_gettime(), clock_getres(),
  monotonic(), time() and wallclock()

The *timestamp* argument value can be float or Decimal; float is still the
default for backward compatibility. The following functions support Decimal as
input:

* datetime module: date.fromtimestamp(), datetime.fromtimestamp() and
  datetime.utcfromtimestamp()
* os module: futimes(), futimesat(), lutimes(), utime()
* select module: epoll.poll(), kqueue.control(), select()
* signal module: setitimer(), sigtimedwait()
* time module: ctime(), gmtime(), localtime(), sleep()

The os.stat_float_times() function is deprecated: use an explicit cast using
int() instead.
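
The precision loss that motivates Decimal can be demonstrated in two lines: near current epoch values a float cannot hold a one-nanosecond fraction, while Decimal can:

```python
from decimal import Decimal

# IEEE 754 doubles carry ~15-16 significant digits; at ~1.6e9 seconds the
# smallest representable step is ~2.4e-7 s, so one nanosecond is lost.
print(1_600_000_000.000000001 == 1_600_000_000.0)               # True

# Decimal keeps the full nanosecond resolution.
print(Decimal("1600000000.000000001") - Decimal("1600000000"))  # 1E-9
```
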
@@ -132,22 +132,22 @@ Alternatives: Timestamp types

To support timestamps with an arbitrary or nanosecond resolution, the following
types have been considered:

* decimal.Decimal
* number of nanoseconds
* 128-bit float
* datetime.datetime
* datetime.timedelta
* tuple of integers
* timespec structure

Criteria:

* Doing arithmetic on timestamps must be possible
* Timestamps must be comparable
* An arbitrary resolution, or at least a resolution of one nanosecond without
  losing precision
* It should be possible to coerce the new timestamp to float for backward
  compatibility

A resolution of one nanosecond is enough to support all current C functions.
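
As a quick check, decimal.Decimal meets the listed criteria:

```python
from decimal import Decimal

a = Decimal("1.000000001")
b = Decimal("2.000000003")

print(b - a)     # 1.000000002  (arithmetic on timestamps works)
print(a < b)     # True         (timestamps are comparable)
print(float(a))  # coercion to float is possible, with rounding
```
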
@@ -264,39 +264,39 @@ an arbitrary limit like one nanosecond.

Different formats have been proposed:

* A: (numerator, denominator)

  * value = numerator / denominator
  * resolution = 1 / denominator
  * denominator > 0

* B: (seconds, numerator, denominator)

  * value = seconds + numerator / denominator
  * resolution = 1 / denominator
  * 0 <= numerator < denominator
  * denominator > 0

* C: (intpart, floatpart, base, exponent)

  * value = intpart + floatpart / base\ :sup:`exponent`
  * resolution = 1 / base \ :sup:`exponent`
  * 0 <= floatpart < base \ :sup:`exponent`
  * base > 0
  * exponent >= 0

* D: (intpart, floatpart, exponent)

  * value = intpart + floatpart / 10\ :sup:`exponent`
  * resolution = 1 / 10 \ :sup:`exponent`
  * 0 <= floatpart < 10 \ :sup:`exponent`
  * exponent >= 0

* E: (sec, nsec)

  * value = sec + nsec × 10\ :sup:`-9`
  * resolution = 10 \ :sup:`-9` (nanosecond)
  * 0 <= nsec < 10 \ :sup:`9`

All formats support an arbitrary resolution, except format (E).
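
Format B maps directly onto the stdlib fractions module; `timestamp_from_b` below is a hypothetical helper name, not part of the PEP:

```python
from decimal import Decimal
from fractions import Fraction

def timestamp_from_b(seconds, numerator, denominator):
    """Convert a format-B tuple to an exact Fraction value."""
    assert denominator > 0 and 0 <= numerator < denominator
    return seconds + Fraction(numerator, denominator)

ts = timestamp_from_b(1, 1, 10**9)  # one second plus one nanosecond
print(ts)                           # 1000000001/1000000000
print(Decimal(ts.numerator) / Decimal(ts.denominator))  # 1.000000001
```
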
@@ -490,11 +490,11 @@ Add new functions

Add new functions for each type, examples:

* time.clock_decimal()
* time.time_decimal()
* os.stat_decimal()
* os.stat_timespec()
* etc.

Adding a new function for each function creating timestamps duplicates a lot
of code and would be a pain to maintain.
@@ -15,22 +15,22 @@ Rejection Notice

I'm rejecting this PEP. A number of reasons (not exhaustive):

* According to Raymond Hettinger, use of frozendict is low. Those
  that do use it tend to use it as a hint only, such as declaring
  global or class-level "constants": they aren't really immutable,
  since anyone can still assign to the name.
* There are existing idioms for avoiding mutable default values.
* The potential of optimizing code using frozendict in PyPy is
  unsure; a lot of other things would have to change first. The same
  holds for compile-time lookups in general.
* Multiple threads can agree by convention not to mutate a shared
  dict, there's no great need for enforcement. Multiple processes
  can't share dicts.
* Adding a security sandbox written in Python, even with a limited
  scope, is frowned upon by many, due to the inherent difficulty with
  ever proving that the sandbox is actually secure. Because of this
  we won't be adding one to the stdlib any time soon, so this use
  case falls outside the scope of a PEP.

On the other hand, exposing the existing read-only dict proxy as a
built-in type sounds good to me. (It would need to be changed to
@@ -55,46 +55,46 @@ hashable. A frozendict is hashable if and only if all values are hashable.

Use cases:

* Immutable global variable like a default configuration.
* Default value of a function parameter. Avoid the issue of mutable default
  arguments.
* Implement a cache: frozendict can be used to store function keywords.
  frozendict can be used as a key of a mapping or as a member of set.
* frozendict avoids the need of a lock when the frozendict is shared
  by multiple threads or processes, especially hashable frozendict. It would
  also help to prohibit coroutines (generators + greenlets) from modifying the
  global state.
* frozendict lookup can be done at compile time instead of runtime because the
  mapping is read-only. frozendict can be used instead of a preprocessor to
  remove conditional code at compilation, like code specific to a debug build.
* frozendict helps to implement read-only object proxies for security modules.
  For example, it would be possible to use frozendict type for __builtins__
  mapping or type.__dict__. This is possible because frozendict is compatible
  with the PyDict C API.
* frozendict avoids the need of a read-only proxy in some cases. frozendict is
  faster than a proxy because getting an item in a frozendict is a fast lookup
  whereas a proxy requires a function call.

Constraints
===========

* frozendict has to implement the Mapping abstract base class
* frozendict keys and values can be unorderable
* a frozendict is hashable if all keys and values are hashable
* frozendict hash does not depend on the items creation order

Implementation
==============

* Add a PyFrozenDictObject structure based on PyDictObject with an extra
  "Py_hash_t hash;" field
* frozendict.__hash__() is implemented using hash(frozenset(self.items())) and
  caches the result in its private hash attribute
* Register frozendict as a collections.abc.Mapping
* frozendict can be used with PyDict_GetItem(), but PyDict_SetItem() and
  PyDict_DelItem() raise a TypeError
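
A pure-Python sketch of the behaviour listed above (the PEP proposes a C implementation; this class is only illustrative): a read-only Mapping whose hash is computed from frozenset(self.items()) and cached:

```python
from collections.abc import Mapping

class frozendict(Mapping):
    def __init__(self, *args, **kwargs):
        self._data = dict(*args, **kwargs)
        self._hash = None

    def __getitem__(self, key):
        return self._data[key]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)

    def __hash__(self):
        # Cache the hash; it is independent of item creation order
        # because frozenset comparison/hashing is order-insensitive.
        if self._hash is None:
            self._hash = hash(frozenset(self._data.items()))
        return self._hash

d = frozendict(a=1, b=2)
print(d["a"])                    # 1
print({d: "usable as a key"}[d]) # usable as a key
```

Mapping supplies the read-only interface (keys(), items(), __contains__, ...); since no __setitem__ exists, assignment raises TypeError, matching the PyDict_SetItem() behaviour described above.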

Recipe: hashable dict

@@ -161,90 +161,90 @@ Existing implementations

Whitelist approach.

* `Implementing an Immutable Dictionary (Python recipe 498072)
  <http://code.activestate.com/recipes/498072/>`_ by Aristotelis Mikropoulos.
  Similar to frozendict except that it is not truly read-only: it is possible
  to access its private internal dict. It does not implement __hash__ and
  has an implementation issue: it is possible to call __init__() again to
  modify the mapping.
* PyWebmail contains an ImmutableDict type: `webmail.utils.ImmutableDict
  <http://pywebmail.cvs.sourceforge.net/viewvc/pywebmail/webmail/webmail/utils/ImmutableDict.py?revision=1.2&view=markup>`_.
  It is hashable if keys and values are hashable. It is not truly read-only:
  its internal dict is a public attribute.
* remember project: `remember.dicts.FrozenDict
  <https://bitbucket.org/mikegraham/remember/src/tip/remember/dicts.py>`_.
  It is used to implement a cache: FrozenDict is used to store function callbacks.
  FrozenDict may be hashable. It has an extra supply_dict() class method to
  create a FrozenDict from a dict without copying the dict: store the dict as
  the internal dict. Implementation issue: __init__() can be called to modify
  the mapping, and the hash may differ depending on item creation order. The
  mapping is not truly read-only: the internal dict is accessible in Python.

Blacklist approach: inherit from dict and override write methods to raise an
exception. It is not truly read-only: it is still possible to call dict methods
on such a "frozen dictionary" to modify it.

* brownie: `brownie.datastructures.ImmutableDict
  <https://github.com/DasIch/brownie/blob/HEAD/brownie/datastructures/mappings.py>`_.
  It is hashable if keys and values are hashable. The werkzeug project has the
  same code: `werkzeug.datastructures.ImmutableDict
  <https://github.com/mitsuhiko/werkzeug/blob/master/werkzeug/datastructures.py>`_.
  ImmutableDict is used for global constants (configuration options). The Flask
  project uses ImmutableDict of werkzeug for its default configuration.
* SQLAlchemy project: `sqlalchemy.util.immutabledict
  <http://hg.sqlalchemy.org/sqlalchemy/file/tip/lib/sqlalchemy/util/_collections.py>`_.
  It is not hashable and has an extra method: union(). immutabledict is used
  for the default value of a parameter of some functions expecting a mapping.
  Example: mapper_args=immutabledict() in SqlSoup.map().
* `Frozen dictionaries (Python recipe 414283) <http://code.activestate.com/recipes/414283/>`_
  by Oren Tirosh. It is hashable if keys and values are hashable. Included in
  the following projects:

  * lingospot: `frozendict/frozendict.py
    <http://code.google.com/p/lingospot/source/browse/trunk/frozendict/frozendict.py>`_
  * factor-graphics: frozendict type in `python/fglib/util_ext_frozendict.py
    <https://github.com/ih/factor-graphics/blob/41006fb71a09377445cc140489da5ce8eeb9c8b1/python/fglib/util_ext_frozendict.py>`_

* The gsakkis-utils project written by George Sakkis includes a frozendict
  type: `datastructs.frozendict
  <http://code.google.com/p/gsakkis-utils/source/browse/trunk/datastructs/frozendict.py>`_
* characters: `scripts/python/frozendict.py
  <https://github.com/JasonGross/characters/blob/15a2af5f7861cd33a0dbce70f1569cda74e9a1e3/scripts/python/frozendict.py#L1>`_.
  It is hashable. __init__() sets __init__ to None.
* Old NLTK (1.x): `nltk.util.frozendict
  <http://nltk.googlecode.com/svn/trunk/nltk-old/src/nltk/util.py>`_. Keys and
  values must be hashable. __init__() can be called twice to modify the
  mapping. frozendict is used to "freeze" an object.

Hashable dict: inherit from dict and just add an __hash__ method.

* `pypy.rpython.lltypesystem.lltype.frozendict
  <https://bitbucket.org/pypy/pypy/src/1f49987cc2fe/pypy/rpython/lltypesystem/lltype.py#cl-86>`_.
  It is hashable but doesn't prevent modification of the mapping.
* factor-graphics: hashabledict type in `python/fglib/util_ext_frozendict.py
  <https://github.com/ih/factor-graphics/blob/41006fb71a09377445cc140489da5ce8eeb9c8b1/python/fglib/util_ext_frozendict.py>`_
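
A minimal sketch of that pattern; as noted for PyPy's version, nothing prevents mutation, so the hash a container observed at insertion time can go stale:

```python
class hashabledict(dict):
    """dict subclass that adds __hash__ but does NOT prevent mutation."""
    def __hash__(self):
        return hash(frozenset(self.items()))

# Equal dicts hash equal, so the type works as a mapping key...
cache = {hashabledict(a=1): "value"}
print(cache[hashabledict(a=1)])  # value

# ...but mutation is still allowed, which can invalidate that use.
d = hashabledict(a=1)
d["b"] = 2
```
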

Links
=====

* `Issue #14162: PEP 416: Add a builtin frozendict type
  <http://bugs.python.org/issue14162>`_
* PEP 412: Key-Sharing Dictionary
  (`issue #13903 <http://bugs.python.org/issue13903>`_)
* PEP 351: The freeze protocol
* `The case for immutable dictionaries; and the central misunderstanding of
  PEP 351 <http://www.cs.toronto.edu/~tijmen/programming/immutableDictionaries.html>`_
* `make dictproxy object via ctypes.pythonapi and type() (Python recipe
  576540) <http://code.activestate.com/recipes/576540/>`_ by Ikkei Shimomura.
* Python security modules implementing read-only object proxies using a C
  extension:

  * `pysandbox <https://github.com/haypo/pysandbox/>`_
  * `mxProxy <http://www.egenix.com/products/python/mxBase/mxProxy/>`_
  * `zope.proxy <http://pypi.python.org/pypi/zope.proxy>`_
  * `zope.security <http://pypi.python.org/pypi/zope.security>`_

Copyright
@@ -116,14 +116,14 @@ Get information on the specified clock. Supported clock names:

Return a ``time.clock_info`` object which has the following attributes:

* ``implementation`` (str): name of the underlying operating system
  function. Examples: ``"QueryPerformanceCounter()"``,
  ``"clock_gettime(CLOCK_REALTIME)"``.
* ``monotonic`` (bool): True if the clock cannot go backward.
* ``adjustable`` (bool): ``True`` if the clock can be changed automatically
  (e.g. by an NTP daemon) or manually by the system administrator, ``False``
  otherwise
* ``resolution`` (float): resolution in seconds of the clock.
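
These attributes are exposed by time.get_clock_info() (available since Python 3.3); for example:

```python
import time

info = time.get_clock_info("monotonic")
print(info.monotonic)       # True - this clock cannot go backward
print(info.resolution > 0)  # True
print(info.implementation)  # platform-dependent, e.g. clock_gettime(CLOCK_MONOTONIC)
```
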
time.monotonic()
@@ -868,27 +868,27 @@ should be installed for the specified activities:
* Implied runtime dependencies:

  * ``run_requires``
  * ``meta_requires``

* Implied build dependencies:

  * ``build_requires``
  * If running the distribution's test suite as part of the build process,
    request the ``:run:``, ``:meta:``, and ``:test:`` extras to also
    install:

    * ``run_requires``
    * ``meta_requires``
    * ``test_requires``

* Implied development and publication dependencies:

  * ``run_requires``
  * ``meta_requires``
  * ``build_requires``
  * ``test_requires``
  * ``dev_requires``

The notation described in `Extras (optional dependencies)`_ SHOULD be used
to determine exactly what gets installed for various operations.
@@ -175,32 +175,50 @@ be able to control the following aspects of the final interpreter state:
* Whether or not to enable the import system (required by CPython's
  build process when freezing the importlib._bootstrap bytecode)
* The "Where is Python located?" elements in the ``sys`` module:

  * ``sys.executable``
  * ``sys.base_exec_prefix``
  * ``sys.base_prefix``
  * ``sys.exec_prefix``
  * ``sys.prefix``

* The path searched for imports from the filesystem (and other path hooks):

  * ``sys.path``

* The command line arguments seen by the interpreter:

  * ``sys.argv``

* The filesystem encoding used by:

  * ``sys.getfsencoding``
  * ``os.fsencode``
  * ``os.fsdecode``

* The IO encoding (if any) and the buffering used by:

  * ``sys.stdin``
  * ``sys.stdout``
  * ``sys.stderr``

* The initial warning system state:

  * ``sys.warnoptions``

* Arbitrary extended options (e.g. to automatically enable ``faulthandler``):

  * ``sys._xoptions``

* Whether or not to implicitly cache bytecode files:

  * ``sys.dont_write_bytecode``

* Whether or not to enforce correct case in filenames on case-insensitive
  platforms:

  * ``os.environ["PYTHONCASEOK"]``

* The other settings exposed to Python code in ``sys.flags``:

  * ``debug`` (Enable debugging output in the pgen parser)
@@ -738,9 +756,9 @@ incomplete:
  (typically a zipfile or directory)
* otherwise, it will be accurate:

  * the script name if running an ordinary script
  * ``-c`` if executing a supplied string
  * ``-`` or the empty string if running from stdin

* the metadata in the ``__main__`` module will still indicate it is a
  builtin module
@@ -791,24 +809,26 @@ call will take whatever steps are needed to populate ``main_code``:
  ``main_code``.
* For ``main_path``:

  * if the supplied path is recognised as a valid ``sys.path`` entry, it
    is inserted as ``sys.path[0]``, ``main_module`` is set
    to ``__main__`` and processing continues as for ``main_module`` below.
  * otherwise, path is read as a CPython bytecode file
  * if that fails, it is read as a Python source file and compiled
  * in the latter two cases, the code object is saved to ``main_code``
    and ``__main__.__file__`` is set appropriately

* For ``main_module``:

  * any parent package is imported
  * the loader for the module is determined
  * if the loader indicates the module is a package, add ``.__main__`` to
    the end of ``main_module`` and try again (if the final name segment
    is already ``.__main__`` then fail immediately)
  * once the module source code is located, save the compiled module code
    as ``main_code`` and populate the following attributes in ``__main__``
    appropriately: ``__name__``, ``__loader__``, ``__file__``,
    ``__cached__``, ``__package__``.
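The ``main_path`` behaviour written up above is essentially what the stdlib ``runpy`` module already provides; a minimal illustration using a throwaway script (the ``ANSWER`` name is purely for demonstration):

```python
import os
import runpy
import tempfile

# Write a tiny script, then execute it the way "python <path>" would:
# run_path() compiles the file, runs it with __name__ set to run_name,
# and returns the resulting module globals.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("ANSWER = 6 * 7\n")
    path = f.name

try:
    globs = runpy.run_path(path, run_name="__main__")
    print(globs["ANSWER"])    # 42
    print(globs["__name__"])  # __main__
finally:
    os.unlink(path)
```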
(Note: the behaviour described in this section isn't new, it's a write-up
@@ -1222,26 +1242,38 @@ TBD: Cover the initialization of the following in more detail:
* Completely disabling the import system
* The initial warning system state:

  * ``sys.warnoptions``
  * (-W option, PYTHONWARNINGS)

* Arbitrary extended options (e.g. to automatically enable ``faulthandler``):

  * ``sys._xoptions``
  * (-X option)

* The filesystem encoding used by:

  * ``sys.getfsencoding``
  * ``os.fsencode``
  * ``os.fsdecode``

* The IO encoding and buffering used by:

  * ``sys.stdin``
  * ``sys.stdout``
  * ``sys.stderr``
  * (-u option, PYTHONIOENCODING, PYTHONUNBUFFEREDIO)

* Whether or not to implicitly cache bytecode files:

  * ``sys.dont_write_bytecode``
  * (-B option, PYTHONDONTWRITEBYTECODE)

* Whether or not to enforce correct case in filenames on case-insensitive
  platforms:

  * ``os.environ["PYTHONCASEOK"]``

* The other settings exposed to Python code in ``sys.flags``:

  * ``debug`` (Enable debugging output in the pgen parser)
@@ -18,10 +18,10 @@ Add a new optional *cloexec* parameter on functions creating file
descriptors, add different ways to change default values of this
parameter, and add four new functions:

* ``os.get_cloexec(fd)``
* ``os.set_cloexec(fd, cloexec=True)``
* ``sys.getdefaultcloexec()``
* ``sys.setdefaultcloexec(cloexec)``

Rationale
@@ -86,14 +86,14 @@ with "too many files" because files are still open in the child process.
See also the following issues:

* `Issue #2320: Race condition in subprocess using stdin
  <http://bugs.python.org/issue2320>`_ (2008)
* `Issue #3006: subprocess.Popen causes socket to remain open after
  close <http://bugs.python.org/issue3006>`_ (2008)
* `Issue #7213: subprocess leaks open file descriptors between Popen
  instances causing hangs <http://bugs.python.org/issue7213>`_ (2009)
* `Issue #12786: subprocess wait() hangs when stdin is closed
  <http://bugs.python.org/issue12786>`_ (2011)

Security
@@ -112,20 +112,20 @@ See also the CERT recommendation:
Examples of vulnerabilities:

* `OpenSSH Security Advisory: portable-keysign-rand-helper.adv
  <http://www.openssh.com/txt/portable-keysign-rand-helper.adv>`_
  (April 2011)
* `CWE-403: Exposure of File Descriptor to Unintended Control Sphere
  <http://cwe.mitre.org/data/definitions/403.html>`_ (2008)
* `Hijacking Apache https by mod_php
  <http://www.securityfocus.com/archive/1/348368>`_ (Dec 2003)
* Apache: `Apr should set FD_CLOEXEC if APR_FOPEN_NOCLEANUP is not set
  <https://issues.apache.org/bugzilla/show_bug.cgi?id=46425>`_
  (fixed in 2009)
* PHP: `system() (and similar) don't cleanup opened handles of Apache
  <https://bugs.php.net/bug.php?id=38915>`_ (not fixed in January
  2013)

Atomicity
@@ -189,37 +189,37 @@ parameter.
Add new functions:

* ``os.get_cloexec(fd:int) -> bool``: get the
  close-on-exec flag of a file descriptor. Not available on all
  platforms.
* ``os.set_cloexec(fd:int, cloexec:bool=True)``: set or clear the
  close-on-exec flag on a file descriptor. Not available on all
  platforms.
* ``sys.getdefaultcloexec() -> bool``: get the current default value
  of the *cloexec* parameter
* ``sys.setdefaultcloexec(cloexec: bool)``: set the default value
  of the *cloexec* parameter

Add a new optional *cloexec* parameter to:

* ``asyncore.dispatcher.create_socket()``
* ``io.FileIO``
* ``io.open()``
* ``open()``
* ``os.dup()``
* ``os.dup2()``
* ``os.fdopen()``
* ``os.open()``
* ``os.openpty()``
* ``os.pipe()``
* ``select.devpoll()``
* ``select.epoll()``
* ``select.kqueue()``
* ``socket.socket()``
* ``socket.socket.accept()``
* ``socket.socket.dup()``
* ``socket.socket.fromfd``
* ``socket.socketpair()``

The default value of the *cloexec* parameter is
``sys.getdefaultcloexec()``.
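The proposed ``os.get_cloexec()``/``os.set_cloexec()`` names are the PEP's proposal, not a shipped API (Python 3.4 ultimately addressed this area with ``os.get_inheritable()``/``os.set_inheritable()`` via PEP 446). On POSIX, their intended behaviour can be sketched today with the stdlib ``fcntl`` module — the function names below mirror the proposal and are not real ``os`` functions:

```python
import fcntl
import os

def get_cloexec(fd):
    """Return True if the close-on-exec flag is set on *fd* (POSIX only)."""
    return bool(fcntl.fcntl(fd, fcntl.F_GETFD) & fcntl.FD_CLOEXEC)

def set_cloexec(fd, cloexec=True):
    """Set or clear the close-on-exec flag on *fd* (POSIX only)."""
    flags = fcntl.fcntl(fd, fcntl.F_GETFD)
    if cloexec:
        flags |= fcntl.FD_CLOEXEC
    else:
        flags &= ~fcntl.FD_CLOEXEC
    fcntl.fcntl(fd, fcntl.F_SETFD, flags)

r, w = os.pipe()
set_cloexec(r)
print(get_cloexec(r))  # True
os.close(r)
os.close(w)
```

Note that this read-modify-write sequence is exactly the non-atomic "two system calls per descriptor" pattern the PEP discusses: another thread could ``fork()`` and ``exec()`` between the two ``fcntl()`` calls.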
@@ -241,13 +241,13 @@ must be specified explicitly.
Drawbacks of the proposal:

* It is no longer possible to know whether the close-on-exec flag will
  be set on a newly created file descriptor just by reading the
  source code.
* If the inheritance of a file descriptor matters, the *cloexec*
  parameter must now be specified explicitly, or the library or the
  application will not work depending on the default value of the
  *cloexec* parameter.

Alternatives
@@ -288,23 +288,23 @@ parameter can be used.
Advantages of setting the close-on-exec flag by default:

* There are far more programs that are bitten by FD inheritance upon
  exec (see `Inherited file descriptors issues`_ and `Security`_)
  than programs relying on it (see `Applications using inheritance of
  file descriptors`_).

Drawbacks of setting the close-on-exec flag by default:

* It violates the principle of least surprise. Developers using the
  os module may expect that Python respects the POSIX standard, and so
  that the close-on-exec flag is not set by default.
* The os module is written as a thin wrapper around system calls (around
  functions of the C standard library). If atomic flags to set the
  close-on-exec flag are not supported (see `Appendix: Operating
  system support`_), a single Python function call may require 2 or 3
  system calls (see the `Performances`_ section).
* Extra system calls, if any, may slow down Python: see
  `Performances`_.
Backward compatibility: only a few programs rely on inheritance of file
descriptors, and they only pass a few file descriptors, usually just
@@ -329,20 +329,20 @@ descriptors just after a ``fork()``.
Drawbacks:

* It does not solve the problem on Windows: ``fork()`` does not exist
  on Windows.
* This alternative does not solve the problem for programs using
  ``exec()`` without ``fork()``.
* A third party module may call the C function ``fork()`` directly,
  which will not call the "atfork" callbacks.
* All functions creating file descriptors must be changed to register
  a callback and then unregister their callback when the file is
  closed. Or a list of *all* open file descriptors must be
  maintained.
* The operating system is a better place than Python to close file
  descriptors automatically. For example, it is not easy to
  avoid a race condition between closing the file and unregistering
  the callback closing the file.

open(): add "e" flag to mode
@@ -363,9 +363,9 @@ flag which uses ``O_NOINHERIT``.
Bikeshedding on the name of the new parameter
---------------------------------------------

* ``inherit``, ``inherited``: closer to the Windows definition
* ``sensitive``
* ``sterile``: "Does not produce offspring."
@@ -400,11 +400,11 @@ the previous child process crashed.
Examples of programs taking file descriptors from the parent process
using a command line option:

* gpg: ``--status-fd <fd>``, ``--logger-fd <fd>``, etc.
* openssl: ``-pass fd:<fd>``
* qemu: ``-add-fd <fd>``
* valgrind: ``--log-fd=<fd>``, ``--input-fd=<fd>``, etc.
* xterm: ``-S <fd>``

On Linux, it is possible to use the ``"/dev/fd/<fd>"`` filename to pass a
file descriptor to a program expecting a filename.
@@ -417,24 +417,24 @@ Setting close-on-exec flag may require additional system calls for
each creation of new file descriptors. The number of additional system
calls depends on the method used to set the flag:

* ``O_NOINHERIT``: no additional system call
* ``O_CLOEXEC``: one additional system call, but only at the creation
  of the first file descriptor, to check if the flag is supported. If
  the flag is not supported, Python has to fall back to the next method.
* ``ioctl(fd, FIOCLEX)``: one additional system call per file
  descriptor
* ``fcntl(fd, F_SETFD, flags)``: two additional system calls per file
  descriptor, one to get the old flags and one to set the new flags
On Linux, setting the close-on-exec flag has a low performance overhead.
Results of
`bench_cloexec.py <http://hg.python.org/peps/file/tip/pep-0433/bench_cloexec.py>`_
on Linux 3.6:

* close-on-exec flag not set: 7.8 us
* ``O_CLOEXEC``: 1% slower (7.9 us)
* ``ioctl()``: 3% slower (8.0 us)
* ``fcntl()``: 3% slower (8.0 us)
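A comparable micro-benchmark can be run with the stdlib ``timeit`` module (POSIX-only sketch using ``/dev/null``; absolute numbers depend on the machine and kernel, the 7.8 us figures above are from the PEP's own run):

```python
import os
import timeit

def open_close():
    # Baseline: open and close a descriptor without any extra flag.
    fd = os.open("/dev/null", os.O_RDONLY)
    os.close(fd)

def open_close_cloexec():
    # Same operation, but request close-on-exec atomically at creation.
    fd = os.open("/dev/null", os.O_RDONLY | os.O_CLOEXEC)
    os.close(fd)

n = 10000
base = timeit.timeit(open_close, number=n) / n
cloexec = timeit.timeit(open_close_cloexec, number=n) / n
print("plain open():     %.2f us" % (base * 1e6))
print("with O_CLOEXEC:   %.2f us" % (cloexec * 1e6))
```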
Implementation
@@ -522,52 +522,52 @@ instead of two syscalls for fcntl.
open()
------

* Windows: ``open()`` with ``O_NOINHERIT`` flag [atomic]
* ``open()`` with ``O_CLOEXEC`` flag [atomic]
* ``open()`` + ``os.set_cloexec(fd, True)`` [best-effort]

os.dup()
--------

* Windows: ``DuplicateHandle()`` [atomic]
* ``fcntl(fd, F_DUPFD_CLOEXEC)`` [atomic]
* ``dup()`` + ``os.set_cloexec(fd, True)`` [best-effort]

os.dup2()
---------

* ``fcntl(fd, F_DUP2FD_CLOEXEC, fd2)`` [atomic]
* ``dup3()`` with ``O_CLOEXEC`` flag [atomic]
* ``dup2()`` + ``os.set_cloexec(fd, True)`` [best-effort]

os.pipe()
---------

* Windows: ``CreatePipe()`` with
  ``SECURITY_ATTRIBUTES.bInheritHandle=TRUE``, or ``_pipe()`` with
  ``O_NOINHERIT`` flag [atomic]
* ``pipe2()`` with ``O_CLOEXEC`` flag [atomic]
* ``pipe()`` + ``os.set_cloexec(fd, True)`` [best-effort]

socket.socket()
---------------

* Windows: ``WSASocket()`` with ``WSA_FLAG_NO_HANDLE_INHERIT`` flag
  [atomic]
* ``socket()`` with ``SOCK_CLOEXEC`` flag [atomic]
* ``socket()`` + ``os.set_cloexec(fd, True)`` [best-effort]

socket.socketpair()
-------------------

* ``socketpair()`` with ``SOCK_CLOEXEC`` flag [atomic]
* ``socketpair()`` + ``os.set_cloexec(fd, True)`` [best-effort]

socket.socket.accept()
----------------------

* ``accept4()`` with ``SOCK_CLOEXEC`` flag [atomic]
* ``accept()`` + ``os.set_cloexec(fd, True)`` [best-effort]
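The [atomic]-with-[best-effort]-fallback pattern above can be sketched for ``os.pipe()`` on POSIX: try the atomic ``pipe2(O_CLOEXEC)`` first, and fall back to ``pipe()`` plus ``fcntl()`` where it is unavailable. ``pipe_cloexec`` is a hypothetical helper for illustration, not the PEP's actual implementation:

```python
import fcntl
import os

def pipe_cloexec():
    """Create a pipe with close-on-exec set, atomically when possible."""
    try:
        # Atomic: no window in which another thread could fork()+exec()
        # and leak the descriptors (Linux 2.6.27+, glibc 2.9+).
        return os.pipe2(os.O_CLOEXEC)
    except (AttributeError, OSError):
        # Best-effort fallback: create the pipe, then set the flag.
        r, w = os.pipe()
        for fd in (r, w):
            flags = fcntl.fcntl(fd, fcntl.F_GETFD)
            fcntl.fcntl(fd, fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)
        return r, w

r, w = pipe_cloexec()
os.close(r)
os.close(w)
```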
Backward compatibility
@@ -603,8 +603,8 @@ ioctl
Functions:

* ``ioctl(fd, FIOCLEX, 0)``: set the close-on-exec flag
* ``ioctl(fd, FIONCLEX, 0)``: clear the close-on-exec flag

Availability: Linux, Mac OS X, QNX, NetBSD, OpenBSD, FreeBSD.
@@ -614,10 +614,10 @@ fcntl
Functions:

* ``flags = fcntl(fd, F_GETFD); fcntl(fd, F_SETFD, flags | FD_CLOEXEC)``:
  set the close-on-exec flag
* ``flags = fcntl(fd, F_GETFD); fcntl(fd, F_SETFD, flags & ~FD_CLOEXEC)``:
  clear the close-on-exec flag

Availability: AIX, Digital UNIX, FreeBSD, HP-UX, IRIX, Linux, Mac OS
X, OpenBSD, Solaris, SunOS, Unicos.
@@ -628,20 +628,20 @@ Atomic flags
New flags:

* ``O_CLOEXEC``: available on Linux (2.6.23), FreeBSD (8.3),
  OpenBSD 5.0, Solaris 11, QNX, BeOS, next NetBSD release (6.1?).
  This flag is part of POSIX.1-2008.
* ``SOCK_CLOEXEC`` flag for ``socket()`` and ``socketpair()``,
  available on Linux 2.6.27, OpenBSD 5.2, NetBSD 6.0.
* ``WSA_FLAG_NO_HANDLE_INHERIT`` flag for ``WSASocket()``: supported
  on Windows 7 with SP1, Windows Server 2008 R2 with SP1, and later
* ``fcntl()``: ``F_DUPFD_CLOEXEC`` flag, available on Linux 2.6.24,
  OpenBSD 5.0, FreeBSD 9.1, NetBSD 6.0, Solaris 11. This flag is part
  of POSIX.1-2008.
* ``fcntl()``: ``F_DUP2FD_CLOEXEC`` flag, available on FreeBSD 9.1
  and Solaris 11.
* ``recvmsg()``: ``MSG_CMSG_CLOEXEC``, available on Linux 2.6.23,
  NetBSD 6.0.

On Linux older than 2.6.23, the ``O_CLOEXEC`` flag is simply ignored. So
we have to check that the flag is supported by calling ``fcntl()``. If
@@ -657,9 +657,9 @@ On Windows XPS3, ``WSASocket()`` with ``WSAEPROTOTYPE`` when
New functions:

* ``dup3()``: available on Linux 2.6.27 (and glibc 2.9)
* ``pipe2()``: available on Linux 2.6.27 (and glibc 2.9)
* ``accept4()``: available on Linux 2.6.28 (and glibc 2.10)

If ``accept4()`` is called on Linux older than 2.6.28, ``accept4()``
returns ``-1`` (fail) and ``errno`` is set to ``ENOSYS``.
@@ -670,55 +670,55 @@ Links
Links:

* `Secure File Descriptor Handling
  <http://udrepper.livejournal.com/20407.html>`_ (Ulrich Drepper,
  2008)
* `win32_support.py of the Tornado project
  <https://bitbucket.org/pvl/gaeseries-tornado/src/c2671cea1842/tornado/win32_support.py>`_:
  emulate ``fcntl(fd, F_SETFD, FD_CLOEXEC)`` using
  ``SetHandleInformation(fd, HANDLE_FLAG_INHERIT, 1)``
* `LKML: [PATCH] nextfd(2)
  <https://lkml.org/lkml/2012/4/1/71>`_
Python issues:

* `#10115: Support accept4() for atomic setting of flags at socket
  creation <http://bugs.python.org/issue10115>`_
* `#12105: open() does not able to set flags, such as O_CLOEXEC
  <http://bugs.python.org/issue12105>`_
* `#12107: TCP listening sockets created without FD_CLOEXEC flag
  <http://bugs.python.org/issue12107>`_
* `#16500: Add an atfork module
  <http://bugs.python.org/issue16500>`_
* `#16850: Add "e" mode to open(): close-and-exec
  (O_CLOEXEC) / O_NOINHERIT <http://bugs.python.org/issue16850>`_
* `#16860: Use O_CLOEXEC in the tempfile module
  <http://bugs.python.org/issue16860>`_
* `#17036: Implementation of the PEP 433
  <http://bugs.python.org/issue17036>`_
* `#16946: subprocess: _close_open_fd_range_safe() does not set
  close-on-exec flag on Linux < 2.6.23 if O_CLOEXEC is defined
  <http://bugs.python.org/issue16946>`_
* `#17070: PEP 433: Use the new cloexec to improve security and avoid
  bugs <http://bugs.python.org/issue17070>`_
Other languages:

* Perl sets the close-on-exec flag on newly created file descriptors if
  their number is greater than ``$SYSTEM_FD_MAX`` (``$^F``).
  See the `$SYSTEM_FD_MAX documentation
  <http://perldoc.perl.org/perlvar.html#%24SYSTEM_FD_MAX>`_. Perl has done
  this since the creation of Perl (it was already present in Perl 1).
* Ruby: `Set FD_CLOEXEC for all fds (except 0, 1, 2)
  <http://bugs.ruby-lang.org/issues/5041>`_
* Ruby: `O_CLOEXEC flag missing for Kernel::open
  <http://bugs.ruby-lang.org/issues/1291>`_: the
  `commit was reverted later
  <http://bugs.ruby-lang.org/projects/ruby-trunk/repository/revisions/31643>`_
* OCaml: `PR#5256: Processes opened using Unix.open_process* inherit
  all opened file descriptors (including sockets)
  <http://caml.inria.fr/mantis/view.php?id=5256>`_. OCaml has a
  ``Unix.set_close_on_exec`` function.
Footnotes
@@ -732,3 +732,18 @@ Footnotes
has a descriptor smaller than 3, ``ValueError`` is raised. has a descriptor smaller than 3, ``ValueError`` is raised.
Copyright
=========
This document has been placed in the public domain.
..
Local Variables:
mode: indented-text
indent-tabs-mode: nil
sentence-end-double-space: t
fill-column: 70
coding: utf-8
End:
View File
@@ -313,28 +313,28 @@ The grammar is conflict-free and available in ml-yacc readable BNF form.
Two tools are available:

* *printsemant* reads a converter header and a .c file and dumps
  the semantically checked parse tree to stdout.
* *preprocess* reads a converter header and a .c file and dumps
  the preprocessed .c file to stdout.

Known deficiencies:

* The Python 'test' expression is not semantically checked. The syntax
  however is checked since it is part of the grammar.
* The lexer does not handle triple quoted strings.
* C declarations are parsed in a primitive way. The final implementation
  should utilize 'declarator' and 'init-declarator' from the C grammar.
* The *preprocess* tool does not emit code for the left-and-right optional
  arguments case. The *printsemant* tool can deal with this case.
* Since the *preprocess* tool generates the output from the parse
  tree, the original indentation of the define block is lost.
Grammar

@@ -350,37 +350,37 @@ Comparison with PEP 436
The author of this PEP has the following concerns about the DSL proposed
in PEP 436:

* The whitespace-sensitive, configuration-file-like syntax looks out
  of place in a C file.
* The structure of the function definition gets lost in the per-parameter
  specifications. Keywords like positional-only, required and keyword-only
  are scattered across too many different places.

  By contrast, in the alternative DSL the structure of the function
  definition can be understood at a single glance.
* The PEP 436 DSL has 14 documented flags and at least one undocumented
  (allow_fd) flag. Figuring out which of the 2**15 possible combinations
  are valid places an unnecessary burden on the user.

  Experience with the PEP-3118 buffer flags has shown that sorting out
  (and exhaustively testing!) valid combinations is an extremely tedious
  task. The PEP-3118 flags are still not well understood by many people.

  By contrast, the alternative DSL has a central file Include/converters.h
  that can be quickly searched for the desired converter. Many of the
  converters are already known, perhaps even memorized by people (due
  to frequent use).
* The PEP 436 DSL allows too much freedom. Types can apparently be omitted,
  the preprocessor accepts (and ignores) unknown keywords, and sometimes
  adding white space after a docstring results in an assertion error.

  The alternative DSL on the other hand allows no such freedoms. Omitting
  converter or return value annotations is plainly a syntax error. The
  LALR(1) grammar is unambiguous and specified for the complete translation
  unit.
Copyright
View File
@@ -330,9 +330,9 @@ flag is missing. On Linux older than 2.6.27, ``socket()`` or
New functions:

* ``dup3()``: available on Linux 2.6.27 (and glibc 2.9)
* ``pipe2()``: available on Linux 2.6.27 (and glibc 2.9)
* ``accept4()``: available on Linux 2.6.28 (and glibc 2.10)

On Linux older than 2.6.28, ``accept4()`` fails with ``errno`` set to
``ENOSYS``.
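Python has exposed ``os.pipe2()`` since 3.3, so the atomic close-on-exec creation these functions provide can be demonstrated directly (Linux-only sketch):

```python
import os

# pipe2() creates both descriptors with O_CLOEXEC atomically,
# avoiding the race window of a separate fcntl() call after pipe().
r, w = os.pipe2(os.O_CLOEXEC)

# Close-on-exec descriptors are reported as non-inheritable.
assert not os.get_inheritable(r)
assert not os.get_inheritable(w)

os.close(r)
os.close(w)
```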
@@ -468,23 +468,23 @@ Non-inheritable File Descriptors
The following functions are modified to make newly created file
descriptors non-inheritable by default:

* ``asyncore.dispatcher.create_socket()``
* ``io.FileIO``
* ``io.open()``
* ``open()``
* ``os.dup()``
* ``os.fdopen()``
* ``os.open()``
* ``os.openpty()``
* ``os.pipe()``
* ``select.devpoll()``
* ``select.epoll()``
* ``select.kqueue()``
* ``socket.socket()``
* ``socket.socket.accept()``
* ``socket.socket.dup()``
* ``socket.socket.fromfd()``
* ``socket.socketpair()``
``os.dup2()`` still creates inheritable file descriptors by default, see below.
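The new inheritance state can be inspected and changed with ``os.get_inheritable()`` and ``os.set_inheritable()``, both added in Python 3.4; a short sketch:

```python
import os

r, w = os.pipe()

# Under PEP 446, newly created descriptors are non-inheritable.
assert not os.get_inheritable(r)

# os.dup() follows the new default; os.dup2() still produces an
# inheritable descriptor unless inheritable=False is passed.
d = os.dup(r)
assert not os.get_inheritable(d)
os.dup2(r, d)
assert os.get_inheritable(d)

# Inheritance can be flipped explicitly when a descriptor must
# survive into a child process after exec().
os.set_inheritable(r, True)
assert os.get_inheritable(r)

os.close(r)
os.close(w)
os.close(d)
```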
View File
@@ -62,14 +62,14 @@ arguments by name::
In addition, there are some functions with particularly
interesting semantics:

* ``range()``, which accepts an optional parameter
  to the *left* of its required parameter. [#RANGE]_
* ``dict()``, whose mapping/iterator parameter is optional and
  semantically must be positional-only. Any externally
  visible name for this parameter would occlude
  that name going into the ``**kwarg`` keyword variadic
  parameter dict! [#DICT]_

Obviously one can simulate any of these in pure Python code
by accepting ``(*args, **kwargs)`` and parsing the arguments
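Both behaviours are easy to observe from the interpreter; a small illustration:

```python
# range() takes its optional parameter to the *left* of the
# required one: with one argument, that argument is the stop value.
assert list(range(3)) == [0, 1, 2]
assert list(range(1, 3)) == [1, 2]

# dict()'s mapping parameter is positional-only: a keyword of any
# name simply lands in the keyword dict instead of occluding it.
assert dict([("a", 1)]) == {"a": 1}
assert dict(mapping=[("a", 1)]) == {"mapping": [("a", 1)]}
```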
@@ -85,17 +85,17 @@ This PEP does not propose we implement positional-only
parameters in Python. The goal of this PEP is simply
to define the syntax, so that:

* Documentation can clearly, unambiguously, and
  consistently express exactly how the arguments
  for a function will be interpreted.
* The syntax is reserved for future use, in case
  the community decides someday to add positional-only
  parameters to the language.
* Argument Clinic can use a variant of the syntax
  as part of its input when defining
  the arguments for built-in functions.
=================================================================
The Current State Of Documentation For Positional-Only Parameters
@@ -179,28 +179,28 @@ in the following way:
More semantics of positional-only parameters:

* Although positional-only parameters technically have names,
  these names are internal-only; positional-only parameters
  are *never* externally addressable by name. (Similarly
  to ``*args`` and ``**kwargs``.)
* It's possible to nest option groups.
* If there are no required parameters, all option groups behave
  as if they're to the right of the required parameter group.
* For clarity and consistency, the comma for a parameter always
  comes immediately after the parameter name. It's a syntax error
  to specify a square bracket between the name of a parameter and
  the following comma. (This is far more readable than putting
  the comma outside the square bracket, particularly for nested
  groups.)
* If there are arguments after the ``/``, then you must specify
  a comma after the ``/``, just as there is a comma
  after the ``*`` denoting the shift to keyword-only parameters.
* This syntax has no effect on ``*args`` or ``**kwargs``.

It's possible to specify a function prototype where the mapping
of arguments to parameters is ambiguous. Consider::
@@ -273,9 +273,9 @@ Unresolved Questions
There are three types of parameters in Python:

1. positional-only parameters,
2. positional-or-keyword parameters, and
3. keyword-only parameters.

Python allows functions to have both 2 and 3. And some
builtins (e.g. range) have both 1 and 3. Does it make
View File
@@ -95,22 +95,22 @@ Examples::
``%b`` will insert a series of bytes. These bytes are collected in one of two
ways::

- input type supports ``Py_buffer`` [4]_?
  use it to collect the necessary bytes
- input type is something else?
  use its ``__bytes__`` method [5]_ ; if there isn't one, raise a ``TypeError``

In particular, ``%b`` will not accept numbers nor ``str``. ``str`` is rejected
as the string to bytes conversion requires an encoding, and we are refusing to
guess; numbers are rejected because:

- what makes a number is fuzzy (float? Decimal? Fraction? some user type?)
- allowing numbers would lead to ambiguity between numbers and textual
  representations of numbers (3.14 vs '3.14')
- given the nature of wire formats, explicit is definitely better than implicit

``%s`` is included as a synonym for ``%b`` for the sole purpose of making 2/3 code
bases easier to maintain. Python 3 only code should use ``%b``.
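As implemented in Python 3.5, the behaviour can be checked directly:

```python
# %b accepts bytes and other buffer-protocol objects.
assert b"%b" % b"abc" == b"abc"
assert b"%b" % (bytearray(b"abc"),) == b"abc"

# %s is a synonym for %b, kept only to ease 2/3-compatible code.
assert b"%s" % b"abc" == b"abc"

# str is rejected: no implicit encoding guesses.
try:
    b"%b" % "abc"
except TypeError:
    pass
else:
    raise AssertionError("str should be rejected by %b")
```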
@@ -177,15 +177,15 @@ Proposed variations
It has been proposed to automatically use ``.encode('ascii','strict')`` for
``str`` arguments to ``%b``.

- Rejected as this would lead to intermittent failures. Better to have the
  operation always fail so the trouble-spot can be correctly fixed.

It has been proposed to have ``%b`` return the ascii-encoded repr when the
value is a ``str`` (b'%b' % 'abc' --> b"'abc'").

- Rejected as this would lead to hard to debug failures far from the problem
  site. Better to have the operation always fail so the trouble-spot can be
  easily fixed.

Originally this PEP also proposed adding format-style formatting, but it was
decided that format and its related machinery were all strictly text (aka
@@ -204,12 +204,12 @@ Objections
The objections raised against this PEP were mainly variations on two themes:

- the ``bytes`` and ``bytearray`` types are for pure binary data, with no
  assumptions about encodings
- offering %-interpolation that assumes an ASCII encoding will be an
  attractive nuisance and lead us back to the problems of the Python 2
  ``str``/``unicode`` text model

As was seen during the discussion, ``bytes`` and ``bytearray`` are also used
for mixed binary data and ASCII-compatible segments: file formats such as
View File
@@ -136,12 +136,12 @@ When an indexing operation is performed, ``__getitem__(self, idx)`` is called.
Traditionally, the full content between square brackets is turned into a single
object passed to argument ``idx``:

- When a single element is passed, e.g. ``a[2]``, ``idx`` will be ``2``.
- When multiple elements are passed, they must be separated by commas: ``a[2, 3]``.
  In this case, ``idx`` will be a tuple ``(2, 3)``. With ``a[2, 3, "hello", {}]``
  ``idx`` will be ``(2, 3, "hello", {})``.
- A slicing notation e.g. ``a[2:10]`` will produce a slice object, or a tuple
  containing slice objects if multiple values were passed.
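The three cases above can be demonstrated with a tiny class:

```python
class Probe:
    """Records whatever object arrives at __getitem__."""
    def __getitem__(self, idx):
        return idx

a = Probe()
assert a[2] == 2                        # single element
assert a[2, 3] == (2, 3)                # comma-separated -> tuple
assert a[2:10] == slice(2, 10)          # slice notation -> slice object
assert a[2:10, 3] == (slice(2, 10), 3)  # mixed -> tuple containing slices
```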
Except for its unique ability to handle slice notation, the indexing operation
has similarities to a plain method call: it acts like one when invoked with
View File
@@ -37,9 +37,9 @@ generated -- namely no optimizations beyond the peepholer -- the same
is not true for PYO files. To put this in terms of optimization
levels and the file extension:

- 0: ``.pyc``
- 1 (``-O``): ``.pyo``
- 2 (``-OO``): ``.pyo``

The reuse of the ``.pyo`` file extension for both level 1 and 2
optimizations means that there is no clear way to tell what
@@ -85,9 +85,9 @@ will be specified in the file name). For example, a source file named
based on the interpreter's optimization level (none, ``-O``, and
``-OO``):

- 0: ``foo.cpython-35.pyc`` (i.e., no change)
- 1: ``foo.cpython-35.opt-1.pyc``
- 2: ``foo.cpython-35.opt-2.pyc``
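Since Python 3.5, ``importlib.util.cache_from_source()`` accepts an ``optimization`` argument that produces exactly these names (the ``cpython-NN`` tag varies with the running interpreter):

```python
import importlib.util

# An empty optimization string omits the opt- segment entirely;
# other values are embedded as an ".opt-<value>" tag.
plain = importlib.util.cache_from_source("foo.py", optimization="")
opt1 = importlib.util.cache_from_source("foo.py", optimization=1)
opt2 = importlib.util.cache_from_source("foo.py", optimization=2)

assert plain.endswith(".pyc") and ".opt-" not in plain
assert opt1.endswith(".opt-1.pyc")
assert opt2.endswith(".opt-2.pyc")
```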
Currently bytecode file names are created by
``importlib.util.cache_from_source()``, approximately using the
View File
@@ -465,9 +465,9 @@ python-ideas discussion
Most of the discussions on python-ideas [#]_ focused on three issues:

- How to denote f-strings,
- How to specify the location of expressions in f-strings, and
- Whether to allow full Python expressions.

How to denote f-strings
***********************
View File
@@ -72,12 +72,12 @@ when conceived in terms of ``tau`` rather than ``pi``. If you don't find my
specific examples sufficiently persuasive, here are some more resources that
may be of interest:

* Michael Hartl is the primary instigator of Tau Day in his `Tau Manifesto`_
* Bob Palais, the author of the original mathematics journal article
  highlighting the problems with ``pi``, has `a page of resources`_ on the
  topic
* For those that prefer videos to written text, `Pi is wrong!`_ and
  `Pi is (still) wrong`_ are available on YouTube

.. _Tau Manifesto: http://tauday.com/
.. _Pi is (still) wrong: http://www.youtube.com/watch?v=jG7vhMMXagQ