Convert PEPs 261, 267, 325, 358, 361 (#204)

* Convert PEPs 261, 267, 325, 358, 361

* Fixes to PEP 261 and PEP 361
Mariatta 2017-02-10 14:19:22 -08:00 committed by GitHub
parent c5881cf2b5
commit 9c9560962a
5 changed files with 1060 additions and 986 deletions


@ -5,12 +5,14 @@ Last-Modified: $Date$
Author: Paul Prescod <paul@prescod.net>
Status: Final
Type: Standards Track
Content-Type: text/x-rst
Created: 27-Jun-2001
Python-Version: 2.2
Post-History: 27-Jun-2001
Abstract
========
Python 2.1 unicode characters can have ordinals only up to 2**16 -1.
This range corresponds to a range in Unicode known as the Basic
@ -22,14 +24,13 @@ Abstract
Glossary
========
Character
Used by itself, means the addressable units of a Python
Unicode string.
Code point
A code point is an integer between 0 and TOPCHAR.
If you imagine Unicode as a mapping from integers to
characters, each integer is a code point. But the
@ -39,33 +40,30 @@ Glossary
to be used for characters.
Codec
A set of functions for translating between physical
encodings (e.g. on disk or coming in from a network)
into logical Python objects.
Encoding
Mechanism for representing abstract characters in terms of
physical bits and bytes. Encodings allow us to store
Unicode characters on disk and transmit them over networks
in a manner that is compatible with other Unicode software.
Surrogate pair
Two physical characters that represent a single logical
character. Part of a convention for representing 32-bit
code points in terms of two 16-bit code points.
Unicode string
A Python type representing a sequence of code points with
"string semantics" (e.g. case conversions, regular
expression compatibility, etc.) Constructed with the
unicode() function.
``unicode()`` function.
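The surrogate-pair convention in the glossary can be sketched numerically. This is a Python 3 illustration of the standard UTF-16 arithmetic, not code from the PEP; the code point U+10400 is an arbitrary example.

```python
# Split a supplementary code point into a UTF-16 surrogate pair.
cp = 0x10400
assert cp > 0xFFFF                 # only code points above the BMP need a pair
offset = cp - 0x10000
high = 0xD800 + (offset >> 10)     # high (lead) surrogate
low = 0xDC00 + (offset & 0x3FF)    # low (trail) surrogate
assert (high, low) == (0xD801, 0xDC00)
# Round-trip the pair back to the original code point:
assert 0x10000 + ((high - 0xD800) << 10) + (low - 0xDC00) == cp
```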
Proposed Solution
=================
One solution would be to merely increase the maximum ordinal
to a larger value. Unfortunately the only straightforward
@ -76,8 +74,8 @@ Proposed Solution
build-time option. Users can choose whether they care about
wide characters or prefer to preserve memory.
The 4-byte option is called "wide Py_UNICODE". The 2-byte option
is called "narrow Py_UNICODE".
The 4-byte option is called ``wide Py_UNICODE``. The 2-byte option
is called ``narrow Py_UNICODE``.
Most things will behave identically in the wide and narrow worlds.
@ -86,11 +84,11 @@ Proposed Solution
* unichr(i) for 2**16 <= i <= TOPCHAR will return a
length-one string on wide Python builds. On narrow builds it will
raise ValueError.
raise ``ValueError``.
ISSUE
Python currently allows \U literals that cannot be
Python currently allows ``\U`` literals that cannot be
represented as a single Python character. It generates two
Python characters known as a "surrogate pair". Should this
be disallowed on future narrow Python builds?
@ -135,14 +133,16 @@ Proposed Solution
careful of these characters which are disallowed by the
Unicode specification.
* ord() is always the inverse of unichr()
* ``ord()`` is always the inverse of ``unichr()``
* There is an integer value in the sys module that describes the
largest ordinal for a character in a Unicode string on the current
interpreter. sys.maxunicode is 2**16-1 (0xffff) on narrow builds
interpreter. ``sys.maxunicode`` is 2**16-1 (0xffff) on narrow builds
of Python and TOPCHAR on wide builds.
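As a quick check of the invariant above against today's Python 3 (where every build is effectively "wide" and ``unichr()`` is spelled ``chr()``):

```python
import sys

# All Python 3 builds use the full Unicode range.
assert sys.maxunicode == 0x10FFFF
c = chr(0x10000)           # unichr() in the PEP's Python 2 spelling
assert len(c) == 1         # one code point, not a surrogate pair
assert ord(c) == 0x10000   # ord() is the inverse of chr()
```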
ISSUE: Should there be distinct constants for accessing
ISSUE:
Should there be distinct constants for accessing
TOPCHAR and the real upper bound for the domain of
unichr (if they differ)? There has also been a
suggestion of sys.unicodewidth which can take the
@ -174,7 +174,6 @@ Proposed Solution
fixed-width characters and does not have to worry about
surrogates.
Con:
No clear proposal of how to communicate this to codecs.
@ -183,30 +182,33 @@ Proposed Solution
code points "reserved for surrogates" improperly. These are
called "isolated surrogates". The codecs should disallow reading
these from files, but you could construct them using string
literals or unichr().
literals or ``unichr()``.
Implementation
==============
There is a new define:
There is a new define::
#define Py_UNICODE_SIZE 2
To test whether UCS2 or UCS4 is in use, the derived macro
Py_UNICODE_WIDE should be used, which is defined when UCS-4 is in
``Py_UNICODE_WIDE`` should be used, which is defined when UCS-4 is in
use.
There is a new configure option:
===================== ==========================================
--enable-unicode=ucs2 configures a narrow Py_UNICODE, and uses
wchar_t if it fits
--enable-unicode=ucs4 configures a wide Py_UNICODE, and uses
wchar_t if it fits
--enable-unicode same as "=ucs2"
--disable-unicode entirely remove the Unicode functionality.
===================== ==========================================
It is also proposed that one day --enable-unicode will just
default to the width of your platforms wchar_t.
It is also proposed that one day ``--enable-unicode`` will just
default to the width of your platform's ``wchar_t``.
Windows builds will be narrow for a while based on the fact that
there have been few requests for wide characters, those requests
@ -216,6 +218,7 @@ Implementation
Notes
=====
This PEP does NOT imply that people using Unicode need to use a
4-byte encoding for their files on disk or sent over the network.
@ -230,6 +233,7 @@ Notes
Rejected Suggestions
====================
More or less the status-quo
@ -269,15 +273,18 @@ Rejected Suggestions
References
==========
Unicode Glossary: http://www.unicode.org/glossary/
Copyright
=========
This document has been placed in the public domain.
..
Local Variables:
mode: indented-text
indent-tabs-mode: nil


@ -5,11 +5,14 @@ Last-Modified: $Date$
Author: jeremy@alum.mit.edu (Jeremy Hylton)
Status: Deferred
Type: Standards Track
Content-Type: text/x-rst
Created: 23-May-2001
Python-Version: 2.2
Post-History:
Deferral
========
While this PEP is a nice idea, no-one has yet emerged to do the work of
hashing out the differences between this PEP, PEP 266 and PEP 280.
@ -17,6 +20,7 @@ Deferral
Abstract
========
This PEP proposes a new implementation of global module namespaces
and the builtin namespace that speeds name resolution. The
@ -44,6 +48,7 @@ Abstract
Introduction
============
This PEP proposes a new implementation of attribute access for
module objects that optimizes access to module variables known at
@ -63,17 +68,18 @@ Introduction
DLict design
============
The namespaces are implemented using a data structure that has
sometimes gone under the name dlict. It is a dictionary that has
sometimes gone under the name ``dlict``. It is a dictionary that has
numbered slots for some dictionary entries. The type must be
implemented in C to achieve acceptable performance. The new
type-class unification work should make this fairly easy. The
DLict will presumably be a subclass of dictionary with an
``DLict`` will presumably be a subclass of dictionary with an
alternate storage module for some keys.
A Python implementation is included here to illustrate the basic
design:
design::
"""A dictionary-list hybrid"""
@ -183,6 +189,7 @@ DLict design
Compiler issues
===============
The compiler currently collects the names of all global variables
in a module. These are names bound at the module level or bound
@ -202,6 +209,7 @@ Compiler issues
Runtime model
=============
The PythonVM will be extended with new opcodes to access globals
and module attributes via a module-level array.
@ -209,10 +217,10 @@ Runtime model
A function object would need to point to the module that defined
it in order to provide access to the module-level global array.
For module attributes stored in the dlict (call them static
For module attributes stored in the ``dlict`` (call them static
attributes), the get/delattr implementation would need to track
access to these attributes using the old by-name interface. If a
static attribute is updated dynamically, e.g.
static attribute is updated dynamically, e.g.::
mod.__dict__["foo"] = 2
@ -221,8 +229,9 @@ Runtime model
Backwards compatibility
=======================
The dlict will need to maintain meta-information about whether a
The ``dlict`` will need to maintain meta-information about whether a
slot is currently used or not. It will also need to maintain a
pointer to the builtin namespace. When a name is not currently
used in the global namespace, the lookup will have to fail over to
@ -232,7 +241,7 @@ Backwards compatibility
function for the builtin namespace that checks to see if a global
shadowing the builtin has been added dynamically. This check
would only occur if there was a dynamic change to the module's
dlict, i.e. when a name is bound that wasn't discovered at
``dlict``, i.e. when a name is bound that wasn't discovered at
compile-time.
These mechanisms would have little if any cost for the common case
@ -247,11 +256,12 @@ Backwards compatibility
Related PEPs
============
PEP 266, Optimizing Global Variable/Attribute Access, proposes a
different mechanism for optimizing access to global variables as
well as attributes of objects. The mechanism uses two new opcodes
TRACK_OBJECT and UNTRACK_OBJECT to create a slot in the local
``TRACK_OBJECT`` and ``UNTRACK_OBJECT`` to create a slot in the local
variables array that aliases the global or object attribute. If
the object being aliased is rebound, the rebind operation is
responsible for updating the aliases.
@ -273,11 +283,13 @@ Related PEPs
Copyright
=========
This document has been placed in the public domain.
..
Local Variables:
mode: indented-text
indent-tabs-mode: nil


@ -5,13 +5,14 @@ Last-Modified: $Date$
Author: Samuele Pedroni <pedronis@python.org>
Status: Rejected
Type: Standards Track
Content-Type: text/plain
Content-Type: text/x-rst
Created: 25-Aug-2003
Python-Version: 2.4
Post-History:
Abstract
========
Generators allow for natural coding and abstraction of traversal
over data. Currently if external resources needing proper timely
@ -26,12 +27,16 @@ Abstract
on yield placement can be lifted, expanding the applicability of
generators.
Pronouncement
=============
Rejected in favor of PEP 342 which includes substantially all of
the requested behavior in a more refined form.
Rationale
=========
Python generators allow for natural coding of many data traversal
scenarios. Their instantiation produces iterators,
@ -45,7 +50,7 @@ Rationale
handling and proper resource acquisition and release.
Let's consider an example (for simplicity, files in read-mode are
used):
used)::
def all_lines(index_path):
for path in file(index_path, "r"):
@ -59,7 +64,7 @@ Rationale
files opened depending on the contents of the index).
If we want timely release, we have to sacrifice the simplicity and
directness of the generator-only approach: (e.g.)
directness of the generator-only approach: (e.g.)::
class AllLines:
@ -83,7 +88,7 @@ Rationale
if self.document:
self.document.close()
to be used as:
to be used as::
all_lines = AllLines("index.txt")
try:
@ -97,7 +102,7 @@ Rationale
traversal in an object (iterator) with a close method.
This PEP proposes that generators should grow such a close method
with such semantics that the example could be rewritten as:
with such semantics that the example could be rewritten as::
# Today this is not valid Python: yield is not allowed between
# try and finally, and generator type instances support no
@ -123,7 +128,7 @@ Rationale
finally:
all.close() # close on generator
Currently PEP 255 [1] disallows yield inside a try clause of a
Currently PEP 255 [1]_ disallows yield inside a try clause of a
try-finally statement, because the execution of the finally clause
cannot be guaranteed as required by try-finally semantics.
@ -137,7 +142,7 @@ Rationale
The semantics of generator destruction on the other hand should be
extended in order to implement a best-effort policy for the
general case. Specifically, destruction should invoke close().
general case. Specifically, destruction should invoke ``close()``.
The best-effort limitation comes from the fact that the
destructor's execution is not guaranteed in the first place.
@ -146,13 +151,14 @@ Rationale
Possible Semantics
==================
The built-in generator type should have a close method
implemented, which can then be invoked as:
implemented, which can then be invoked as::
gen.close()
where gen is an instance of the built-in generator type.
where ``gen`` is an instance of the built-in generator type.
Generator destruction should also invoke close method behavior.
If a generator is already terminated, close should be a no-op.
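Generators did eventually grow a ``close()`` method (via PEP 342), with essentially the semantics described here. A small Python 3 check of the two properties above, that ``finally`` clauses run on close and that closing a finished generator is a no-op:

```python
log = []

def all_lines():
    try:
        yield "line"
    finally:
        log.append("released")   # the resource-release step

g = all_lines()
next(g)      # suspend the generator inside the try block
g.close()    # raises GeneratorExit at the yield; the finally clause runs
g.close()    # already terminated: a no-op, no second release
```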
@ -184,16 +190,16 @@ Possible Semantics
implementation should consume and not propagate further this
exception.
Issues: should StopIteration be reused for this purpose? Probably
Issues: should ``StopIteration`` be reused for this purpose? Probably
not. We would like close to be a harmless operation for legacy
generators, which could contain code catching StopIteration to
generators, which could contain code catching ``StopIteration`` to
deal with other generators/iterators.
In general, with exception semantics, it is unclear what to do if
the generator does not terminate or we do not receive the special
exception propagated back. Other different exceptions should
probably be propagated, but consider this possible legacy
generator code:
generator code::
try:
...
@ -214,6 +220,7 @@ Possible Semantics
Remarks
=======
If this proposal is accepted, it should become common practice to
document whether a generator acquires resources, so that its close
@ -227,13 +234,14 @@ Remarks
The rare case of code that has acquired ownership of and need to
properly deal with all of iterators, generators and generators
acquiring resources that need timely release, is easily solved:
acquiring resources that need timely release, is easily solved::
if hasattr(iterator, 'close'):
iterator.close()
Open Issues
===========
Definitive semantics ought to be chosen. Currently Guido favors
Exception Semantics. If the generator yields a value instead of
@ -248,6 +256,7 @@ Open Issues
Alternative Ideas
=================
The idea that the yield placement limitation should be removed and
that generator destruction should trigger execution of finally
@ -255,7 +264,7 @@ Alternative Ideas
guarantee that timely release of resources acquired by a generator
can be enforced.
PEP 288 [2] proposes a more general solution, allowing custom
PEP 288 [2]_ proposes a more general solution, allowing custom
exception passing to generators. The proposal in this PEP
addresses more directly the problem of resource release. Were PEP
288 implemented, Exceptions Semantics for close could be layered
@ -264,20 +273,23 @@ Alternative Ideas
References
==========
[1] PEP 255 Simple Generators
.. [1] PEP 255 Simple Generators
http://www.python.org/dev/peps/pep-0255/
[2] PEP 288 Generators Attributes and Exceptions
.. [2] PEP 288 Generators Attributes and Exceptions
http://www.python.org/dev/peps/pep-0288/
Copyright
=========
This document has been placed in the public domain.
..
Local Variables:
mode: indented-text
indent-tabs-mode: nil


@ -5,18 +5,20 @@ Last-Modified: $Date$
Author: Neil Schemenauer <nas@arctrix.com>, Guido van Rossum <guido@python.org>
Status: Final
Type: Standards Track
Content-Type: text/plain
Content-Type: text/x-rst
Created: 15-Feb-2006
Python-Version: 2.6, 3.0
Post-History:
Update
======
This PEP has partially been superseded by PEP 3137.
Abstract
========
This PEP outlines the introduction of a raw bytes sequence type.
Adding the bytes type is one step in the transition to
@ -31,6 +33,7 @@ Abstract
Motivation
==========
Python's current string objects are overloaded. They serve to hold
both sequences of characters and sequences of bytes. This
@ -42,17 +45,18 @@ Motivation
Specification
=============
A bytes object stores a mutable sequence of integers that are in
the range 0 to 255. Unlike string objects, indexing a bytes
object returns an integer. Assigning or comparing an object that
is not an integer to an element causes a TypeError exception.
is not an integer to an element causes a ``TypeError`` exception.
Assigning an element to a value outside the range 0 to 255 causes
a ValueError exception. The .__len__() method of bytes returns
a ``ValueError`` exception. The ``.__len__()`` method of bytes returns
the number of integers stored in the sequence (i.e. the number of
bytes).
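In the Python that ultimately shipped, the mutable type specified here became ``bytearray`` (``bytes`` itself ended up immutable, per PEP 3137). A quick Python 3 check of the element semantics described above:

```python
b = bytearray(b"abc")
assert b[0] == 97        # indexing returns an integer, not a 1-char string
try:
    b[0] = "x"           # assigning a non-integer element
except TypeError:
    pass
try:
    b[0] = 256           # assigning outside the range 0..255
except ValueError:
    pass
assert len(b) == 3       # __len__ counts the stored bytes
```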
The constructor of the bytes object has the following signature:
The constructor of the bytes object has the following signature::
bytes([initializer[, encoding]])
@ -60,7 +64,7 @@ Specification
elements is created and returned. The initializer argument can be
a string (in 2.6, either str or unicode), an iterable of integers,
or a single integer. The pseudo-code for the constructor
(optimized for clear semantics, not for speed) is:
(optimized for clear semantics, not for speed) is::
def bytes(initializer=0, encoding=None):
if isinstance(initializer, int): # In 2.6, int -> (int, long)
@ -88,32 +92,32 @@ Specification
new[i] = c
return new
The .__repr__() method returns a string that can be evaluated to
generate a new bytes object containing a bytes literal:
The ``.__repr__()`` method returns a string that can be evaluated to
generate a new bytes object containing a bytes literal::
>>> bytes([10, 20, 30])
b'\n\x14\x1e'
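The constructor behaviors specified above largely survive in Python 3's ``bytes``, which makes them easy to check directly:

```python
assert bytes(3) == b'\x00\x00\x00'           # int initializer: zero-filled
assert bytes([10, 20, 30]) == b'\n\x14\x1e'  # iterable of integers
assert bytes('abc', 'ascii') == b'abc'       # str initializer plus an encoding
```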
The object has a .decode() method equivalent to the .decode()
method of the str object. The object has a classmethod .fromhex()
that takes a string of characters from the set [0-9a-fA-F ] and
The object has a ``.decode()`` method equivalent to the ``.decode()``
method of the str object. The object has a classmethod ``.fromhex()``
that takes a string of characters from the set ``[0-9a-fA-F ]`` and
returns a bytes object (similar to binascii.unhexlify). For
example:
example::
>>> bytes.fromhex('5c5350ff')
b'\\SP\xff'
>>> bytes.fromhex('5c 53 50 ff')
b'\\SP\xff'
The object has a .hex() method that does the reverse conversion
(similar to binascii.hexlify):
The object has a ``.hex()`` method that does the reverse conversion
(similar to binascii.hexlify)::
>>> bytes([92, 83, 80, 255]).hex()
'5c5350ff'
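Both conversions shipped in Python 3 (``bytes.fromhex()`` from the start, ``.hex()`` in 3.5), matching the examples above:

```python
assert bytes.fromhex('5c 53 50 ff') == b'\\SP\xff'   # spaces are ignored
assert bytes([92, 83, 80, 255]).hex() == '5c5350ff'  # the reverse conversion
```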
The bytes object has some methods similar to list methods, and
others similar to str methods. Here is a complete list of
methods, with their approximate signatures:
methods, with their approximate signatures::
.__add__(bytes) -> bytes
.__contains__(int | bytes) -> bool
@ -162,15 +166,16 @@ Specification
.rsplit(bytes) -> list[bytes]
.translate(bytes, [bytes]) -> bytes
Note the conspicuous absence of .isupper(), .upper(), and friends.
(But see "Open Issues" below.) There is no .__hash__() because
the object is mutable. There is no use case for a .sort() method.
Note the conspicuous absence of ``.isupper()``, ``.upper()``, and friends.
(But see "Open Issues" below.) There is no ``.__hash__()`` because
the object is mutable. There is no use case for a ``.sort()`` method.
The bytes type also supports the buffer interface, supporting
reading and writing binary (but not character) data.
Out of Scope Issues
===================
* Python 3k will have a much different I/O subsystem. Deciding
how that I/O subsystem will work and interact with the bytes
@ -180,19 +185,20 @@ Out of Scope Issues
interface, the existing binary I/O operations in Python 2.6 will
support bytes objects.
* It has been suggested that a special method named .__bytes__()
* It has been suggested that a special method named ``.__bytes__()``
be added to the language to allow objects to be converted into
byte arrays. This decision is out of scope.
* A bytes literal of the form b"..." is also proposed. This is
* A bytes literal of the form ``b"..."`` is also proposed. This is
the subject of PEP 3112.
Open Issues
===========
* The .decode() method is redundant since a bytes object b can
also be decoded by calling unicode(b, <encoding>) (in 2.6) or
str(b, <encoding>) (in 3.0). Do we need encode/decode methods
* The ``.decode()`` method is redundant since a bytes object ``b`` can
also be decoded by calling ``unicode(b, <encoding>)`` (in 2.6) or
``str(b, <encoding>)`` (in 3.0). Do we need encode/decode methods
at all? In a sense the spelling using a constructor is cleaner.
* Need to specify the methods still more carefully.
@ -201,30 +207,31 @@ Open Issues
* Should all those list methods really be implemented?
* A case could be made for supporting .ljust(), .rjust(),
.center() with a mandatory second argument.
* A case could be made for supporting ``.ljust()``, ``.rjust()``,
``.center()`` with a mandatory second argument.
* A case could be made for supporting .split() with a mandatory
* A case could be made for supporting ``.split()`` with a mandatory
argument.
* A case could even be made for supporting .islower(), .isupper(),
.isspace(), .isalpha(), .isalnum(), .isdigit() and the
corresponding conversions (.lower() etc.), using the ASCII
* A case could even be made for supporting ``.islower()``, ``.isupper()``,
``.isspace()``, ``.isalpha()``, ``.isalnum()``, ``.isdigit()`` and the
corresponding conversions (``.lower()`` etc.), using the ASCII
definitions for letters, digits and whitespace. If this is
accepted, the cases for .ljust(), .rjust(), .center() and
.split() become much stronger, and they should have default
accepted, the cases for ``.ljust()``, ``.rjust()``, ``.center()`` and
``.split()`` become much stronger, and they should have default
arguments as well, using an ASCII space or all ASCII whitespace
(for .split()).
(for ``.split()``).
Frequently Asked Questions
==========================
Q: Why have the optional encoding argument when the encode method of
Unicode objects does the same thing?
A: In the current version of Python, the encode method returns a str
object and we cannot change that without breaking code. The
construct bytes(s.encode(...)) is expensive because it has to
construct ``bytes(s.encode(...))`` is expensive because it has to
copy the byte sequence multiple times. Also, Python generally
provides two ways of converting an object of type A into an
object of type B: ask an A instance to convert itself to a B, or
@ -243,7 +250,7 @@ Frequently Asked Questions
encoding of character data they contain. We need to assume that
the programmer has provided a str object that already uses the
desired encoding. If you need something other than a pure copy of
the bytes then you need to first decode the string. For example:
the bytes then you need to first decode the string. For example::
bytes(s.decode(encoding1), encoding2)
@ -261,11 +268,13 @@ Frequently Asked Questions
Copyright
=========
This document has been placed in the public domain.
..
Local Variables:
mode: indented-text
indent-tabs-mode: nil


@ -5,11 +5,14 @@ Last-Modified: $Date$
Author: Neal Norwitz, Barry Warsaw
Status: Final
Type: Informational
Content-Type: text/x-rst
Created: 29-June-2006
Python-Version: 2.6 and 3.0
Post-History: 17-Mar-2008
Abstract
========
This document describes the development and release schedule for
Python 2.6 and 3.0. The schedule primarily concerns itself with
@ -37,15 +40,17 @@ Abstract
Release Manager and Crew
========================
2.6/3.0 Release Manager: Barry Warsaw
Windows installers: Martin v. Loewis
Mac installers: Ronald Oussoren
Documentation: Georg Brandl
RPMs: Sean Reifschneider
- 2.6/3.0 Release Manager: Barry Warsaw
- Windows installers: Martin v. Loewis
- Mac installers: Ronald Oussoren
- Documentation: Georg Brandl
- RPMs: Sean Reifschneider
Release Lifespan
================
Python 3.0 is no longer being maintained for any purpose.
@ -56,49 +61,52 @@ Release Lifespan
Release Schedule
================
Feb 29 2008: Python 2.6a1 and 3.0a3 are released
Apr 02 2008: Python 2.6a2 and 3.0a4 are released
May 08 2008: Python 2.6a3 and 3.0a5 are released
Jun 18 2008: Python 2.6b1 and 3.0b1 are released
Jul 17 2008: Python 2.6b2 and 3.0b2 are released
Aug 20 2008: Python 2.6b3 and 3.0b3 are released
Sep 12 2008: Python 2.6rc1 is released
Sep 17 2008: Python 2.6rc2 and 3.0rc1 released
Oct 01 2008: Python 2.6 final released
Nov 06 2008: Python 3.0rc2 released
Nov 21 2008: Python 3.0rc3 released
Dec 03 2008: Python 3.0 final released
Dec 04 2008: Python 2.6.1 final released
Apr 14 2009: Python 2.6.2 final released
Oct 02 2009: Python 2.6.3 final released
Oct 25 2009: Python 2.6.4 final released
Mar 19 2010: Python 2.6.5 final released
Aug 24 2010: Python 2.6.6 final released
Jun 03 2011: Python 2.6.7 final released (security-only)
Apr 10 2012: Python 2.6.8 final released (security-only)
Oct 29 2013: Python 2.6.9 final released (security-only)
- Feb 29 2008: Python 2.6a1 and 3.0a3 are released
- Apr 02 2008: Python 2.6a2 and 3.0a4 are released
- May 08 2008: Python 2.6a3 and 3.0a5 are released
- Jun 18 2008: Python 2.6b1 and 3.0b1 are released
- Jul 17 2008: Python 2.6b2 and 3.0b2 are released
- Aug 20 2008: Python 2.6b3 and 3.0b3 are released
- Sep 12 2008: Python 2.6rc1 is released
- Sep 17 2008: Python 2.6rc2 and 3.0rc1 released
- Oct 01 2008: Python 2.6 final released
- Nov 06 2008: Python 3.0rc2 released
- Nov 21 2008: Python 3.0rc3 released
- Dec 03 2008: Python 3.0 final released
- Dec 04 2008: Python 2.6.1 final released
- Apr 14 2009: Python 2.6.2 final released
- Oct 02 2009: Python 2.6.3 final released
- Oct 25 2009: Python 2.6.4 final released
- Mar 19 2010: Python 2.6.5 final released
- Aug 24 2010: Python 2.6.6 final released
- Jun 03 2011: Python 2.6.7 final released (security-only)
- Apr 10 2012: Python 2.6.8 final released (security-only)
- Oct 29 2013: Python 2.6.9 final released (security-only)
Completed features for 3.0
==========================
See PEP 3000 [#pep3000] and PEP 3100 [#pep3100] for details on the
See PEP 3000 [pep3000]_ and PEP 3100 [pep3100]_ for details on the
Python 3.0 project.
Completed features for 2.6
==========================
PEPs:
- 352: Raising a string exception now triggers a TypeError.
Attempting to catch a string exception raises DeprecationWarning.
BaseException.message has been deprecated. [#pep352]
- 358: The "bytes" Object [#pep358]
- 366: Main module explicit relative imports [#pep366]
- 370: Per user site-packages directory [#pep370]
- 3112: Bytes literals in Python 3000 [#pep3112]
- 3127: Integer Literal Support and Syntax [#pep3127]
- 371: Addition of the multiprocessing package [#pep371]
BaseException.message has been deprecated. [pep352]_
- 358: The "bytes" Object [pep358]_
- 366: Main module explicit relative imports [pep366]_
- 370: Per user site-packages directory [pep370]_
- 3112: Bytes literals in Python 3000 [pep3112]_
- 3127: Integer Literal Support and Syntax [pep3127]_
- 371: Addition of the multiprocessing package [pep371]_
New modules in the standard library:
@ -149,6 +157,7 @@ Completed features for 2.6
Possible features for 2.6
=========================
New features *should* be implemented prior to alpha2, particularly
any C modifications or behavioral changes. New features *must* be
@ -182,21 +191,34 @@ Possible features for 2.6
- multifile
- compiler package (or a Py3K warning instead?)
- Convert Parser/*.c to use the C warnings module rather than printf
- Convert Parser/\*.c to use the C warnings module rather than printf
- Add warnings for Py3k features removed:
* __getslice__/__setslice__/__delslice__
* float args to PyArgs_ParseTuple
* __cmp__?
* other comparison changes?
* int division?
* All PendingDeprecationWarnings (e.g. exceptions)
* using zip() result as a list
* the exec statement (use function syntax)
* function attributes that start with func_* (should use __*__)
* the L suffix for long literals
* renaming of __nonzero__ to __bool__
* multiple inheritance with classic classes? (MRO might change)
* properties and classic classes? (instance attrs shadow property)
- use __bool__ method if available and there's no __nonzero__
@ -230,56 +252,68 @@ Possible features for 2.6
Deferred until 2.7
==================
None
Open issues
===========
How should import warnings be handled?
http://mail.python.org/pipermail/python-dev/2006-June/066345.html
http://python.org/sf/1515609
http://python.org/sf/1515361
- http://mail.python.org/pipermail/python-dev/2006-June/066345.html
- http://python.org/sf/1515609
- http://python.org/sf/1515361
References
==========
.. [1] Adding a __dir__() magic method
http://mail.python.org/pipermail/python-dev/2006-July/067139.html
.. [#pep358] PEP 358 (The "bytes" Object)
.. [pep352] PEP 352 (Required Superclass for Exceptions)
http://www.python.org/dev/peps/pep-0352
.. [pep358] PEP 358 (The "bytes" Object)
http://www.python.org/dev/peps/pep-0358
.. [#pep366] PEP 366 (Main module explicit relative imports)
.. [pep366] PEP 366 (Main module explicit relative imports)
http://www.python.org/dev/peps/pep-0366
.. [#pep367] PEP 367 (New Super)
.. [pep367] PEP 367 (New Super)
http://www.python.org/dev/peps/pep-0367
.. [#pep371] PEP 371 (Addition of the multiprocessing package)
.. [pep370] PEP 370 (Per user site-packages directory)
http://www.python.org/dev/peps/pep-0370
.. [pep371] PEP 371 (Addition of the multiprocessing package)
http://www.python.org/dev/peps/pep-0371
.. [#pep3000] PEP 3000 (Python 3000)
.. [pep3000] PEP 3000 (Python 3000)
http://www.python.org/dev/peps/pep-3000
.. [#pep3100] PEP 3100 (Miscellaneous Python 3.0 Plans)
.. [pep3100] PEP 3100 (Miscellaneous Python 3.0 Plans)
http://www.python.org/dev/peps/pep-3100
.. [#pep3112] PEP 3112 (Bytes literals in Python 3000)
.. [pep3112] PEP 3112 (Bytes literals in Python 3000)
http://www.python.org/dev/peps/pep-3112
.. [#pep3127] PEP 3127 (Integer Literal Support and Syntax)
.. [pep3127] PEP 3127 (Integer Literal Support and Syntax)
http://www.python.org/dev/peps/pep-3127
.. _Google calendar:
http://www.google.com/calendar/ical/b6v58qvojllt0i6ql654r1vh00%40group.calendar.google.com/public/basic.ics
.. _Google calendar: http://www.google.com/calendar/ical/b6v58qvojllt0i6ql654r1vh00%40group.calendar.google.com/public/basic.ics
Copyright
=========
This document has been placed in the public domain.
..
Local Variables:
mode: indented-text
indent-tabs-mode: nil