diff --git a/pep-0517.txt b/pep-0517.txt
index d441c991c..7952fb8c8 100644
--- a/pep-0517.txt
+++ b/pep-0517.txt
@@ -2,7 +2,8 @@ PEP: 517
 Title: A build-system independent format for source trees
 Version: $Revision$
 Last-Modified: $Date$
-Author: Nathaniel J. Smith
+Author: Nathaniel J. Smith ,
+        Thomas Kluyver
 BDFL-Delegate: Nick Coghlan
 Discussions-To:
 Status: Draft
@@ -99,63 +100,55 @@ specification is encoded in the source code and documentation of
 ``distutils``, ``setuptools``, ``pip``, and other tools. We'll refer
 to it as the ``setup.py``\-style.

-Here we define a new ``pypackage.json``\-style source tree. This
-consists of any directory which contains a file named
-``pypackage.json``. (If a tree contains both ``pypackage.json`` and
-``setup.py`` then it is a ``pypackage.json``\-style source tree, and
-``pypackage.json``\-aware tools should ignore the ``setup.py``; this
-allows packages to include a ``setup.py`` for compatibility with old
-build frontends, while using the new system with new build frontends.)
+Here we define a new style of source tree based around the
+``pyproject.toml`` file defined in PEP 518, extending the
+``[build-system]`` table in that file with one additional key,
+``build_backend``. Here's an example of how it would look::

-This file has the following schema. Extra keys are ignored.
+    [build-system]
+    # Defined by PEP 518:
+    requires = ["flit"]
+    # Defined by this PEP:
+    build_backend = "flit.api:main"

-schema
-    The version of the schema. This PEP defines version "1". Defaults to "1"
-    when absent. All tools reading the file MUST error on an unrecognised
-    schema version.
+``build_backend`` is a string naming a Python object that will be
+used to perform the build (see below for details). This is formatted
+following the same ``module:object`` syntax as a ``setuptools`` entry
+point. For instance, if the string is ``"flit.api:main"`` as in the
+example above, this object would be looked up by executing the
+equivalent of::

-bootstrap_requires
-    Optional list of PEP 508 dependency specifications that the
-    build frontend must ensure are available before invoking the build
-    backend. For instance, if using flit, then the requirements might
-    be::
+    import flit.api
+    backend = flit.api.main

-        "bootstrap_requires": ["flit"]
+It's also legal to leave out the ``:object`` part, e.g. ::

-build_backend
-    A mandatory string naming a Python object that will be used to
-    perform the build (see below for details). This is formatted
-    following the same ``module:object`` syntax as a ``setuptools``
-    entry point. For instance, if using flit, then the build system
-    might be specified as::
+    build_backend = "flit.api"

-        "build_system": "flit.api:main"
+which acts like::

-    and this object would be looked up by executing the equivalent of::
+    import flit.api
+    backend = flit.api

-        import flit.api
-        backend = flit.api.main
+Formally, the string should satisfy this grammar::

-    It's also legal to leave out the ``:object`` part, e.g. ::
+    identifier = (letter | '_') (letter | '_' | digit)*
+    module_path = identifier ('.' identifier)*
+    object_path = identifier ('.' identifier)*
+    entry_point = module_path (':' object_path)?

-        "build_system": "flit.api"
+And we import ``module_path`` and then lookup
+``module_path.object_path`` (or just ``module_path`` if
+``object_path`` is missing).

-    which acts like::
-
-        import flit.api
-        backend = flit.api
-
-    Formally, the string should satisfy this grammar::
-
-        identifier = (letter | '_') (letter | '_' | digit)*
-        module_path = identifier ('.' identifier)*
-        object_path = identifier ('.' identifier)*
-        entry_point = module_path (':' object_path)?
-
-    And we import ``module_path`` and then lookup
-    ``module_path.object_path`` (or just ``module_path`` if
-    ``object_path`` is missing).
+If the ``pyproject.toml`` file is absent, or the ``build_backend``
+key is missing, the source tree is not using this specification, and
+tools should fall back to running ``setup.py``.
+Where the ``build_backend`` key exists, it takes precedence over
+``setup.py``, and source trees need not include ``setup.py`` at all.
+Projects may still wish to include a ``setup.py`` for compatibility
+with tools that do not use this spec.

 =========================
  Build backend interface
@@ -170,7 +163,7 @@ argument is described after the individual hooks::

 This hook MUST return an additional list of strings containing PEP
 508 dependency specifications, above and beyond those specified in the
-``pypackage.json`` file. Example::
+``pyproject.toml`` file. Example::

     def get_build_requires(config_settings):
         return ["wheel >= 0.25", "setuptools"]
@@ -304,11 +297,11 @@ following criteria:

 - The ``get_build_requires`` hook is executed in an environment which
   contains the bootstrap requirements specified in the
-  ``pypackage.json`` file.
+  ``pyproject.toml`` file.

 - All other hooks are executed in an environment which contains both
-  the bootstrap requirements specified in the ``pypackage.json`` hook
-  and those specified by the ``get_build_requires`` hook.
+  the bootstrap requirements specified in the ``pyproject.toml``
+  hook and those specified by the ``get_build_requires`` hook.

 - This must remain true even for new Python subprocesses spawned by
   the build environment, e.g. code like::
@@ -370,8 +363,8 @@ explicitly requested build-dependencies. This has two benefits:
 However, there will also be situations where build-requirements are
 problematic in various ways. For example, a package author might
 accidentally leave off some crucial requirement despite our best
-efforts; or, a package might declare a build-requirement on `foo >=
-1.0` which worked great when 1.0 was the latest version, but now 1.1
+efforts; or, a package might declare a build-requirement on ``foo >=
+1.0`` which worked great when 1.0 was the latest version, but now 1.1
 is out and it has a showstopper bug; or, the user might decide to
 build a package against numpy==1.7 -- overriding the package's
 preferred numpy==1.8 -- to guarantee that the resulting build will be
@@ -399,7 +392,7 @@ undefined, but basically comes down to: a file named
 ``{NAME}-{VERSION}.{EXT}``, which unpacks into a buildable source
 tree called ``{NAME}-{VERSION}/``. Traditionally these have always
 contained ``setup.py``\-style source trees; we now allow them to also
-contain ``pypackage.json``\-style source trees.
+contain ``pyproject.toml``\-style source trees.

 Integration frontends require that an sdist named
 ``{NAME}-{VERSION}.{EXT}`` will generate a wheel named
@@ -410,9 +403,8 @@
  Comparison to competing proposals
 ===================================

-The primary difference between this and competing proposals (`in
-particular
-`_) is
+The primary difference between this and competing proposals (in
+particular, PEP 516) is
 that our build backend is defined via a Python hook-based interface
 rather than a command-line based interface.

@@ -580,7 +572,7 @@ Furthermore, our mechanism should also fulfill two more goals: (a)
 If new versions of e.g. ``pip`` and ``flit`` are both updated to
 support the new interface, then this should be sufficient for it to
 be used; in particular, it should *not* be necessary for every project that
-*uses* ``flit`` to update its individual ``pypackage.json`` file. (b)
+*uses* ``flit`` to update its individual ``pyproject.toml`` file. (b)
 We do not want to have to spawn extra processes just to perform this
 negotiation, because process spawns can easily become a bottleneck
 when deploying large multi-package stacks on some platforms (Windows).
@@ -601,11 +593,10 @@ process, it can easily write it to do something like::
 In the alternative where the public interface boundary is placed at
 the subprocess call, this is not possible -- either we need to spawn
 an extra process just to query what interfaces are supported (as was
-included in an earlier version of `this alternative PEP
-`_), or
+included in an earlier draft of PEP 516, an alternative to this), or
 else we give up on autonegotiation entirely (as in the current
 version of that PEP), meaning that any changes in the interface will require
-N individual packages to update their ``pypackage.json`` files before
+N individual packages to update their ``pyproject.toml`` files before
 any change can go live, and that any changes will necessarily be
 restricted to new releases.

@@ -658,10 +649,6 @@ above, there are a few other differences in this proposal:
   step and the wheel building step. I guess everyone probably will
   agree this is a good idea?

-* We call our config file ``pypackage.json`` instead of
-  ``pypa.json``. This is because it describes a package, rather than
-  describing a packaging authority. But really, who cares.
-
 * We provide more detailed recommendations about the build
   environment, but these aren't normative anyway.

@@ -673,7 +660,7 @@ above, there are a few other differences in this proposal:
 A goal here is to make it as simple as possible to convert old-style
 sdists to new-style sdists. (E.g., this is one motivation for
 supporting dynamic build requirements.) The ideal would be that there
-would be a single static pypackage.json that could be dropped into any
+would be a single static ``pyproject.toml`` that could be dropped into any
 "version 0" VCS checkout to convert it to the new shiny. This is
 probably not 100% possible, but we can get close, and it's important
 to keep track of how close we are... hence this section.
@@ -700,7 +687,7 @@ automatically upgrade packages to the new format:
    check whether they do this, and if so then when upgrading to the
    new system they will have to start explicitly declaring these
    dependencies (either via ``setup_requires=`` or via static
-   declaration in ``pypackage.json``).
+   declaration in ``pyproject.toml``).

 2) There currently exist packages which do not declare consistent
    metadata (e.g. ``egg_info`` and ``bdist_wheel`` might get different
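
Reviewer note, not part of the patch: as a quick sanity check of the
``module:object`` lookup that the new ``build_backend`` key describes, here is
a minimal sketch of how a frontend might resolve that string. The
``load_backend`` helper name is invented for illustration and is not defined
by the PEP::

    import importlib

    def load_backend(build_backend):
        # Split the "module:object" string; the ":object" part is optional.
        module_path, _, object_path = build_backend.partition(":")
        backend = importlib.import_module(module_path)
        if object_path:
            # Walk dotted attribute names, e.g. "main" or "api.main".
            for attr in object_path.split("."):
                backend = getattr(backend, attr)
        return backend

    # With the pyproject.toml example above this acts like
    # "import flit.api; backend = flit.api.main":
    #     backend = load_backend("flit.api:main")
    # The same mechanism works for any importable module, e.g.:
    backend = load_backend("os.path:join")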
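The autonegotiation argument above relies on hooks being plain Python
attributes that a frontend can probe for without spawning a subprocess. A
rough sketch of that pattern, where ``get_build_requires_v2`` is a made-up
hook name used purely to show the feature-detection idea and is not defined
by this PEP::

    def supports(backend, hook_name):
        # Hook presence is just attribute presence on the backend object.
        return callable(getattr(backend, hook_name, None))

    def get_requires(backend, config_settings):
        # Prefer a hypothetical newer hook when the backend provides it,
        # and fall back to the hook defined in this PEP otherwise.
        if supports(backend, "get_build_requires_v2"):
            return backend.get_build_requires_v2(config_settings)
        return backend.get_build_requires(config_settings)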