PEP694: Grammar fixes (#2679)

Donald Stufft 2022-06-27 21:00:42 -04:00 committed by GitHub
parent a21c1f27de
commit a4041667da
1 changed file with 9 additions and 9 deletions


@@ -441,7 +441,7 @@ A successful deletion request **MUST** response with a ``204 No Content``.
 
 Session Status
 ~~~~~~~~~~~~~~
 
-Similiarly to file upload, the session URL is provided in the response to
+Similarly to file upload, the session URL is provided in the response to
 creating the upload session, and clients **MUST NOT** assume that there is any
 stability to what those URLs look like from one session to the next.
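
In client terms, the rule in this hunk is simply: read the session URL out of
the server's response every time. A sketch with ``requests`` (the endpoint and
payload fields here are illustrative placeholders, not the exact PEP 694 API)::

    import requests

    # Create a new upload session.
    resp = requests.post(
        "https://upload.example/api/session",  # placeholder endpoint
        json={"name": "sampleproject", "version": "1.0"},
    )
    resp.raise_for_status()

    # Take the session URL from the response itself; never construct it
    # locally or reuse one remembered from a previous session.
    session_url = resp.headers["Location"]
    status = requests.get(session_url).json()
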
@@ -461,7 +461,7 @@ and future attempts to access that session URL or any of the file upload URLs
 **MAY** return a ``404 Not Found``.
 
 To prevent a lot of dangling sessions, servers may also choose to cancel a
-session on it's own accord. It is recommended that servers expunge their
+session on their own accord. It is recommended that servers expunge their
 sessions after no less than a week, but each server may choose their own
 schedule.
@@ -476,7 +476,7 @@ session status payload.
 If the server is able to immediately complete the session, it may do so
 and return a ``201 Created`` response. If it is unable to immediately
 complete the session (for instance, if it needs to do processing that may
-take longer than reasonable in a single http request), then it may return
+take longer than reasonable in a single HTTP request), then it may return
 a ``202 Accepted`` response.
 
 In either case, the server should include a ``Location`` header pointing
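
A client would branch on those two status codes roughly as below. This is a
sketch only: how completion is requested is elided in this excerpt, so the
``POST``, the polling loop, and the interval are assumptions, not something
the PEP mandates::

    import time

    import requests

    session_url = "https://upload.example/session/some-id"  # placeholder

    resp = requests.post(session_url)  # ask the server to complete the session
    status_url = resp.headers["Location"]

    if resp.status_code == 201:
        # The server finished everything within this one request.
        print("session completed")
    elif resp.status_code == 202:
        # Accepted but still processing; poll the URL the server gave us.
        # (Treating a 202 on GET as "still working" is an assumption.)
        while requests.get(status_url).status_code == 202:
            time.sleep(5)
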
@@ -654,11 +654,11 @@ The other benefit is that even if you do want to support resumption, you can
 still just ``POST`` the file, and unless you *need* to resume the download,
 that's all you have to do.
 
-Another, possibly theortical, benefit is that for hashing the uploaded files,
+Another, possibly theoretical, benefit is that for hashing the uploaded files,
 the serial chunks requirement means that the server can maintain hashing state
 between requests, update it for each request, then write that file back to
 storage. Unfortunately this isn't actually possible to do with Python's hashlib,
-though there is some libraries like `Rehash <https://github.com/kislyuk/rehash>`_
+though there are some libraries like `Rehash <https://github.com/kislyuk/rehash>`_
 that implement it, but they don't support every hash that hashlib does
 (specifically not blake2 or sha3 at the time of writing).
 
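
What Rehash enables here, concretely: unlike plain ``hashlib`` objects, its
hashers can be serialized between requests, so a server can resume hashing
where the previous chunk left off. A minimal sketch, assuming Rehash's
pickle-able, hashlib-compatible interface::

    import pickle

    import rehash

    chunk_1 = b"first chunk of the uploaded file"
    chunk_2 = b"second chunk of the uploaded file"

    # Request 1: hash the first chunk, then persist the hasher's state
    # (e.g. alongside the upload session) instead of the partial file.
    hasher = rehash.sha256()
    hasher.update(chunk_1)
    saved = pickle.dumps(hasher)

    # Request 2, possibly in a different process: resume and continue.
    hasher = pickle.loads(saved)
    hasher.update(chunk_2)

    # Same digest as hashing the whole payload in one go.
    assert hasher.hexdigest() == rehash.sha256(chunk_1 + chunk_2).hexdigest()
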
@@ -668,7 +668,7 @@ things like extract metadata, etc from it, which would make it a moot point.
 The downside is that there is no ability to parallelize the upload of a single
 file because each chunk has to be submitted serially.
 
-AWS S3 has a similiar API (and most blob stores have copied it either wholesale
+AWS S3 has a similar API (and most blob stores have copied it either wholesale
 or something like it) which they call multipart uploading.
 
 The basic flow for a multipart upload is:
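
Against S3 itself, that flow looks roughly like the following ``boto3`` sketch
(bucket, key, and part contents are placeholders; note that real S3 parts
other than the last must be at least 5 MB)::

    import boto3

    s3 = boto3.client("s3")
    bucket, key = "example-bucket", "example-file.tar.gz"

    # 1. Initiate the multipart upload and get an upload ID back.
    upload_id = s3.create_multipart_upload(Bucket=bucket, Key=key)["UploadId"]

    # 2. Upload the parts (these can go in parallel); S3 hands back an
    #    ETag per part that is needed to finish the upload.
    parts = []
    for number, body in enumerate([b"part one", b"part two"], start=1):
        result = s3.upload_part(
            Bucket=bucket, Key=key, UploadId=upload_id,
            PartNumber=number, Body=body,
        )
        parts.append({"PartNumber": number, "ETag": result["ETag"]})

    # 3. Complete the upload, telling S3 how the parts fit together.
    s3.complete_multipart_upload(
        Bucket=bucket, Key=key, UploadId=upload_id,
        MultipartUpload={"Parts": parts},
    )
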
@@ -690,10 +690,10 @@ the data.
 We wouldn't need an explicit step (1), because our session would implicitly
 initiate a multipart upload for each file.
 
-It does have it's own downsides:
+It does have its own downsides:
 
 - Clients have to do more work on every request to have something resembling
-  resumble uploads. They would *have* to break the file up into multiple parts
+  resumable uploads. They would *have* to break the file up into multiple parts
   rather than just making a single POST request, and only needing to deal
   with the complexity if something fails.
 
@@ -708,7 +708,7 @@ It does have it's own downsides:
   multipart uploads by hashing each part, then the overall hash is just a
   hash of those hashes, not of the content itself. We need to know the
   actual hash of the file itself for PyPI, so we would have to reconstitute
-  the file and read it's content and hash it once it's been fully uploaded,
+  the file and read its content and hash it once it's been fully uploaded,
   though we could still use the hash of hashes trick for checksumming the
   upload itself.
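
A small worked example of why the hash of hashes cannot stand in for the real
file hash (plain ``hashlib``, nothing S3-specific)::

    import hashlib

    parts = [b"first part of the file", b"second part of the file"]

    # What PyPI needs: the hash of the file's actual content.
    content_hash = hashlib.sha256(b"".join(parts)).hexdigest()

    # What the S3-style scheme yields: a hash over the per-part digests.
    per_part = b"".join(hashlib.sha256(p).digest() for p in parts)
    hash_of_hashes = hashlib.sha256(per_part).hexdigest()

    # The values differ, so the hash of hashes can checksum the transfer,
    # but the server must still re-read the assembled file to get the
    # content hash PyPI records.
    assert content_hash != hash_of_hashes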