HADOOP-11579. Documentation for truncate. Contributed by Konstantin Shvachko.
commit 6f6737b3bf (parent 22441ab7b2)

@@ -582,6 +582,8 @@ Release 2.7.0 - UNRELEASED

HADOOP-11520. Clean incomplete multi-part uploads in S3A tests.
(Thomas Demoor via stevel)

HADOOP-11579. Documentation for truncate. (shv)

OPTIMIZATIONS

HADOOP-11323. WritableComparator#compare keeps reference to byte array.

@@ -54,6 +54,7 @@

* [test](#test)
* [text](#text)
* [touchz](#touchz)
* [truncate](#truncate)
* [usage](#usage)

Overview

@@ -681,6 +682,25 @@ Example:

Exit Code: Returns 0 on success and -1 on error.

truncate
--------

Usage: `hadoop fs -truncate [-w] <length> <paths>`

Truncate all files that match the specified file pattern to the specified length.

Options:

* The -w flag requests that the command wait for block recovery to complete, if necessary.
  Without the -w flag the file may remain unclosed for some time while the recovery is in progress.
  During this time the file cannot be reopened for append.

Example:

* `hadoop fs -truncate 55 /user/hadoop/file1 /user/hadoop/file2`
* `hadoop fs -truncate -w 127 hdfs://nn1.example.com/user/hadoop/file1`

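For comparison, a minimal Java sketch of performing the same operation through the `FileSystem` API documented in the specification section further below; the namenode URI and file path are example values only:

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class TruncateExample {
      public static void main(String[] args) throws Exception {
        // Connect to the cluster; the URI matches the shell example above.
        FileSystem fs = FileSystem.get(
            URI.create("hdfs://nn1.example.com"), new Configuration());

        // Truncate the file to 55 bytes, like `hadoop fs -truncate 55 ...`.
        // Returns true if the new length is immediately usable, false if
        // block recovery is still running in the background.
        boolean done = fs.truncate(new Path("/user/hadoop/file1"), 55L);
        System.out.println("Truncation complete: " + done);
      }
    }

The shell command's `-w` flag performs the wait for block recovery that a caller of this API would otherwise have to handle itself.
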
usage
-----

@@ -831,3 +831,38 @@ HDFS's restrictions may be an implementation detail of how it implements

a sequence. As no other filesystem in the Hadoop core codebase
implements this method, there is no way to distinguish implementation detail
from specification.

### `boolean truncate(Path p, long newLength)`

Truncate file `p` to the specified `newLength`.

Implementations MAY throw `UnsupportedOperationException`.

#### Preconditions

    if not exists(FS, p) : raise FileNotFoundException

    if isDir(FS, p) : raise [FileNotFoundException, IOException]

    if newLength < 0 || newLength > len(FS.Files[p]) : raise HadoopIllegalArgumentException

HDFS: The source file MUST be closed.
Truncate cannot be performed on a file that is open for writing or appending.

#### Postconditions

    FS' where:
        len(FS.Files[p]) = newLength

Return: `true` if truncation is finished and the file can be immediately
opened for appending, or `false` otherwise.

HDFS: HDFS returns `false` to indicate that a background process of adjusting
the length of the last block has been started, and clients should wait for it
to complete before they can proceed with further file updates.
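
As an illustration of this return-value contract, a minimal Java sketch of a client that truncates a file and, if `false` is returned, waits for the background recovery by retrying `append()` until the file is writable again; the retry limit, back-off interval, and namenode URI are assumptions made for the example, not part of the specification:

    import java.io.IOException;
    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class TruncateThenAppend {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(
            URI.create("hdfs://nn1.example.com"), new Configuration());
        Path file = new Path("/user/hadoop/file1");  // example path

        // false => a background block recovery is adjusting the last block.
        boolean ready = fs.truncate(file, 127L);

        // The contract above says the file can be opened for append once
        // truncation is finished, so poll by retrying append() with a
        // bounded back-off (assumed policy for this sketch).
        FSDataOutputStream out = null;
        for (int attempt = 0; out == null && attempt < 30; attempt++) {
          if (!ready) {
            Thread.sleep(1000L);  // give block recovery time to complete
          }
          try {
            out = fs.append(file);
          } catch (IOException stillRecovering) {
            // Last block not yet recovered; loop and try again.
          }
        }
        if (out != null) {
          out.close();
        }
      }
    }

The shell's `hadoop fs -truncate -w` option provides the same waiting behaviour without an application-side loop.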

#### Concurrency

If an input stream is open when truncate() occurs, the outcome of read
operations related to the part of the file being truncated is undefined.