HDFS-11561. HttpFS doc errors. Contributed by Yuanbo Liu.

John Zhuge committed 2017-03-22 22:15:24 -07:00
parent f462e1ff68
commit ed2d03f2a5
2 changed files with 22 additions and 4 deletions


@@ -36,13 +36,13 @@ HttpFS itself is Java Jetty web-application.
 HttpFS HTTP web-service API calls are HTTP REST calls that map to a HDFS file system operation. For example, using the `curl` Unix command:
 
-* `$ curl http://httpfs-host:14000/webhdfs/v1/user/foo/README.txt` returns the contents of the HDFS `/user/foo/README.txt` file.
+* `$ curl 'http://httpfs-host:14000/webhdfs/v1/user/foo/README.txt?op=OPEN&user.name=foo'` returns the contents of the HDFS `/user/foo/README.txt` file.
-* `$ curl http://httpfs-host:14000/webhdfs/v1/user/foo?op=list` returns the contents of the HDFS `/user/foo` directory in JSON format.
+* `$ curl 'http://httpfs-host:14000/webhdfs/v1/user/foo?op=LISTSTATUS&user.name=foo'` returns the contents of the HDFS `/user/foo` directory in JSON format.
-* `$ curl http://httpfs-host:14000/webhdfs/v1/user/foo?op=GETTRASHROOT` returns the path `/user/foo/.Trash`, if `/` is an encrypted zone, returns the path `/.Trash/foo`. See [more details](../hadoop-project-dist/hadoop-hdfs/TransparentEncryption.html#Rename_and_Trash_considerations) about trash path in an encrypted zone.
+* `$ curl 'http://httpfs-host:14000/webhdfs/v1/user/foo?op=GETTRASHROOT&user.name=foo'` returns the path `/user/foo/.Trash`, if `/` is an encrypted zone, returns the path `/.Trash/foo`. See [more details](../hadoop-project-dist/hadoop-hdfs/TransparentEncryption.html#Rename_and_Trash_considerations) about trash path in an encrypted zone.
-* `$ curl -X POST http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=mkdirs` creates the HDFS `/user/foo.bar` directory.
+* `$ curl -X POST 'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'` creates the HDFS `/user/foo/bar` directory.
 
 User and Developer Documentation
 --------------------------------

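For reference, a minimal standalone Java sketch of the calls the corrected examples above make: it creates a directory with MKDIRS and then lists it with LISTSTATUS through HttpURLConnection. The httpfs-host:14000 endpoint, the foo user, and the paths are placeholders taken from the doc examples, the class name is illustrative, and MKDIRS is issued here as a PUT, as the new test below does.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class HttpFSClientSketch {
      public static void main(String[] args) throws Exception {
        // Placeholder endpoint and user taken from the doc examples above.
        String base = "http://httpfs-host:14000/webhdfs/v1";
        String user = "foo";

        // MKDIRS: create /user/foo/bar. Issued as a PUT, as in the new test below.
        URL mkdirs = new URL(base + "/user/foo/bar?op=MKDIRS&user.name=" + user);
        HttpURLConnection conn = (HttpURLConnection) mkdirs.openConnection();
        conn.setRequestMethod("PUT");
        conn.connect();
        System.out.println("MKDIRS response: " + conn.getResponseCode());

        // LISTSTATUS: list /user/foo and print the JSON response body.
        URL list = new URL(base + "/user/foo?op=LISTSTATUS&user.name=" + user);
        conn = (HttpURLConnection) list.openConnection();
        try (BufferedReader reader = new BufferedReader(
            new InputStreamReader(conn.getInputStream()))) {
          String line;
          while ((line = reader.readLine()) != null) {
            System.out.println(line);
          }
        }
      }
    }
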

@@ -223,6 +223,24 @@ public class TestHttpFSServer extends HFSTestCase {
     reader.close();
   }
 
+  @Test
+  @TestDir
+  @TestJetty
+  @TestHdfs
+  public void testMkdirs() throws Exception {
+    createHttpFSServer(false);
+    String user = HadoopUsersConfTestHelper.getHadoopUsers()[0];
+    URL url = new URL(TestJettyHelper.getJettyURL(), MessageFormat.format(
+        "/webhdfs/v1/tmp/sub-tmp?user.name={0}&op=MKDIRS", user));
+    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
+    conn.setRequestMethod("PUT");
+    conn.connect();
+    Assert.assertEquals(conn.getResponseCode(), HttpURLConnection.HTTP_OK);
+    getStatus("/tmp/sub-tmp", "LISTSTATUS");
+  }
+
   @Test
   @TestDir
   @TestJetty
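
The new testMkdirs test relies on a getStatus helper defined elsewhere in TestHttpFSServer; its body is not part of this hunk. A plausible sketch, assuming it issues a GET for the given operation and returns the first line of the response, and that the class's existing java.io, java.net, and java.text.MessageFormat imports are available:

    /**
     * Sketch of a getStatus-style helper: GET the given operation on a path
     * and return the first line of the response. The real helper in
     * TestHttpFSServer may differ in details.
     */
    private String getStatus(String path, String operation) throws Exception {
      String user = HadoopUsersConfTestHelper.getHadoopUsers()[0];
      URL url = new URL(TestJettyHelper.getJettyURL(), MessageFormat.format(
          "/webhdfs/v1{0}?user.name={1}&op={2}", path, user, operation));
      HttpURLConnection conn = (HttpURLConnection) url.openConnection();
      conn.connect();
      Assert.assertEquals(HttpURLConnection.HTTP_OK, conn.getResponseCode());
      BufferedReader reader = new BufferedReader(
          new InputStreamReader(conn.getInputStream()));
      return reader.readLine();
    }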