Merge pull request #1136 from metamx/examples-use-default-ports

Use default ports in examples and fix incorrect docs
Fangjin Yang 2015-02-18 13:21:09 -08:00
commit c1a7e6a31c
17 changed files with 38 additions and 38 deletions

@@ -359,7 +359,7 @@ The Hadoop Index Config submitted as part of an Hadoop Index Task is identical t
To run the task:
```
-curl -X 'POST' -H 'Content-Type:application/json' -d @example_index_hadoop_task.json localhost:8087/druid/indexer/v1/task
+curl -X 'POST' -H 'Content-Type:application/json' -d @example_index_hadoop_task.json localhost:8090/druid/indexer/v1/task
```
If the task succeeds, you should see in the logs of the indexing service:

@@ -84,7 +84,7 @@ Now you can run an indexing task and a simple query to see if all the nodes have
```bash
curl -X 'POST' -H 'Content-Type:application/json' -d @#{PATH_TO}/wikipedia_realtime_task.json #{OVERLORD_PUBLIC_IP_ADDR}:#{PORT}/druid/indexer/v1/task
```
-where OVERLORD_PUBLIC_IP_ADDR should be available from the EC2 information logged to STDOUT, the Overlord port is 8080 by default, and `wikipedia_realtime_task.json` is discussed above.
+where OVERLORD_PUBLIC_IP_ADDR should be available from the EC2 information logged to STDOUT, the Overlord port is 8090 by default, and `wikipedia_realtime_task.json` is discussed above.
Issuing this request should return a task ID.
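
If you want to capture that task ID programmatically for later status checks, a minimal sketch (assuming `jq` is installed; `OVERLORD_PUBLIC_IP_ADDR` is the same placeholder used in the command above):

```bash
# Submit the task and extract the "task" field from the JSON response.
# OVERLORD_PUBLIC_IP_ADDR is a placeholder for your overlord's public IP.
TASK_ID=$(curl -s -X 'POST' -H 'Content-Type:application/json' \
  -d @wikipedia_realtime_task.json \
  "http://${OVERLORD_PUBLIC_IP_ADDR}:8090/druid/indexer/v1/task" | jq -r '.task')
echo "Submitted task: ${TASK_ID}"
```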

@@ -87,9 +87,9 @@ druid.port=8080
`druid.server.type` should be set to "historical" for your historical nodes and realtime for the realtime nodes. The Coordinator will only assign segments to a "historical" node and the broker has some intelligence around its ability to cache results when talking to a realtime node. This does not need to be set for the coordinator or the broker.
-`druid.host` should be set to the hostname and port that can be used to talk to the given server process. Basically, someone should be able to send a request to http://${druid.host}/ and actually talk to the process.
+`druid.host` should be set to the hostname that can be used to talk to the given server process. Basically, someone should be able to send a request to http://${druid.host}:${druid.port}/ and actually talk to the process.
-`druid.port` should be set to the port that the server should listen on. In the vast majority of cases, this port should be the same as what is on `druid.host`.
+`druid.port` should be set to the port that the server should listen on.
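
As a quick sanity check that the advertised host and port actually reach the process, you can hit the node's `/status` endpoint (a sketch; `DRUID_HOST` and `DRUID_PORT` are illustrative stand-ins for your configured values):

```bash
# Illustrative check: a request to http://${druid.host}:${druid.port}/ should
# reach the process; /status returns basic node information if it does.
DRUID_HOST=localhost   # stand-in for your druid.host value
DRUID_PORT=8083        # stand-in for your druid.port value
curl -s "http://${DRUID_HOST}:${DRUID_PORT}/status"
```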
Build/Run
---------

@@ -66,7 +66,7 @@ This guide walks you through the steps to create the cluster and then how to cre
1. Use the following URL to bring up the Druid Demo Cluster query interface (replace **IPAddressDruidCoordinator** with the actual druid coordinator IP Address):
-**`http://IPAddressDruidCoordinator:8082/druid/v3/demoServlet`**
+**`http://IPAddressDruidCoordinator:8081/druid/v3/demoServlet`**
As you can see from the image below, there are default values in the Dimensions and Granularity fields. Clicking **Execute** will produce a basic query result.
![Demo Query Interface](images/demo/query-1.png)

@@ -198,5 +198,5 @@ Middle managers pass their configurations down to their child peons. The middle
|`druid.indexer.runner.javaCommand`|Command required to execute java.|java|
|`druid.indexer.runner.javaOpts`|-X Java options to run the peon in its own JVM.|""|
|`druid.indexer.runner.classpath`|Java classpath for the peon.|System.getProperty("java.class.path")|
-|`druid.indexer.runner.startPort`|The port that peons begin running on.|8081|
+|`druid.indexer.runner.startPort`|The port that peons begin running on.|8100|
|`druid.indexer.runner.allowedPrefixes`|Whitelist of prefixes for configs that can be passed down to child peons.|"com.metamx", "druid", "io.druid", "user.timezone","file.encoding"|
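
Each peon is launched on the next free port starting at `startPort`, so with the default of 8100 the first peon a middle manager spawns should answer on port 8100 (an illustrative check, assuming a task is currently running):

```bash
# Illustrative: with druid.indexer.runner.startPort=8100, the first running
# peon spawned by this middle manager should respond on port 8100.
curl -s http://localhost:8100/status
```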

@@ -51,6 +51,6 @@ Middle managers pass their configurations down to their child peons. The middle
|`druid.indexer.runner.javaCommand`|Command required to execute java.|java|
|`druid.indexer.runner.javaOpts`|-X Java options to run the peon in its own JVM.|""|
|`druid.indexer.runner.classpath`|Java classpath for the peon.|System.getProperty("java.class.path")|
-|`druid.indexer.runner.startPort`|The port that peons begin running on.|8081|
+|`druid.indexer.runner.startPort`|The port that peons begin running on.|8100|
|`druid.indexer.runner.allowedPrefixes`|Whitelist of prefixes for configs that can be passed down to child peons.|"com.metamx", "druid", "io.druid", "user.timezone","file.encoding"|

@@ -81,7 +81,7 @@ Select "wikipedia".
Note that the first time you start the example, it may take some extra time because it needs to fetch various dependencies. Once the node starts up, you will see a bunch of logs about setting up properties and connecting to the data source. If everything is successful, you should see messages of the form shown below.
```
-2015-02-17T21:46:36,804 INFO [main] org.eclipse.jetty.server.ServerConnector - Started ServerConnector@79b6cf95{HTTP/1.1}{0.0.0.0:8083}
+2015-02-17T21:46:36,804 INFO [main] org.eclipse.jetty.server.ServerConnector - Started ServerConnector@79b6cf95{HTTP/1.1}{0.0.0.0:8084}
2015-02-17T21:46:36,804 INFO [main] org.eclipse.jetty.server.Server - Started @9580ms
2015-02-17T21:46:36,862 INFO [ApiDaemon] io.druid.segment.realtime.firehose.IrcFirehoseFactory - irc connection to server [irc.wikimedia.org] established
2015-02-17T21:46:36,862 INFO [ApiDaemon] io.druid.segment.realtime.firehose.IrcFirehoseFactory - Joining channel #en.wikipedia
@@ -152,7 +152,7 @@ Our query has now expanded to include a time interval, [Granularities](Granulari
To issue the query and get some results, run the following in your command line:
```
-curl -X POST 'http://localhost:8083/druid/v2/?pretty' -H 'content-type: application/json' -d @timeseries.json
+curl -X POST 'http://localhost:8084/druid/v2/?pretty' -H 'content-type: application/json' -d @timeseries.json
```
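
The tutorial defines `timeseries.json` earlier on this page; purely for reference, a hedged sketch of what a minimal timeseries query file looks like (the interval and aggregator names here are illustrative):

```bash
# Sketch of a minimal timeseries query file; names and interval are illustrative.
cat > timeseries.json <<'EOF'
{
  "queryType": "timeseries",
  "dataSource": "wikipedia",
  "intervals": ["2013-09-01/2013-10-01"],
  "granularity": "all",
  "aggregations": [
    { "type": "longSum", "name": "edit_count", "fieldName": "count" }
  ]
}
EOF
```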
Once again, you should get back a JSON blob with your results, which should look something like this:
@@ -237,7 +237,7 @@ Note that our query now includes [Filters](Filters.html). Filters are like `WHER
If you issue the query:
```
-curl -X POST 'http://localhost:8083/druid/v2/?pretty' -H 'content-type: application/json' -d @topn.json
+curl -X POST 'http://localhost:8084/druid/v2/?pretty' -H 'content-type: application/json' -d @topn.json
```
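
The tutorial's `topn.json` is likewise defined earlier on this page; as a hedged sketch, a filtered topN query has this general shape (the dimension, metric, and filter values are illustrative):

```bash
# Sketch of a filtered topN query file; dimension/metric/filter values are illustrative.
cat > topn.json <<'EOF'
{
  "queryType": "topN",
  "dataSource": "wikipedia",
  "dimension": "page",
  "metric": "edits",
  "threshold": 5,
  "intervals": ["2013-09-01/2013-10-01"],
  "granularity": "all",
  "filter": { "type": "selector", "dimension": "namespace", "value": "article" },
  "aggregations": [
    { "type": "longSum", "name": "edits", "fieldName": "count" }
  ]
}
EOF
```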
You should see an answer to our question. As an example, some results are shown below:

@@ -59,7 +59,7 @@ Make sure you've completed [Loading Your Data](Loading-Your-Data-Part-1.html) so
Run the query against your broker:
```bash
curl -X POST "http://localhost:8080/druid/v2/?pretty" -H 'Content-type: application/json' -d @query.body
curl -X POST "http://localhost:8082/druid/v2/?pretty" -H 'Content-type: application/json' -d @query.body
```
And get:

@@ -166,13 +166,13 @@ Okay, so what is happening here? The "type" field indicates the type of task we
Let's send our task to the indexing service now:
```bash
-curl -X 'POST' -H 'Content-Type:application/json' -d @examples/indexing/wikipedia_index_task.json localhost:8087/druid/indexer/v1/task
+curl -X 'POST' -H 'Content-Type:application/json' -d @examples/indexing/wikipedia_index_task.json localhost:8090/druid/indexer/v1/task
```
Issuing the request should return a task ID like so:
```bash
-curl -X 'POST' -H 'Content-Type:application/json' -d @examples/indexing/wikipedia_index_task.json localhost:8087/druid/indexer/v1/task
+curl -X 'POST' -H 'Content-Type:application/json' -d @examples/indexing/wikipedia_index_task.json localhost:8090/druid/indexer/v1/task
{"task":"index_wikipedia_2013-10-09T21:30:32.802Z"}
```
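
You can poll the overlord for the task's progress using the returned ID (a sketch, shown with the ID from the example response above; the overlord's default port is 8090):

```bash
# Check the status of the task submitted above.
curl localhost:8090/druid/indexer/v1/task/index_wikipedia_2013-10-09T21:30:32.802Z/status
```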
@@ -200,7 +200,7 @@ You should see the following logs on the coordinator:
```bash
2013-10-09 21:41:54,368 INFO [Coordinator-Exec--0] io.druid.server.coordinator.helper.DruidCoordinatorLogger - [_default_tier] : Assigned 1 segments among 1 servers
2013-10-09 21:41:54,369 INFO [Coordinator-Exec--0] io.druid.server.coordinator.helper.DruidCoordinatorLogger - Load Queues:
-2013-10-09 21:41:54,369 INFO [Coordinator-Exec--0] io.druid.server.coordinator.helper.DruidCoordinatorLogger - Server[localhost:8081, historical, _default_tier] has 1 left to load, 0 left to drop, 4,477 bytes queued, 4,477 bytes served.
+2013-10-09 21:41:54,369 INFO [Coordinator-Exec--0] io.druid.server.coordinator.helper.DruidCoordinatorLogger - Server[localhost:8083, historical, _default_tier] has 1 left to load, 0 left to drop, 4,477 bytes queued, 4,477 bytes served.
```
These logs indicate that the coordinator has assigned our new segment to the historical node to download and serve. If you look at the historical node logs, you should see:
@@ -209,7 +209,7 @@ These logs indicate that the coordinator has assigned our new segment to the his
2013-10-09 21:41:54,369 INFO [ZkCoordinator-0] io.druid.server.coordination.ZkCoordinator - Loading segment wikipedia_2013-08-31T00:00:00.000Z_2013-09-01T00:00:00.000Z_2013-10-09T21:41:41.151Z
2013-10-09 21:41:54,369 INFO [ZkCoordinator-0] io.druid.segment.loading.LocalDataSegmentPuller - Unzipping local file[/tmp/druid/localStorage/wikipedia/2013-08-31T00:00:00.000Z_2013-09-01T00:00:00.000Z/2013-10-09T21:41:41.151Z/0/index.zip] to [/tmp/druid/indexCache/wikipedia/2013-08-31T00:00:00.000Z_2013-09-01T00:00:00.000Z/2013-10-09T21:41:41.151Z/0]
2013-10-09 21:41:54,370 INFO [ZkCoordinator-0] io.druid.utils.CompressionUtils - Unzipping file[/tmp/druid/localStorage/wikipedia/2013-08-31T00:00:00.000Z_2013-09-01T00:00:00.000Z/2013-10-09T21:41:41.151Z/0/index.zip] to [/tmp/druid/indexCache/wikipedia/2013-08-31T00:00:00.000Z_2013-09-01T00:00:00.000Z/2013-10-09T21:41:41.151Z/0]
-2013-10-09 21:41:54,380 INFO [ZkCoordinator-0] io.druid.server.coordination.SingleDataSegmentAnnouncer - Announcing segment[wikipedia_2013-08-31T00:00:00.000Z_2013-09-01T00:00:00.000Z_2013-10-09T21:41:41.151Z] to path[/druid/servedSegments/localhost:8081/wikipedia_2013-08-31T00:00:00.000Z_2013-09-01T00:00:00.000Z_2013-10-09T21:41:41.151Z]
+2013-10-09 21:41:54,380 INFO [ZkCoordinator-0] io.druid.server.coordination.SingleDataSegmentAnnouncer - Announcing segment[wikipedia_2013-08-31T00:00:00.000Z_2013-09-01T00:00:00.000Z_2013-10-09T21:41:41.151Z] to path[/druid/servedSegments/localhost:8083/wikipedia_2013-08-31T00:00:00.000Z_2013-09-01T00:00:00.000Z_2013-10-09T21:41:41.151Z]
```
Once the segment is announced, it is queryable. Now you should be able to query the data.
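
One way to confirm the handoff from the outside (a sketch, assuming the coordinator runs on its default port 8081) is the coordinator's load-status endpoint, which reports what fraction of each datasource's segments is being served:

```bash
# 100.0 for "wikipedia" means all of its segments are loaded and queryable.
curl "http://localhost:8081/druid/coordinator/v1/loadstatus"
```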
@@ -232,7 +232,7 @@ Console
The indexing service overlord has a console located at:
```bash
-localhost:8087/console.html
+localhost:8090/console.html
```
On this console, you can look at statuses and logs of recently submitted and completed tasks.
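
The same information is also available from the command line via the overlord API; for example, to fetch a task's log (a sketch, reusing the task ID from the earlier example):

```bash
# Retrieve the log of a running or completed task from the overlord.
curl localhost:8090/druid/indexer/v1/task/index_wikipedia_2013-10-09T21:30:32.802Z/log
```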
@@ -322,7 +322,7 @@ If you are curious about what all this configuration means, see [here](Tasks.htm
To submit the task:
```bash
-curl -X 'POST' -H 'Content-Type:application/json' -d @examples/indexing/wikipedia_index_hadoop_task.json localhost:8087/druid/indexer/v1/task
+curl -X 'POST' -H 'Content-Type:application/json' -d @examples/indexing/wikipedia_index_hadoop_task.json localhost:8090/druid/indexer/v1/task
```
After the task is completed, the segment should be assigned to your historical node. You should be able to query the segment.
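
A quick way to verify (a sketch, assuming the broker runs on its default port 8082) is a timeBoundary query, which returns the time range the datasource now covers:

```bash
# Confirm the batch-indexed segment is queryable through the broker.
curl -X POST 'http://localhost:8082/druid/v2/?pretty' \
  -H 'content-type: application/json' \
  -d '{"queryType":"timeBoundary","dataSource":"wikipedia"}'
```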

@@ -115,7 +115,7 @@ download](http://static.druid.io/artifacts/releases/druid-services-0.7.0-rc3-bin
```
...
-2015-02-17T23:01:50,220 INFO [chief-wikipedia] io.druid.server.coordination.BatchDataSegmentAnnouncer - Announcing segment[wikipedia_2013-08-31T00:00:00.000Z_2013-09-01T00:00:00.000Z_2013-08-31T00:00:00.000Z] at path[/druid/segments/localhost:8083/2015-02-17T23:01:50.219Z0]
+2015-02-17T23:01:50,220 INFO [chief-wikipedia] io.druid.server.coordination.BatchDataSegmentAnnouncer - Announcing segment[wikipedia_2013-08-31T00:00:00.000Z_2013-09-01T00:00:00.000Z_2013-08-31T00:00:00.000Z] at path[/druid/segments/localhost:8084/2015-02-17T23:01:50.219Z0]
...
```
@@ -126,7 +126,7 @@ download](http://static.druid.io/artifacts/releases/druid-services-0.7.0-rc3-bin
```bash
curl -XPOST -H'Content-type: application/json' \
"http://localhost:8083/druid/v2/?pretty" \
"http://localhost:8084/druid/v2/?pretty" \
-d'{"queryType":"timeBoundary","dataSource":"wikipedia"}'
```

@@ -158,7 +158,7 @@ In the directory, there should be a `runtime.properties` file with the following
```
druid.host=localhost
-druid.port=8082
+druid.port=8081
druid.service=coordinator
# The coordinator begins assignment operations after the start delay.
@@ -189,7 +189,7 @@ In the directory we just created, we should have the file `runtime.properties` w
```
druid.host=localhost
-druid.port=8081
+druid.port=8083
druid.service=historical
# We can only scan 1 segment in parallel with these configs.
@@ -222,7 +222,7 @@ In the directory, there should be a `runtime.properties` file with the following
```
druid.host=localhost
-druid.port=8080
+druid.port=8082
druid.service=broker
druid.broker.cache.useCache=true
@@ -289,7 +289,7 @@ The configurations are located in `config/realtime/runtime.properties` and shoul
```
druid.host=localhost
-druid.port=8083
+druid.port=8084
druid.service=realtime
# We can only scan 1 segment in parallel with these configs.
@@ -308,24 +308,24 @@ Once the real-time node starts up, it should begin ingesting data and handing th
At any point during ingestion, we can query for data. For example:
```
-curl -X POST 'http://localhost:8080/druid/v2/?pretty' -H 'content-type: application/json' -d@examples/wikipedia/query.body
+curl -X POST 'http://localhost:8082/druid/v2/?pretty' -H 'content-type: application/json' -d@examples/wikipedia/query.body
```
This query will span both realtime and historical nodes. If you're curious, you can query the historical node directly by sending the same query to the historical node's port:
```
-curl -X POST 'http://localhost:8081/druid/v2/?pretty' -H 'content-type: application/json' -d@examples/wikipedia/query.body
+curl -X POST 'http://localhost:8083/druid/v2/?pretty' -H 'content-type: application/json' -d@examples/wikipedia/query.body
```
This query may produce no results if the realtime node hasn't run long enough to hand off the segment (we configured it above to be 5 minutes). Query the realtime node directly by sending the same query to the realtime node's port:
```
-curl -X POST 'http://localhost:8083/druid/v2/?pretty' -H 'content-type: application/json' -d@examples/wikipedia/query.body
+curl -X POST 'http://localhost:8084/druid/v2/?pretty' -H 'content-type: application/json' -d@examples/wikipedia/query.body
```
The realtime query results will reflect data that was recently indexed from wikipedia and not yet handed off to the historical node. Once the historical node acknowledges that it has loaded the segment, the realtime node will drop the segment.
-Querying the historical and realtime node directly is useful for understanding how the segment handling is working, but if you just want to run a query for all the data (realtime and historical), then send the query to the broker at port 8080 (which is what we did in the first example). The broker will send the query to the historical and realtime nodes and merge the results.
+Querying the historical and realtime node directly is useful for understanding how the segment handling is working, but if you just want to run a query for all the data (realtime and historical), then send the query to the broker at port 8082 (which is what we did in the first example). The broker will send the query to the historical and realtime nodes and merge the results.
For more information on querying, see this [link](Querying.html).

@@ -45,7 +45,7 @@ for delay in 5 30 30 30 30 30 30 30 30 30 30
echo "sleep for $delay seconds..."
echo " "
sleep $delay
-curl -X POST 'http://localhost:8083/druid/v2/?pretty' -H 'content-type: application/json' -d "`cat ${QUERY_FILE}`"
+curl -X POST 'http://localhost:8084/druid/v2/?pretty' -H 'content-type: application/json' -d "`cat ${QUERY_FILE}`"
echo " "
echo " "
done

@@ -15,8 +15,8 @@
# limitations under the License.
#
-druid.host=localhost
-druid.port=8080
+#druid.host=localhost
+#druid.port=8082
druid.service=broker
# We enable using the local query cache here

@@ -15,8 +15,8 @@
# limitations under the License.
#
-druid.host=localhost
-druid.port=8082
+#druid.host=localhost
+#druid.port=8081
druid.service=coordinator
# The coordinator begins assignment operations after the start delay.

@@ -15,8 +15,8 @@
# limitations under the License.
#
-druid.host=localhost
-druid.port=8081
+#druid.host=localhost
+#druid.port=8083
druid.service=historical
# We can only scan 1 segment in parallel with these configs.

@@ -15,8 +15,8 @@
# limitations under the License.
#
-druid.host=localhost
-druid.port=8087
+#druid.host=localhost
+#druid.port=8090
druid.service=overlord
# Run the overlord in local mode with a single peon to execute tasks

@@ -15,8 +15,8 @@
# limitations under the License.
#
-druid.host=localhost
-druid.port=8083
+#druid.host=localhost
+#druid.port=8084
druid.service=realtime
# We can only scan 1 segment in parallel with these configs.
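
With `druid.host` and `druid.port` commented out, each service falls back to its built-in default; for example, a realtime node started from this config should come up on port 8084 (an illustrative check, not part of the commit):

```bash
# Illustrative: the realtime node should bind its default port 8084 when
# druid.port is commented out in runtime.properties.
curl -s http://localhost:8084/status
```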