HBASE-18311 fix formatting in quickstart
Signed-off-by: tedyu <yuzhihong@gmail.com>
parent 61534931cc
commit 8318a092ac
@@ -145,7 +145,7 @@ NOTE: Java needs to be installed and available.
 If you get an error indicating that Java is not installed,
 but it is on your system, perhaps in a non-standard location,
 edit the _conf/hbase-env.sh_ file and modify the `JAVA_HOME`
-setting to point to the directory that contains _bin/java_ your system.
+setting to point to the directory that contains _bin/java_ on your system.


 [[shell_exercises]]
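(Not part of this change, just context for readers following along: the `JAVA_HOME` edit described above is a single `export` line in _conf/hbase-env.sh_; the JDK path below is only an example and will differ per system.)

[source,bash]
----
# conf/hbase-env.sh: point JAVA_HOME at the directory that contains bin/java.
# The path shown is an example; use the JDK location on your machine.
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk
----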
@@ -320,8 +320,7 @@ This procedure will create a totally new directory where HBase will store its data
 . Configure HBase.
 +
 Edit the _hbase-site.xml_ configuration.
-First, add the following property.
-which directs HBase to run in distributed mode, with one JVM instance per daemon.
+First, add the following property which directs HBase to run in distributed mode, with one JVM instance per daemon.
 +
 [source,xml]
 ----
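(The property itself sits just below this hunk. For context, a minimal sketch of the _hbase-site.xml_ addition the merged sentence refers to, with the property name taken from the HBase configuration reference, is:)

[source,xml]
----
<!-- Run HBase in distributed mode, with one JVM instance per daemon. -->
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
----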
@@ -494,15 +493,14 @@ $ cat id_rsa.pub >> ~/.ssh/authorized_keys

 . Test password-less login.
 +
-If you performed the procedure correctly, if you SSH from `node-a` to either of the other nodes, using the same username, you should not be prompted for a password.
+If you performed the procedure correctly, you should not be prompted for a password when you SSH from `node-a` to either of the other nodes using the same username.

 . Since `node-b` will run a backup Master, repeat the procedure above, substituting `node-b` everywhere you see `node-a`.
 Be sure not to overwrite your existing _.ssh/authorized_keys_ files, but concatenate the new key onto the existing file using the `>>` operator rather than the `>` operator.

 .Procedure: Prepare `node-a`

-`node-a` will run your primary master and ZooKeeper processes, but no RegionServers.
-. Stop the RegionServer from starting on `node-a`.
+`node-a` will run your primary master and ZooKeeper processes, but no RegionServers. Stop the RegionServer from starting on `node-a`.

 . Edit _conf/regionservers_ and remove the line which contains `localhost`. Add lines with the hostnames or IP addresses for `node-b` and `node-c`.
 +
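(Illustrative only, not part of the diff: with the keys in place, the password-less login test and the edited _conf/regionservers_ file might look like the following, using the `node-a`/`node-b`/`node-c` hostnames from this walkthrough.)

[source,bash]
----
# From node-a, this should log in to node-b without prompting for a password.
$ ssh node-b.example.com

# conf/regionservers after removing the `localhost` line: one host per line.
$ cat conf/regionservers
node-b.example.com
node-c.example.com
----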
@@ -519,7 +517,7 @@ In this demonstration, the hostname is `node-b.example.com`.
 . Configure ZooKeeper
 +
 In reality, you should carefully consider your ZooKeeper configuration.
-You can find out more about configuring ZooKeeper in <<zookeeper,zookeeper>>.
+You can find out more about configuring ZooKeeper in <<zookeeper,zookeeper>> section.
 This configuration will direct HBase to start and manage a ZooKeeper instance on each node of the cluster.
 +
 On `node-a`, edit _conf/hbase-site.xml_ and add the following properties.
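(The properties added by this step are outside the hunk. A sketch of what typically goes into _conf/hbase-site.xml_ on `node-a`, with property names from the HBase configuration reference, hostnames from this walkthrough, and an example data directory, could be:)

[source,xml]
----
<!-- Hosts that will each run a ZooKeeper instance managed by HBase. -->
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>node-a.example.com,node-b.example.com,node-c.example.com</value>
</property>
<!-- Example location for ZooKeeper data; any writable local path works. -->
<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>/usr/local/zookeeper</value>
</property>
----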
@@ -607,7 +605,7 @@ $ jps
 ----
 ====
 +
-.`node-a` `jps` Output
+.`node-c` `jps` Output
 ====
 ----
 $ jps
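(For orientation, with invented PIDs and process names as described in the NOTE that follows: a node running only a RegionServer plus the HBase-managed ZooKeeper typically reports something like this.)

----
$ jps
15930 HRegionServer
15838 HQuorumPeer
16421 Jps
----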
@@ -621,9 +619,9 @@ $ jps
 [NOTE]
 ====
 The `HQuorumPeer` process is a ZooKeeper instance which is controlled and started by HBase.
-If you use ZooKeeper this way, it is limited to one instance per cluster node, , and is appropriate for testing only.
+If you use ZooKeeper this way, it is limited to one instance per cluster node and is appropriate for testing only.
 If ZooKeeper is run outside of HBase, the process is called `QuorumPeer`.
-For more about ZooKeeper configuration, including using an external ZooKeeper instance with HBase, see <<zookeeper,zookeeper>>.
+For more about ZooKeeper configuration, including using an external ZooKeeper instance with HBase, see <<zookeeper,zookeeper>> section.
 ====

 . Browse to the Web UI.
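(One concrete setting behind the NOTE's remark about running ZooKeeper outside of HBase, not shown in this diff: whether HBase manages ZooKeeper itself is controlled by `HBASE_MANAGES_ZK` in _conf/hbase-env.sh_, roughly as sketched below.)

[source,bash]
----
# conf/hbase-env.sh
# true  => HBase starts and stops ZooKeeper itself (HQuorumPeer); testing only.
# false => HBase expects an externally managed ZooKeeper ensemble (QuorumPeer).
export HBASE_MANAGES_ZK=false
----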
@@ -637,15 +635,15 @@ Master and 60030 for each RegionServer to 16010 for the Master and 16030 for the
 +
 If everything is set up correctly, you should be able to connect to the UI for the Master
 `http://node-a.example.com:16010/` or the secondary master at `http://node-b.example.com:16010/`
-for the secondary master, using a web browser.
+using a web browser.
 If you can connect via `localhost` but not from another host, check your firewall rules.
 You can see the web UI for each of the RegionServers at port 16030 of their IP addresses, or by
 clicking their links in the web UI for the Master.

 . Test what happens when nodes or services disappear.
 +
-With a three-node cluster like you have configured, things will not be very resilient.
-Still, you can test what happens when the primary Master or a RegionServer disappears, by killing the processes and watching the logs.
+With a three-node cluster you have configured, things will not be very resilient.
+You can still test the behavior of the primary Master or a RegionServer by killing the associated processes and watching the logs.


 === Where to go next
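(To make the failure test above concrete: the commands below are an assumption based on HBase's standard daemon scripts and are not part of this change. Stop a RegionServer on one of the worker nodes and watch the Master react.)

[source,bash]
----
# On node-b or node-c: stop the local RegionServer.
$ bin/hbase-daemon.sh stop regionserver

# On node-a: follow the Master log (and the web UI on port 16010) while the
# lost RegionServer is noticed, then restart it on the node you stopped.
$ tail -f logs/hbase-*-master-*.log
----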