From 39a0f56d47e2149bda6cc224163271b93d9fc89c Mon Sep 17 00:00:00 2001
From: Michael Stack
Date: Wed, 5 Dec 2012 21:31:57 +0000
Subject: [PATCH] More on bad disk handling

git-svn-id: https://svn.apache.org/repos/asf/hbase/trunk@1417657 13f79535-47bb-0310-9956-ffa450edef68
---
 src/docbkx/ops_mgt.xml | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/src/docbkx/ops_mgt.xml b/src/docbkx/ops_mgt.xml
index 16c19da8960..1edd5c05130 100644
--- a/src/docbkx/ops_mgt.xml
+++ b/src/docbkx/ops_mgt.xml
@@ -387,11 +387,18 @@ false
     to go down spewing errors in dmesg -- or for some reason, run much slower than their
     companions.  In this case you want to decommission the disk.  You have two options.  You can
     decommission the datanode
-    or, less disruptive in that only the bad disks data will be rereplicated, is that you can stop the datanode,
+    or, less disruptive in that only the bad disk's data will be rereplicated, you can stop the datanode,
     unmount the bad volume (You can't umount a volume while the datanode is using it), and then restart the
     datanode (presuming you have set dfs.datanode.failed.volumes.tolerated > 0).  The regionserver will
     throw some errors in its logs as it recalibrates where to get its data from -- it will likely roll its
     WAL log too -- but in general but for some latency spikes, it should keep on chugging.
+
+    If you are doing short-circuit reads, you will have to move the regions off the regionserver
+    before you stop the datanode: with short-circuit reads the regionserver already has the block
+    files open, so even though they are chmod'd so it cannot open them anew, it will keep reading
+    blocks from the bad disk even after the datanode is down.  Move the regions back after you
+    restart the datanode.
+
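
For concreteness, the procedure the patch describes can be sketched as a short shell
sequence. This is an illustration only, not part of the patch: it assumes the failing
volume is mounted at /data/d3, the datanode is managed with Hadoop's hadoop-daemon.sh,
regions are shuffled with the region_mover.rb script that ships in HBase's bin/
directory, and badhost.example.com is the affected host -- all placeholders to adapt
to your cluster.

    # From $HBASE_HOME: unload regions from the regionserver on the affected host
    # first (needed when short-circuit reads are enabled, per the note above).
    ./bin/hbase org.jruby.Main bin/region_mover.rb unload badhost.example.com

    # On the affected host, from $HADOOP_HOME: stop the datanode,
    # then unmount the failing volume (umount usually needs root).
    ./bin/hadoop-daemon.sh stop datanode
    umount /data/d3

    # Restart the datanode; per the text above, dfs.datanode.failed.volumes.tolerated
    # should be > 0 in hdfs-site.xml so the datanode tolerates the missing volume.
    ./bin/hadoop-daemon.sh start datanode

    # Back on the HBase side, move the regions back.
    ./bin/hbase org.jruby.Main bin/region_mover.rb load badhost.example.com

The unload/load steps are what keep a short-circuit-reading regionserver from continuing
to serve blocks off the bad disk through its already-open file descriptors.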