HBASE-4968 Add to troubleshooting workaround for direct buffer oome's.
git-svn-id: https://svn.apache.org/repos/asf/hbase/trunk@1211254 13f79535-47bb-0310-9956-ffa450edef68
@@ -543,6 +543,21 @@ hadoop 17789 155 35.2 9067824 8604364 ? S<l Mar04 9855:48 /usr/java/j
</para>
</section>
<section xml:id="trouble.client.oome.directmemory.leak">
<title>Client running out of memory though heap size seems to be stable (but the off-heap/direct heap keeps growing)</title>
<para>
You are likely running into the issue that is described and worked through in
the mail thread <link xlink:href="http://search-hadoop.com/m/ubhrX8KvcH/Suspected+memory+leak&amp;subj=Re+Suspected+memory+leak">HBase, mail # user - Suspected memory leak</link>
and continued over in <link xlink:href="http://search-hadoop.com/m/p2Agc1Zy7Va/MaxDirectMemorySize+Was%253A+Suspected+memory+leak&amp;subj=Re+FeedbackRe+Suspected+memory+leak">HBase, mail # dev - FeedbackRe: Suspected memory leak</link>.
A workaround is passing your client-side JVM a reasonable value for <code>-XX:MaxDirectMemorySize</code>. By default,
the <varname>MaxDirectMemorySize</varname> is equal to your <code>-Xmx</code> max heapsize setting (if <code>-Xmx</code> is set).
Try setting it to something smaller (for example, one user had success setting it to <code>1g</code> when
they had a client-side heap of <code>12g</code>). If you set it too small, it will bring on <code>FullGCs</code>, so keep
it a bit hefty. You want to make this setting client-side only, especially if you are running the new experimental
server-side off-heap cache, since this feature depends on being able to use big direct buffers (you may have to keep
separate client-side and server-side config dirs).
</para>
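As a sketch of the workaround above (the exact values are illustrative, following the 1g-direct / 12g-heap example; <varname>HBASE_OPTS</varname> in a client-only copy of <filename>hbase-env.sh</filename> is one place such flags can live):

```shell
# Cap direct (off-heap) memory on the client JVM while keeping a large heap.
# Keep this in a client-only config dir so the setting does not reach servers.
export HBASE_OPTS="$HBASE_OPTS -Xmx12g -XX:MaxDirectMemorySize=1g"
echo "$HBASE_OPTS"
```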
</section>
</section>
<section xml:id="trouble.mapreduce">