HBASE-10392 Correct references to hbase.regionserver.global.memstore.upperLimit
git-svn-id: https://svn.apache.org/repos/asf/hbase/trunk@1570721 13f79535-47bb-0310-9956-ffa450edef68
Parent: e4f8a7419f
Commit: 9924b66e25
@@ -74,8 +74,13 @@ public class HBaseConfiguration extends Configuration {
   }
 
   private static void checkForClusterFreeMemoryLimit(Configuration conf) {
-    float globalMemstoreLimit = conf.getFloat("hbase.regionserver.global.memstore.upperLimit", 0.4f);
-    int gml = (int)(globalMemstoreLimit * CONVERT_TO_PERCENTAGE);
+    if (conf.get("hbase.regionserver.global.memstore.upperLimit") != null) {
+      LOG.warn("hbase.regionserver.global.memstore.upperLimit is deprecated by "
+          + "hbase.regionserver.global.memstore.size");
+    }
+    float globalMemstoreSize = conf.getFloat("hbase.regionserver.global.memstore.size",
+        conf.getFloat("hbase.regionserver.global.memstore.upperLimit", 0.4f));
+    int gml = (int)(globalMemstoreSize * CONVERT_TO_PERCENTAGE);
     float blockCacheUpperLimit =
         conf.getFloat(HConstants.HFILE_BLOCK_CACHE_SIZE_KEY,
             HConstants.HFILE_BLOCK_CACHE_SIZE_DEFAULT);
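The core of this hunk is the backward-compatible read: the new key is consulted first and the deprecated key only serves as a fallback default. A minimal, self-contained sketch of that pattern follows; the class name and standalone main method are illustrative only, while the property names and the nested getFloat call come from the diff above.

    import org.apache.hadoop.conf.Configuration;

    public class MemstoreSizeCompatSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Simulate a cluster that still carries only the deprecated key.
        conf.setFloat("hbase.regionserver.global.memstore.upperLimit", 0.45f);

        // New key wins when present; otherwise fall back to the deprecated key, then to 0.4f.
        float globalMemstoreSize = conf.getFloat("hbase.regionserver.global.memstore.size",
            conf.getFloat("hbase.regionserver.global.memstore.upperLimit", 0.4f));

        System.out.println("effective memstore fraction: " + globalMemstoreSize); // prints 0.45
      }
    }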
@@ -87,10 +92,10 @@ public class HBaseConfiguration extends Configuration {
         "Current heap configuration for MemStore and BlockCache exceeds " +
         "the threshold required for successful cluster operation. " +
         "The combined value cannot exceed 0.8. Please check " +
-        "the settings for hbase.regionserver.global.memstore.upperLimit and " +
+        "the settings for hbase.regionserver.global.memstore.size and " +
         "hfile.block.cache.size in your configuration. " +
-        "hbase.regionserver.global.memstore.upperLimit is " +
-        globalMemstoreLimit +
+        "hbase.regionserver.global.memstore.size is " +
+        globalMemstoreSize +
         " hfile.block.cache.size is " + blockCacheUpperLimit);
     }
   }
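The message above reflects a simple budget rule: the memstore fraction and the block cache fraction together may not claim more than 0.8 of the heap. A rough standalone check of that rule is sketched below; the real code works with integer percentages via CONVERT_TO_PERCENTAGE, so treat this float comparison as an approximation rather than the actual implementation.

    public class HeapBudgetCheckSketch {
      public static void main(String[] args) {
        float globalMemstoreSize = 0.4f;   // hbase.regionserver.global.memstore.size
        float blockCacheUpperLimit = 0.4f; // hfile.block.cache.size

        // MemStore + BlockCache must leave at least 20% of the heap for everything else.
        if (globalMemstoreSize + blockCacheUpperLimit > 0.8f) {
          System.out.println("Combined MemStore/BlockCache configuration exceeds 0.8 of the heap");
        } else {
          System.out.println("Heap budget OK");
        }
      }
    }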
@@ -132,10 +132,7 @@ class MemStoreFlusher implements FlushRequester {
   }
 
   /**
-   * Calculate global memstore size for configured percentage of <code>max</code>.
-   * @param max
-   * @param c
-   * @return Limit.
+   * Retrieve global memstore configured size as percentage of total heap.
    */
   static float getGlobalMemStorePercent(final Configuration c) {
     float limit = c.getFloat(MEMSTORE_SIZE_KEY,
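The reworded javadoc describes a fraction of total heap rather than of an explicit max argument. To illustrate how such a fraction is typically turned into a byte limit, here is a small sketch; reading the max heap via ManagementFactory is an assumption for illustration and is not claimed to be what MemStoreFlusher itself does.

    import java.lang.management.ManagementFactory;

    public class GlobalMemstoreLimitSketch {
      public static void main(String[] args) {
        float memstoreFraction = 0.4f; // e.g. value of hbase.regionserver.global.memstore.size
        long maxHeap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage().getMax();
        long globalMemstoreLimitBytes = (long) (maxHeap * memstoreFraction);
        System.out.println("global memstore limit (bytes): " + globalMemstoreLimitBytes);
      }
    }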
@@ -742,4 +739,4 @@ class MemStoreFlusher implements FlushRequester {
 
   enum FlushType {
     NORMAL, ABOVE_LOWER_MARK, ABOVE_HIGHER_MARK;
   }
 }
@@ -1051,7 +1051,7 @@ false
     <para>When configuring regions for multiple tables, note that most region settings can be set on a per-table basis via <link xlink:href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HTableDescriptor.html">HTableDescriptor</link>, as well as shell commands. These settings will override the ones in <varname>hbase-site.xml</varname>. That is useful if your tables have different workloads/use cases.</para>
     <para>Also note that in the discussion of region sizes here, <emphasis role="bold">HDFS replication factor is not (and should not be) taken into account, whereas other factors <xref linkend="ops.capacity.nodes.datasize" xrefstyle="template:above" /> should be.</emphasis> So, if your data is compressed and replicated 3 ways by HDFS, "9 Gb region" means 9 Gb of compressed data. HDFS replication factor only affects your disk usage and is invisible to most HBase code.</para>
     <section xml:id="ops.capacity.regions.count"><title>Number of regions per RS - upper bound</title>
-    <para>In production scenarios, where you have a lot of data, you are normally concerned with the maximum number of regions you can have per server. <xref linkend="too_many_regions" /> has technical discussion on the subject; in short, maximum number of regions is mostly determined by memstore memory usage. Each region has its own memstores; these grow up to a configurable size; usually in 128-256Mb range, see <xref linkend="hbase.hregion.memstore.flush.size" />. There's one memstore per column family (so there's only one per region if there's one CF in the table). RS dedicates some fraction of total memory (see <xref linkend="hbase.regionserver.global.memstore.upperLimit" />) to region memstores. If this memory is exceeded (too much memstore usage), undesirable consequences such as unresponsive server, or later compaction storms, can result. Thus, a good starting point for the number of regions per RS (assuming one table) is <programlisting>(RS memory)*(total memstore fraction)/((memstore size)*(# column families))</programlisting>
+    <para>In production scenarios, where you have a lot of data, you are normally concerned with the maximum number of regions you can have per server. <xref linkend="too_many_regions" /> has technical discussion on the subject; in short, maximum number of regions is mostly determined by memstore memory usage. Each region has its own memstores; these grow up to a configurable size; usually in 128-256Mb range, see <xref linkend="hbase.hregion.memstore.flush.size" />. There's one memstore per column family (so there's only one per region if there's one CF in the table). RS dedicates some fraction of total memory (see <xref linkend="hbase.regionserver.global.memstore.size" />) to region memstores. If this memory is exceeded (too much memstore usage), undesirable consequences such as unresponsive server, or later compaction storms, can result. Thus, a good starting point for the number of regions per RS (assuming one table) is <programlisting>(RS memory)*(total memstore fraction)/((memstore size)*(# column families))</programlisting>
     E.g. if RS has 16Gb RAM, with default settings, it is 16384*0.4/128 ~ 51 regions per RS is a starting point. The formula can be extended to multiple tables; if they all have the same configuration, just use total number of families.</para>
     <para>This number can be adjusted; the formula above assumes all your regions are filled at approximately the same rate. If only a fraction of your regions are going to be actively written to, you can divide the result by that fraction to get a larger region count. Then, even if all regions are written to, all region memstores are not filled evenly, and eventually jitter appears even if they are (due to limited number of concurrent flushes). Thus, one can have as many as 2-3 times more regions than the starting point; however, increased numbers carry increased risk.</para>
     <para>For write-heavy workload, memstore fraction can be increased in configuration at the expense of block cache; this will also allow one to have more regions.</para>
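The regions-per-RS heuristic quoted in that paragraph is plain arithmetic; the sketch below simply restates it with the example numbers from the text (16 GB heap, default 0.4 memstore fraction, 128 MB flush size, one column family), all of which are illustrative values.

    public class RegionCountEstimateSketch {
      public static void main(String[] args) {
        double rsMemoryMb = 16384;        // RegionServer heap in MB (16 GB)
        double memstoreFraction = 0.4;    // hbase.regionserver.global.memstore.size
        double memstoreFlushSizeMb = 128; // hbase.hregion.memstore.flush.size
        int columnFamilies = 1;

        double regions = (rsMemoryMb * memstoreFraction) / (memstoreFlushSizeMb * columnFamilies);
        System.out.printf("~%.0f regions per RegionServer%n", regions); // ~51
      }
    }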
@@ -175,15 +175,15 @@
     A memory setting for the RegionServer process.
     </para>
     </section>
-    <section xml:id="perf.rs.memstore.upperlimit">
-    <title><varname>hbase.regionserver.global.memstore.upperLimit</varname></title>
-    <para>See <xref linkend="hbase.regionserver.global.memstore.upperLimit"/>.
+    <section xml:id="perf.rs.memstore.size">
+    <title><varname>hbase.regionserver.global.memstore.size</varname></title>
+    <para>See <xref linkend="hbase.regionserver.global.memstore.size"/>.
     This memory setting is often adjusted for the RegionServer process depending on needs.
     </para>
     </section>
-    <section xml:id="perf.rs.memstore.lowerlimit">
-    <title><varname>hbase.regionserver.global.memstore.lowerLimit</varname></title>
-    <para>See <xref linkend="hbase.regionserver.global.memstore.lowerLimit"/>.
+    <section xml:id="perf.rs.memstore.size.lower.limit">
+    <title><varname>hbase.regionserver.global.memstore.size.lower.limit</varname></title>
+    <para>See <xref linkend="hbase.regionserver.global.memstore.size.lower.limit"/>.
     This memory setting is often adjusted for the RegionServer process depending on needs.
     </para>
     </section>