HBASE-13483 onheap is not a valid bucket cache IO engine

This commit is contained in:
Misty Stanley-Jones 2015-08-10 09:50:29 +10:00
parent 38b94709ee
commit a78e6e9499
1 changed file with 35 additions and 35 deletions


@@ -660,7 +660,7 @@ possible configurations would overwhelm and obscure the important.
<name>hbase.hregion.max.filesize</name>
<value>10737418240</value>
<description>
Maximum HFile size. If the sum of the sizes of a region's HFiles has grown to exceed this
value, the region is split in two.</description>
</property>
<property>
@@ -686,8 +686,8 @@ possible configurations would overwhelm and obscure the important.
<property>
<name>hbase.hstore.compactionThreshold</name>
<value>3</value>
<description> If more than this number of StoreFiles exist in any one Store
(one StoreFile is written per flush of MemStore), a compaction is run to rewrite all
StoreFiles into a single StoreFile. Larger values delay compaction, but when compaction does
occur, it takes longer to complete.</description>
</property>
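The compaction trigger documented above is normally overridden per cluster in hbase-site.xml rather than edited in hbase-default.xml. A minimal sketch (the value shown is illustrative, not a recommendation):

```xml
<!-- hbase-site.xml: illustrative override; 3 is the shipped default.
     A higher threshold delays compaction but makes each compaction larger. -->
<property>
  <name>hbase.hstore.compactionThreshold</name>
  <value>4</value>
</property>
```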
@@ -709,24 +709,24 @@ possible configurations would overwhelm and obscure the important.
<name>hbase.hstore.blockingWaitTime</name>
<value>90000</value>
<description> The time for which a region will block updates after reaching the StoreFile limit
defined by hbase.hstore.blockingStoreFiles. After this time has elapsed, the region will stop
blocking updates even if a compaction has not been completed.</description>
</property>
<property>
<name>hbase.hstore.compaction.min</name>
<value>3</value>
<description>The minimum number of StoreFiles which must be eligible for compaction before
compaction can run. The goal of tuning hbase.hstore.compaction.min is to avoid ending up with
too many tiny StoreFiles to compact. Setting this value to 2 would cause a minor compaction
each time you have two StoreFiles in a Store, and this is probably not appropriate. If you
set this value too high, all the other values will need to be adjusted accordingly. For most
cases, the default value is appropriate. In previous versions of HBase, the parameter
hbase.hstore.compaction.min was named hbase.hstore.compactionThreshold.</description>
</property>
<property>
<name>hbase.hstore.compaction.max</name>
<value>10</value>
<description>The maximum number of StoreFiles which will be selected for a single minor
compaction, regardless of the number of eligible StoreFiles. Effectively, the value of
hbase.hstore.compaction.max controls the length of time it takes a single compaction to
complete. Setting it larger means that more StoreFiles are included in a compaction. For most
@@ -735,48 +735,48 @@ possible configurations would overwhelm and obscure the important.
<property>
<name>hbase.hstore.compaction.min.size</name>
<value>134217728</value>
<description>A StoreFile smaller than this size will always be eligible for minor compaction.
HFiles this size or larger are evaluated by hbase.hstore.compaction.ratio to determine if
they are eligible. Because this limit represents the "automatic include" limit for all
StoreFiles smaller than this value, this value may need to be reduced in write-heavy
environments where many StoreFiles in the 1-2 MB range are being flushed, because every
StoreFile will be targeted for compaction and the resulting StoreFiles may still be under the
minimum size and require further compaction. If this parameter is lowered, the ratio check is
triggered more quickly. This addressed some issues seen in earlier versions of HBase but
changing this parameter is no longer necessary in most situations. Default: 128 MB expressed
in bytes.</description>
</property>
<property>
<name>hbase.hstore.compaction.max.size</name>
<value>9223372036854775807</value>
<description>A StoreFile larger than this size will be excluded from compaction. The effect of
raising hbase.hstore.compaction.max.size is fewer, larger StoreFiles that do not get
compacted often. If you feel that compaction is happening too often without much benefit, you
can try raising this value. Default: the value of Long.MAX_VALUE, expressed in bytes.</description>
</property>
<property>
<name>hbase.hstore.compaction.ratio</name>
<value>1.2F</value>
<description>For minor compaction, this ratio is used to determine whether a given StoreFile
which is larger than hbase.hstore.compaction.min.size is eligible for compaction. Its
effect is to limit compaction of large StoreFiles. The value of hbase.hstore.compaction.ratio
is expressed as a floating-point decimal. A large ratio, such as 10, will produce a single
giant StoreFile. Conversely, a low value, such as .25, will produce behavior similar to the
BigTable compaction algorithm, producing four StoreFiles. A moderate value of between 1.0 and
1.4 is recommended. When tuning this value, you are balancing write costs with read costs.
Raising the value (to something like 1.4) will have more write costs, because you will
compact larger StoreFiles. However, during reads, HBase will need to seek through fewer
StoreFiles to accomplish the read. Consider this approach if you cannot take advantage of
Bloom filters. Otherwise, you can lower this value to something like 1.0 to reduce the
background cost of writes, and use Bloom filters to control the number of StoreFiles touched
during reads. For most cases, the default value is appropriate.</description>
</property>
<property>
<name>hbase.hstore.compaction.ratio.offpeak</name>
<value>5.0F</value>
<description>Allows you to set a different (by default, more aggressive) ratio for determining
whether larger StoreFiles are included in compactions during off-peak hours. Works in the
same way as hbase.hstore.compaction.ratio. Only applies if hbase.offpeak.start.hour and
hbase.offpeak.end.hour are also enabled.</description>
</property>
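Since the off-peak ratio only takes effect when an off-peak window is defined, the two ratio properties are usually configured together in hbase-site.xml. A minimal sketch, assuming the window properties take hours of the day (the hours and ratios shown are examples, not recommendations):

```xml
<!-- hbase-site.xml: illustrative off-peak compaction window.
     hbase.hstore.compaction.ratio.offpeak applies only between the
     start and end hours; the regular ratio applies otherwise. -->
<property>
  <name>hbase.hstore.compaction.ratio</name>
  <value>1.2F</value>
</property>
<property>
  <name>hbase.offpeak.start.hour</name>
  <value>0</value>
</property>
<property>
  <name>hbase.offpeak.end.hour</name>
  <value>6</value>
</property>
<property>
  <name>hbase.hstore.compaction.ratio.offpeak</name>
  <value>5.0F</value>
</property>
```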
<property>
@@ -855,28 +855,28 @@ possible configurations would overwhelm and obscure the important.
<property>
<name>hbase.bucketcache.ioengine</name>
<value></value>
-<description>Where to store the contents of the bucketcache. One of: onheap,
+<description>Where to store the contents of the bucketcache. One of: heap,
offheap, or file. If a file, set it to file:PATH_TO_FILE. See https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/io/hfile/CacheConfig.html for more information.
</description>
</property>
<property>
<name>hbase.bucketcache.combinedcache.enabled</name>
<value>true</value>
<description>Whether or not the bucketcache is used in league with the LRU
on-heap block cache. In this mode, indices and blooms are kept in the LRU
blockcache and the data blocks are kept in the bucketcache.</description>
</property>
<property>
<name>hbase.bucketcache.size</name>
<value>65536</value>
<description>The size of the buckets for the bucketcache if you only use a single size.
Defaults to the default blocksize, which is 64 * 1024.</description>
</property>
<property>
<name>hbase.bucketcache.sizes</name>
<value></value>
<description>A comma-separated list of sizes for buckets for the bucketcache
if you use multiple sizes. Should be a list of block sizes in order from smallest
to largest. The sizes you use will depend on your data access patterns.</description>
</property>
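Tying the bucket cache properties above together: with this commit the valid IO engine names are heap, offheap, and file (not onheap). A minimal sketch of enabling the bucket cache in hbase-site.xml, assuming the off-heap engine so no file path is needed:

```xml
<!-- hbase-site.xml: illustrative bucket cache setup.
     "offheap" keeps cached data blocks outside the Java heap, while
     the combined-cache mode keeps indices and Bloom blocks in the
     on-heap LRU block cache, as described above. -->
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value>offheap</value>
</property>
<property>
  <name>hbase.bucketcache.combinedcache.enabled</name>
  <value>true</value>
</property>
```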
<property>