From a78e6e94994aaba2bee7747054ea9a55f1edd421 Mon Sep 17 00:00:00 2001
From: Misty Stanley-Jones
Date: Mon, 10 Aug 2015 09:50:29 +1000
Subject: [PATCH] HBASE-13483 onheap is not a valid bucket cache IO engine

---
 .../src/main/resources/hbase-default.xml | 70 +++++++++----------
 1 file changed, 35 insertions(+), 35 deletions(-)

diff --git a/hbase-common/src/main/resources/hbase-default.xml b/hbase-common/src/main/resources/hbase-default.xml
index 3d0a57a8109..c3dcab1234a 100644
--- a/hbase-common/src/main/resources/hbase-default.xml
+++ b/hbase-common/src/main/resources/hbase-default.xml
@@ -660,7 +660,7 @@ possible configurations would overwhelm and obscure the important.
     hbase.hregion.max.filesize
     10737418240
-    Maximum HFile size. If the sum of the sizes of a region's HFiles has grown to exceed this 
+    Maximum HFile size. If the sum of the sizes of a region's HFiles has grown to exceed this
     value, the region is split in two.
@@ -686,8 +686,8 @@ possible configurations would overwhelm and obscure the important.
     hbase.hstore.compactionThreshold
     3
-    If more than this number of StoreFiles exist in any one Store 
-    (one StoreFile is written per flush of MemStore), a compaction is run to rewrite all 
+    If more than this number of StoreFiles exist in any one Store
+    (one StoreFile is written per flush of MemStore), a compaction is run to rewrite all
     StoreFiles into a single StoreFile. Larger values delay compaction, but when compaction does
     occur, it takes longer to complete.
@@ -709,24 +709,24 @@ possible configurations would overwhelm and obscure the important.
     hbase.hstore.blockingWaitTime
     90000
     The time for which a region will block updates after reaching the StoreFile limit
-    defined by hbase.hstore.blockingStoreFiles. After this time has elapsed, the region will stop 
+    defined by hbase.hstore.blockingStoreFiles. After this time has elapsed, the region will stop
     blocking updates even if a compaction has not been completed.
     hbase.hstore.compaction.min
    3
-    The minimum number of StoreFiles which must be eligible for compaction before 
-    compaction can run. The goal of tuning hbase.hstore.compaction.min is to avoid ending up with 
-    too many tiny StoreFiles to compact. Setting this value to 2 would cause a minor compaction 
+    The minimum number of StoreFiles which must be eligible for compaction before
+    compaction can run. The goal of tuning hbase.hstore.compaction.min is to avoid ending up with
+    too many tiny StoreFiles to compact. Setting this value to 2 would cause a minor compaction
     each time you have two StoreFiles in a Store, and this is probably not appropriate. If you
-    set this value too high, all the other values will need to be adjusted accordingly. For most 
+    set this value too high, all the other values will need to be adjusted accordingly. For most
     cases, the default value is appropriate. In previous versions of HBase, the parameter
     hbase.hstore.compaction.min was named hbase.hstore.compactionThreshold.
     hbase.hstore.compaction.max
     10
-    The maximum number of StoreFiles which will be selected for a single minor 
+    The maximum number of StoreFiles which will be selected for a single minor
     compaction, regardless of the number of eligible StoreFiles. Effectively, the value of
     hbase.hstore.compaction.max controls the length of time it takes a single compaction to
     complete. Setting it larger means that more StoreFiles are included in a compaction. For most
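
Not part of the patch, just an illustration: the file-count compaction knobs described in the hunk above are normally overridden in hbase-site.xml rather than edited in hbase-default.xml. A minimal sketch, using only the property names and default values documented above, might look like this.

<!-- Illustrative hbase-site.xml fragment. The values simply restate the
     documented defaults; adjust them for your own workload. -->
<property>
  <name>hbase.hstore.compaction.min</name>
  <value>3</value>
</property>
<property>
  <name>hbase.hstore.compaction.max</name>
  <value>10</value>
</property>
<property>
  <name>hbase.hstore.blockingWaitTime</name>
  <value>90000</value>
</property>
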
@@ -735,48 +735,48 @@ possible configurations would overwhelm and obscure the important.
     hbase.hstore.compaction.min.size
     134217728
-    A StoreFile smaller than this size will always be eligible for minor compaction. 
-    HFiles this size or larger are evaluated by hbase.hstore.compaction.ratio to determine if 
-    they are eligible. Because this limit represents the "automatic include" limit for all 
-    StoreFiles smaller than this value, this value may need to be reduced in write-heavy 
-    environments where many StoreFiles in the 1-2 MB range are being flushed, because every 
+    A StoreFile smaller than this size will always be eligible for minor compaction.
+    HFiles this size or larger are evaluated by hbase.hstore.compaction.ratio to determine if
+    they are eligible. Because this limit represents the "automatic include" limit for all
+    StoreFiles smaller than this value, this value may need to be reduced in write-heavy
+    environments where many StoreFiles in the 1-2 MB range are being flushed, because every
     StoreFile will be targeted for compaction and the resulting StoreFiles may still be under the
     minimum size and require further compaction. If this parameter is lowered, the ratio check is
-    triggered more quickly. This addressed some issues seen in earlier versions of HBase but 
-    changing this parameter is no longer necessary in most situations. Default: 128 MB expressed 
+    triggered more quickly. This addressed some issues seen in earlier versions of HBase but
+    changing this parameter is no longer necessary in most situations. Default: 128 MB expressed
     in bytes.
     hbase.hstore.compaction.max.size
     9223372036854775807
-    A StoreFile larger than this size will be excluded from compaction. The effect of 
-    raising hbase.hstore.compaction.max.size is fewer, larger StoreFiles that do not get 
+    A StoreFile larger than this size will be excluded from compaction. The effect of
+    raising hbase.hstore.compaction.max.size is fewer, larger StoreFiles that do not get
     compacted often. If you feel that compaction is happening too often without much benefit, you
     can try raising this value. Default: the value of LONG.MAX_VALUE, expressed in bytes.
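
Again as an illustration only: a hedged hbase-site.xml sketch for the size-based thresholds just described. The 32 MB value for hbase.hstore.compaction.min.size is a hypothetical choice for the write-heavy case mentioned above, not a documented default; the max.size value restates the documented default.

<!-- Hypothetical sketch for a write-heavy cluster that flushes many small StoreFiles:
     lowering the "automatic include" limit makes the ratio check apply sooner.
     33554432 bytes (32 MB) is an illustrative value, not a documented default. -->
<property>
  <name>hbase.hstore.compaction.min.size</name>
  <value>33554432</value>
</property>
<property>
  <name>hbase.hstore.compaction.max.size</name>
  <!-- documented default: LONG.MAX_VALUE, expressed in bytes -->
  <value>9223372036854775807</value>
</property>
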
     hbase.hstore.compaction.ratio
     1.2F
-    For minor compaction, this ratio is used to determine whether a given StoreFile 
+    For minor compaction, this ratio is used to determine whether a given StoreFile
     which is larger than hbase.hstore.compaction.min.size is eligible for compaction. Its
     effect is to limit compaction of large StoreFiles. The value of hbase.hstore.compaction.ratio
-    is expressed as a floating-point decimal. A large ratio, such as 10, will produce a single 
-    giant StoreFile. Conversely, a low value, such as .25, will produce behavior similar to the 
+    is expressed as a floating-point decimal. A large ratio, such as 10, will produce a single
+    giant StoreFile. Conversely, a low value, such as .25, will produce behavior similar to the
     BigTable compaction algorithm, producing four StoreFiles. A moderate value of between 1.0 and
-    1.4 is recommended. When tuning this value, you are balancing write costs with read costs. 
-    Raising the value (to something like 1.4) will have more write costs, because you will 
-    compact larger StoreFiles. However, during reads, HBase will need to seek through fewer 
-    StoreFiles to accomplish the read. Consider this approach if you cannot take advantage of 
-    Bloom filters. Otherwise, you can lower this value to something like 1.0 to reduce the 
-    background cost of writes, and use Bloom filters to control the number of StoreFiles touched 
+    1.4 is recommended. When tuning this value, you are balancing write costs with read costs.
+    Raising the value (to something like 1.4) will have more write costs, because you will
+    compact larger StoreFiles. However, during reads, HBase will need to seek through fewer
+    StoreFiles to accomplish the read. Consider this approach if you cannot take advantage of
+    Bloom filters. Otherwise, you can lower this value to something like 1.0 to reduce the
+    background cost of writes, and use Bloom filters to control the number of StoreFiles touched
     during reads. For most cases, the default value is appropriate.
     hbase.hstore.compaction.ratio.offpeak
     5.0F
     Allows you to set a different (by default, more aggressive) ratio for determining
-    whether larger StoreFiles are included in compactions during off-peak hours. Works in the 
-    same way as hbase.hstore.compaction.ratio. Only applies if hbase.offpeak.start.hour and 
+    whether larger StoreFiles are included in compactions during off-peak hours. Works in the
+    same way as hbase.hstore.compaction.ratio. Only applies if hbase.offpeak.start.hour and
     hbase.offpeak.end.hour are also enabled.
@@ -855,28 +855,28 @@ possible configurations would overwhelm and obscure the important.
     hbase.bucketcache.ioengine
-    Where to store the contents of the bucketcache. One of: onheap,
+    Where to store the contents of the bucketcache. One of: heap,
     offheap, or file. If a file, set it to file:PATH_TO_FILE. See
     https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/io/hfile/CacheConfig.html for more
     information.
     hbase.bucketcache.combinedcache.enabled
     true
-    Whether or not the bucketcache is used in league with the LRU 
-    on-heap block cache. In this mode, indices and blooms are kept in the LRU 
+    Whether or not the bucketcache is used in league with the LRU
+    on-heap block cache. In this mode, indices and blooms are kept in the LRU
     blockcache and the data blocks are kept in the bucketcache.
     hbase.bucketcache.size
     65536
-    The size of the buckets for the bucketcache if you only use a single size. 
+    The size of the buckets for the bucketcache if you only use a single size.
     Defaults to the default blocksize, which is 64 * 1024.
     hbase.bucketcache.sizes
-    A comma-separated list of sizes for buckets for the bucketcache 
-    if you use multiple sizes. Should be a list of block sizes in order from smallest 
+    A comma-separated list of sizes for buckets for the bucketcache
+    if you use multiple sizes. Should be a list of block sizes in order from smallest
     to largest. The sizes you use will depend on your data access patterns.
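
To illustrate the fix itself: onheap is not an accepted value for hbase.bucketcache.ioengine, so a bucket cache is enabled with one of the engines documented above (heap, offheap, or file:PATH_TO_FILE). A minimal, hedged hbase-site.xml sketch follows; the offheap choice and the bucket size list are illustrative assumptions, not values taken from this patch.

<!-- Illustrative hbase-site.xml fragment: enable the bucket cache with a valid IO engine.
     Per the description above, valid engines are heap, offheap, or file:PATH_TO_FILE;
     "onheap" is not accepted. -->
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value>offheap</value>
</property>
<property>
  <name>hbase.bucketcache.combinedcache.enabled</name>
  <!-- documented default: data blocks go to the bucket cache,
       index and bloom blocks stay in the LRU blockcache -->
  <value>true</value>
</property>
<property>
  <name>hbase.bucketcache.sizes</name>
  <!-- example list only, ordered smallest to largest as the description requires -->
  <value>8192,16384,65536</value>
</property>
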