mirror of https://github.com/apache/lucene.git

SOLR-13817: Remove legacy SolrCache implementations.

parent e466d622c8
commit b4fe911cc8
@@ -72,6 +72,9 @@ Upgrade Notes
   followed by a space (e.g. {shape=square, color=yellow} rather than {shape=square,color=yellow}) for consistency with
   other java.util.Map implementations based on AbstractMap (Chris Hennick).

+* SOLR-13817: Legacy SolrCache implementations (LRUCache, LFUCache, FastLRUCache) have been removed.
+  Users have to modify their existing configurations to use CaffeineCache instead. (ab)
+
 Improvements
 ----------------------

@@ -104,6 +107,9 @@ Upgrade Notes
 * org.apache.solr.search.grouping.distributed.command.QueryCommand.Builder has new method 'setMainQuery' which is used
   to set top-level query. build() would fail if called without setting mainQuery

+* SOLR-13817: Deprecate legacy SolrCache implementations. Users are encouraged to transition their
+  configurations to use org.apache.solr.search.CaffeineCache instead. (ab)
+
 New Features
 ---------------------
 * SOLR-13821: A Package store to store and load package artifacts (noble, Ishan Chattopadhyaya)
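For illustration before the configuration diffs: a migrated cache keeps its size/initialSize/autowarmCount attributes and only swaps the class to solr.CaffeineCache. A minimal sketch of driving the replacement cache through the common SolrCache contract, assuming CaffeineCache can be constructed and init'ed directly (the standalone harness and the config values are illustrative assumptions, not code from this commit):

import java.util.HashMap;
import java.util.Map;

import org.apache.solr.search.CaffeineCache;

public class CaffeineCacheSmokeTest {
  public static void main(String[] args) throws Exception {
    CaffeineCache<String, Integer> cache = new CaffeineCache<>();

    // Same knobs as the XML attributes in the config diffs below.
    Map<String, String> cfg = new HashMap<>();
    cfg.put("size", "512");          // maximum number of entries
    cfg.put("initialSize", "512");   // initial capacity
    cfg.put("autowarmCount", "128"); // entries to prepopulate on searcher reopen

    cache.init(cfg, null, null);     // no persistence object, no regenerator

    cache.put("q:foo", 42);
    System.out.println(cache.get("q:foo")); // 42
  }
}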
@@ -77,10 +77,10 @@
          unordered sets of *all* documents that match a query.
          When a new searcher is opened, its caches may be prepopulated
          or "autowarmed" using data from caches in the old searcher.
-         autowarmCount is the number of items to prepopulate. For LRUCache,
+         autowarmCount is the number of items to prepopulate. For CaffeineCache,
          the autowarmed items will be the most recently accessed items.
          Parameters:
-           class - the SolrCache implementation (currently only LRUCache)
+           class - the SolrCache implementation (currently only CaffeineCache)
            size - the maximum number of entries in the cache
            initialSize - the initial capacity (number of entries) of
            the cache. (seel java.util.HashMap)

@@ -88,7 +88,7 @@
          and old cache.
       -->
    <filterCache
-     class="solr.LRUCache"
+     class="solr.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="128"/>

@@ -97,7 +97,7 @@
         document ids (DocList) based on a query, a sort, and the range
         of documents requested. -->
    <queryResultCache
-     class="solr.LRUCache"
+     class="solr.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="32"/>

@@ -105,7 +105,7 @@
    <!-- documentCache caches Lucene Document objects (the stored fields for each document).
         Since Lucene internal document ids are transient, this cache will not be autowarmed. -->
    <documentCache
-     class="solr.LRUCache"
+     class="solr.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="0"/>

@@ -125,7 +125,7 @@
         of solr.search.CacheRegenerator if autowarming is desired. -->
    <!--
    <cache name="myUserCache"
-     class="solr.LRUCache"
+     class="solr.CaffeineCache"
      size="4096"
      initialSize="1024"
      autowarmCount="1024"
@@ -58,10 +58,10 @@
          unordered sets of *all* documents that match a query.
          When a new searcher is opened, its caches may be prepopulated
          or "autowarmed" using data from caches in the old searcher.
-         autowarmCount is the number of items to prepopulate. For LRUCache,
+         autowarmCount is the number of items to prepopulate. For CaffeineCache,
          the autowarmed items will be the most recently accessed items.
          Parameters:
-           class - the SolrCache implementation (currently only LRUCache)
+           class - the SolrCache implementation (currently only CaffeineCache)
            size - the maximum number of entries in the cache
            initialSize - the initial capacity (number of entries) of
            the cache. (seel java.util.HashMap)

@@ -69,7 +69,7 @@
          and old cache.
       -->
    <filterCache
-     class="solr.LRUCache"
+     class="solr.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="256"/>

@@ -78,7 +78,7 @@
         document ids (DocList) based on a query, a sort, and the range
         of documents requested. -->
    <queryResultCache
-     class="solr.LRUCache"
+     class="solr.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="256"/>

@@ -86,7 +86,7 @@
    <!-- documentCache caches Lucene Document objects (the stored fields for each document).
         Since Lucene internal document ids are transient, this cache will not be autowarmed. -->
    <documentCache
-     class="solr.LRUCache"
+     class="solr.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="0"/>

@@ -106,7 +106,7 @@
         of solr.search.CacheRegenerator if autowarming is desired. -->
    <!--
    <cache name="myUserCache"
-     class="solr.LRUCache"
+     class="solr.CaffeineCache"
      size="4096"
      initialSize="1024"
      autowarmCount="1024"
@@ -58,10 +58,10 @@
          unordered sets of *all* documents that match a query.
          When a new searcher is opened, its caches may be prepopulated
          or "autowarmed" using data from caches in the old searcher.
-         autowarmCount is the number of items to prepopulate. For LRUCache,
+         autowarmCount is the number of items to prepopulate. For CaffeineCache,
          the autowarmed items will be the most recently accessed items.
          Parameters:
-           class - the SolrCache implementation (currently only LRUCache)
+           class - the SolrCache implementation (currently only CaffeineCache)
            size - the maximum number of entries in the cache
            initialSize - the initial capacity (number of entries) of
            the cache. (seel java.util.HashMap)

@@ -69,7 +69,7 @@
          and old cache.
       -->
    <filterCache
-     class="solr.LRUCache"
+     class="solr.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="256"/>

@@ -78,7 +78,7 @@
         document ids (DocList) based on a query, a sort, and the range
         of documents requested. -->
    <queryResultCache
-     class="solr.LRUCache"
+     class="solr.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="256"/>

@@ -86,7 +86,7 @@
    <!-- documentCache caches Lucene Document objects (the stored fields for each document).
         Since Lucene internal document ids are transient, this cache will not be autowarmed. -->
    <documentCache
-     class="solr.LRUCache"
+     class="solr.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="0"/>

@@ -106,7 +106,7 @@
         of solr.search.CacheRegenerator if autowarming is desired. -->
    <!--
    <cache name="myUserCache"
-     class="solr.LRUCache"
+     class="solr.CaffeineCache"
      size="4096"
      initialSize="1024"
      autowarmCount="1024"
@@ -60,10 +60,10 @@
          unordered sets of *all* documents that match a query.
          When a new searcher is opened, its caches may be prepopulated
          or "autowarmed" using data from caches in the old searcher.
-         autowarmCount is the number of items to prepopulate. For LRUCache,
+         autowarmCount is the number of items to prepopulate. For CaffeineCache,
          the autowarmed items will be the most recently accessed items.
          Parameters:
-           class - the SolrCache implementation (currently only LRUCache)
+           class - the SolrCache implementation (currently only CaffeineCache)
            size - the maximum number of entries in the cache
            initialSize - the initial capacity (number of entries) of
            the cache. (seel java.util.HashMap)

@@ -71,7 +71,7 @@
          and old cache.
       -->
    <filterCache
-     class="solr.LRUCache"
+     class="solr.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="256"/>

@@ -80,7 +80,7 @@
         document ids (DocList) based on a query, a sort, and the range
         of documents requested. -->
    <queryResultCache
-     class="solr.LRUCache"
+     class="solr.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="256"/>

@@ -88,7 +88,7 @@
    <!-- documentCache caches Lucene Document objects (the stored fields for each document).
         Since Lucene internal document ids are transient, this cache will not be autowarmed. -->
    <documentCache
-     class="solr.LRUCache"
+     class="solr.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="0"/>

@@ -108,7 +108,7 @@
         of solr.search.CacheRegenerator if autowarming is desired. -->
    <!--
    <cache name="myUserCache"
-     class="solr.LRUCache"
+     class="solr.CaffeineCache"
      size="4096"
      initialSize="1024"
      autowarmCount="1024"
@@ -58,10 +58,10 @@
          unordered sets of *all* documents that match a query.
          When a new searcher is opened, its caches may be prepopulated
          or "autowarmed" using data from caches in the old searcher.
-         autowarmCount is the number of items to prepopulate. For LRUCache,
+         autowarmCount is the number of items to prepopulate. For CaffeineCache,
          the autowarmed items will be the most recently accessed items.
          Parameters:
-           class - the SolrCache implementation (currently only LRUCache)
+           class - the SolrCache implementation (currently only CaffeineCache)
            size - the maximum number of entries in the cache
            initialSize - the initial capacity (number of entries) of
            the cache. (seel java.util.HashMap)

@@ -69,7 +69,7 @@
          and old cache.
       -->
    <filterCache
-     class="solr.LRUCache"
+     class="solr.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="256"/>

@@ -78,7 +78,7 @@
         document ids (DocList) based on a query, a sort, and the range
         of documents requested. -->
    <queryResultCache
-     class="solr.LRUCache"
+     class="solr.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="256"/>

@@ -86,7 +86,7 @@
    <!-- documentCache caches Lucene Document objects (the stored fields for each document).
         Since Lucene internal document ids are transient, this cache will not be autowarmed. -->
    <documentCache
-     class="solr.LRUCache"
+     class="solr.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="0"/>

@@ -106,7 +106,7 @@
         of solr.search.CacheRegenerator if autowarming is desired. -->
    <!--
    <cache name="myUserCache"
-     class="solr.LRUCache"
+     class="solr.CaffeineCache"
      size="4096"
      initialSize="1024"
      autowarmCount="1024"
@@ -74,19 +74,19 @@
         that match a particular query.
         -->
    <filterCache
-     class="solr.search.LRUCache"
+     class="solr.search.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="256"/>

    <queryResultCache
-     class="solr.search.LRUCache"
+     class="solr.search.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="1024"/>

    <documentCache
-     class="solr.search.LRUCache"
+     class="solr.search.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="0"/>

@@ -98,7 +98,7 @@
    <!--

    <cache name="myUserCache"
-     class="solr.search.LRUCache"
+     class="solr.search.CaffeineCache"
      size="4096"
      initialSize="1024"
      autowarmCount="1024"
@@ -30,9 +30,9 @@
               class="org.apache.solr.ltr.search.LTRQParserPlugin" />

  <query>
-   <filterCache class="solr.FastLRUCache" size="4096"
+   <filterCache class="solr.CaffeineCache" size="4096"
                 initialSize="2048" autowarmCount="0" />
-   <cache name="QUERY_DOC_FV" class="solr.search.LRUCache" size="4096"
+   <cache name="QUERY_DOC_FV" class="solr.search.CaffeineCache" size="4096"
          initialSize="2048" autowarmCount="4096" regenerator="solr.search.NoOpRegenerator" />
  </query>
@@ -31,9 +31,9 @@


  <query>
-   <filterCache class="solr.FastLRUCache" size="4096"
+   <filterCache class="solr.CaffeineCache" size="4096"
                 initialSize="2048" autowarmCount="0" />
-   <cache name="QUERY_DOC_FV" class="solr.search.LRUCache" size="4096"
+   <cache name="QUERY_DOC_FV" class="solr.search.CaffeineCache" size="4096"
          initialSize="2048" autowarmCount="4096" regenerator="solr.search.NoOpRegenerator" />
  </query>
@@ -26,9 +26,9 @@
  <queryParser name="ltr" class="org.apache.solr.ltr.search.LTRQParserPlugin" />

  <query>
-   <filterCache class="solr.FastLRUCache" size="4096"
+   <filterCache class="solr.CaffeineCache" size="4096"
                 initialSize="2048" autowarmCount="0" />
-   <cache name="QUERY_DOC_FV" class="solr.search.LRUCache" size="4096"
+   <cache name="QUERY_DOC_FV" class="solr.search.CaffeineCache" size="4096"
          initialSize="2048" autowarmCount="4096" regenerator="solr.search.NoOpRegenerator" />
  </query>
@@ -55,23 +55,23 @@

  <maxBooleanClauses>1024</maxBooleanClauses>

- <filterCache class="solr.FastLRUCache"
+ <filterCache class="solr.CaffeineCache"
               size="512"
               initialSize="512"
               autowarmCount="0"/>

- <queryResultCache class="solr.LRUCache"
+ <queryResultCache class="solr.CaffeineCache"
                    size="512"
                    initialSize="512"
                    autowarmCount="0"/>

- <documentCache class="solr.LRUCache"
+ <documentCache class="solr.CaffeineCache"
                 size="512"
                 initialSize="512"
                 autowarmCount="0"/>

  <cache name="perSegFilter"
-   class="solr.search.LRUCache"
+   class="solr.search.CaffeineCache"
    size="10"
    initialSize="0"
    autowarmCount="10"
@@ -63,7 +63,7 @@ import org.apache.solr.rest.RestManager;
 import org.apache.solr.schema.IndexSchema;
 import org.apache.solr.schema.IndexSchemaFactory;
 import org.apache.solr.search.CacheConfig;
-import org.apache.solr.search.FastLRUCache;
+import org.apache.solr.search.CaffeineCache;
 import org.apache.solr.search.QParserPlugin;
 import org.apache.solr.search.SolrCache;
 import org.apache.solr.search.ValueSourceParser;

@@ -271,7 +271,7 @@ public class SolrConfig extends XmlConfigFile implements MapSerializable {
       args.put("size", "10000");
       args.put("initialSize", "10");
       args.put("showItems", "-1");
-      conf = new CacheConfig(FastLRUCache.class, args, null);
+      conf = new CacheConfig(CaffeineCache.class, args, null);
     }
     fieldValueCacheConfig = conf;
     useColdSearcher = getBool("query/useColdSearcher", false);
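For context, the implicit fieldValueCache default now resolves to CaffeineCache. A minimal sketch of how that default is assembled, using only the args keys and the CacheConfig constructor visible in the hunk above (the surrounding class and method names are illustrative assumptions, not the exact SolrConfig source):

import java.util.HashMap;
import java.util.Map;

import org.apache.solr.search.CacheConfig;
import org.apache.solr.search.CaffeineCache;

public class FieldValueCacheDefaults {
  // Illustrative: build the implicit fieldValueCache default the way the
  // hunk above does when no <fieldValueCache> is configured explicitly.
  static CacheConfig defaultFieldValueCacheConfig() {
    Map<String, String> args = new HashMap<>();
    args.put("size", "10000");      // max entries, as in the diff
    args.put("initialSize", "10");  // initial capacity
    args.put("showItems", "-1");    // -1 = show all items in detailed stats
    // Third argument (regenerator) is null, matching the diff.
    return new CacheConfig(CaffeineCache.class, args, null);
  }
}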
@@ -124,7 +124,7 @@ public class CacheConfig implements MapSerializable{

     SolrResourceLoader loader = solrConfig.getResourceLoader();
     config.cacheImpl = config.args.get("class");
-    if(config.cacheImpl == null) config.cacheImpl = "solr.LRUCache";
+    if(config.cacheImpl == null) config.cacheImpl = "solr.CaffeineCache";
     config.regenImpl = config.args.get("regenerator");
     config.clazz = loader.findClass(config.cacheImpl, SolrCache.class);
     if (config.regenImpl != null) {
@@ -181,6 +181,9 @@ public class CaffeineCache<K, V> extends SolrCacheBase implements SolrCache<K, V
     return cache.get(key, k -> {
       inserts.increment();
       V value = mappingFunction.apply(k);
+      if (value == null) {
+        return null;
+      }
       ramBytes.add(RamUsageEstimator.sizeOfObject(key, RamUsageEstimator.QUERY_DEFAULT_RAM_BYTES_USED) +
           RamUsageEstimator.sizeOfObject(value, RamUsageEstimator.QUERY_DEFAULT_RAM_BYTES_USED));
       ramBytes.add(RamUsageEstimator.LINKED_HASHTABLE_RAM_BYTES_PER_ENTRY);
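The three added lines preserve computeIfAbsent semantics: a mapping function that returns null must not create an entry, and (per the hunk above) its size must not be added to the ramBytes estimate. A minimal usage sketch, assuming CaffeineCache can be instantiated and init'ed directly outside a core (the harness and config values are illustrative assumptions):

import java.util.HashMap;
import java.util.Map;

import org.apache.solr.search.CaffeineCache;

public class NullMappingExample {
  public static void main(String[] args) throws Exception {
    CaffeineCache<String, String> cache = new CaffeineCache<>();
    Map<String, String> cfg = new HashMap<>();
    cfg.put("size", "64");           // illustrative limits
    cfg.put("initialSize", "16");
    cache.init(cfg, null, null);     // no persistence, no regenerator

    // Returning null from the mapping function must not insert an entry,
    // and must not inflate the RAM accounting.
    String v = cache.computeIfAbsent("missing", k -> null);
    System.out.println(v + " / size=" + cache.size()); // null / size=0
  }
}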
@@ -1,394 +0,0 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.solr.search;

import java.lang.invoke.MethodHandles;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;

import org.apache.lucene.util.Accountable;
import org.apache.lucene.util.RamUsageEstimator;
import org.apache.solr.common.SolrException;
import org.apache.solr.metrics.MetricsMap;
import org.apache.solr.metrics.SolrMetricsContext;
import org.apache.solr.util.ConcurrentLRUCache;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * SolrCache based on ConcurrentLRUCache implementation.
 * <p>
 * This implementation does not use a separate cleanup thread. Instead it uses the calling thread
 * itself to do the cleanup when the size of the cache exceeds certain limits.
 * <p>
 * Also see <a href="http://wiki.apache.org/solr/SolrCaching">SolrCaching</a>
 *
 * @see org.apache.solr.util.ConcurrentLRUCache
 * @see org.apache.solr.search.SolrCache
 * @since solr 1.4
 */
public class FastLRUCache<K, V> extends SolrCacheBase implements SolrCache<K, V>, Accountable {
  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());

  private static final long BASE_RAM_BYTES_USED = RamUsageEstimator.shallowSizeOfInstance(FastLRUCache.class);

  public static final String MIN_SIZE_PARAM = "minSize";
  public static final String ACCEPTABLE_SIZE_PARAM = "acceptableSize";

  // contains the statistics objects for all open caches of the same type
  private List<ConcurrentLRUCache.Stats> statsList;

  private long warmupTime = 0;

  private String description = "Concurrent LRU Cache";
  private ConcurrentLRUCache<K, V> cache;
  private int showItems = 0;

  private long maxRamBytes;
  private int maxSize;
  private int minSizeLimit;
  private int initialSize;
  private int acceptableSize;
  private boolean cleanupThread;
  private int maxIdleTimeSec;
  private long ramLowerWatermark;

  private MetricsMap cacheMap;
  private Set<String> metricNames = ConcurrentHashMap.newKeySet();
  private SolrMetricsContext solrMetricsContext;

  @Override
  public Object init(Map args, Object persistence, CacheRegenerator regenerator) {
    super.init(args, regenerator);
    String str = (String) args.get(SIZE_PARAM);
    maxSize = str == null ? 1024 : Integer.parseInt(str);
    str = (String) args.get(MIN_SIZE_PARAM);
    if (str == null) {
      minSizeLimit = (int) (maxSize * 0.9);
    } else {
      minSizeLimit = Integer.parseInt(str);
    }
    checkAndAdjustLimits();

    str = (String) args.get(ACCEPTABLE_SIZE_PARAM);
    if (str == null) {
      acceptableSize = (int) (maxSize * 0.95);
    } else {
      acceptableSize = Integer.parseInt(str);
    }
    // acceptable limit should be somewhere between minLimit and limit
    acceptableSize = Math.max(minSizeLimit, acceptableSize);

    str = (String) args.get(INITIAL_SIZE_PARAM);
    initialSize = str == null ? maxSize : Integer.parseInt(str);
    str = (String) args.get(CLEANUP_THREAD_PARAM);
    cleanupThread = str == null ? false : Boolean.parseBoolean(str);

    str = (String) args.get(SHOW_ITEMS_PARAM);
    showItems = str == null ? 0 : Integer.parseInt(str);

    str = (String) args.get(MAX_IDLE_TIME_PARAM);
    if (str == null) {
      maxIdleTimeSec = -1;
    } else {
      maxIdleTimeSec = Integer.parseInt(str);
    }

    str = (String) args.get(MAX_RAM_MB_PARAM);
    long maxRamMB = str == null ? -1 : (long) Double.parseDouble(str);
    this.maxRamBytes = maxRamMB < 0 ? Long.MAX_VALUE : maxRamMB * 1024L * 1024L;
    if (maxRamBytes != Long.MAX_VALUE) {
      ramLowerWatermark = Math.round(maxRamBytes * 0.8);
      description = generateDescription(maxRamBytes, ramLowerWatermark, cleanupThread);
      cache = new ConcurrentLRUCache<>(ramLowerWatermark, maxRamBytes, cleanupThread, null, maxIdleTimeSec);
    } else {
      ramLowerWatermark = -1L;
      description = generateDescription(maxSize, initialSize, minSizeLimit, acceptableSize, cleanupThread);
      cache = new ConcurrentLRUCache<>(maxSize, minSizeLimit, acceptableSize, initialSize, cleanupThread, false, null, maxIdleTimeSec);
    }

    cache.setAlive(false);

    statsList = (List<ConcurrentLRUCache.Stats>) persistence;
    if (statsList == null) {
      // must be the first time a cache of this type is being created
      // Use a CopyOnWriteArrayList since puts are very rare and iteration may be a frequent operation
      // because it is used in getStatistics()
      statsList = new CopyOnWriteArrayList<>();

      // the first entry will be for cumulative stats of caches that have been closed.
      statsList.add(new ConcurrentLRUCache.Stats());
    }
    statsList.add(cache.getStats());
    cacheMap = new MetricsMap((detailed, map) -> {
      if (cache != null) {
        ConcurrentLRUCache.Stats stats = cache.getStats();
        long lookups = stats.getCumulativeLookups();
        long hits = stats.getCumulativeHits();
        long inserts = stats.getCumulativePuts();
        long evictions = stats.getCumulativeEvictions();
        long idleEvictions = stats.getCumulativeIdleEvictions();
        long size = stats.getCurrentSize();
        long clookups = 0;
        long chits = 0;
        long cinserts = 0;
        long cevictions = 0;
        long cIdleEvictions = 0;

        // NOTE: It is safe to iterate on a CopyOnWriteArrayList
        for (ConcurrentLRUCache.Stats statistiscs : statsList) {
          clookups += statistiscs.getCumulativeLookups();
          chits += statistiscs.getCumulativeHits();
          cinserts += statistiscs.getCumulativePuts();
          cevictions += statistiscs.getCumulativeEvictions();
          cIdleEvictions += statistiscs.getCumulativeIdleEvictions();
        }

        map.put(LOOKUPS_PARAM, lookups);
        map.put(HITS_PARAM, hits);
        map.put(HIT_RATIO_PARAM, calcHitRatio(lookups, hits));
        map.put(INSERTS_PARAM, inserts);
        map.put(EVICTIONS_PARAM, evictions);
        map.put(SIZE_PARAM, size);
        map.put("cleanupThread", cleanupThread);
        map.put("idleEvictions", idleEvictions);
        map.put(RAM_BYTES_USED_PARAM, ramBytesUsed());
        map.put(MAX_RAM_MB_PARAM, getMaxRamMB());

        map.put("warmupTime", warmupTime);
        map.put("cumulative_lookups", clookups);
        map.put("cumulative_hits", chits);
        map.put("cumulative_hitratio", calcHitRatio(clookups, chits));
        map.put("cumulative_inserts", cinserts);
        map.put("cumulative_evictions", cevictions);
        map.put("cumulative_idleEvictions", cIdleEvictions);

        if (detailed && showItems != 0) {
          Map items = cache.getLatestAccessedItems(showItems == -1 ? Integer.MAX_VALUE : showItems);
          for (Map.Entry e : (Set<Map.Entry>) items.entrySet()) {
            Object k = e.getKey();
            Object v = e.getValue();

            String ks = "item_" + k;
            String vs = v.toString();
            map.put(ks, vs);
          }

        }
      }
    });
    return statsList;
  }

  protected String generateDescription() {
    if (maxRamBytes != Long.MAX_VALUE) {
      return generateDescription(maxRamBytes, ramLowerWatermark, cleanupThread);
    } else {
      return generateDescription(maxSize, initialSize, minSizeLimit, acceptableSize, cleanupThread);
    }
  }

  /**
   * @return Returns the description of this Cache.
   */
  protected String generateDescription(int limit, int initialSize, int minLimit, int acceptableLimit, boolean newThread) {
    String description = "Concurrent LRU Cache(maxSize=" + limit + ", initialSize=" + initialSize +
        ", minSize=" + minLimit + ", acceptableSize=" + acceptableLimit + ", cleanupThread=" + newThread;
    if (isAutowarmingOn()) {
      description += ", " + getAutowarmDescription();
    }
    description += ')';
    return description;
  }

  protected String generateDescription(long maxRamBytes, long ramLowerWatermark, boolean newThread) {
    String description = "Concurrent LRU Cache(ramMinSize=" + ramLowerWatermark + ", ramMaxSize=" + maxRamBytes
        + ", cleanupThread=" + newThread;
    if (isAutowarmingOn()) {
      description += ", " + getAutowarmDescription();
    }
    description += ')';
    return description;
  }

  @Override
  public int size() {
    return cache.size();
  }

  @Override
  public V put(K key, V value) {
    return cache.put(key, value);
  }

  @Override
  public V remove(K key) {
    return cache.remove(key);
  }

  @Override
  public V get(K key) {
    return cache.get(key);
  }

  @Override
  public V computeIfAbsent(K key, Function<? super K, ? extends V> mappingFunction) {
    return cache.computeIfAbsent(key, mappingFunction);
  }

  @Override
  public void clear() {
    cache.clear();
  }

  @Override
  public void setState(State state) {
    super.setState(state);
    cache.setAlive(state == State.LIVE);
  }

  @Override
  public void warm(SolrIndexSearcher searcher, SolrCache old) {
    if (regenerator == null) return;
    long warmingStartTime = System.nanoTime();
    FastLRUCache other = (FastLRUCache) old;
    // warm entries
    if (isAutowarmingOn()) {
      int sz = autowarm.getWarmCount(other.size());
      Map items = other.cache.getLatestAccessedItems(sz);
      Map.Entry[] itemsArr = new Map.Entry[items.size()];
      int counter = 0;
      for (Object mapEntry : items.entrySet()) {
        itemsArr[counter++] = (Map.Entry) mapEntry;
      }
      for (int i = itemsArr.length - 1; i >= 0; i--) {
        try {
          boolean continueRegen = regenerator.regenerateItem(searcher,
              this, old, itemsArr[i].getKey(), itemsArr[i].getValue());
          if (!continueRegen) break;
        } catch (Exception e) {
          SolrException.log(log, "Error during auto-warming of key:" + itemsArr[i].getKey(), e);
        }
      }
    }
    warmupTime = TimeUnit.MILLISECONDS.convert(System.nanoTime() - warmingStartTime, TimeUnit.NANOSECONDS);
  }


  @Override
  public void close() throws Exception {
    SolrCache.super.close();
    // add the stats to the cumulative stats object (the first in the statsList)
    statsList.get(0).add(cache.getStats());
    statsList.remove(cache.getStats());
    cache.destroy();
  }

  //////////////////////// SolrInfoMBeans methods //////////////////////
  @Override
  public String getName() {
    return FastLRUCache.class.getName();
  }

  @Override
  public String getDescription() {
    return description;
  }

  @Override
  public SolrMetricsContext getSolrMetricsContext() {
    return solrMetricsContext;
  }

  @Override
  public void initializeMetrics(SolrMetricsContext parentContext, String scope) {
    this.solrMetricsContext = parentContext.getChildContext(this);
    this.solrMetricsContext.gauge(cacheMap, true, scope, getCategory().toString());
  }

  // for unit tests only
  MetricsMap getMetricsMap() {
    return cacheMap;
  }

  @Override
  public String toString() {
    return name() + (cacheMap != null ? cacheMap.getValue().toString() : "");
  }

  @Override
  public long ramBytesUsed() {
    return BASE_RAM_BYTES_USED +
        RamUsageEstimator.sizeOfObject(cache) +
        RamUsageEstimator.sizeOfObject(statsList);
  }

  @Override
  public int getMaxSize() {
    return maxSize != Integer.MAX_VALUE ? maxSize : -1;
  }

  @Override
  public void setMaxSize(int maxSize) {
    if (maxSize > 0) {
      this.maxSize = maxSize;
    } else {
      this.maxSize = Integer.MAX_VALUE;
    }
    checkAndAdjustLimits();
    cache.setUpperWaterMark(maxSize);
    cache.setLowerWaterMark(minSizeLimit);
    description = generateDescription();
  }

  @Override
  public int getMaxRamMB() {
    return maxRamBytes != Long.MAX_VALUE ? (int) (maxRamBytes / 1024L / 1024L) : -1;
  }

  @Override
  public void setMaxRamMB(int maxRamMB) {
    maxRamBytes = maxRamMB < 0 ? Long.MAX_VALUE : maxRamMB * 1024L * 1024L;
    if (maxRamMB < 0) {
      ramLowerWatermark = Long.MIN_VALUE;
    } else {
      ramLowerWatermark = Math.round(maxRamBytes * 0.8);
    }
    cache.setRamUpperWatermark(maxRamBytes);
    cache.setRamLowerWatermark(ramLowerWatermark);
    description = generateDescription();
  }

  private void checkAndAdjustLimits() {
    if (minSizeLimit <= 0) minSizeLimit = 1;
    if (maxSize <= minSizeLimit) {
      if (maxSize > 1) {
        minSizeLimit = maxSize - 1;
      } else {
        maxSize = minSizeLimit + 1;
      }
    }
  }
}
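The removed FastLRUCache sized its eviction watermarks relative to size when minSize/acceptableSize were not given: the lower watermark defaulted to 90% of maxSize and the acceptable size to 95%, clamped so that acceptableSize >= minSizeLimit. A small standalone recap of that arithmetic from init() above (the helper class is illustrative, not Solr code):

// Illustrative recap of FastLRUCache's default limit arithmetic (see init() above).
public class WatermarkDefaults {
  public static void main(String[] args) {
    int maxSize = 512;
    int minSizeLimit = (int) (maxSize * 0.9);                // 460: cleanup target
    int acceptableSize = (int) (maxSize * 0.95);             // 486: "good enough" stop point
    acceptableSize = Math.max(minSizeLimit, acceptableSize); // keep acceptable >= min
    System.out.println(minSizeLimit + " " + acceptableSize); // 460 486
  }
}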
@@ -1,412 +0,0 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.solr.search;

import java.lang.invoke.MethodHandles;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;

import org.apache.lucene.util.Accountable;
import org.apache.lucene.util.RamUsageEstimator;
import org.apache.solr.common.SolrException;
import org.apache.solr.metrics.MetricsMap;
import org.apache.solr.metrics.SolrMetricsContext;
import org.apache.solr.util.ConcurrentLFUCache;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import static org.apache.solr.common.params.CommonParams.NAME;

/**
 * SolrCache based on ConcurrentLFUCache implementation.
 * <p>
 * This implementation does not use a separate cleanup thread. Instead it uses the calling thread
 * itself to do the cleanup when the size of the cache exceeds certain limits.
 * <p>
 * Also see <a href="http://wiki.apache.org/solr/SolrCaching">SolrCaching</a>
 * <p>
 * <b>This API is experimental and subject to change</b>
 *
 * @see org.apache.solr.util.ConcurrentLFUCache
 * @see org.apache.solr.search.SolrCache
 * @since solr 3.6
 */
public class LFUCache<K, V> implements SolrCache<K, V>, Accountable {
  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());

  private static final long BASE_RAM_BYTES_USED = RamUsageEstimator.shallowSizeOfInstance(LFUCache.class);

  public static final String TIME_DECAY_PARAM = "timeDecay";
  public static final String CLEANUP_THREAD_PARAM = "cleanupThread";
  public static final String INITIAL_SIZE_PARAM = "initialSize";
  public static final String MIN_SIZE_PARAM = "minSize";
  public static final String ACCEPTABLE_SIZE_PARAM = "acceptableSize";
  public static final String AUTOWARM_COUNT_PARAM = "autowarmCount";
  public static final String SHOW_ITEMS_PARAM = "showItems";

  // contains the statistics objects for all open caches of the same type
  private List<ConcurrentLFUCache.Stats> statsList;

  private long warmupTime = 0;

  private String name;
  private int autowarmCount;
  private State state;
  private CacheRegenerator regenerator;
  private String description = "Concurrent LFU Cache";
  private ConcurrentLFUCache<K, V> cache;
  private int showItems = 0;
  private Boolean timeDecay = true;
  private int maxIdleTimeSec;
  private MetricsMap cacheMap;
  private Set<String> metricNames = ConcurrentHashMap.newKeySet();
  private SolrMetricsContext solrMetricsContext;


  private int maxSize;
  private int minSizeLimit;
  private int initialSize;
  private int acceptableSize;
  private boolean cleanupThread;

  @Override
  public Object init(Map args, Object persistence, CacheRegenerator regenerator) {
    state = State.CREATED;
    this.regenerator = regenerator;
    name = (String) args.get(NAME);
    String str = (String) args.get(SIZE_PARAM);
    maxSize = str == null ? 1024 : Integer.parseInt(str);
    str = (String) args.get(MIN_SIZE_PARAM);
    if (str == null) {
      minSizeLimit = (int) (maxSize * 0.9);
    } else {
      minSizeLimit = Integer.parseInt(str);
    }
    checkAndAdjustLimits();

    str = (String) args.get(ACCEPTABLE_SIZE_PARAM);
    if (str == null) {
      acceptableSize = (int) (maxSize * 0.95);
    } else {
      acceptableSize = Integer.parseInt(str);
    }
    // acceptable limit should be somewhere between minLimit and limit
    acceptableSize = Math.max(minSizeLimit, acceptableSize);

    str = (String) args.get(INITIAL_SIZE_PARAM);
    initialSize = str == null ? maxSize : Integer.parseInt(str);
    str = (String) args.get(AUTOWARM_COUNT_PARAM);
    autowarmCount = str == null ? 0 : Integer.parseInt(str);
    str = (String) args.get(CLEANUP_THREAD_PARAM);
    cleanupThread = str == null ? false : Boolean.parseBoolean(str);

    str = (String) args.get(SHOW_ITEMS_PARAM);
    showItems = str == null ? 0 : Integer.parseInt(str);

    // Don't make this "efficient" by removing the test, default is true and omitting the param will make it false.
    str = (String) args.get(TIME_DECAY_PARAM);
    timeDecay = (str == null) ? true : Boolean.parseBoolean(str);

    str = (String) args.get(MAX_IDLE_TIME_PARAM);
    if (str == null) {
      maxIdleTimeSec = -1;
    } else {
      maxIdleTimeSec = Integer.parseInt(str);
    }
    description = generateDescription();

    cache = new ConcurrentLFUCache<>(maxSize, minSizeLimit, acceptableSize, initialSize,
        cleanupThread, false, null, timeDecay, maxIdleTimeSec);
    cache.setAlive(false);

    statsList = (List<ConcurrentLFUCache.Stats>) persistence;
    if (statsList == null) {
      // must be the first time a cache of this type is being created
      // Use a CopyOnWriteArrayList since puts are very rare and iteration may be a frequent operation
      // because it is used in getStatistics()
      statsList = new CopyOnWriteArrayList<>();

      // the first entry will be for cumulative stats of caches that have been closed.
      statsList.add(new ConcurrentLFUCache.Stats());
    }
    statsList.add(cache.getStats());
    return statsList;
  }

  private String generateDescription() {
    String descr = "Concurrent LFU Cache(maxSize=" + maxSize + ", initialSize=" + initialSize +
        ", minSize=" + minSizeLimit + ", acceptableSize=" + acceptableSize + ", cleanupThread=" + cleanupThread +
        ", timeDecay=" + timeDecay +
        ", maxIdleTime=" + maxIdleTimeSec;
    if (autowarmCount > 0) {
      descr += ", autowarmCount=" + autowarmCount + ", regenerator=" + regenerator;
    }
    descr += ')';
    return descr;
  }

  @Override
  public String name() {
    return name;
  }

  @Override
  public int size() {
    return cache.size();

  }

  @Override
  public V put(K key, V value) {
    return cache.put(key, value);
  }

  @Override
  public V remove(K key) {
    return cache.remove(key);
  }

  @Override
  public V computeIfAbsent(K key, Function<? super K, ? extends V> mappingFunction) {
    return cache.computeIfAbsent(key, mappingFunction);
  }

  @Override
  public V get(K key) {
    return cache.get(key);
  }

  @Override
  public void clear() {
    cache.clear();
  }

  @Override
  public void setState(State state) {
    this.state = state;
    cache.setAlive(state == State.LIVE);
  }

  @Override
  public State getState() {
    return state;
  }

  @Override
  public void warm(SolrIndexSearcher searcher, SolrCache old) {
    if (regenerator == null) return;
    long warmingStartTime = System.nanoTime();
    LFUCache other = (LFUCache) old;
    // warm entries
    if (autowarmCount != 0) {
      int sz = other.size();
      if (autowarmCount != -1) sz = Math.min(sz, autowarmCount);
      Map items = other.cache.getMostUsedItems(sz);
      Map.Entry[] itemsArr = new Map.Entry[items.size()];
      int counter = 0;
      for (Object mapEntry : items.entrySet()) {
        itemsArr[counter++] = (Map.Entry) mapEntry;
      }
      for (int i = itemsArr.length - 1; i >= 0; i--) {
        try {
          boolean continueRegen = regenerator.regenerateItem(searcher,
              this, old, itemsArr[i].getKey(), itemsArr[i].getValue());
          if (!continueRegen) break;
        } catch (Exception e) {
          SolrException.log(log, "Error during auto-warming of key:" + itemsArr[i].getKey(), e);
        }
      }
    }
    warmupTime = TimeUnit.MILLISECONDS.convert(System.nanoTime() - warmingStartTime, TimeUnit.NANOSECONDS);
  }


  @Override
  public void close() throws Exception {
    SolrCache.super.close();
    // add the stats to the cumulative stats object (the first in the statsList)
    statsList.get(0).add(cache.getStats());
    statsList.remove(cache.getStats());
    cache.destroy();
  }

  //////////////////////// SolrInfoMBeans methods //////////////////////
  @Override
  public String getName() {
    return LFUCache.class.getName();
  }

  @Override
  public String getDescription() {
    return description;
  }

  @Override
  public Category getCategory() {
    return Category.CACHE;
  }

  // returns a ratio, not a percent.
  private static String calcHitRatio(long lookups, long hits) {
    if (lookups == 0) return "0.00";
    if (lookups == hits) return "1.00";
    int hundredths = (int) (hits * 100 / lookups);   // rounded down
    if (hundredths < 10) return "0.0" + hundredths;
    return "0." + hundredths;
  }

  @Override
  public SolrMetricsContext getSolrMetricsContext() {
    return solrMetricsContext;
  }

  @Override
  public void initializeMetrics(SolrMetricsContext parentContext, String scope) {
    solrMetricsContext = parentContext.getChildContext(this);
    cacheMap = new MetricsMap((detailed, map) -> {
      if (cache != null) {
        ConcurrentLFUCache.Stats stats = cache.getStats();
        long lookups = stats.getCumulativeLookups();
        long hits = stats.getCumulativeHits();
        long inserts = stats.getCumulativePuts();
        long evictions = stats.getCumulativeEvictions();
        long idleEvictions = stats.getCumulativeIdleEvictions();
        long size = stats.getCurrentSize();

        map.put(LOOKUPS_PARAM, lookups);
        map.put(HITS_PARAM, hits);
        map.put(HIT_RATIO_PARAM, calcHitRatio(lookups, hits));
        map.put(INSERTS_PARAM, inserts);
        map.put(EVICTIONS_PARAM, evictions);
        map.put(SIZE_PARAM, size);
        map.put(MAX_SIZE_PARAM, maxSize);
        map.put(MIN_SIZE_PARAM, minSizeLimit);
        map.put(ACCEPTABLE_SIZE_PARAM, acceptableSize);
        map.put(AUTOWARM_COUNT_PARAM, autowarmCount);
        map.put(CLEANUP_THREAD_PARAM, cleanupThread);
        map.put(SHOW_ITEMS_PARAM, showItems);
        map.put(TIME_DECAY_PARAM, timeDecay);
        map.put(RAM_BYTES_USED_PARAM, ramBytesUsed());
        map.put(MAX_IDLE_TIME_PARAM, maxIdleTimeSec);
        map.put("idleEvictions", idleEvictions);

        map.put("warmupTime", warmupTime);

        long clookups = 0;
        long chits = 0;
        long cinserts = 0;
        long cevictions = 0;
        long cidleEvictions = 0;

        // NOTE: It is safe to iterate on a CopyOnWriteArrayList
        for (ConcurrentLFUCache.Stats statistics : statsList) {
          clookups += statistics.getCumulativeLookups();
          chits += statistics.getCumulativeHits();
          cinserts += statistics.getCumulativePuts();
          cevictions += statistics.getCumulativeEvictions();
          cidleEvictions += statistics.getCumulativeIdleEvictions();
        }
        map.put("cumulative_lookups", clookups);
        map.put("cumulative_hits", chits);
        map.put("cumulative_hitratio", calcHitRatio(clookups, chits));
        map.put("cumulative_inserts", cinserts);
        map.put("cumulative_evictions", cevictions);
        map.put("cumulative_idleEvictions", cidleEvictions);

        if (detailed && showItems != 0) {
          Map items = cache.getMostUsedItems(showItems == -1 ? Integer.MAX_VALUE : showItems);
          for (Map.Entry e : (Set<Map.Entry>) items.entrySet()) {
            Object k = e.getKey();
            Object v = e.getValue();

            String ks = "item_" + k;
            String vs = v.toString();
            map.put(ks, vs);
          }

        }

      }
    });
    solrMetricsContext.gauge(cacheMap, true, scope, getCategory().toString());
  }

  // for unit tests only
  MetricsMap getMetricsMap() {
    return cacheMap;
  }

  @Override
  public String toString() {
    return name + (cacheMap != null ? cacheMap.getValue().toString() : "");
  }

  @Override
  public long ramBytesUsed() {
    synchronized (statsList) {
      return BASE_RAM_BYTES_USED +
          RamUsageEstimator.sizeOfObject(name) +
          RamUsageEstimator.sizeOfObject(metricNames) +
          RamUsageEstimator.sizeOfObject(statsList) +
          RamUsageEstimator.sizeOfObject(cache);
    }
  }

  @Override
  public int getMaxSize() {
    return maxSize != Integer.MAX_VALUE ? maxSize : -1;
  }

  @Override
  public void setMaxSize(int maxSize) {
    if (maxSize > 0) {
      this.maxSize = maxSize;
    } else {
      this.maxSize = Integer.MAX_VALUE;
    }
    checkAndAdjustLimits();
    cache.setUpperWaterMark(maxSize);
    cache.setLowerWaterMark(minSizeLimit);
    description = generateDescription();
  }

  @Override
  public int getMaxRamMB() {
    return -1;
  }

  @Override
  public void setMaxRamMB(int maxRamMB) {
    // no-op
  }

  private void checkAndAdjustLimits() {
    if (minSizeLimit <= 0) minSizeLimit = 1;
    if (maxSize <= minSizeLimit) {
      if (maxSize > 1) {
        minSizeLimit = maxSize - 1;
      } else {
        maxSize = minSizeLimit + 1;
      }
    }
  }
}
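One detail worth noting from the removed LFUCache: it reported hit ratio as a truncated two-decimal string rather than a float, so 1 hit in 3 lookups prints "0.33". A standalone copy of that helper with a usage line (the wrapper class is illustrative; the method body is taken verbatim from the class above):

public class HitRatio {
  // Returns a ratio, not a percent, truncated to hundredths, as in LFUCache.
  static String calcHitRatio(long lookups, long hits) {
    if (lookups == 0) return "0.00";
    if (lookups == hits) return "1.00";
    int hundredths = (int) (hits * 100 / lookups); // rounded down
    if (hundredths < 10) return "0.0" + hundredths;
    return "0." + hundredths;
  }

  public static void main(String[] args) {
    System.out.println(calcHitRatio(3, 1)); // 0.33
  }
}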
@@ -1,547 +0,0 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.solr.search;

import java.lang.invoke.MethodHandles;
import java.util.Collection;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.LongAdder;
import java.util.function.Function;

import org.apache.lucene.util.Accountable;
import org.apache.lucene.util.Accountables;
import org.apache.lucene.util.RamUsageEstimator;
import org.apache.solr.common.SolrException;
import org.apache.solr.common.util.TimeSource;
import org.apache.solr.metrics.MetricsMap;
import org.apache.solr.metrics.SolrMetricsContext;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import static org.apache.lucene.util.RamUsageEstimator.LINKED_HASHTABLE_RAM_BYTES_PER_ENTRY;
import static org.apache.lucene.util.RamUsageEstimator.QUERY_DEFAULT_RAM_BYTES_USED;

/**
 *
 */
public class LRUCache<K,V> extends SolrCacheBase implements SolrCache<K,V>, Accountable {
  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());

  static final long BASE_RAM_BYTES_USED = RamUsageEstimator.shallowSizeOfInstance(LRUCache.class);

  /* An instance of this class will be shared across multiple instances
   * of an LRUCache at the same time.  Make sure everything is thread safe.
   */
  private static class CumulativeStats {
    LongAdder lookups = new LongAdder();
    LongAdder hits = new LongAdder();
    LongAdder inserts = new LongAdder();
    LongAdder evictions = new LongAdder();
    LongAdder evictionsRamUsage = new LongAdder();
    LongAdder evictionsIdleTime = new LongAdder();
  }

  private CumulativeStats stats;

  // per instance stats.  The synchronization used for the map will also be
  // used for updating these statistics (and hence they are not AtomicLongs
  private long lookups;
  private long hits;
  private long inserts;
  private long evictions;
  private long evictionsRamUsage;
  private long evictionsIdleTime;

  private long warmupTime = 0;

  private Map<K, CacheValue<V>> map;
  private String description="LRU Cache";
  private MetricsMap cacheMap;
  private Set<String> metricNames = ConcurrentHashMap.newKeySet();
  private SolrMetricsContext solrMetricsContext;
  private int maxSize;
  private int initialSize;

  private long maxRamBytes = Long.MAX_VALUE;
  private long maxIdleTimeNs;
  private final TimeSource timeSource = TimeSource.NANO_TIME;
  private long oldestEntry = 0L;
  // for unit testing
  private boolean syntheticEntries = false;

  // The synchronization used for the map will be used to update this,
  // hence not an AtomicLong
  private long ramBytesUsed = 0L;

  public static final class CacheValue<V> implements Accountable {
    public static final long BASE_RAM_BYTES_USED = RamUsageEstimator.shallowSizeOfInstance(CacheValue.class);
    final long ramBytesUsed;
    public final long createTime;
    public final V value;

    public CacheValue(V value, long createTime) {
      this.value = value;
      this.createTime = createTime;
      ramBytesUsed = BASE_RAM_BYTES_USED +
          RamUsageEstimator.sizeOfObject(value, QUERY_DEFAULT_RAM_BYTES_USED);
    }

    @Override
    public long ramBytesUsed() {
      return ramBytesUsed;
    }
  }

  @Override
  public Object init(Map args, Object persistence, CacheRegenerator regenerator) {
    super.init(args, regenerator);
    String str = (String)args.get(SIZE_PARAM);
    this.maxSize = str==null ? 1024 : Integer.parseInt(str);
    str = (String)args.get("initialSize");
    initialSize = Math.min(str==null ? 1024 : Integer.parseInt(str), maxSize);
    str = (String) args.get(MAX_RAM_MB_PARAM);
    this.maxRamBytes = str == null ? Long.MAX_VALUE : (long) (Double.parseDouble(str) * 1024L * 1024L);
    str = (String) args.get(MAX_IDLE_TIME_PARAM);
    if (str == null) {
      maxIdleTimeNs = Long.MAX_VALUE;
    } else {
      int maxIdleTime = Integer.parseInt(str);
      if (maxIdleTime > 0) {
        maxIdleTimeNs = TimeUnit.NANOSECONDS.convert(Integer.parseInt(str), TimeUnit.SECONDS);
      } else {
        maxIdleTimeNs = Long.MAX_VALUE;
      }
    }
    description = generateDescription();

    map = new LinkedHashMap<K, CacheValue<V>>(initialSize, 0.75f, true) {
      @Override
      protected boolean removeEldestEntry(Map.Entry eldest) {
        // remove items older than maxIdleTimeNs
        if (maxIdleTimeNs != Long.MAX_VALUE) {
          long idleCutoff = timeSource.getEpochTimeNs() - maxIdleTimeNs;
          if (oldestEntry < idleCutoff) {
            long currentOldestEntry = Long.MAX_VALUE;
            Iterator<Map.Entry<K, CacheValue<V>>> iterator = entrySet().iterator();
            while (iterator.hasNext()) {
              Map.Entry<K, CacheValue<V>> entry = iterator.next();
              if (entry.getValue().createTime < idleCutoff) {
                long bytesToDecrement = RamUsageEstimator.sizeOfObject(entry.getKey(), QUERY_DEFAULT_RAM_BYTES_USED);
                bytesToDecrement += RamUsageEstimator.sizeOfObject(entry.getValue(), QUERY_DEFAULT_RAM_BYTES_USED);
                bytesToDecrement += LINKED_HASHTABLE_RAM_BYTES_PER_ENTRY;
                ramBytesUsed -= bytesToDecrement;
                iterator.remove();
                evictions++;
                evictionsIdleTime++;
                stats.evictionsIdleTime.increment();
                stats.evictions.increment();
              } else {
                if (syntheticEntries) {
                  // no guarantee on the actual create time - make a full sweep
                  if (currentOldestEntry > entry.getValue().createTime) {
                    currentOldestEntry = entry.getValue().createTime;
                  }
                } else {
                  // iterator is sorted by insertion order (and time)
                  // so we can quickly terminate the sweep
                  currentOldestEntry = entry.getValue().createTime;
                  break;
                }
              }
            }
            if (currentOldestEntry != Long.MAX_VALUE) {
              oldestEntry = currentOldestEntry;
            }
          }
        }
        if (ramBytesUsed > getMaxRamBytes()) {
          Iterator<Map.Entry<K, CacheValue<V>>> iterator = entrySet().iterator();
          do {
            Map.Entry<K, CacheValue<V>> entry = iterator.next();
            long bytesToDecrement = RamUsageEstimator.sizeOfObject(entry.getKey(), QUERY_DEFAULT_RAM_BYTES_USED);
            bytesToDecrement += RamUsageEstimator.sizeOfObject(entry.getValue(), QUERY_DEFAULT_RAM_BYTES_USED);
            bytesToDecrement += LINKED_HASHTABLE_RAM_BYTES_PER_ENTRY;
            ramBytesUsed -= bytesToDecrement;
            iterator.remove();
            evictions++;
            evictionsRamUsage++;
            stats.evictions.increment();
            stats.evictionsRamUsage.increment();
          } while (iterator.hasNext() && ramBytesUsed > getMaxRamBytes());
        } else if (size() > getMaxSize()) {
          Iterator<Map.Entry<K, CacheValue<V>>> iterator = entrySet().iterator();
          do {
            Map.Entry<K, CacheValue<V>> entry = iterator.next();
            long bytesToDecrement = RamUsageEstimator.sizeOfObject(entry.getKey(), QUERY_DEFAULT_RAM_BYTES_USED);
            bytesToDecrement += RamUsageEstimator.sizeOfObject(entry.getValue(), QUERY_DEFAULT_RAM_BYTES_USED);
            bytesToDecrement += LINKED_HASHTABLE_RAM_BYTES_PER_ENTRY;
            ramBytesUsed -= bytesToDecrement;
            // increment evictions regardless of state.
            // this doesn't need to be synchronized because it will
            // only be called in the context of a higher level synchronized block.
            iterator.remove();
            evictions++;
            stats.evictions.increment();
          } while (iterator.hasNext() && size() > getMaxSize());
        }
        // must return false according to javadocs of removeEldestEntry if we're modifying
        // the map ourselves
        return false;
      }
    };

    if (persistence==null) {
      // must be the first time a cache of this type is being created
      persistence = new CumulativeStats();
    }

    stats = (CumulativeStats)persistence;

    return persistence;
  }

  /**
   * Visible for testing. This flag tells the eviction code that (unlike with real entries)
   * there's no guarantee on the order of entries being inserted with monotonically ascending creation
   * time. Setting this to true causes a full sweep when looking for entries to evict.
   * @lucene.internal
   */
  public void setSyntheticEntries(boolean syntheticEntries) {
    this.syntheticEntries = syntheticEntries;
  }

  public long getMaxRamBytes() {
    return maxRamBytes;
  }

  /**
   *
   * @return Returns the description of this cache.
   */
  private String generateDescription() {
    String description = "LRU Cache(maxSize=" + getMaxSize() + ", initialSize=" + initialSize;
    if (isAutowarmingOn()) {
      description += ", " + getAutowarmDescription();
    }
    if (getMaxRamBytes() != Long.MAX_VALUE) {
      description += ", maxRamMB=" + (getMaxRamBytes() / 1024L / 1024L);
    }
    if (maxIdleTimeNs != Long.MAX_VALUE) {
      description += ", " + MAX_IDLE_TIME_PARAM + "=" + TimeUnit.SECONDS.convert(maxIdleTimeNs, TimeUnit.NANOSECONDS);
    }
    description += ')';
    return description;
  }

  @Override
  public int size() {
    synchronized(map) {
      return map.size();
    }
  }

  @Override
  public V computeIfAbsent(K key, Function<? super K, ? extends V> mappingFunction) {
    synchronized (map) {
      if (getState() == State.LIVE) {
        lookups++;
        stats.lookups.increment();
      }
      AtomicBoolean newEntry = new AtomicBoolean();
      CacheValue<V> entry = map.computeIfAbsent(key, k -> {
        V value = mappingFunction.apply(k);
        // preserve the semantics of computeIfAbsent
        if (value == null) {
          return null;
        }
        CacheValue<V> cacheValue = new CacheValue<>(value, timeSource.getEpochTimeNs());
        if (getState() == State.LIVE) {
          stats.inserts.increment();
        }
        if (syntheticEntries) {
          if (cacheValue.createTime < oldestEntry) {
            oldestEntry = cacheValue.createTime;
          }
        }
        // increment local inserts regardless of state???
        // it does make it more consistent with the current size...
        inserts++;

        // important to calc and add new ram bytes first so that removeEldestEntry can compare correctly
        long keySize = RamUsageEstimator.sizeOfObject(key, QUERY_DEFAULT_RAM_BYTES_USED);
        long valueSize = RamUsageEstimator.sizeOfObject(cacheValue, QUERY_DEFAULT_RAM_BYTES_USED);
        ramBytesUsed += keySize + valueSize + LINKED_HASHTABLE_RAM_BYTES_PER_ENTRY;
        newEntry.set(true);
        return cacheValue;
      });
      if (!newEntry.get()) {
        if (getState() == State.LIVE) {
          hits++;
          stats.hits.increment();
        }
      }
      return entry != null ? entry.value : null;
    }
  }

  @Override
  public V remove(K key) {
    synchronized (map) {
      CacheValue<V> entry = map.remove(key);
      if (entry != null) {
        long delta = RamUsageEstimator.sizeOfObject(key, QUERY_DEFAULT_RAM_BYTES_USED)
            + RamUsageEstimator.sizeOfObject(entry, QUERY_DEFAULT_RAM_BYTES_USED)
            + LINKED_HASHTABLE_RAM_BYTES_PER_ENTRY;
        ramBytesUsed -= delta;
        return entry.value;
      } else {
        return null;
      }
    }
  }

  @Override
  public V put(K key, V value) {
    if (maxSize == Integer.MAX_VALUE && maxRamBytes == Long.MAX_VALUE) {
      throw new IllegalStateException("Cache: " + getName() + " has neither size nor RAM limit!");
    }
    CacheValue<V> cacheValue = new CacheValue<>(value, timeSource.getEpochTimeNs());
    return putCacheValue(key, cacheValue);
  }

  /**
   * Visible for testing to create synthetic cache entries.
   * @lucene.internal
   */
  public V putCacheValue(K key, CacheValue<V> cacheValue) {
    synchronized (map) {
      if (getState() == State.LIVE) {
        stats.inserts.increment();
      }

      if (syntheticEntries) {
        if (cacheValue.createTime < oldestEntry) {
          oldestEntry = cacheValue.createTime;
        }
      }

      // increment local inserts regardless of state???
      // it does make it more consistent with the current size...
      inserts++;

      // important to calc and add new ram bytes first so that removeEldestEntry can compare correctly
      long keySize = RamUsageEstimator.sizeOfObject(key, QUERY_DEFAULT_RAM_BYTES_USED);
      long valueSize = RamUsageEstimator.sizeOfObject(cacheValue, QUERY_DEFAULT_RAM_BYTES_USED);
      ramBytesUsed += keySize + valueSize + LINKED_HASHTABLE_RAM_BYTES_PER_ENTRY;
      CacheValue<V> old = map.put(key, cacheValue);
      if (old != null) {
        long bytesToDecrement = RamUsageEstimator.sizeOfObject(old, QUERY_DEFAULT_RAM_BYTES_USED);
        // the key existed in the map but we added its size before the put, so let's back out
        bytesToDecrement += LINKED_HASHTABLE_RAM_BYTES_PER_ENTRY;
        bytesToDecrement += RamUsageEstimator.sizeOfObject(key, QUERY_DEFAULT_RAM_BYTES_USED);
|
||||
ramBytesUsed -= bytesToDecrement;
|
||||
}
|
||||
return old == null ? null : old.value;
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public V get(K key) {
|
||||
synchronized (map) {
|
||||
CacheValue<V> val = map.get(key);
|
||||
if (getState() == State.LIVE) {
|
||||
// only increment lookups and hits if we are live.
|
||||
lookups++;
|
||||
stats.lookups.increment();
|
||||
if (val!=null) {
|
||||
hits++;
|
||||
stats.hits.increment();
|
||||
}
|
||||
}
|
||||
return val == null ? null : val.value;
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public void clear() {
|
||||
synchronized(map) {
|
||||
map.clear();
|
||||
ramBytesUsed = 0;
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public void warm(SolrIndexSearcher searcher, SolrCache<K,V> old) {
|
||||
if (regenerator==null) return;
|
||||
long warmingStartTime = System.nanoTime();
|
||||
LRUCache<K,V> other = (LRUCache<K,V>)old;
|
||||
|
||||
// warm entries
|
||||
if (isAutowarmingOn()) {
|
||||
Object[] keys,vals = null;
|
||||
|
||||
// Don't do the autowarming in the synchronized block, just pull out the keys and values.
|
||||
synchronized (other.map) {
|
||||
|
||||
int sz = autowarm.getWarmCount(other.map.size());
|
||||
|
||||
keys = new Object[sz];
|
||||
vals = new Object[sz];
|
||||
|
||||
Iterator<Map.Entry<K, CacheValue<V>>> iter = other.map.entrySet().iterator();
|
||||
|
||||
// iteration goes from oldest (least recently used) to most recently used,
|
||||
// so we need to skip over the oldest entries.
|
||||
int skip = other.map.size() - sz;
|
||||
for (int i=0; i<skip; i++) iter.next();
|
||||
|
||||
|
||||
for (int i=0; i<sz; i++) {
|
||||
Map.Entry<K, CacheValue<V>> entry = iter.next();
|
||||
keys[i]=entry.getKey();
|
||||
vals[i]=entry.getValue().value;
|
||||
}
|
||||
}
|
||||
|
||||
// autowarm from the oldest to the newest entries so that the ordering will be
|
||||
// correct in the new cache.
|
||||
for (int i=0; i<keys.length; i++) {
|
||||
try {
|
||||
boolean continueRegen = regenerator.regenerateItem(searcher, this, old, keys[i], vals[i]);
|
||||
if (!continueRegen) break;
|
||||
}
|
||||
catch (Exception e) {
|
||||
SolrException.log(log,"Error during auto-warming of key:" + keys[i], e);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
warmupTime = TimeUnit.MILLISECONDS.convert(System.nanoTime() - warmingStartTime, TimeUnit.NANOSECONDS);
|
||||
}
|
||||
|
||||
//////////////////////// SolrInfoMBeans methods //////////////////////
|
||||
|
||||
|
||||
@Override
|
||||
public String getName() {
|
||||
return LRUCache.class.getName();
|
||||
}
|
||||
|
||||
@Override
|
||||
public String getDescription() {
|
||||
return description;
|
||||
}
|
||||
|
||||
@Override
|
||||
public SolrMetricsContext getSolrMetricsContext() {
|
||||
return solrMetricsContext;
|
||||
}
|
||||
|
||||
@Override
|
||||
public void initializeMetrics(SolrMetricsContext parentContext, String scope) {
|
||||
solrMetricsContext = parentContext.getChildContext(this);
|
||||
cacheMap = new MetricsMap((detailed, res) -> {
|
||||
synchronized (map) {
|
||||
res.put(LOOKUPS_PARAM, lookups);
|
||||
res.put(HITS_PARAM, hits);
|
||||
res.put(HIT_RATIO_PARAM, calcHitRatio(lookups,hits));
|
||||
res.put(INSERTS_PARAM, inserts);
|
||||
res.put(EVICTIONS_PARAM, evictions);
|
||||
res.put(SIZE_PARAM, map.size());
|
||||
res.put(RAM_BYTES_USED_PARAM, ramBytesUsed());
|
||||
res.put(MAX_RAM_MB_PARAM, getMaxRamMB());
|
||||
res.put(MAX_SIZE_PARAM, maxSize);
|
||||
res.put(MAX_IDLE_TIME_PARAM, maxIdleTimeNs != Long.MAX_VALUE ?
|
||||
TimeUnit.SECONDS.convert(maxIdleTimeNs, TimeUnit.NANOSECONDS) : -1);
|
||||
res.put("evictionsRamUsage", evictionsRamUsage);
|
||||
res.put("evictionsIdleTime", evictionsIdleTime);
|
||||
}
|
||||
res.put("warmupTime", warmupTime);
|
||||
|
||||
long clookups = stats.lookups.longValue();
|
||||
long chits = stats.hits.longValue();
|
||||
res.put("cumulative_lookups", clookups);
|
||||
res.put("cumulative_hits", chits);
|
||||
res.put("cumulative_hitratio", calcHitRatio(clookups, chits));
|
||||
res.put("cumulative_inserts", stats.inserts.longValue());
|
||||
res.put("cumulative_evictions", stats.evictions.longValue());
|
||||
res.put("cumulative_evictionsRamUsage", stats.evictionsRamUsage.longValue());
|
||||
res.put("cumulative_evictionsIdleTime", stats.evictionsIdleTime.longValue());
|
||||
});
|
||||
solrMetricsContext.gauge(cacheMap, true, scope, getCategory().toString());
|
||||
}
|
||||
|
||||
// for unit tests only
|
||||
MetricsMap getMetricsMap() {
|
||||
return cacheMap;
|
||||
}
|
||||
|
||||
@Override
|
||||
public String toString() {
|
||||
return name() + (cacheMap != null ? cacheMap.getValue().toString() : "");
|
||||
}
|
||||
|
||||
@Override
|
||||
public long ramBytesUsed() {
|
||||
synchronized (map) {
|
||||
return BASE_RAM_BYTES_USED + ramBytesUsed;
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public Collection<Accountable> getChildResources() {
|
||||
synchronized (map) {
|
||||
return Accountables.namedAccountables(getName(), (Map<?, ? extends Accountable>) map);
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public int getMaxSize() {
|
||||
return maxSize != Integer.MAX_VALUE ? maxSize : -1;
|
||||
}
|
||||
|
||||
@Override
|
||||
public void setMaxSize(int maxSize) {
|
||||
if (maxSize > 0) {
|
||||
this.maxSize = maxSize;
|
||||
} else {
|
||||
this.maxSize = Integer.MAX_VALUE;
|
||||
}
|
||||
description = generateDescription();
|
||||
}
|
||||
|
||||
@Override
|
||||
public int getMaxRamMB() {
|
||||
return maxRamBytes != Long.MAX_VALUE ? (int) (maxRamBytes / 1024L / 1024L) : -1;
|
||||
}
|
||||
|
||||
@Override
|
||||
public void setMaxRamMB(int maxRamMB) {
|
||||
if (maxRamMB > 0) {
|
||||
maxRamBytes = maxRamMB * 1024L * 1024L;
|
||||
} else {
|
||||
maxRamBytes = Long.MAX_VALUE;
|
||||
}
|
||||
description = generateDescription();
|
||||
}
|
||||
}
|
|
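The removeEldestEntry hook above is the core of the removed class's eviction strategy, and it is easier to follow in isolation. Below is a minimal standalone sketch of the same technique, not part of the removed file; the class name and sizes are illustrative. It uses an access-ordered LinkedHashMap whose eviction callback trims the map itself and therefore always returns false, exactly as the javadoc cited above requires:

import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of the eviction pattern above: the map trims itself
// inside removeEldestEntry, so the callback must return false.
public class BoundedLruSketch<K, V> extends LinkedHashMap<K, V> {
  private final int maxSize;

  public BoundedLruSketch(int maxSize) {
    super(16, 0.75f, true); // accessOrder=true gives LRU iteration order
    this.maxSize = maxSize;
  }

  @Override
  protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
    // Trim manually, mirroring the code above; per the javadocs we must
    // return false because we modify the map ourselves.
    while (size() > maxSize) {
      remove(keySet().iterator().next()); // first key is the least recently used
    }
    return false;
  }

  public static void main(String[] args) {
    BoundedLruSketch<Integer, String> cache = new BoundedLruSketch<>(2);
    cache.put(1, "a");
    cache.put(2, "b");
    cache.get(1);      // touch 1, so 2 becomes the eldest
    cache.put(3, "c"); // evicts 2
    System.out.println(cache.keySet()); // prints [1, 3]
  }
}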
@@ -33,7 +33,7 @@ import org.apache.solr.core.PluginInfo;
import org.apache.solr.handler.component.ResponseBuilder;
import org.apache.solr.handler.component.ShardRequest;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.search.FastLRUCache;
import org.apache.solr.search.CaffeineCache;
import org.apache.solr.search.SolrCache;
import org.apache.solr.search.SolrIndexSearcher;
import org.slf4j.Logger;

@@ -68,9 +68,9 @@ public class LRUStatsCache extends ExactStatsCache {
  // global stats synchronized from the master

  // cache of <term, termStats>
  private final FastLRUCache<String,TermStats> currentGlobalTermStats = new FastLRUCache<>();
  private final CaffeineCache<String,TermStats> currentGlobalTermStats = new CaffeineCache<>();
  // cache of <field, colStats>
  private final FastLRUCache<String,CollectionStats> currentGlobalColStats = new FastLRUCache<>();
  private final CaffeineCache<String,CollectionStats> currentGlobalColStats = new CaffeineCache<>();

  // missing stats to be fetched with the next request
  private Set<String> missingColStats = ConcurrentHashMap.newKeySet();

@@ -184,7 +184,7 @@ public class LRUStatsCache extends ExactStatsCache {
    Map<String,TermStats> termStats = StatsUtil.termStatsMapFromString(termStatsString);
    if (termStats != null) {
      SolrCache<String,TermStats> cache = perShardTermStats.computeIfAbsent(shard, s -> {
        FastLRUCache c = new FastLRUCache<>();
        CaffeineCache c = new CaffeineCache<>();
        Map<String, String> map = new HashMap<>(lruCacheInitArgs);
        map.put(CommonParams.NAME, s);
        c.init(map, null, null);
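For code being migrated along the lines of this hunk, a minimal sketch of constructing and initializing the replacement cache follows. It assumes only the SolrCache.init(args, persistence, regenerator) contract visible above; the method name, value type, and size are illustrative rather than taken from the patch:

import java.util.HashMap;
import java.util.Map;

import org.apache.solr.common.params.CommonParams;
import org.apache.solr.search.CaffeineCache;

public class CaffeineCacheInitSketch {
  public static CaffeineCache<String, String> newShardCache(String shard) {
    // Mirrors the computeIfAbsent lambda above: create the cache, then
    // drive it through init() with string args (the patch takes its
    // args from lruCacheInitArgs; "512" here is illustrative).
    CaffeineCache<String, String> c = new CaffeineCache<>();
    Map<String, String> args = new HashMap<>();
    args.put("size", "512");
    args.put(CommonParams.NAME, shard);
    c.init(args, null, null);
    return c;
  }
}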
@@ -1,692 +0,0 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.solr.util;

import java.lang.ref.WeakReference;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.TreeSet;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
//import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.LongAdder;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Function;

import org.apache.lucene.util.Accountable;
import org.apache.lucene.util.RamUsageEstimator;
import org.apache.solr.common.util.Cache;
import org.apache.solr.common.util.TimeSource;

import static org.apache.lucene.util.RamUsageEstimator.HASHTABLE_RAM_BYTES_PER_ENTRY;
import static org.apache.lucene.util.RamUsageEstimator.QUERY_DEFAULT_RAM_BYTES_USED;

/**
 * An LFU cache implementation based upon ConcurrentHashMap.
 * <p>
 * This is not a terribly efficient implementation. The tricks used in the
 * LRU version were not directly usable; perhaps it might be possible to
 * rewrite them with LFU in mind.
 * <p>
 * <b>This API is experimental and subject to change</b>
 *
 * @since solr 1.6
 */
public class ConcurrentLFUCache<K, V> implements Cache<K,V>, Accountable {
  private static final long BASE_RAM_BYTES_USED =
      RamUsageEstimator.shallowSizeOfInstance(ConcurrentLFUCache.class) +
      new Stats().ramBytesUsed() +
      RamUsageEstimator.shallowSizeOfInstance(ConcurrentHashMap.class);

  private final ConcurrentHashMap<Object, CacheEntry<K, V>> map;
  private int upperWaterMark, lowerWaterMark;
  private final ReentrantLock markAndSweepLock = new ReentrantLock(true);
  private boolean isCleaning = false; // not volatile... piggybacked on other volatile vars
  private boolean newThreadForCleanup;
  private boolean runCleanupThread;
  private volatile boolean islive = true;
  private final Stats stats = new Stats();
  @SuppressWarnings("unused")
  private int acceptableWaterMark;
  private long lowHitCount = 0; // not volatile, only accessed in the cleaning method
  private final EvictionListener<K, V> evictionListener;
  private CleanupThread cleanupThread;
  private boolean timeDecay;
  private long maxIdleTimeNs;
  private final TimeSource timeSource = TimeSource.NANO_TIME;
  private final AtomicLong oldestEntry = new AtomicLong(0L);
  private final LongAdder ramBytes = new LongAdder();

  public ConcurrentLFUCache(int upperWaterMark, final int lowerWaterMark, int acceptableSize,
                            int initialSize, boolean runCleanupThread, boolean runNewThreadForCleanup,
                            EvictionListener<K, V> evictionListener, boolean timeDecay) {
    this(upperWaterMark, lowerWaterMark, acceptableSize, initialSize, runCleanupThread,
        runNewThreadForCleanup, evictionListener, timeDecay, -1);
  }

  public ConcurrentLFUCache(int upperWaterMark, final int lowerWaterMark, int acceptableSize,
                            int initialSize, boolean runCleanupThread, boolean runNewThreadForCleanup,
                            EvictionListener<K, V> evictionListener, boolean timeDecay, int maxIdleTimeSec) {
    setUpperWaterMark(upperWaterMark);
    setLowerWaterMark(lowerWaterMark);
    setAcceptableWaterMark(acceptableSize);
    map = new ConcurrentHashMap<>(initialSize);
    this.evictionListener = evictionListener;
    setNewThreadForCleanup(runNewThreadForCleanup);
    setTimeDecay(timeDecay);
    setMaxIdleTime(maxIdleTimeSec);
    setRunCleanupThread(runCleanupThread);
  }

  public ConcurrentLFUCache(int size, int lowerWatermark) {
    this(size, lowerWatermark, (int) Math.floor((lowerWatermark + size) / 2),
        (int) Math.ceil(0.75 * size), false, false, null, true, -1);
  }

  public void setAlive(boolean live) {
    islive = live;
  }

  public void setUpperWaterMark(int upperWaterMark) {
    if (upperWaterMark < 1) throw new IllegalArgumentException("upperWaterMark must be > 0");
    this.upperWaterMark = upperWaterMark;
  }

  public void setLowerWaterMark(int lowerWaterMark) {
    if (lowerWaterMark >= upperWaterMark)
      throw new IllegalArgumentException("lowerWaterMark must be < upperWaterMark");
    this.lowerWaterMark = lowerWaterMark;
  }

  public void setAcceptableWaterMark(int acceptableWaterMark) {
    this.acceptableWaterMark = acceptableWaterMark;
  }

  public void setTimeDecay(boolean timeDecay) {
    this.timeDecay = timeDecay;
  }

  public void setMaxIdleTime(int maxIdleTime) {
    long oldMaxIdleTimeNs = maxIdleTimeNs;
    maxIdleTimeNs = maxIdleTime > 0 ? TimeUnit.NANOSECONDS.convert(maxIdleTime, TimeUnit.SECONDS) : Long.MAX_VALUE;
    if (cleanupThread != null && maxIdleTimeNs < oldMaxIdleTimeNs) {
      cleanupThread.wakeThread();
    }
  }

  public synchronized void setNewThreadForCleanup(boolean newThreadForCleanup) {
    this.newThreadForCleanup = newThreadForCleanup;
    if (newThreadForCleanup) {
      setRunCleanupThread(false);
    }
  }

  public synchronized void setRunCleanupThread(boolean runCleanupThread) {
    this.runCleanupThread = runCleanupThread;
    if (this.runCleanupThread) {
      newThreadForCleanup = false;
      if (cleanupThread == null) {
        cleanupThread = new CleanupThread(this);
        cleanupThread.start();
      }
    } else {
      if (cleanupThread != null) {
        cleanupThread.stopThread();
        cleanupThread = null;
      }
    }
  }

  @Override
  public V get(K key) {
    CacheEntry<K, V> e = map.get(key);
    if (e == null) {
      if (islive) stats.missCounter.increment();
    } else if (islive) {
      e.lastAccessed = timeSource.getEpochTimeNs();
      stats.accessCounter.increment();
      e.hits.increment();
    }
    return e != null ? e.value : null;
  }

  @Override
  public V computeIfAbsent(K key, Function<? super K, ? extends V> mappingFunction) {
    // prescreen access first
    V val = get(key);
    if (val != null) {
      return val;
    }
    AtomicBoolean newValue = new AtomicBoolean();
    if (islive) {
      stats.accessCounter.increment();
    }
    CacheEntry<K, V> entry = map.computeIfAbsent(key, k -> {
      V value = mappingFunction.apply(key);
      // preserve the semantics of computeIfAbsent
      if (value == null) {
        return null;
      }
      CacheEntry<K, V> e = new CacheEntry<>(key, value, timeSource.getEpochTimeNs());
      newValue.set(true);
      oldestEntry.updateAndGet(x -> x > e.lastAccessed || x == 0 ? e.lastAccessed : x);
      stats.size.increment();
      ramBytes.add(e.ramBytesUsed() + HASHTABLE_RAM_BYTES_PER_ENTRY); // added key + value + entry
      if (islive) {
        stats.putCounter.increment();
      } else {
        stats.nonLivePutCounter.increment();
      }
      return e;
    });
    if (newValue.get()) {
      maybeMarkAndSweep();
    } else {
      if (islive && entry != null) {
        entry.lastAccessed = timeSource.getEpochTimeNs();
        entry.hits.increment();
      }
    }
    return entry != null ? entry.value : null;
  }

  @Override
  public V remove(K key) {
    CacheEntry<K, V> cacheEntry = map.remove(key);
    if (cacheEntry != null) {
      stats.size.decrement();
      ramBytes.add(-cacheEntry.ramBytesUsed() - HASHTABLE_RAM_BYTES_PER_ENTRY);
      return cacheEntry.value;
    }
    return null;
  }

  @Override
  public V put(K key, V val) {
    if (val == null) return null;
    CacheEntry<K, V> e = new CacheEntry<>(key, val, timeSource.getEpochTimeNs());
    return putCacheEntry(e);
  }

  /**
   * Visible for testing to create synthetic cache entries.
   * @lucene.internal
   */
  public V putCacheEntry(CacheEntry<K, V> e) {
    stats.accessCounter.increment();
    // initialize oldestEntry
    oldestEntry.updateAndGet(x -> x > e.lastAccessed || x == 0 ? e.lastAccessed : x);
    CacheEntry<K, V> oldCacheEntry = map.put(e.key, e);
    if (oldCacheEntry == null) {
      stats.size.increment();
      ramBytes.add(e.ramBytesUsed() + HASHTABLE_RAM_BYTES_PER_ENTRY); // added key + value + entry
    } else {
      ramBytes.add(-oldCacheEntry.ramBytesUsed());
      ramBytes.add(e.ramBytesUsed());
    }
    if (islive) {
      stats.putCounter.increment();
    } else {
      stats.nonLivePutCounter.increment();
    }
    maybeMarkAndSweep();
    return oldCacheEntry == null ? null : oldCacheEntry.value;
  }

  private void maybeMarkAndSweep() {
    // Check if we need to clear out old entries from the cache.
    // isCleaning variable is checked instead of markAndSweepLock.isLocked()
    // for performance because every put invocation will check until
    // the size is back to an acceptable level.
    //
    // There is a race between the check and the call to markAndSweep, but
    // it's unimportant because markAndSweep actually acquires the lock or returns if it can't.
    //
    // Thread safety note: isCleaning read is piggybacked (comes after) other volatile reads
    // in this method.
    boolean evictByIdleTime = maxIdleTimeNs != Long.MAX_VALUE;
    int currentSize = stats.size.intValue();
    long idleCutoff = evictByIdleTime ? timeSource.getEpochTimeNs() - maxIdleTimeNs : -1L;
    if ((currentSize > upperWaterMark || (evictByIdleTime && oldestEntry.get() < idleCutoff)) && !isCleaning) {
      if (newThreadForCleanup) {
        new Thread(this::markAndSweep).start();
      } else if (cleanupThread != null) {
        cleanupThread.wakeThread();
      } else {
        markAndSweep();
      }
    }
  }

  /**
   * Removes items from the cache to bring the size down to the lowerWaterMark.
   * <p>Visible for unit testing.</p>
   * @lucene.internal
   */
  public void markAndSweep() {
    if (!markAndSweepLock.tryLock()) return;
    try {
      long lowHitCount = this.lowHitCount;
      isCleaning = true;
      this.lowHitCount = lowHitCount; // volatile write to make isCleaning visible

      int sz = stats.size.intValue();
      boolean evictByIdleTime = maxIdleTimeNs != Long.MAX_VALUE;
      long idleCutoff = evictByIdleTime ? timeSource.getEpochTimeNs() - maxIdleTimeNs : -1L;
      if (sz <= upperWaterMark && (evictByIdleTime && oldestEntry.get() > idleCutoff)) {
        /* SOLR-7585: Even though we acquired a lock, multiple threads might detect a need for calling this method.
         * Locking keeps these from executing at the same time, so they run sequentially. The second and subsequent
         * sequential runs of this method don't need to be done, since there are no elements to remove.
         */
        return;
      }

      // first evict by idleTime - it's less costly to do an additional pass over the
      // map than to manage the outdated entries in a TreeSet
      if (evictByIdleTime) {
        long currentOldestEntry = Long.MAX_VALUE;
        Iterator<Map.Entry<Object, CacheEntry<K, V>>> iterator = map.entrySet().iterator();
        while (iterator.hasNext()) {
          Map.Entry<Object, CacheEntry<K, V>> entry = iterator.next();
          entry.getValue().lastAccessedCopy = entry.getValue().lastAccessed;
          if (entry.getValue().lastAccessedCopy < idleCutoff) {
            iterator.remove();
            postRemoveEntry(entry.getValue());
            stats.evictionIdleCounter.increment();
          } else {
            if (entry.getValue().lastAccessedCopy < currentOldestEntry) {
              currentOldestEntry = entry.getValue().lastAccessedCopy;
            }
          }
        }
        if (currentOldestEntry != Long.MAX_VALUE) {
          oldestEntry.set(currentOldestEntry);
        }
        // refresh size and maybe return
        sz = stats.size.intValue();
        if (sz <= upperWaterMark) {
          return;
        }
      }
      int wantToRemove = sz - lowerWaterMark;

      TreeSet<CacheEntry<K, V>> tree = new TreeSet<>();

      for (CacheEntry<K, V> ce : map.values()) {
        // set hitsCopy to avoid later Atomic reads. Primitive types are faster than the atomic get().
        ce.hitsCopy = ce.hits.longValue();
        ce.lastAccessedCopy = ce.lastAccessed;
        if (timeDecay) {
          ce.hits.reset();
          ce.hits.add(ce.hitsCopy >>> 1);
        }
        if (tree.size() < wantToRemove) {
          tree.add(ce);
        } else {
          /*
           * SOLR-7585: Before doing this part, make sure the TreeSet actually has an element, since the first() method
           * fails with NoSuchElementException if the set is empty. If that test passes, check hits. This test may
           * never actually fail due to the upperWaterMark check above, but we'll do it anyway.
           */
          if (tree.size() > 0) {
            /* If hits are not equal, we can remove before adding which is slightly faster. I can no longer remember
             * why removing first is faster, but I vaguely remember being sure about it!
             */
            if (ce.hitsCopy < tree.first().hitsCopy) {
              tree.remove(tree.first());
              tree.add(ce);
            } else if (ce.hitsCopy == tree.first().hitsCopy) {
              tree.add(ce);
              tree.remove(tree.first());
            }
          }
        }
      }

      for (CacheEntry<K, V> e : tree) {
        evictEntry(e.key);
      }
      if (evictByIdleTime) {
        // do a full pass because we don't know the max. age of the remaining items
        long currentOldestEntry = Long.MAX_VALUE;
        for (CacheEntry<K, V> e : map.values()) {
          if (e.lastAccessedCopy < currentOldestEntry) {
            currentOldestEntry = e.lastAccessedCopy;
          }
        }
        if (currentOldestEntry != Long.MAX_VALUE) {
          oldestEntry.set(currentOldestEntry);
        }
      }
    } finally {
      isCleaning = false; // set before markAndSweep.unlock() for visibility
      markAndSweepLock.unlock();
    }
  }

  private void evictEntry(K key) {
    CacheEntry<K, V> o = map.remove(key);
    postRemoveEntry(o);
  }

  private void postRemoveEntry(CacheEntry<K, V> o) {
    if (o == null) return;
    ramBytes.add(-(o.ramBytesUsed() + HASHTABLE_RAM_BYTES_PER_ENTRY));
    stats.size.decrement();
    stats.evictionCounter.increment();
    if (evictionListener != null) evictionListener.evictedEntry(o.key, o.value);
  }

  /**
   * Returns 'n' number of least used entries present in this cache.
   * <p>
   * This uses a TreeSet to collect the 'n' least used items ordered by ascending hitcount
   * and returns a LinkedHashMap containing 'n' or less than 'n' entries.
   *
   * @param n the number of items needed
   * @return a LinkedHashMap containing 'n' or less than 'n' entries
   */
  public Map<K, V> getLeastUsedItems(int n) {
    Map<K, V> result = new LinkedHashMap<>();
    if (n <= 0)
      return result;
    TreeSet<CacheEntry<K, V>> tree = new TreeSet<>();
    // we need to grab the lock since we are changing the copy variables
    markAndSweepLock.lock();
    try {
      for (Map.Entry<Object, CacheEntry<K, V>> entry : map.entrySet()) {
        CacheEntry<K, V> ce = entry.getValue();
        ce.hitsCopy = ce.hits.longValue();
        ce.lastAccessedCopy = ce.lastAccessed;
        if (tree.size() < n) {
          tree.add(ce);
        } else {
          // If the hits are not equal, we can remove before adding
          // which is slightly faster
          if (ce.hitsCopy < tree.first().hitsCopy) {
            tree.remove(tree.first());
            tree.add(ce);
          } else if (ce.hitsCopy == tree.first().hitsCopy) {
            tree.add(ce);
            tree.remove(tree.first());
          }
        }
      }
    } finally {
      markAndSweepLock.unlock();
    }
    for (CacheEntry<K, V> e : tree) {
      result.put(e.key, e.value);
    }
    return result;
  }

  /**
   * Returns 'n' number of most used entries present in this cache.
   * <p>
   * This uses a TreeSet to collect the 'n' most used items ordered by descending hitcount
   * and returns a LinkedHashMap containing 'n' or less than 'n' entries.
   *
   * @param n the number of items needed
   * @return a LinkedHashMap containing 'n' or less than 'n' entries
   */
  public Map<K, V> getMostUsedItems(int n) {
    Map<K, V> result = new LinkedHashMap<>();
    if (n <= 0)
      return result;
    TreeSet<CacheEntry<K, V>> tree = new TreeSet<>();
    // we need to grab the lock since we are changing the copy variables
    markAndSweepLock.lock();
    try {
      for (Map.Entry<Object, CacheEntry<K, V>> entry : map.entrySet()) {
        CacheEntry<K, V> ce = entry.getValue();
        ce.hitsCopy = ce.hits.longValue();
        ce.lastAccessedCopy = ce.lastAccessed;
        if (tree.size() < n) {
          tree.add(ce);
        } else {
          // If the hits are not equal, we can remove before adding
          // which is slightly faster
          if (ce.hitsCopy > tree.last().hitsCopy) {
            tree.remove(tree.last());
            tree.add(ce);
          } else if (ce.hitsCopy == tree.last().hitsCopy) {
            tree.add(ce);
            tree.remove(tree.last());
          }
        }
      }
    } finally {
      markAndSweepLock.unlock();
    }
    for (CacheEntry<K, V> e : tree) {
      result.put(e.key, e.value);
    }
    return result;
  }

  public int size() {
    return stats.size.intValue();
  }

  @Override
  public void clear() {
    map.clear();
    ramBytes.reset();
  }

  public Map<Object, CacheEntry<K, V>> getMap() {
    return map;
  }

  @Override
  public long ramBytesUsed() {
    return BASE_RAM_BYTES_USED + ramBytes.sum();
  }

  public static class CacheEntry<K, V> implements Comparable<CacheEntry<K, V>>, Accountable {
    public static final long BASE_RAM_BYTES_USED = RamUsageEstimator.shallowSizeOfInstance(CacheEntry.class)
        // AtomicLong
        + RamUsageEstimator.primitiveSizes.get(long.class);

    final K key;
    final V value;
    final long ramBytesUsed;
    final LongAdder hits = new LongAdder();
    long hitsCopy = 0;
    volatile long lastAccessed = 0;
    long lastAccessedCopy = 0;

    public CacheEntry(K key, V value, long lastAccessed) {
      this.key = key;
      this.value = value;
      this.lastAccessed = lastAccessed;
      ramBytesUsed = BASE_RAM_BYTES_USED +
          RamUsageEstimator.sizeOfObject(key, QUERY_DEFAULT_RAM_BYTES_USED) +
          RamUsageEstimator.sizeOfObject(value, QUERY_DEFAULT_RAM_BYTES_USED);
    }

    @Override
    public int compareTo(CacheEntry<K, V> that) {
      if (this.hitsCopy == that.hitsCopy) {
        if (this.lastAccessedCopy == that.lastAccessedCopy) {
          return 0;
        }
        return this.lastAccessedCopy < that.lastAccessedCopy ? 1 : -1;
      }
      return this.hitsCopy < that.hitsCopy ? 1 : -1;
    }

    @Override
    public int hashCode() {
      return value.hashCode();
    }

    @Override
    public boolean equals(Object obj) {
      return value.equals(obj);
    }

    @Override
    public String toString() {
      return "key: " + key + " value: " + value + " hits:" + hits.longValue();
    }

    @Override
    public long ramBytesUsed() {
      return ramBytesUsed;
    }
  }

  private boolean isDestroyed = false;

  public void destroy() {
    try {
      if (cleanupThread != null) {
        cleanupThread.stopThread();
      }
    } finally {
      isDestroyed = true;
    }
  }

  public Stats getStats() {
    return stats;
  }

  public static class Stats implements Accountable {
    private static final long RAM_BYTES_USED =
        RamUsageEstimator.shallowSizeOfInstance(Stats.class) +
        // LongAdder
        7 * (
            RamUsageEstimator.NUM_BYTES_ARRAY_HEADER +
            RamUsageEstimator.primitiveSizes.get(long.class) +
            2 * (RamUsageEstimator.NUM_BYTES_OBJECT_REF + RamUsageEstimator.primitiveSizes.get(long.class))
        );

    private final LongAdder accessCounter = new LongAdder();
    private final LongAdder putCounter = new LongAdder();
    private final LongAdder nonLivePutCounter = new LongAdder();
    private final LongAdder missCounter = new LongAdder();
    private final LongAdder size = new LongAdder();
    private LongAdder evictionCounter = new LongAdder();
    private LongAdder evictionIdleCounter = new LongAdder();

    public long getCumulativeLookups() {
      return (accessCounter.longValue() - putCounter.longValue() - nonLivePutCounter.longValue()) + missCounter.longValue();
    }

    public long getCumulativeHits() {
      return accessCounter.longValue() - putCounter.longValue() - nonLivePutCounter.longValue();
    }

    public long getCumulativePuts() {
      return putCounter.longValue();
    }

    public long getCumulativeEvictions() {
      return evictionCounter.longValue();
    }

    public long getCumulativeIdleEvictions() {
      return evictionIdleCounter.longValue();
    }

    public int getCurrentSize() {
      return size.intValue();
    }

    public long getCumulativeNonLivePuts() {
      return nonLivePutCounter.longValue();
    }

    public long getCumulativeMisses() {
      return missCounter.longValue();
    }

    public void add(Stats other) {
      accessCounter.add(other.accessCounter.longValue());
      putCounter.add(other.putCounter.longValue());
      nonLivePutCounter.add(other.nonLivePutCounter.longValue());
      missCounter.add(other.missCounter.longValue());
      evictionCounter.add(other.evictionCounter.longValue());
      evictionIdleCounter.add(other.evictionIdleCounter.longValue());
      long maxSize = Math.max(size.longValue(), other.size.longValue());
      size.reset();
      size.add(maxSize);
    }

    @Override
    public long ramBytesUsed() {
      return RAM_BYTES_USED;
    }
  }

  public static interface EvictionListener<K, V> {
    public void evictedEntry(K key, V value);
  }

  private static class CleanupThread extends Thread {
    private WeakReference<ConcurrentLFUCache> cache;

    private boolean stop = false;

    public CleanupThread(ConcurrentLFUCache c) {
      cache = new WeakReference<>(c);
    }

    @Override
    public void run() {
      while (true) {
        ConcurrentLFUCache c = cache.get();
        if (c == null) break;
        synchronized (this) {
          if (stop) break;
          long waitTimeMs = c.maxIdleTimeNs != Long.MAX_VALUE ? TimeUnit.MILLISECONDS.convert(c.maxIdleTimeNs, TimeUnit.NANOSECONDS) : 0L;
          try {
            this.wait(waitTimeMs);
          } catch (InterruptedException e) {
          }
        }
        if (stop) break;
        c = cache.get();
        if (c == null) break;
        c.markAndSweep();
      }
    }

    void wakeThread() {
      synchronized (this) {
        this.notify();
      }
    }

    void stopThread() {
      synchronized (this) {
        stop = true;
        this.notify();
      }
    }
  }

}
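A hedged usage sketch of the class removed above, illustrating the behavior its javadoc and markAndSweep comments describe: once a put pushes the size past upperWaterMark, the sweep evicts the least-frequently-hit entries back down to lowerWaterMark. The values are illustrative, and the printed result is a reasoned expectation rather than a guarantee:

import org.apache.solr.util.ConcurrentLFUCache;

public class LfuSweepSketch {
  public static void main(String[] args) {
    // upperWaterMark=4, lowerWaterMark=2; this constructor starts no
    // cleanup thread, so the sweep runs synchronously inside put()
    // once the size exceeds 4.
    ConcurrentLFUCache<Integer, String> lfu = new ConcurrentLFUCache<>(4, 2);
    for (int i = 1; i <= 4; i++) {
      lfu.put(i, "v" + i);
    }
    lfu.get(1);
    lfu.get(1);
    lfu.get(2);        // entries 1 and 2 now have the highest hit counts
    lfu.put(5, "v5");  // size 5 > upperWaterMark: sweep down to 2 entries
    // The survivors should be the most-hit entries, 1 and 2; note that the
    // just-inserted entry 5 has zero hits and can itself be swept.
    System.out.println(lfu.getMostUsedItems(2).keySet()); // likely [1, 2]
  }
}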
@@ -83,25 +83,25 @@
       that match a particular query.
      -->
    <filterCache
      class="solr.search.FastLRUCache"
      class="solr.search.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="2"/>

    <queryResultCache
      class="solr.search.LRUCache"
      class="solr.search.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="2"/>

    <documentCache
      class="solr.search.LRUCache"
      class="solr.search.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="0"/>

    <cache name="perSegFilter"
      class="solr.search.LRUCache"
      class="solr.search.CaffeineCache"
      size="10"
      initialSize="0"
      autowarmCount="10" />

@@ -113,7 +113,7 @@
    <!--

    <cache name="myUserCache"
      class="solr.search.LRUCache"
      class="solr.search.CaffeineCache"
      size="4096"
      initialSize="1024"
      autowarmCount="1024"
@@ -37,21 +37,21 @@
      -->
    <filterCache
      enabled="${filterCache.enabled}"
      class="solr.search.FastLRUCache"
      class="solr.search.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="2"/>

    <queryResultCache
      enabled="${queryResultCache.enabled}"
      class="solr.search.LRUCache"
      class="solr.search.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="2"/>

    <documentCache
      enabled="${documentCache.enabled}"
      class="solr.search.LRUCache"
      class="solr.search.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="0"/>
@@ -21,19 +21,19 @@
  <schemaFactory class="ClassicIndexSchemaFactory"/>
  <query>
    <cache name="lfuCacheDecayFalse"
           class="solr.search.LFUCache"
           class="solr.search.CaffeineCache"
           size="10"
           initialSize="9"
           timeDecay="false" />

    <cache name="lfuCacheDecayTrue"
           class="solr.search.LFUCache"
           class="solr.search.CaffeineCache"
           size="10"
           initialSize="9"
           timeDecay="true" />

    <cache name="lfuCacheDecayDefault"
           class="solr.search.LFUCache"
           class="solr.search.CaffeineCache"
           size="10"
           initialSize="9" />
  </query>
@@ -83,25 +83,25 @@
       that match a particular query.
      -->
    <filterCache
      class="solr.search.FastLRUCache"
      class="solr.search.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="2"/>

    <queryResultCache
      class="solr.search.LRUCache"
      class="solr.search.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="2"/>

    <documentCache
      class="solr.search.LRUCache"
      class="solr.search.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="0"/>

    <cache name="perSegFilter"
      class="solr.search.LRUCache"
      class="solr.search.CaffeineCache"
      size="10"
      initialSize="0"
      autowarmCount="10" />

@@ -113,7 +113,7 @@
    <!--

    <cache name="myUserCache"
      class="solr.search.LRUCache"
      class="solr.search.CaffeineCache"
      size="4096"
      initialSize="1024"
      autowarmCount="1024"
@@ -39,8 +39,8 @@
  <!-- deep paging better play nice with caching -->
  <query>
    <!-- no autowarming, it screws up our ability to sanity check cache stats in tests -->
    <filterCache class="solr.FastLRUCache" size="50" initialSize="50" autowarmCount="0"/>
    <queryResultCache class="solr.LRUCache" size="50" initialSize="50" autowarmCount="0"/>
    <filterCache class="solr.CaffeineCache" size="50" initialSize="50" autowarmCount="0"/>
    <queryResultCache class="solr.CaffeineCache" size="50" initialSize="50" autowarmCount="0"/>
    <queryResultWindowSize>50</queryResultWindowSize>
    <queryResultMaxDocsCached>500</queryResultMaxDocsCached>
    <!-- randomized so we exercise cursors using various paths in SolrIndexSearcher -->
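This test config exists to exercise cursor-based deep paging against the query caches. For context, here is a hedged SolrJ sketch of the cursor loop such a test drives; the client endpoint and collection name are assumptions, not taken from the patch:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrQuery.SortClause;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.params.CursorMarkParams;

public class CursorSketch {
  public static void main(String[] args) throws Exception {
    // Hypothetical endpoint; any SolrClient works the same way.
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build()) {
      SolrQuery q = new SolrQuery("*:*");
      q.setRows(50); // matches the queryResultWindowSize above
      q.setSort(SortClause.asc("id")); // cursors need a total order ending in the uniqueKey
      String cursor = CursorMarkParams.CURSOR_MARK_START;
      while (true) {
        q.set(CursorMarkParams.CURSOR_MARK_PARAM, cursor);
        QueryResponse rsp = client.query(q);
        String next = rsp.getNextCursorMark();
        // process rsp.getResults() here
        if (cursor.equals(next)) break; // unchanged cursor means no more results
        cursor = next;
      }
    }
  }
}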
@@ -41,19 +41,19 @@

  <query>
    <filterCache
      class="solr.search.FastLRUCache"
      class="solr.search.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="0"/>

    <queryResultCache
      class="solr.search.LRUCache"
      class="solr.search.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="0"/>

    <documentCache
      class="solr.search.LRUCache"
      class="solr.search.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="0"/>
@@ -56,19 +56,19 @@
       that match a particular query.
      -->
    <filterCache
      class="solr.search.FastLRUCache"
      class="solr.search.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="256"/>

    <queryResultCache
      class="solr.search.LRUCache"
      class="solr.search.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="1024"/>

    <documentCache
      class="solr.search.LRUCache"
      class="solr.search.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="0"/>
@@ -37,21 +37,21 @@
  <query>
    <filterCache
      enabled="${filterCache.enabled:false}"
      class="solr.search.FastLRUCache"
      class="solr.search.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="2"/>

    <queryResultCache
      enabled="${queryResultCache.enabled:false}"
      class="solr.search.LRUCache"
      class="solr.search.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="2"/>

    <documentCache
      enabled="${documentCache.enabled:false}"
      class="solr.search.LRUCache"
      class="solr.search.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="0"/>
@@ -95,25 +95,25 @@
       that match a particular query.
      -->
    <filterCache
      class="solr.search.FastLRUCache"
      class="solr.search.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="2"/>

    <queryResultCache
      class="solr.search.LRUCache"
      class="solr.search.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="2"/>

    <documentCache
      class="solr.search.LRUCache"
      class="solr.search.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="0"/>

    <cache name="perSegFilter"
      class="solr.search.LRUCache"
      class="solr.search.CaffeineCache"
      size="10"
      initialSize="0"
      autowarmCount="10" />

@@ -125,7 +125,7 @@
    <!--

    <cache name="myUserCache"
      class="solr.search.LRUCache"
      class="solr.search.CaffeineCache"
      size="4096"
      initialSize="1024"
      autowarmCount="1024"
@@ -27,7 +27,7 @@
  <requestHandler name="/select" class="solr.SearchHandler" />
  <query>
    <cache name="myPerSegmentCache"
           class="solr.LRUCache"
           class="solr.CaffeineCache"
           size="3"
           initialSize="0"
           autowarmCount="100%"
@@ -82,25 +82,25 @@
       that match a particular query.
      -->
    <filterCache
      class="solr.search.FastLRUCache"
      class="solr.search.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="2"/>

    <queryResultCache
      class="solr.search.LRUCache"
      class="solr.search.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="2"/>

    <documentCache
      class="solr.search.LRUCache"
      class="solr.search.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="0"/>

    <cache name="perSegFilter"
      class="solr.search.LRUCache"
      class="solr.search.CaffeineCache"
      size="10"
      initialSize="0"
      autowarmCount="10" />

@@ -112,7 +112,7 @@
    <!--

    <cache name="myUserCache"
      class="solr.search.LRUCache"
      class="solr.search.CaffeineCache"
      size="4096"
      initialSize="1024"
      autowarmCount="1024"
@@ -35,7 +35,7 @@

    <queryResultMaxDocsCached>200</queryResultMaxDocsCached>

    <documentCache class="solr.LRUCache"
    <documentCache class="solr.CaffeineCache"
                   size="512"
                   initialSize="512"
                   autowarmCount="0"/>
@@ -27,13 +27,13 @@
  <requestHandler name="/select" class="solr.SearchHandler" />
  <query>
    <cache name="perSegSpatialFieldCache_srptgeom"
           class="solr.LRUCache"
           class="solr.CaffeineCache"
           size="3"
           initialSize="0"
           autowarmCount="100%"
           regenerator="solr.NoOpRegenerator"/>
    <cache name="perSegSpatialFieldCache_srptgeom_geo3d"
           class="solr.LRUCache"
           class="solr.CaffeineCache"
           size="3"
           initialSize="0"
           autowarmCount="100%"
@@ -140,23 +140,23 @@


  <query>
    <filterCache class="solr.FastLRUCache"
    <filterCache class="solr.CaffeineCache"
                 size="512"
                 initialSize="512"
                 autowarmCount="0" />

    <queryResultCache class="solr.LRUCache"
    <queryResultCache class="solr.CaffeineCache"
                      size="512"
                      initialSize="512"
                      autowarmCount="0" />

    <documentCache class="solr.LRUCache"
    <documentCache class="solr.CaffeineCache"
                   size="512"
                   initialSize="512"
                   autowarmCount="0" />

    <cache name="perSegFilter"
           class="solr.search.LRUCache"
           class="solr.search.CaffeineCache"
           size="10"
           initialSize="0"
           autowarmCount="10"
@@ -95,25 +95,25 @@
       that match a particular query.
      -->
    <filterCache
      class="solr.search.FastLRUCache"
      class="solr.search.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="2"/>

    <queryResultCache
      class="solr.search.LRUCache"
      class="solr.search.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="2"/>

    <documentCache
      class="solr.search.LRUCache"
      class="solr.search.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="0"/>

    <cache name="perSegFilter"
      class="solr.search.LRUCache"
      class="solr.search.CaffeineCache"
      size="10"
      initialSize="0"
      autowarmCount="10" />

@@ -125,7 +125,7 @@
    <!--

    <cache name="myUserCache"
      class="solr.search.LRUCache"
      class="solr.search.CaffeineCache"
      size="4096"
      initialSize="1024"
      autowarmCount="1024"
@@ -36,19 +36,19 @@

  <query>
    <filterCache
      class="solr.FastLRUCache"
      class="solr.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="0"/>

    <queryResultCache
      class="solr.LRUCache"
      class="solr.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="0"/>

    <documentCache
      class="solr.LRUCache"
      class="solr.CaffeineCache"
      size="512"
      initialSize="512"
      autowarmCount="0"/>
@@ -391,15 +391,7 @@
    <maxBooleanClauses>${solr.max.booleanClauses:1024}</maxBooleanClauses>

    <!-- Solr Internal Query Caches

         There are two implementations of cache available for Solr,
         LRUCache, based on a synchronized LinkedHashMap, and
         FastLRUCache, based on a ConcurrentHashMap.

         FastLRUCache has faster gets and slower puts in single
         threaded operation and thus is generally faster than LRUCache
         when the hit ratio of the cache is high (> 75%), and may be
         faster under other scenarios on multi-cpu systems.
         Starting with Solr 9.0 the default cache implementation used is CaffeineCache.
      -->

    <!-- Filter Cache

@@ -409,12 +401,11 @@
         new searcher is opened, its caches may be prepopulated or
         "autowarmed" using data from caches in the old searcher.
         autowarmCount is the number of items to prepopulate. For
         LRUCache, the autowarmed items will be the most recently
         CaffeineCache, the autowarmed items will be the most recently
         accessed items.

         Parameters:
           class - the SolrCache implementation LRUCache or
                   (LRUCache or FastLRUCache)
           class - the SolrCache implementation (CaffeineCache by default)
           size - the maximum number of entries in the cache
           initialSize - the initial capacity (number of entries) of
                         the cache. (see java.util.HashMap)

@@ -424,7 +415,7 @@
         to occupy. Note that when this option is specified, the size
         and initialSize parameters are ignored.
      -->
    <filterCache class="solr.FastLRUCache"
    <filterCache class="solr.CaffeineCache"
                 size="512"
                 initialSize="512"
                 autowarmCount="0"/>

@@ -433,11 +424,11 @@

         Caches results of searches - ordered lists of document ids
         (DocList) based on a query, a sort, and the range of documents requested.
         Additional supported parameter by LRUCache:
         Additional supported parameter by CaffeineCache:
            maxRamMB - the maximum amount of RAM (in MB) that this cache is allowed
                       to occupy
      -->
    <queryResultCache class="solr.LRUCache"
    <queryResultCache class="solr.CaffeineCache"
                      size="512"
                      initialSize="512"
                      autowarmCount="0"/>

@@ -448,14 +439,14 @@
         document). Since Lucene internal document ids are transient,
         this cache will not be autowarmed.
      -->
    <documentCache class="solr.LRUCache"
    <documentCache class="solr.CaffeineCache"
                   size="512"
                   initialSize="512"
                   autowarmCount="0"/>

    <!-- custom cache currently used by block join -->
    <cache name="perSegFilter"
           class="solr.search.LRUCache"
           class="solr.search.CaffeineCache"
           size="10"
           initialSize="0"
           autowarmCount="10"

@@ -468,7 +459,7 @@
         even if not configured here.
      -->
    <!--
    <fieldValueCache class="solr.FastLRUCache"
    <fieldValueCache class="solr.CaffeineCache"
                     size="512"
                     autowarmCount="128"
                     showItems="32" />

@@ -485,7 +476,7 @@
      -->
    <!--
    <cache name="myUserCache"
           class="solr.LRUCache"
           class="solr.CaffeineCache"
           size="4096"
           initialSize="1024"
           autowarmCount="1024"
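The attribute set described in these comments maps one-to-one onto SolrCache init args, so the same cache can be built programmatically. A hedged sketch follows; all values are illustrative, and the expectation that maxRamMB overrides size/initialSize is taken from the comment above rather than verified against the implementation:

import java.util.HashMap;
import java.util.Map;

import org.apache.solr.search.CaffeineCache;

public class FilterCacheArgsSketch {
  public static void main(String[] args) {
    // The XML attributes above become string-valued init args.
    Map<String, String> cfg = new HashMap<>();
    cfg.put("size", "512");
    cfg.put("initialSize", "512");
    cfg.put("autowarmCount", "0");
    cfg.put("maxRamMB", "64"); // per the comment above, size/initialSize are then ignored
    CaffeineCache<Object, Object> filterCache = new CaffeineCache<>();
    filterCache.init(cfg, null, null);
    System.out.println(filterCache.getDescription());
  }
}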
@@ -56,19 +56,19 @@
  </updateHandler>

  <query>
    <filterCache class="solr.FastLRUCache"
    <filterCache class="solr.CaffeineCache"
                 size="0"
                 initialSize="0"
                 autowarmCount="0"/>
    <queryResultCache class="solr.LRUCache"
    <queryResultCache class="solr.CaffeineCache"
                      size="0"
                      initialSize="0"
                      autowarmCount="0"/>
    <documentCache class="solr.LRUCache"
    <documentCache class="solr.CaffeineCache"
                   size="0"
                   initialSize="0"
                   autowarmCount="0"/>
    <fieldValueCache class="solr.FastLRUCache"
    <fieldValueCache class="solr.CaffeineCache"
                     size="0"
                     autowarmCount="0"
                     showItems="0" />
@@ -25,7 +25,7 @@ import org.apache.solr.highlight.DefaultSolrHighlighter;
import org.apache.solr.metrics.SolrMetricManager;
import org.apache.solr.metrics.SolrMetricProducer;
import org.apache.solr.metrics.SolrMetricsContext;
import org.apache.solr.search.LRUCache;
import org.apache.solr.search.CaffeineCache;
import org.junit.BeforeClass;
import java.io.File;
import java.net.URI;

@@ -54,7 +54,7 @@ public class SolrInfoBeanTest extends SolrTestCaseJ4
    classes.addAll(getClassesForPackage(SearchComponent.class.getPackage().getName()));
    classes.addAll(getClassesForPackage(LukeRequestHandler.class.getPackage().getName()));
    classes.addAll(getClassesForPackage(DefaultSolrHighlighter.class.getPackage().getName()));
    classes.addAll(getClassesForPackage(LRUCache.class.getPackage().getName()));
    classes.addAll(getClassesForPackage(CaffeineCache.class.getPackage().getName()));
    // System.out.println(classes);

    int checked = 0;

@@ -75,7 +75,7 @@ public class SolrInfoBeanTest extends SolrTestCaseJ4
      assertNotNull( info.getClass().getCanonicalName(), info.getDescription() );
      assertNotNull( info.getClass().getCanonicalName(), info.getCategory() );

      if( info instanceof LRUCache ) {
      if( info instanceof CaffeineCache ) {
        continue;
      }
@@ -490,8 +490,8 @@ public class TestSolrConfigHandler extends RestTestBase {
        TIMEOUT_S);

    payload = "{\n" +
        "'add-cache' : {name:'lfuCacheDecayFalse', class:'solr.search.LFUCache', size:10 ,initialSize:9 , timeDecay:false }," +
        "'add-cache' : {name: 'perSegFilter', class: 'solr.search.LRUCache', size:10, initialSize:0 , autowarmCount:10}}";
        "'add-cache' : {name:'lfuCacheDecayFalse', class:'solr.search.CaffeineCache', size:10 ,initialSize:9 , timeDecay:false }," +
        "'add-cache' : {name: 'perSegFilter', class: 'solr.search.CaffeineCache', size:10, initialSize:0 , autowarmCount:10}}";
    runConfigCommand(writeHarness, "/config", payload);

    map = testForResponseElement(writeHarness,

@@ -499,13 +499,13 @@ public class TestSolrConfigHandler extends RestTestBase {
        "/config/overlay",
        cloudSolrClient,
        asList("overlay", "cache", "lfuCacheDecayFalse", "class"),
        "solr.search.LFUCache",
        "solr.search.CaffeineCache",
        TIMEOUT_S);
    assertEquals("solr.search.LRUCache",getObjectByPath(map, true, ImmutableList.of("overlay", "cache", "perSegFilter", "class")));
    assertEquals("solr.search.CaffeineCache",getObjectByPath(map, true, ImmutableList.of("overlay", "cache", "perSegFilter", "class")));

    map = getRespMap("/dump101?cacheNames=lfuCacheDecayFalse&cacheNames=perSegFilter", writeHarness);
    assertEquals("Actual output "+ Utils.toJSONString(map), "org.apache.solr.search.LRUCache",getObjectByPath(map, true, ImmutableList.of( "caches", "perSegFilter")));
    assertEquals("Actual output "+ Utils.toJSONString(map), "org.apache.solr.search.LFUCache",getObjectByPath(map, true, ImmutableList.of( "caches", "lfuCacheDecayFalse")));
    assertEquals("Actual output "+ Utils.toJSONString(map), "org.apache.solr.search.CaffeineCache",getObjectByPath(map, true, ImmutableList.of( "caches", "perSegFilter")));
    assertEquals("Actual output "+ Utils.toJSONString(map), "org.apache.solr.search.CaffeineCache",getObjectByPath(map, true, ImmutableList.of( "caches", "lfuCacheDecayFalse")));

  }
@@ -1,594 +0,0 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.solr.search;

import org.apache.commons.math3.stat.descriptive.SummaryStatistics;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.WildcardQuery;
import org.apache.lucene.util.Accountable;
import org.apache.lucene.util.RamUsageEstimator;
import org.apache.lucene.util.TestUtil;
import org.apache.solr.SolrTestCase;
import org.apache.solr.common.util.TimeSource;
import org.apache.solr.metrics.MetricsMap;
import org.apache.solr.metrics.SolrMetricManager;
import org.apache.solr.metrics.SolrMetricsContext;
import org.apache.solr.util.ConcurrentLRUCache;
import org.apache.solr.util.RTimer;

import java.util.HashMap;
import java.util.Map;
import java.util.Random;
import java.util.TreeMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;


/**
 * Test for FastLRUCache
 *
 * @see org.apache.solr.search.FastLRUCache
 * @since solr 1.4
 */
public class TestFastLRUCache extends SolrTestCase {
  SolrMetricManager metricManager = new SolrMetricManager();
  String registry = TestUtil.randomSimpleString(random(), 2, 10);
  String scope = TestUtil.randomSimpleString(random(), 2, 10);

  public void testPercentageAutowarm() throws Exception {
    FastLRUCache<Object, Object> fastCache = new FastLRUCache<>();
    Map<String, String> params = new HashMap<>();
    params.put("size", "100");
    params.put("initialSize", "10");
    params.put("autowarmCount", "100%");
    CacheRegenerator cr = new NoOpRegenerator();
    Object o = fastCache.init(params, null, cr);
    SolrMetricsContext solrMetricsContext = new SolrMetricsContext(metricManager, registry, "foo");
    fastCache.initializeMetrics(solrMetricsContext, scope);
    MetricsMap metrics = fastCache.getMetricsMap();
    fastCache.setState(SolrCache.State.LIVE);
    for (int i = 0; i < 101; i++) {
      fastCache.put(i + 1, "" + (i + 1));
    }
    assertEquals("25", fastCache.get(25));
    assertEquals(null, fastCache.get(110));
    Map<String,Object> nl = metrics.getValue();
    assertEquals(2L, nl.get("lookups"));
    assertEquals(1L, nl.get("hits"));
    assertEquals(101L, nl.get("inserts"));
    assertEquals(null, fastCache.get(1)); // first item put in should be the first out
    FastLRUCache<Object, Object> fastCacheNew = new FastLRUCache<>();
    fastCacheNew.init(params, o, cr);
    fastCacheNew.initializeMetrics(solrMetricsContext, scope);
    metrics = fastCacheNew.getMetricsMap();
    fastCacheNew.warm(null, fastCache);
    fastCacheNew.setState(SolrCache.State.LIVE);
    fastCache.close();
    fastCacheNew.put(103, "103");
    assertEquals("90", fastCacheNew.get(90));
    assertEquals("50", fastCacheNew.get(50));
    nl = metrics.getValue();
    assertEquals(2L, nl.get("lookups"));
    assertEquals(2L, nl.get("hits"));
    assertEquals(1L, nl.get("inserts"));
    assertEquals(0L, nl.get("evictions"));
    assertEquals(5L, nl.get("cumulative_lookups"));
    assertEquals(3L, nl.get("cumulative_hits"));
    assertEquals(102L, nl.get("cumulative_inserts"));
    fastCacheNew.close();
  }

  public void testPercentageAutowarmMultiple() throws Exception {
    doTestPercentageAutowarm(100, 50, new int[]{51, 55, 60, 70, 80, 99, 100}, new int[]{1, 2, 3, 5, 10, 20, 30, 40, 50});
    doTestPercentageAutowarm(100, 25, new int[]{76, 80, 99, 100}, new int[]{1, 2, 3, 5, 10, 20, 30, 40, 50, 51, 55, 60, 70});
    doTestPercentageAutowarm(1000, 10, new int[]{901, 930, 950, 999, 1000}, new int[]{1, 5, 100, 200, 300, 400, 800, 899, 900});
    doTestPercentageAutowarm(100, 200, new int[]{1, 10, 25, 51, 55, 60, 70, 80, 99, 100}, new int[]{200, 300});
    doTestPercentageAutowarm(100, 0, new int[]{}, new int[]{1, 10, 25, 51, 55, 60, 70, 80, 99, 100, 200, 300});
  }
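The percentage form of autowarmCount exercised above resolves to an absolute entry count against the source cache's current size: 50% of 100 entries warms the 50 most recently used ones, and values over 100% are capped at the full cache. A hypothetical helper (not Solr API) illustrating that interpretation, consistent with the hit/miss expectations in these tests:

    // Hypothetical helper: resolve an autowarmCount setting such as "25"
    // or "50%" against the old cache's current entry count.
    static int resolveAutowarmCount(String autowarmCount, int oldCacheSize) {
      if (autowarmCount.endsWith("%")) {
        int pct = Integer.parseInt(autowarmCount.substring(0, autowarmCount.length() - 1));
        return (int) Math.min(oldCacheSize, oldCacheSize * (long) pct / 100);
      }
      return Integer.parseInt(autowarmCount);
    }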
  private void doTestPercentageAutowarm(int limit, int percentage, int[] hits, int[] misses) throws Exception {
    FastLRUCache<Object, Object> fastCache = new FastLRUCache<>();
    Map<String, String> params = new HashMap<>();
    params.put("size", String.valueOf(limit));
    params.put("initialSize", "10");
    params.put("autowarmCount", percentage + "%");
    CacheRegenerator cr = new NoOpRegenerator();
    Object o = fastCache.init(params, null, cr);
    SolrMetricsContext solrMetricsContext = new SolrMetricsContext(metricManager, registry, "foo");
    fastCache.initializeMetrics(solrMetricsContext, scope);
    fastCache.setState(SolrCache.State.LIVE);
    for (int i = 1; i <= limit; i++) {
      fastCache.put(i, "" + i); // adds numbers from 1 to limit
    }

    FastLRUCache<Object, Object> fastCacheNew = new FastLRUCache<>();
    fastCacheNew.init(params, o, cr);
    fastCacheNew.initializeMetrics(solrMetricsContext, scope);
    fastCacheNew.warm(null, fastCache);
    fastCacheNew.setState(SolrCache.State.LIVE);
    fastCache.close();

    for (int hit : hits) {
      assertEquals("The value " + hit + " should be on new cache", String.valueOf(hit), fastCacheNew.get(hit));
    }

    for (int miss : misses) {
      assertEquals("The value " + miss + " should NOT be on new cache", null, fastCacheNew.get(miss));
    }
    Map<String,Object> nl = fastCacheNew.getMetricsMap().getValue();
    assertEquals(Long.valueOf(hits.length + misses.length), nl.get("lookups"));
    assertEquals(Long.valueOf(hits.length), nl.get("hits"));
    fastCacheNew.close();
  }

  public void testNoAutowarm() throws Exception {
    FastLRUCache<Object, Object> fastCache = new FastLRUCache<>();
    Map<String, String> params = new HashMap<>();
    params.put("size", "100");
    params.put("initialSize", "10");
    CacheRegenerator cr = new NoOpRegenerator();
    Object o = fastCache.init(params, null, cr);
    SolrMetricsContext solrMetricsContext = new SolrMetricsContext(metricManager, registry, "foo");
    fastCache.initializeMetrics(solrMetricsContext, scope);
    fastCache.setState(SolrCache.State.LIVE);
    for (int i = 0; i < 101; i++) {
      fastCache.put(i + 1, "" + (i + 1));
    }
    assertEquals("25", fastCache.get(25));
    assertEquals(null, fastCache.get(110));
    Map<String,Object> nl = fastCache.getMetricsMap().getValue();
    assertEquals(2L, nl.get("lookups"));
    assertEquals(1L, nl.get("hits"));
    assertEquals(101L, nl.get("inserts"));
    assertEquals(null, fastCache.get(1)); // first item put in should be the first out
    FastLRUCache<Object, Object> fastCacheNew = new FastLRUCache<>();
    fastCacheNew.init(params, o, cr);
    fastCacheNew.warm(null, fastCache);
    fastCacheNew.setState(SolrCache.State.LIVE);
    fastCache.close();
    fastCacheNew.put(103, "103");
    assertEquals(null, fastCacheNew.get(90));
    assertEquals(null, fastCacheNew.get(50));
    fastCacheNew.close();
  }

  public void testFullAutowarm() throws Exception {
    FastLRUCache<Object, Object> cache = new FastLRUCache<>();
    Map<Object, Object> params = new HashMap<>();
    params.put("size", "100");
    params.put("initialSize", "10");
    params.put("autowarmCount", "-1");
    CacheRegenerator cr = new NoOpRegenerator();
    Object o = cache.init(params, null, cr);
    cache.setState(SolrCache.State.LIVE);
    for (int i = 0; i < 101; i++) {
      cache.put(i + 1, "" + (i + 1));
    }
    assertEquals("25", cache.get(25));
    assertEquals(null, cache.get(110));

    assertEquals(null, cache.get(1)); // first item put in should be the first out

    FastLRUCache<Object, Object> cacheNew = new FastLRUCache<>();
    cacheNew.init(params, o, cr);
    cacheNew.warm(null, cache);
    cacheNew.setState(SolrCache.State.LIVE);
    cache.close();
    cacheNew.put(103, "103");
    assertEquals("90", cacheNew.get(90));
    assertEquals("50", cacheNew.get(50));
    assertEquals("103", cacheNew.get(103));
    cacheNew.close();
  }
  public void testSimple() throws Exception {
    FastLRUCache sc = new FastLRUCache();
    Map l = new HashMap();
    l.put("size", "100");
    l.put("initialSize", "10");
    l.put("autowarmCount", "25");
    CacheRegenerator cr = new NoOpRegenerator();
    Object o = sc.init(l, null, cr);
    SolrMetricsContext solrMetricsContext = new SolrMetricsContext(metricManager, registry, "foo");
    sc.initializeMetrics(solrMetricsContext, scope);
    sc.setState(SolrCache.State.LIVE);
    for (int i = 0; i < 101; i++) {
      sc.put(i + 1, "" + (i + 1));
    }
    assertEquals("25", sc.get(25));
    assertEquals(null, sc.get(110));
    MetricsMap metrics = sc.getMetricsMap();
    Map<String,Object> nl = metrics.getValue();
    assertEquals(2L, nl.get("lookups"));
    assertEquals(1L, nl.get("hits"));
    assertEquals(101L, nl.get("inserts"));

    assertEquals(null, sc.get(1)); // first item put in should be the first out

    FastLRUCache scNew = new FastLRUCache();
    scNew.init(l, o, cr);
    scNew.initializeMetrics(solrMetricsContext, scope);
    scNew.warm(null, sc);
    scNew.setState(SolrCache.State.LIVE);
    sc.close();
    scNew.put(103, "103");
    assertEquals("90", scNew.get(90));
    assertEquals(null, scNew.get(50));
    nl = scNew.getMetricsMap().getValue();
    assertEquals(2L, nl.get("lookups"));
    assertEquals(1L, nl.get("hits"));
    assertEquals(1L, nl.get("inserts"));
    assertEquals(0L, nl.get("evictions"));

    assertEquals(5L, nl.get("cumulative_lookups"));
    assertEquals(2L, nl.get("cumulative_hits"));
    assertEquals(102L, nl.get("cumulative_inserts"));
    scNew.close();
  }
  public void testOldestItems() {
    ConcurrentLRUCache<Integer, String> cache = new ConcurrentLRUCache<>(100, 90);
    for (int i = 0; i < 50; i++) {
      cache.put(i + 1, "" + (i + 1));
    }
    cache.get(1);
    cache.get(3);
    Map<Integer, String> m = cache.getOldestAccessedItems(5);
    // oldest accessed: 7 6 5 4 2 (1 and 3 were just refreshed by the gets above)
    assertNotNull(m.get(7));
    assertNotNull(m.get(6));
    assertNotNull(m.get(5));
    assertNotNull(m.get(4));
    assertNotNull(m.get(2));

    m = cache.getOldestAccessedItems(0);
    assertTrue(m.isEmpty());

    // the latest-accessed accessor should handle an empty request too
    m = cache.getLatestAccessedItems(0);
    assertTrue(m.isEmpty());

    cache.destroy();
  }
  // enough randomness to exercise all of the different cache purging phases
  public void testRandom() {
    int sz = random().nextInt(100) + 5;
    int lowWaterMark = random().nextInt(sz - 3) + 1;
    int keyrange = random().nextInt(sz * 3) + 1;
    ConcurrentLRUCache<Integer, String> cache = new ConcurrentLRUCache<>(sz, lowWaterMark);
    for (int i = 0; i < 10000; i++) {
      cache.put(random().nextInt(keyrange), "");
      cache.get(random().nextInt(keyrange));
    }
  }

  void doPerfTest(int iter, int cacheSize, int maxKey) {
    final RTimer timer = new RTimer();

    int lowerWaterMark = cacheSize;
    int upperWaterMark = (int) (lowerWaterMark * 1.1);

    Random r = random();
    ConcurrentLRUCache cache = new ConcurrentLRUCache(upperWaterMark, lowerWaterMark, (upperWaterMark + lowerWaterMark) / 2, upperWaterMark, false, false, null, -1);
    boolean getSize = false;
    int minSize = 0, maxSize = 0;
    for (int i = 0; i < iter; i++) {
      cache.put(r.nextInt(maxKey), "TheValue");
      int sz = cache.size();
      if (!getSize && sz >= cacheSize) {
        getSize = true;
        minSize = sz;
      } else {
        if (sz < minSize) minSize = sz;
        else if (sz > maxSize) maxSize = sz;
      }
    }
    cache.destroy();

    System.out.println("time=" + timer.getTime() + ", minSize=" + minSize + ",maxSize=" + maxSize);
  }
  public void testAccountable() throws Exception {
    FastLRUCache<Query, DocSet> sc = new FastLRUCache<>();
    try {
      Map l = new HashMap();
      l.put("size", "100");
      l.put("initialSize", "10");
      l.put("autowarmCount", "25");
      CacheRegenerator cr = new NoOpRegenerator();
      Object o = sc.init(l, null, cr);
      SolrMetricsContext solrMetricsContext = new SolrMetricsContext(metricManager, registry, "foo");
      sc.initializeMetrics(solrMetricsContext, scope);
      sc.setState(SolrCache.State.LIVE);
      long initialBytes = sc.ramBytesUsed();
      WildcardQuery q = new WildcardQuery(new Term("foo", "bar"));
      DocSet docSet = new BitDocSet();
      sc.put(q, docSet);
      long updatedBytes = sc.ramBytesUsed();
      assertTrue(updatedBytes > initialBytes);
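      // Expected growth: the size of the key (query) and value (doc set),
      // plus per-entry overhead (the cache-entry object and one hash table slot).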
      long estimated = initialBytes + q.ramBytesUsed() + docSet.ramBytesUsed() + ConcurrentLRUCache.CacheEntry.BASE_RAM_BYTES_USED
          + RamUsageEstimator.HASHTABLE_RAM_BYTES_PER_ENTRY;
      assertEquals(estimated, updatedBytes);
      sc.clear();
      long clearedBytes = sc.ramBytesUsed();
      assertEquals(initialBytes, clearedBytes);
    } finally {
      sc.close();
    }
  }
  public void testSetLimits() throws Exception {
    FastLRUCache<String, Accountable> cache = new FastLRUCache<>();
    Map<String, String> params = new HashMap<>();
    params.put("size", "6");
    params.put("maxRamMB", "8");
    CacheRegenerator cr = new NoOpRegenerator();
    Object o = cache.init(params, null, cr);
    SolrMetricsContext solrMetricsContext = new SolrMetricsContext(metricManager, registry, "foo");
    cache.initializeMetrics(solrMetricsContext, scope);
    for (int i = 0; i < 6; i++) {
      cache.put("" + i, new Accountable() {
        @Override
        public long ramBytesUsed() {
          return 1024 * 1024;
        }
      });
    }
    // no evictions yet
    assertEquals(6, cache.size());
    // this also sets minLimit = 4
    cache.setMaxSize(5);
    // should not happen yet - evictions are triggered by put
    assertEquals(6, cache.size());
    cache.put("6", new Accountable() {
      @Override
      public long ramBytesUsed() {
        return 1024 * 1024;
      }
    });
    // should evict to minLimit
    assertEquals(4, cache.size());

    // modify ram limit
    cache.setMaxRamMB(3);
    // should not happen yet - evictions are triggered by put
    assertEquals(4, cache.size());
    // this evicts down to 3MB * 0.8, ie. ramLowerWaterMark
    cache.put("7", new Accountable() {
      @Override
      public long ramBytesUsed() {
        return 0;
      }
    });
    assertEquals(3, cache.size());
    assertNotNull("5", cache.get("5"));
    assertNotNull("6", cache.get("6"));
    assertNotNull("7", cache.get("7"));

    // scale up

    cache.setMaxRamMB(4);
    cache.put("8", new Accountable() {
      @Override
      public long ramBytesUsed() {
        return 1024 * 1024;
      }
    });
    assertEquals(4, cache.size());

    cache.setMaxSize(10);
    for (int i = 0; i < 6; i++) {
      cache.put("new" + i, new Accountable() {
        @Override
        public long ramBytesUsed() {
          return 0;
        }
      });
    }
    assertEquals(10, cache.size());
  }
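Both limits can be tightened on a live cache, with eviction applied lazily on the next put. The same resize pattern should carry over to the replacement implementation, assuming setMaxSize/setMaxRamMB are part of the SolrCache contract as these tests suggest; a minimal sketch (values illustrative, not from this commit):

    // Hedged sketch of runtime resizing against CaffeineCache.
    SolrCache<String, String> live = new CaffeineCache<>();
    Map<String, String> args = new HashMap<>();
    args.put("size", "6");
    args.put("maxRamMB", "8");
    live.init(args, null, new NoOpRegenerator());
    live.setMaxSize(5);   // tighter count limit; enforced by subsequent puts
    live.setMaxRamMB(3);  // tighter RAM limit; likewise enforced lazily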
  public void testMaxIdleTime() throws Exception {
    int IDLE_TIME_SEC = 600;
    long IDLE_TIME_NS = TimeUnit.NANOSECONDS.convert(IDLE_TIME_SEC, TimeUnit.SECONDS);
    CountDownLatch sweepFinished = new CountDownLatch(1);
    ConcurrentLRUCache<String, Accountable> cache = new ConcurrentLRUCache<>(6, 5, 5, 6, false, false, null, IDLE_TIME_SEC) {
      @Override
      public void markAndSweep() {
        super.markAndSweep();
        sweepFinished.countDown();
      }
    };
    long currentTime = TimeSource.NANO_TIME.getEpochTimeNs();
    for (int i = 0; i < 4; i++) {
      cache.putCacheEntry(new ConcurrentLRUCache.CacheEntry<>("" + i, new Accountable() {
        @Override
        public long ramBytesUsed() {
          return 1024 * 1024;
        }
      }, currentTime, 0));
    }
    // no evictions yet
    assertEquals(4, cache.size());
    assertEquals("markAndSweep spurious run", 1, sweepFinished.getCount());
    cache.putCacheEntry(new ConcurrentLRUCache.CacheEntry<>("4", new Accountable() {
      @Override
      public long ramBytesUsed() {
        return 0;
      }
    }, currentTime - IDLE_TIME_NS * 2, 0));
    boolean await = sweepFinished.await(10, TimeUnit.SECONDS);
    assertTrue("did not evict entries in time", await);
    assertEquals(4, cache.size());
    assertNull(cache.get("4"));
  }
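Idle-time eviction is also reachable through cache configuration rather than the ConcurrentLRUCache constructor; a minimal sketch, assuming a maxIdleTime init parameter (in seconds) is honored by the cache implementation:

    Map<String, String> args = new HashMap<>();
    args.put("size", "100");
    args.put("maxIdleTime", "600"); // entries idle longer than 600s become evictable
    CaffeineCache<String, String> idleAware = new CaffeineCache<>();
    idleAware.init(args, null, new NoOpRegenerator());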
  /***
  public void testPerf() {
    doPerfTest(1000000, 100000, 200000); // big cache, warmup
    doPerfTest(2000000, 100000, 200000); // big cache
    doPerfTest(2000000, 100000, 120000); // smaller key space increases distance between oldest, newest and makes the first passes less effective.
    doPerfTest(6000000, 1000, 2000); // small cache, smaller hit rate
    doPerfTest(6000000, 1000, 1200); // small cache, bigger hit rate
  }
  ***/

  // returns number of puts
  int useCache(SolrCache sc, int numGets, int maxKey, int seed) {
    int ret = 0;
    Random r = new Random(seed);

    // use like a cache... gets and a put if not found
    for (int i = 0; i < numGets; i++) {
      Integer k = r.nextInt(maxKey);
      Integer v = (Integer) sc.get(k);
      if (v == null) {
        sc.put(k, k);
        ret++;
      }
    }

    return ret;
  }

  void fillCache(SolrCache sc, int cacheSize, int maxKey) {
    for (int i = 0; i < cacheSize; i++) {
      Integer kv = random().nextInt(maxKey);
      sc.put(kv, kv);
    }
  }


  double[] cachePerfTest(final SolrCache sc, final int nThreads, final int numGets, int cacheSize, final int maxKey) {
    Map l = new HashMap();
    l.put("size", "" + cacheSize);
    l.put("initialSize", "" + cacheSize);

    Object o = sc.init(l, null, null);
    sc.setState(SolrCache.State.LIVE);

    fillCache(sc, cacheSize, maxKey);

    final RTimer timer = new RTimer();

    Thread[] threads = new Thread[nThreads];
    final AtomicInteger puts = new AtomicInteger(0);
    for (int i = 0; i < threads.length; i++) {
      final int seed = random().nextInt();
      threads[i] = new Thread() {
        @Override
        public void run() {
          int ret = useCache(sc, numGets / nThreads, maxKey, seed);
          puts.addAndGet(ret);
        }
      };
    }

    for (Thread thread : threads) {
      try {
        thread.start();
      } catch (Exception e) {
        e.printStackTrace();
      }
    }

    for (Thread thread : threads) {
      try {
        thread.join();
      } catch (Exception e) {
        e.printStackTrace();
      }
    }

    double time = timer.getTime();
    double hitRatio = (1 - (((double) puts.get()) / numGets));
    // System.out.println("time=" + time + " impl=" + sc.getClass().getSimpleName()
    //     + " nThreads= " + nThreads + " size=" + cacheSize + " maxKey=" + maxKey + " gets=" + numGets
    //     + " hitRatio=" + (1 - (((double) puts.get()) / numGets)));
    return new double[]{time, hitRatio};
  }

  private int NUM_RUNS = 5;
  void perfTestBoth(int maxThreads, int numGets, int cacheSize, int maxKey,
                    Map<String, Map<String, SummaryStatistics>> timeStats,
                    Map<String, Map<String, SummaryStatistics>> hitStats) {
    for (int nThreads = 1; nThreads <= maxThreads; nThreads++) {
      String testKey = "threads=" + nThreads + ",gets=" + numGets + ",size=" + cacheSize + ",maxKey=" + maxKey;
      System.err.println(testKey);
      for (int i = 0; i < NUM_RUNS; i++) {
        double[] data = cachePerfTest(new LRUCache(), nThreads, numGets, cacheSize, maxKey);
        timeStats.computeIfAbsent(testKey, k -> new TreeMap<>())
            .computeIfAbsent("LRUCache", k -> new SummaryStatistics())
            .addValue(data[0]);
        hitStats.computeIfAbsent(testKey, k -> new TreeMap<>())
            .computeIfAbsent("LRUCache", k -> new SummaryStatistics())
            .addValue(data[1]);
        data = cachePerfTest(new CaffeineCache(), nThreads, numGets, cacheSize, maxKey);
        timeStats.computeIfAbsent(testKey, k -> new TreeMap<>())
            .computeIfAbsent("CaffeineCache", k -> new SummaryStatistics())
            .addValue(data[0]);
        hitStats.computeIfAbsent(testKey, k -> new TreeMap<>())
            .computeIfAbsent("CaffeineCache", k -> new SummaryStatistics())
            .addValue(data[1]);
        data = cachePerfTest(new FastLRUCache(), nThreads, numGets, cacheSize, maxKey);
        timeStats.computeIfAbsent(testKey, k -> new TreeMap<>())
            .computeIfAbsent("FastLRUCache", k -> new SummaryStatistics())
            .addValue(data[0]);
        hitStats.computeIfAbsent(testKey, k -> new TreeMap<>())
            .computeIfAbsent("FastLRUCache", k -> new SummaryStatistics())
            .addValue(data[1]);
      }
    }
  }

  int NUM_THREADS = 4;
  /***
  public void testCachePerf() {
    Map<String, Map<String, SummaryStatistics>> timeStats = new TreeMap<>();
    Map<String, Map<String, SummaryStatistics>> hitStats = new TreeMap<>();
    // warmup
    perfTestBoth(NUM_THREADS, 100000, 100000, 120000, new HashMap<>(), new HashMap());

    perfTestBoth(NUM_THREADS, 2000000, 100000, 100000, timeStats, hitStats); // big cache, 100% hit ratio
    perfTestBoth(NUM_THREADS, 2000000, 100000, 120000, timeStats, hitStats); // big cache, bigger hit ratio
    perfTestBoth(NUM_THREADS, 2000000, 100000, 200000, timeStats, hitStats); // big cache, ~50% hit ratio
    perfTestBoth(NUM_THREADS, 2000000, 100000, 1000000, timeStats, hitStats); // big cache, ~10% hit ratio

    perfTestBoth(NUM_THREADS, 2000000, 1000, 1000, timeStats, hitStats); // small cache, ~100% hit ratio
    perfTestBoth(NUM_THREADS, 2000000, 1000, 1200, timeStats, hitStats); // small cache, bigger hit ratio
    perfTestBoth(NUM_THREADS, 2000000, 1000, 2000, timeStats, hitStats); // small cache, ~50% hit ratio
    perfTestBoth(NUM_THREADS, 2000000, 1000, 10000, timeStats, hitStats); // small cache, ~10% hit ratio

    System.out.println("\n=====================\n");
    timeStats.forEach((testKey, map) -> {
      Map<String, SummaryStatistics> hits = hitStats.get(testKey);
      System.out.println("* " + testKey);
      map.forEach((type, summary) -> {
        System.out.println("\t" + String.format("%14s", type) + "\ttime " + summary.getMean() + "\thitRatio " + hits.get(type).getMean());
      });
    });
  }
  ***/

}
@@ -1,673 +0,0 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.solr.search;

import java.io.IOException;
import java.lang.invoke.MethodHandles;
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;
import java.util.Random;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.AtomicReference;

import org.apache.lucene.index.Term;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.WildcardQuery;
import org.apache.lucene.util.RamUsageEstimator;
import org.apache.lucene.util.TestUtil;
import org.apache.solr.SolrTestCaseJ4;
import org.apache.solr.common.util.ExecutorUtil;
import org.apache.solr.common.util.TimeSource;
import org.apache.solr.metrics.SolrMetricManager;
import org.apache.solr.metrics.SolrMetricsContext;
import org.apache.solr.util.ConcurrentLFUCache;
import org.apache.solr.util.DefaultSolrThreadFactory;
import org.junit.BeforeClass;
import org.junit.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;


/**
 * Test for LFUCache
 *
 * @see org.apache.solr.search.LFUCache
 * @since solr 3.6
 */
public class TestLFUCache extends SolrTestCaseJ4 {

  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());

  @BeforeClass
  public static void beforeClass() throws Exception {
    initCore("solrconfig-caching.xml", "schema.xml");
  }

  @Test
  public void testTimeDecayParams() throws IOException {
    h.getCore().withSearcher(searcher -> {
      LFUCache cacheDecayTrue = (LFUCache) searcher.getCache("lfuCacheDecayTrue");
      assertNotNull(cacheDecayTrue);
      Map<String,Object> stats = cacheDecayTrue.getMetricsMap().getValue();
      assertTrue((Boolean) stats.get("timeDecay"));
      addCache(cacheDecayTrue, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10);
      for (int idx = 0; idx < 64; ++idx) {
        assertCache(cacheDecayTrue, 1, 2, 3, 4, 5);
      }
      addCache(cacheDecayTrue, 11, 12, 13, 14, 15);
      assertCache(cacheDecayTrue, 1, 2, 3, 4, 5, 12, 13, 14, 15);

      LFUCache cacheDecayDefault = (LFUCache) searcher.getCache("lfuCacheDecayDefault");
      assertNotNull(cacheDecayDefault);
      stats = cacheDecayDefault.getMetricsMap().getValue();
      assertTrue((Boolean) stats.get("timeDecay"));
      addCache(cacheDecayDefault, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10);
      assertCache(cacheDecayDefault, 1, 2, 3, 4, 5);
      for (int idx = 0; idx < 64; ++idx) {
        assertCache(cacheDecayDefault, 1, 2, 3, 4, 5);
      }
      addCache(cacheDecayDefault, 11, 12, 13, 14, 15);
      assertCache(cacheDecayDefault, 1, 2, 3, 4, 5, 12, 13, 14, 15);
      addCache(cacheDecayDefault, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21);
      assertCache(cacheDecayDefault, 1, 2, 3, 4, 5, 17, 18, 19, 20, 21);

      LFUCache cacheDecayFalse = (LFUCache) searcher.getCache("lfuCacheDecayFalse");
      assertNotNull(cacheDecayFalse);
      stats = cacheDecayFalse.getMetricsMap().getValue();
      assertFalse((Boolean) stats.get("timeDecay"));
      addCache(cacheDecayFalse, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10);
      assertCache(cacheDecayFalse, 1, 2, 3, 4, 5);
      for (int idx = 0; idx < 16; ++idx) {
        assertCache(cacheDecayFalse, 1, 2, 3, 4, 5);
      }
      addCache(cacheDecayFalse, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21);

      assertCache(cacheDecayFalse, 1, 2, 3, 4, 5);
      assertNotCache(cacheDecayFalse, 6, 7, 8, 9, 10);
      for (int idx = 22; idx < 256; ++idx) {
        addCache(cacheDecayFalse, idx);
      }
      assertCache(cacheDecayFalse, 1, 2, 3, 4, 5);
      return null;
    });
  }
  private void addCache(LFUCache cache, int... inserts) {
    for (int idx : inserts) {
      cache.put(idx, Integer.toString(idx));
    }
  }

  private void assertCache(LFUCache cache, int... gets) {
    for (int idx : gets) {
      if (cache.get(idx) == null) {
        log.error(String.format(Locale.ROOT, "Expected entry %d not in cache", idx));
        assertTrue(false);
      }
    }
  }

  private void assertNotCache(LFUCache cache, int... gets) {
    for (int idx : gets) {
      if (cache.get(idx) != null) {
        log.error(String.format(Locale.ROOT, "Unexpected entry %d in cache", idx));
        assertTrue(false);
      }
    }
  }

  @Test
  public void testSimple() throws Exception {
    SolrMetricManager metricManager = new SolrMetricManager();
    Random r = random();
    String registry = TestUtil.randomSimpleString(r, 2, 10);
    String scope = TestUtil.randomSimpleString(r, 2, 10);
    LFUCache lfuCache = new LFUCache();
    LFUCache newLFUCache = new LFUCache();
    LFUCache noWarmLFUCache = new LFUCache();
    SolrMetricsContext solrMetricsContext = new SolrMetricsContext(metricManager, registry, "foo");
    lfuCache.initializeMetrics(solrMetricsContext, scope + ".lfuCache");
    newLFUCache.initializeMetrics(solrMetricsContext, scope + ".newLFUCache");
    noWarmLFUCache.initializeMetrics(solrMetricsContext, scope + ".noWarmLFUCache");
    try {
      Map params = new HashMap();
      params.put("size", "100");
      params.put("initialSize", "10");
      params.put("autowarmCount", "25");
      NoOpRegenerator regenerator = new NoOpRegenerator();
      Object initObj = lfuCache.init(params, null, regenerator);
      lfuCache.setState(SolrCache.State.LIVE);
      for (int i = 0; i < 101; i++) {
        lfuCache.put(i + 1, "" + (i + 1));
      }
      assertEquals("15", lfuCache.get(15));
      assertEquals("75", lfuCache.get(75));
      assertEquals(null, lfuCache.get(110));
      Map<String,Object> nl = lfuCache.getMetricsMap().getValue();
      assertEquals(3L, nl.get("lookups"));
      assertEquals(2L, nl.get("hits"));
      assertEquals(101L, nl.get("inserts"));

      assertEquals(null, lfuCache.get(1)); // first item put in should be the first out

      // Test autowarming
      newLFUCache.init(params, initObj, regenerator);
      newLFUCache.warm(null, lfuCache);
      newLFUCache.setState(SolrCache.State.LIVE);

      newLFUCache.put(103, "103");
      assertEquals("15", newLFUCache.get(15));
      assertEquals("75", newLFUCache.get(75));
      assertEquals(null, newLFUCache.get(50));
      nl = newLFUCache.getMetricsMap().getValue();
      assertEquals(3L, nl.get("lookups"));
      assertEquals(2L, nl.get("hits"));
      assertEquals(1L, nl.get("inserts"));
      assertEquals(0L, nl.get("evictions"));

      assertEquals(7L, nl.get("cumulative_lookups"));
      assertEquals(4L, nl.get("cumulative_hits"));
      assertEquals(102L, nl.get("cumulative_inserts"));
      newLFUCache.close();

      // Test no autowarming

      params.put("autowarmCount", "0");
      noWarmLFUCache.init(params, initObj, regenerator);
      noWarmLFUCache.warm(null, lfuCache);
      noWarmLFUCache.setState(SolrCache.State.LIVE);

      noWarmLFUCache.put(103, "103");
      assertNull(noWarmLFUCache.get(15));
      assertNull(noWarmLFUCache.get(75));
      assertEquals("103", noWarmLFUCache.get(103));
    } finally {
      if (newLFUCache != null) newLFUCache.close();
      if (noWarmLFUCache != null) noWarmLFUCache.close();
      if (lfuCache != null) lfuCache.close();
    }
  }

  @Test
  public void testItemOrdering() {
    ConcurrentLFUCache<Integer, String> cache = new ConcurrentLFUCache<>(100, 90);
    try {
      for (int i = 0; i < 50; i++) {
        cache.put(i + 1, "" + (i + 1));
      }
      for (int i = 0; i < 44; i++) {
        cache.get(i + 1);
        cache.get(i + 1);
      }
      cache.get(1);
      cache.get(1);
      cache.get(1);
      cache.get(3);
      cache.get(3);
      cache.get(3);
      cache.get(5);
      cache.get(5);
      cache.get(5);
      cache.get(7);
      cache.get(7);
      cache.get(7);
      cache.get(9);
      cache.get(9);
      cache.get(9);
      cache.get(48);
      cache.get(48);
      cache.get(48);
      cache.get(50);
      cache.get(50);
      cache.get(50);
      cache.get(50);
      cache.get(50);

      Map<Integer, String> m;

      m = cache.getMostUsedItems(5);
      //System.out.println(m);
      // most used: 50 9 7 5 3 (1 is next in line)
      assertNotNull(m.get(50));
      assertNotNull(m.get(9));
      assertNotNull(m.get(7));
      assertNotNull(m.get(5));
      assertNotNull(m.get(3));

      m = cache.getLeastUsedItems(5);
      //System.out.println(m);
      // least used: 49 47 46 45 2
      assertNotNull(m.get(49));
      assertNotNull(m.get(47));
      assertNotNull(m.get(46));
      assertNotNull(m.get(45));
      assertNotNull(m.get(2));

      m = cache.getLeastUsedItems(0);
      assertTrue(m.isEmpty());

      // the most-used accessor should handle an empty request too
      m = cache.getMostUsedItems(0);
      assertTrue(m.isEmpty());
    } finally {
      cache.destroy();
    }
  }

  @Test
  public void testTimeDecay() {
    ConcurrentLFUCache<Integer, String> cacheDecay = new ConcurrentLFUCache<>(10, 9);
    try {
      for (int i = 1; i < 21; i++) {
        cacheDecay.put(i, Integer.toString(i));
      }
      Map<Integer, String> itemsDecay;

      // 11-20 now in cache.
      itemsDecay = cacheDecay.getMostUsedItems(10);
      for (int i = 11; i < 21; ++i) {
        assertNotNull(itemsDecay.get(i));
      }

      // Now increase the freq count for 5 items
      for (int i = 0; i < 5; ++i) {
        for (int jdx = 0; jdx < 63; ++jdx) {
          cacheDecay.get(i + 13);
        }
      }
      // OK, 13 - 17 should have larger counts and should stick past the next few collections.
      // One collection should be triggered for each two insertions.
      cacheDecay.put(22, "22");
      cacheDecay.put(23, "23"); // Surplus count at 32
      cacheDecay.put(24, "24");
      cacheDecay.put(25, "25"); // Surplus count at 16
      itemsDecay = cacheDecay.getMostUsedItems(10);
      // 13 - 17 should be in cache, but 11 and 18 (among others) should not. Testing that elements before
      // and after the ones with increased counts are removed, and all the increased-count ones are still in the cache.
      assertNull(itemsDecay.get(11));
      assertNull(itemsDecay.get(18));
      assertNotNull(itemsDecay.get(13));
      assertNotNull(itemsDecay.get(14));
      assertNotNull(itemsDecay.get(15));
      assertNotNull(itemsDecay.get(16));
      assertNotNull(itemsDecay.get(17));

      // Testing that all the elements in front of the ones with increased counts are gone
      for (int idx = 26; idx < 32; ++idx) {
        cacheDecay.put(idx, Integer.toString(idx));
      }
      // Surplus count should be at 0
      itemsDecay = cacheDecay.getMostUsedItems(10);
      assertNull(itemsDecay.get(20));
      assertNull(itemsDecay.get(24));
      assertNotNull(itemsDecay.get(13));
      assertNotNull(itemsDecay.get(14));
      assertNotNull(itemsDecay.get(15));
      assertNotNull(itemsDecay.get(16));
      assertNotNull(itemsDecay.get(17));

      for (int idx = 32; idx < 40; ++idx) {
        cacheDecay.put(idx, Integer.toString(idx));
      }

      // All the entries with increased counts should be gone.
      itemsDecay = cacheDecay.getMostUsedItems(10);
      System.out.println(itemsDecay);
      assertNull(itemsDecay.get(13));
      assertNull(itemsDecay.get(14));
      assertNull(itemsDecay.get(15));
      assertNull(itemsDecay.get(16));
      assertNull(itemsDecay.get(17));
      for (int idx = 30; idx < 40; ++idx) {
        assertNotNull(itemsDecay.get(idx));
      }
    } finally {
      cacheDecay.destroy();
    }
  }

  @Test
  public void testTimeNoDecay() {

    ConcurrentLFUCache<Integer, String> cacheNoDecay = new ConcurrentLFUCache<>(10, 9,
        (int) Math.floor((9 + 10) / 2), (int) Math.ceil(0.75 * 10), false, false, null, false);
    try {
      for (int i = 1; i < 21; i++) {
        cacheNoDecay.put(i, Integer.toString(i));
      }
      Map<Integer, String> itemsNoDecay;

      // 11-20 now in cache.
      itemsNoDecay = cacheNoDecay.getMostUsedItems(10);
      for (int i = 11; i < 21; ++i) {
        assertNotNull(itemsNoDecay.get(i));
      }

      // Now increase the freq count for 5 items
      for (int i = 0; i < 5; ++i) {
        for (int jdx = 0; jdx < 10; ++jdx) {
          cacheNoDecay.get(i + 13);
        }
      }
      // OK, 13 - 17 should have larger counts but that shouldn't matter since timeDecay=false
      cacheNoDecay.put(22, "22");
      cacheNoDecay.put(23, "23");
      cacheNoDecay.put(24, "24");
      cacheNoDecay.put(25, "25");
      itemsNoDecay = cacheNoDecay.getMostUsedItems(10);
      for (int idx = 15; idx < 25; ++idx) {
        assertNotNull(itemsNoDecay.get(idx));
      }
    } finally {
      cacheNoDecay.destroy();
    }
  }

  @Test
  public void testConcurrentAccess() throws InterruptedException {
    /* Set up a fixed pool of 10 threads to hammer the cache concurrently. */
    final ConcurrentLFUCache<Integer,Long> cache = new ConcurrentLFUCache<>(10, 9);
    ExecutorService executorService = ExecutorUtil.newMDCAwareFixedThreadPool(10,
        new DefaultSolrThreadFactory("testConcurrentAccess"));
    final AtomicReference<Throwable> error = new AtomicReference<>();

    /*
     * Use the thread pool to execute at least two million puts into the cache.
     * Without the fix on SOLR-7585, NoSuchElementException is thrown.
     * Simultaneous calls to markAndSweep are protected from each other by a
     * lock, so they run sequentially, and due to a problem in the previous
     * design, the cache eviction doesn't work right.
     */
    for (int i = 0; i < atLeast(2_000_000); ++i) {
      executorService.submit(() -> {
        try {
          cache.put(random().nextInt(100), random().nextLong());
        } catch (Throwable t) {
          error.compareAndSet(null, t);
        }
      });
    }

    executorService.shutdown();
    executorService.awaitTermination(1, TimeUnit.MINUTES);

    // then:
    assertNull("Exception during concurrent access: " + error.get(), error.get());
  }

  @Test
  public void testAccountable() throws Exception {
    SolrMetricManager metricManager = new SolrMetricManager();
    Random r = random();
    String registry = TestUtil.randomSimpleString(r, 2, 10);
    String scope = TestUtil.randomSimpleString(r, 2, 10);
    LFUCache lfuCache = new LFUCache();
    SolrMetricsContext solrMetricsContext = new SolrMetricsContext(metricManager, registry, "foo");
    lfuCache.initializeMetrics(solrMetricsContext, scope + ".lfuCache");
    try {
      Map params = new HashMap();
      params.put("size", "100");
      params.put("initialSize", "10");
      params.put("autowarmCount", "25");
      NoOpRegenerator regenerator = new NoOpRegenerator();
      Object initObj = lfuCache.init(params, null, regenerator);
      lfuCache.setState(SolrCache.State.LIVE);

      long initialBytes = lfuCache.ramBytesUsed();
      WildcardQuery q = new WildcardQuery(new Term("foo", "bar"));
      DocSet docSet = new BitDocSet();

      // 1 insert
      lfuCache.put(q, docSet);
      long updatedBytes = lfuCache.ramBytesUsed();
      assertTrue(updatedBytes > initialBytes);
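      // Expected growth mirrors the FastLRUCache accounting: key + value sizes
      // plus the per-entry overhead of the backing map.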
      long estimated = initialBytes + q.ramBytesUsed() + docSet.ramBytesUsed() + ConcurrentLFUCache.CacheEntry.BASE_RAM_BYTES_USED
          + RamUsageEstimator.HASHTABLE_RAM_BYTES_PER_ENTRY;
      assertEquals(estimated, updatedBytes);

      TermQuery tq = new TermQuery(new Term("foo", "bar"));
      lfuCache.put(tq, docSet);
      estimated += RamUsageEstimator.sizeOfObject(tq, RamUsageEstimator.QUERY_DEFAULT_RAM_BYTES_USED) +
          docSet.ramBytesUsed() + ConcurrentLFUCache.CacheEntry.BASE_RAM_BYTES_USED +
          RamUsageEstimator.HASHTABLE_RAM_BYTES_PER_ENTRY;
      updatedBytes = lfuCache.ramBytesUsed();
      assertEquals(estimated, updatedBytes);
      lfuCache.clear();
      long clearedBytes = lfuCache.ramBytesUsed();
      assertEquals(initialBytes, clearedBytes);
    } finally {
      lfuCache.close();
    }

  }

  public void testSetLimits() throws Exception {
    SolrMetricManager metricManager = new SolrMetricManager();
    Random r = random();
    String registry = TestUtil.randomSimpleString(r, 2, 10);
    String scope = TestUtil.randomSimpleString(r, 2, 10);
    LFUCache<String, String> cache = new LFUCache<>();
    SolrMetricsContext solrMetricsContext = new SolrMetricsContext(metricManager, registry, "foo");
    cache.initializeMetrics(solrMetricsContext, scope + ".lfuCache");

    Map<String, String> params = new HashMap<>();
    params.put("size", "6");
    CacheRegenerator cr = new NoOpRegenerator();
    Object o = cache.init(params, null, cr);
    for (int i = 0; i < 6; i++) {
      cache.put("" + i, "foo " + i);
    }
    // no evictions yet
    assertEquals(6, cache.size());
    // this sets minSize = 4, evictions will target minSize
    cache.setMaxSize(5);
    // should not happen yet - evictions are triggered by put
    assertEquals(6, cache.size());
    cache.put("6", "foo 6");
    // should evict to minSize
    assertEquals(4, cache.size());
    // should allow adding 1 more item before hitting "size" limit
    cache.put("7", "foo 7");
    assertEquals(5, cache.size());
    // should evict down to minSize = 4
    cache.put("8", "foo 8");
    assertEquals(4, cache.size());

    // scale up

    cache.setMaxSize(10);
    for (int i = 0; i < 6; i++) {
      cache.put("new" + i, "bar " + i);
    }
    assertEquals(10, cache.size());
  }

  @Test
  public void testMaxIdleTimeEviction() throws Exception {
    int IDLE_TIME_SEC = 5;
    long IDLE_TIME_NS = TimeUnit.NANOSECONDS.convert(IDLE_TIME_SEC, TimeUnit.SECONDS);
    CountDownLatch sweepFinished = new CountDownLatch(1);
    final AtomicLong numSweepsStarted = new AtomicLong(0);
    ConcurrentLFUCache<String, String> cache = new ConcurrentLFUCache<>(6, 5, 5, 6, false, false, null, false, IDLE_TIME_SEC) {
      @Override
      public void markAndSweep() {
        numSweepsStarted.incrementAndGet();
        super.markAndSweep();
        sweepFinished.countDown();
      }
    };
    for (int i = 0; i < 4; i++) {
      cache.put("" + i, "foo " + i);
    }
    // no evictions yet
    assertEquals(4, cache.size());
    assertEquals("markAndSweep spurious run", 0, numSweepsStarted.get());
    long currentTime = TimeSource.NANO_TIME.getEpochTimeNs();
    cache.putCacheEntry(new ConcurrentLFUCache.CacheEntry<>("4", "foo5",
        currentTime - IDLE_TIME_NS * 2));
    boolean await = sweepFinished.await(10, TimeUnit.SECONDS);
    assertTrue("did not evict entries in time", await);
    assertEquals(4, cache.size());
    assertNull(cache.get("4"));
  }

  // From the original LRU cache tests, they're commented out there too because they take a while.
  // void doPerfTest(int iter, int cacheSize, int maxKey) {
  //   long start = System.currentTimeMillis();
  //
  //   int lowerWaterMark = cacheSize;
  //   int upperWaterMark = (int) (lowerWaterMark * 1.1);
  //
  //   Random r = random;
  //   ConcurrentLFUCache cache = new ConcurrentLFUCache(upperWaterMark, lowerWaterMark,
  //       (upperWaterMark + lowerWaterMark) / 2, upperWaterMark, false, false, null, true);
  //   boolean getSize = false;
  //   int minSize = 0, maxSize = 0;
  //   for (int i = 0; i < iter; i++) {
  //     cache.put(r.nextInt(maxKey), "TheValue");
  //     int sz = cache.size();
  //     if (!getSize && sz >= cacheSize) {
  //       getSize = true;
  //       minSize = sz;
  //     } else {
  //       if (sz < minSize) minSize = sz;
  //       else if (sz > maxSize) maxSize = sz;
  //     }
  //   }
  //   cache.destroy();
  //
  //   long end = System.currentTimeMillis();
  //   System.out.println("time=" + (end - start) + ", minSize=" + minSize + ",maxSize=" + maxSize);
  // }
  //
  //
  // @Test
  // public void testPerf() {
  //   doPerfTest(1000000, 100000, 200000); // big cache, warmup
  //   doPerfTest(2000000, 100000, 200000); // big cache
  //   doPerfTest(2000000, 100000, 120000); // smaller key space increases distance between oldest, newest and makes the first passes less effective.
  //   doPerfTest(6000000, 1000, 2000); // small cache, smaller hit rate
  //   doPerfTest(6000000, 1000, 1200); // small cache, bigger hit rate
  // }
  //
  //
  // // returns number of puts
  // int useCache(SolrCache sc, int numGets, int maxKey, int seed) {
  //   int ret = 0;
  //   Random r = new Random(seed);
  //
  //   // use like a cache... gets and a put if not found
  //   for (int i = 0; i < numGets; i++) {
  //     Integer k = r.nextInt(maxKey);
  //     Integer v = (Integer) sc.get(k);
  //     if (v == null) {
  //       sc.put(k, k);
  //       ret++;
  //     }
  //   }
  //
  //   return ret;
  // }
  //
  // void fillCache(SolrCache sc, int cacheSize, int maxKey) {
  //   for (int i = 0; i < cacheSize; i++) {
  //     Integer kv = random.nextInt(maxKey);
  //     sc.put(kv, kv);
  //   }
  // }
  //
  //
  // void cachePerfTest(final SolrCache sc, final int nThreads, final int numGets, int cacheSize, final int maxKey) {
  //   Map l = new HashMap();
  //   l.put("size", "" + cacheSize);
  //   l.put("initialSize", "" + cacheSize);
  //
  //   Object o = sc.init(l, null, null);
  //   sc.setState(SolrCache.State.LIVE);
  //
  //   fillCache(sc, cacheSize, maxKey);
  //
  //   long start = System.currentTimeMillis();
  //
  //   Thread[] threads = new Thread[nThreads];
  //   final AtomicInteger puts = new AtomicInteger(0);
  //   for (int i = 0; i < threads.length; i++) {
  //     final int seed = random.nextInt();
  //     threads[i] = new Thread() {
  //       @Override
  //       public void run() {
  //         int ret = useCache(sc, numGets / nThreads, maxKey, seed);
  //         puts.addAndGet(ret);
  //       }
  //     };
  //   }
  //
  //   for (Thread thread : threads) {
  //     try {
  //       thread.start();
  //     } catch (Exception e) {
  //       e.printStackTrace();
  //     }
  //   }
  //
  //   for (Thread thread : threads) {
  //     try {
  //       thread.join();
  //     } catch (Exception e) {
  //       e.printStackTrace();
  //     }
  //   }
  //
  //   long end = System.currentTimeMillis();
  //   System.out.println("time=" + (end - start) + " impl=" + sc.getClass().getSimpleName()
  //       + " nThreads= " + nThreads + " size=" + cacheSize + " maxKey=" + maxKey + " gets=" + numGets
  //       + " hitRatio=" + (1 - (((double) puts.get()) / numGets)));
  // }
  //
  // void perfTestBoth(int nThreads, int numGets, int cacheSize, int maxKey) {
  //   cachePerfTest(new LFUCache(), nThreads, numGets, cacheSize, maxKey);
  // }
  //
  //
  // public void testCachePerf() {
  //   // warmup
  //   perfTestBoth(2, 100000, 100000, 120000);
  //   perfTestBoth(1, 2000000, 100000, 100000); // big cache, 100% hit ratio
  //   perfTestBoth(2, 2000000, 100000, 100000); // big cache, 100% hit ratio
  //   perfTestBoth(1, 2000000, 100000, 120000); // big cache, bigger hit ratio
  //   perfTestBoth(2, 2000000, 100000, 120000); // big cache, bigger hit ratio
  //   perfTestBoth(1, 2000000, 100000, 200000); // big cache, ~50% hit ratio
  //   perfTestBoth(2, 2000000, 100000, 200000); // big cache, ~50% hit ratio
  //   perfTestBoth(1, 2000000, 100000, 1000000); // big cache, ~10% hit ratio
  //   perfTestBoth(2, 2000000, 100000, 1000000); // big cache, ~10% hit ratio
  //
  //   perfTestBoth(1, 2000000, 1000, 1000); // small cache, ~100% hit ratio
  //   perfTestBoth(2, 2000000, 1000, 1000); // small cache, ~100% hit ratio
  //   perfTestBoth(1, 2000000, 1000, 1200); // small cache, bigger hit ratio
  //   perfTestBoth(2, 2000000, 1000, 1200); // small cache, bigger hit ratio
  //   perfTestBoth(1, 2000000, 1000, 2000); // small cache, ~50% hit ratio
  //   perfTestBoth(2, 2000000, 1000, 2000); // small cache, ~50% hit ratio
  //   perfTestBoth(1, 2000000, 1000, 10000); // small cache, ~10% hit ratio
  //   perfTestBoth(2, 2000000, 1000, 10000); // small cache, ~10% hit ratio
  // }

}
@@ -1,301 +0,0 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.solr.search;

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;

import org.apache.lucene.util.Accountable;
import org.apache.solr.SolrTestCase;
import org.apache.lucene.util.RamUsageEstimator;
import org.apache.lucene.util.TestUtil;
import org.apache.solr.common.util.TimeSource;
import org.apache.solr.metrics.SolrMetricManager;
import org.apache.solr.metrics.SolrMetricsContext;
import org.junit.Test;

/**
 * Test for <code>org.apache.solr.search.LRUCache</code>
 */
public class TestLRUCache extends SolrTestCase {

  SolrMetricManager metricManager = new SolrMetricManager();
  String registry = TestUtil.randomSimpleString(random(), 2, 10);
  String scope = TestUtil.randomSimpleString(random(), 2, 10);

  public void testFullAutowarm() throws Exception {
    LRUCache<Object, Object> lruCache = new LRUCache<>();
    Map<String, String> params = new HashMap<>();
    params.put("size", "100");
    params.put("initialSize", "10");
    params.put("autowarmCount", "100%");
    CacheRegenerator cr = new NoOpRegenerator();
    Object o = lruCache.init(params, null, cr);
    lruCache.setState(SolrCache.State.LIVE);
    for (int i = 0; i < 101; i++) {
      lruCache.put(i + 1, "" + (i + 1));
    }
    assertEquals("25", lruCache.get(25));
    assertEquals(null, lruCache.get(110));
    assertEquals(null, lruCache.get(1)); // first item put in should be the first out
    LRUCache<Object, Object> lruCacheNew = new LRUCache<>();
    lruCacheNew.init(params, o, cr);
    lruCacheNew.warm(null, lruCache);
    lruCacheNew.setState(SolrCache.State.LIVE);
    lruCache.close();
    lruCacheNew.put(103, "103");
    assertEquals("90", lruCacheNew.get(90));
    assertEquals("50", lruCacheNew.get(50));
    lruCacheNew.close();
  }

  public void testPercentageAutowarm() throws Exception {
    doTestPercentageAutowarm(100, 50, new int[]{51, 55, 60, 70, 80, 99, 100}, new int[]{1, 2, 3, 5, 10, 20, 30, 40, 50});
    doTestPercentageAutowarm(100, 25, new int[]{76, 80, 99, 100}, new int[]{1, 2, 3, 5, 10, 20, 30, 40, 50, 51, 55, 60, 70});
    doTestPercentageAutowarm(1000, 10, new int[]{901, 930, 950, 999, 1000}, new int[]{1, 5, 100, 200, 300, 400, 800, 899, 900});
    doTestPercentageAutowarm(10, 10, new int[]{10}, new int[]{1, 5, 9, 100, 200, 300, 400, 800, 899, 900});
  }
  private void doTestPercentageAutowarm(int limit, int percentage, int[] hits, int[] misses) throws Exception {
    LRUCache<Object, Object> lruCache = new LRUCache<>();
    Map<String, String> params = new HashMap<>();
    params.put("size", String.valueOf(limit));
    params.put("initialSize", "10");
    params.put("autowarmCount", percentage + "%");
    CacheRegenerator cr = new NoOpRegenerator();
    Object o = lruCache.init(params, null, cr);
    lruCache.setState(SolrCache.State.LIVE);
    for (int i = 1; i <= limit; i++) {
      lruCache.put(i, "" + i); // adds numbers from 1 to limit
    }

    LRUCache<Object, Object> lruCacheNew = new LRUCache<>();
    lruCacheNew.init(params, o, cr);
    lruCacheNew.warm(null, lruCache);
    lruCacheNew.setState(SolrCache.State.LIVE);
    lruCache.close();

    for (int hit : hits) {
      assertEquals("The value " + hit + " should be on new cache", String.valueOf(hit), lruCacheNew.get(hit));
    }

    for (int miss : misses) {
      assertEquals("The value " + miss + " should NOT be on new cache", null, lruCacheNew.get(miss));
    }
    lruCacheNew.close();
  }

  @SuppressWarnings("unchecked")
  public void testNoAutowarm() throws Exception {
    LRUCache<Object, Object> lruCache = new LRUCache<>();
    SolrMetricsContext solrMetricsContext = new SolrMetricsContext(metricManager, registry, "foo");
    lruCache.initializeMetrics(solrMetricsContext, scope);
    Map<String, String> params = new HashMap<>();
    params.put("size", "100");
    params.put("initialSize", "10");
    CacheRegenerator cr = new NoOpRegenerator();
    Object o = lruCache.init(params, null, cr);
    lruCache.setState(SolrCache.State.LIVE);
    for (int i = 0; i < 101; i++) {
      lruCache.put(i + 1, "" + (i + 1));
    }
    assertEquals("25", lruCache.get(25));
    assertEquals(null, lruCache.get(110));
    Map<String,Object> nl = lruCache.getMetricsMap().getValue();
    assertEquals(2L, nl.get("lookups"));
    assertEquals(1L, nl.get("hits"));
    assertEquals(101L, nl.get("inserts"));
    assertEquals(null, lruCache.get(1)); // first item put in should be the first out
    LRUCache<Object, Object> lruCacheNew = new LRUCache<>();
    lruCacheNew.init(params, o, cr);
    lruCacheNew.warm(null, lruCache);
    lruCacheNew.setState(SolrCache.State.LIVE);
    lruCache.close();
    lruCacheNew.put(103, "103");
    assertEquals(null, lruCacheNew.get(90));
    assertEquals(null, lruCacheNew.get(50));
    lruCacheNew.close();
  }

  public void testMaxRamSize() throws Exception {
    LRUCache<String, Accountable> accountableLRUCache = new LRUCache<>();
    SolrMetricsContext solrMetricsContext = new SolrMetricsContext(metricManager, registry, "foo");
    accountableLRUCache.initializeMetrics(solrMetricsContext, scope);
    Map<String, String> params = new HashMap<>();
    params.put("size", "5");
    params.put("maxRamMB", "1");
    CacheRegenerator cr = new NoOpRegenerator();
    Object o = accountableLRUCache.init(params, null, cr);
    long baseSize = accountableLRUCache.ramBytesUsed();
    assertEquals(LRUCache.BASE_RAM_BYTES_USED, baseSize);
    accountableLRUCache.put("1", new Accountable() {
      @Override
      public long ramBytesUsed() {
        return 512 * 1024;
      }
    });
    assertEquals(1, accountableLRUCache.size());
    assertEquals(baseSize + 512 * 1024 + RamUsageEstimator.sizeOfObject("1") + RamUsageEstimator.LINKED_HASHTABLE_RAM_BYTES_PER_ENTRY + LRUCache.CacheValue.BASE_RAM_BYTES_USED, accountableLRUCache.ramBytesUsed());
    accountableLRUCache.put("20", new Accountable() {
      @Override
      public long ramBytesUsed() {
        return 512 * 1024;
      }
    });
    assertEquals(1, accountableLRUCache.size());
    assertEquals(baseSize + 512 * 1024 + RamUsageEstimator.sizeOfObject("20") + RamUsageEstimator.LINKED_HASHTABLE_RAM_BYTES_PER_ENTRY + LRUCache.CacheValue.BASE_RAM_BYTES_USED, accountableLRUCache.ramBytesUsed());
    Map<String,Object> nl = accountableLRUCache.getMetricsMap().getValue();
    assertEquals(1L, nl.get("evictions"));
    assertEquals(1L, nl.get("evictionsRamUsage"));
    accountableLRUCache.put("300", new Accountable() {
      @Override
      public long ramBytesUsed() {
        return 1024;
      }
    });
    nl = accountableLRUCache.getMetricsMap().getValue();
    assertEquals(1L, nl.get("evictions"));
    assertEquals(1L, nl.get("evictionsRamUsage"));
    assertEquals(2L, accountableLRUCache.size());
    assertEquals(baseSize + 513 * 1024 +
        (RamUsageEstimator.LINKED_HASHTABLE_RAM_BYTES_PER_ENTRY + LRUCache.CacheValue.BASE_RAM_BYTES_USED) * 2 +
        RamUsageEstimator.sizeOfObject("20") + RamUsageEstimator.sizeOfObject("300"), accountableLRUCache.ramBytesUsed());

    accountableLRUCache.clear();
    assertEquals(RamUsageEstimator.shallowSizeOfInstance(LRUCache.class), accountableLRUCache.ramBytesUsed());
  }
|
||||
|
||||
// public void testNonAccountableValues() throws Exception {
|
||||
// LRUCache<String, String> cache = new LRUCache<>();
|
||||
// Map<String, String> params = new HashMap<>();
|
||||
// params.put("size", "5");
|
||||
// params.put("maxRamMB", "1");
|
||||
// CacheRegenerator cr = new NoOpRegenerator();
|
||||
// Object o = cache.init(params, null, cr);
|
||||
//
|
||||
// expectThrows(SolrException.class, "Adding a non-accountable value to a cache configured with maxRamBytes should have failed",
|
||||
// () -> cache.put("1", "1")
|
||||
// );
|
||||
// }
|
||||
//
|
||||
|
||||
public void testSetLimits() throws Exception {
|
||||
LRUCache<String, Accountable> cache = new LRUCache<>();
|
||||
SolrMetricsContext solrMetricsContext = new SolrMetricsContext(metricManager, registry, "foo");
|
||||
cache.initializeMetrics(solrMetricsContext, scope);
|
||||
Map<String, String> params = new HashMap<>();
|
||||
params.put("size", "6");
|
||||
params.put("maxRamMB", "8");
|
||||
CacheRegenerator cr = new NoOpRegenerator();
|
||||
Object o = cache.init(params, null, cr);
|
||||
for (int i = 0; i < 6; i++) {
|
||||
cache.put("" + i, new Accountable() {
|
||||
@Override
|
||||
public long ramBytesUsed() {
|
||||
return 1024 * 1024;
|
||||
}
|
||||
});
|
||||
}
|
||||
// no evictions yet
|
||||
assertEquals(6, cache.size());
|
||||
cache.setMaxSize(5);
|
||||
// should not happen yet - evictions are triggered by put
|
||||
assertEquals(6, cache.size());
|
||||
cache.put("6", new Accountable() {
|
||||
@Override
|
||||
public long ramBytesUsed() {
|
||||
return 1024 * 1024;
|
||||
}
|
||||
});
|
||||
// should evict by count limit
|
||||
assertEquals(5, cache.size());
|
||||
|
||||
// modify ram limit
|
||||
cache.setMaxRamMB(3);
|
||||
// should not happen yet - evictions are triggered by put
|
||||
assertEquals(5, cache.size());
|
||||
cache.put("7", new Accountable() {
|
||||
@Override
|
||||
public long ramBytesUsed() {
|
||||
return 0;
|
||||
}
|
||||
});
|
||||
assertEquals(3, cache.size());
|
||||
assertNotNull("5", cache.get("5"));
|
||||
assertNotNull("6", cache.get("6"));
|
||||
assertNotNull("7", cache.get("7"));
|
||||
|
||||
// scale up
|
||||
|
||||
cache.setMaxRamMB(4);
|
||||
cache.put("8", new Accountable() {
|
||||
@Override
|
||||
public long ramBytesUsed() {
|
||||
return 1024 * 1024;
|
||||
}
|
||||
});
|
||||
assertEquals(4, cache.size());
|
||||
|
||||
cache.setMaxSize(10);
|
||||
for (int i = 0; i < 6; i++) {
|
||||
cache.put("new" + i, new Accountable() {
|
||||
@Override
|
||||
public long ramBytesUsed() {
|
||||
return 0;
|
||||
}
|
||||
});
|
||||
}
|
||||
assertEquals(10, cache.size());
|
||||
}
|
||||
|
||||
@Test
|
||||
public void testMaxIdleTime() throws Exception {
|
||||
int IDLE_TIME_SEC = 600;
|
||||
long IDLE_TIME_NS = TimeUnit.NANOSECONDS.convert(IDLE_TIME_SEC, TimeUnit.SECONDS);
|
||||
LRUCache<String, Accountable> cache = new LRUCache<>();
|
||||
SolrMetricsContext solrMetricsContext = new SolrMetricsContext(metricManager, registry, "foo");
|
||||
cache.initializeMetrics(solrMetricsContext, scope);
|
||||
Map<String, String> params = new HashMap<>();
|
||||
params.put("size", "6");
|
||||
params.put("maxIdleTime", "" + IDLE_TIME_SEC);
|
||||
CacheRegenerator cr = new NoOpRegenerator();
|
||||
Object o = cache.init(params, null, cr);
|
||||
cache.setSyntheticEntries(true);
|
||||
for (int i = 0; i < 4; i++) {
|
||||
cache.put("" + i, new Accountable() {
|
||||
@Override
|
||||
public long ramBytesUsed() {
|
||||
return 1024 * 1024;
|
||||
}
|
||||
});
|
||||
}
|
||||
// no evictions yet
|
||||
assertEquals(4, cache.size());
|
||||
long currentTime = TimeSource.NANO_TIME.getEpochTimeNs();
|
||||
cache.putCacheValue("4", new LRUCache.CacheValue<>(new Accountable() {
|
||||
@Override
|
||||
public long ramBytesUsed() {
|
||||
return 0;
|
||||
}
|
||||
}, currentTime - IDLE_TIME_NS * 2));
|
||||
assertEquals(4, cache.size());
|
||||
assertNull(cache.get("4"));
|
||||
Map<String, Object> stats = cache.getMetricsMap().getValue();
|
||||
assertEquals(1, ((Number)stats.get("evictionsIdleTime")).intValue());
|
||||
}
|
||||
}
|
|
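For comparison, a minimal smoke test against the replacement implementation, sketched here for illustration only and assuming the same scaffolding used above (a params HashMap and NoOpRegenerator), would exercise the same SolrCache lifecycle:

    // Illustrative sketch only: the same init/put/warm/close lifecycle as the
    // removed LRUCache tests, run against CaffeineCache.
    CaffeineCache<Object, Object> cache = new CaffeineCache<>();
    Map<String, String> params = new HashMap<>();
    params.put("size", "100");
    params.put("initialSize", "10");
    CacheRegenerator cr = new NoOpRegenerator();
    Object state = cache.init(params, null, cr);
    cache.setState(SolrCache.State.LIVE);
    for (int i = 1; i <= 100; i++) {
      cache.put(i, "" + i);
    }
    // Warm a fresh instance from the live one, then retire the old instance.
    CaffeineCache<Object, Object> fresh = new CaffeineCache<>();
    fresh.init(params, state, cr);
    fresh.warm(null, cache);
    fresh.setState(SolrCache.State.LIVE);
    cache.close();
    fresh.close();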
@@ -316,8 +316,11 @@ public class TestSolr4Spatial2 extends SolrTestCaseJ4 {
    assertJQ(sameReq, "/response/numFound==1", "/response/docs/[0]/id=='1'");

    // When there are new segments, we accumulate another hit. This tests the cache was not blown away on commit.
    // (i.e. the cache instance is new but it should've been regenerated from the old one).
    // Checking equality for the first reader's cache key indicates whether the cache should still be valid.
    Object leafKey2 = getFirstLeafReaderKey();
    // get the current instance of metrics - the old one may not represent the current cache instance
    cacheMetrics = (MetricsMap) ((SolrMetricManager.GaugeWrapper) h.getCore().getCoreMetricManager().getRegistry().getMetrics().get("CACHE.searcher.perSegSpatialFieldCache_" + fieldName)).getGauge();
    assertEquals(leafKey1.equals(leafKey2) ? "2" : "1", cacheMetrics.getValue().get("cumulative_hits").toString());
  }
@@ -38,10 +38,7 @@ import org.junit.Test;
 public class TestSolrCachePerf extends SolrTestCaseJ4 {
 
   private static final Class<? extends SolrCache>[] IMPLS = new Class[] {
-      CaffeineCache.class,
-      LRUCache.class,
-      LFUCache.class,
-      FastLRUCache.class
+      CaffeineCache.class
   };
 
   private final int NUM_KEYS = 5000;
@@ -373,15 +373,7 @@
 
 
 <!-- Solr Internal Query Caches
 
-There are two implementations of cache available for Solr,
-LRUCache, based on a synchronized LinkedHashMap, and
-FastLRUCache, based on a ConcurrentHashMap.
-
-FastLRUCache has faster gets and slower puts in single
-threaded operation and thus is generally faster than LRUCache
-when the hit ratio of the cache is high (> 75%), and may be
-faster under other scenarios on multi-cpu systems.
+Starting with Solr 9.0 the default cache implementation used is CaffeineCache.
 -->
 
 <!-- Filter Cache
 
@@ -390,20 +382,17 @@
 unordered sets of *all* documents that match a query. When a
 new searcher is opened, its caches may be prepopulated or
 "autowarmed" using data from caches in the old searcher.
-autowarmCount is the number of items to prepopulate. For
-LRUCache, the autowarmed items will be the most recently
-accessed items.
+autowarmCount is the number of items to prepopulate.
 
 Parameters:
-  class - the SolrCache implementation LRUCache or
-          (LRUCache or FastLRUCache)
+  class - the SolrCache implementation
   size - the maximum number of entries in the cache
   initialSize - the initial capacity (number of entries) of
   the cache. (see java.util.HashMap)
   autowarmCount - the number of entries to prepopulate from
   and old cache.
 -->
-<filterCache class="solr.FastLRUCache"
+<filterCache class="solr.CaffeineCache"
    size="512"
    initialSize="512"
    autowarmCount="0"/>
 
@@ -413,7 +402,7 @@
 Caches results of searches - ordered lists of document ids
 (DocList) based on a query, a sort, and the range of documents requested.
 -->
-<queryResultCache class="solr.LRUCache"
+<queryResultCache class="solr.CaffeineCache"
    size="512"
    initialSize="512"
    autowarmCount="0"/>
 
@@ -424,14 +413,14 @@
 document). Since Lucene internal document ids are transient,
 this cache will not be autowarmed.
 -->
-<documentCache class="solr.LRUCache"
+<documentCache class="solr.CaffeineCache"
    size="512"
    initialSize="512"
    autowarmCount="0"/>
 
 <!-- custom cache currently used by block join -->
 <cache name="perSegFilter"
-   class="solr.search.LRUCache"
+   class="solr.search.CaffeineCache"
    size="10"
    initialSize="0"
    autowarmCount="10"
 
@@ -444,7 +433,7 @@
 even if not configured here.
 -->
 <!--
-<fieldValueCache class="solr.FastLRUCache"
+<fieldValueCache class="solr.CaffeineCache"
    size="512"
    autowarmCount="128"
    showItems="32" />
 
@@ -461,7 +450,7 @@
 -->
 <!--
 <cache name="myUserCache"
-   class="solr.LRUCache"
+   class="solr.CaffeineCache"
    size="4096"
    initialSize="1024"
    autowarmCount="1024"
@@ -376,15 +376,7 @@
 
 
 <!-- Solr Internal Query Caches
 
-There are two implementations of cache available for Solr,
-LRUCache, based on a synchronized LinkedHashMap, and
-FastLRUCache, based on a ConcurrentHashMap.
-
-FastLRUCache has faster gets and slower puts in single
-threaded operation and thus is generally faster than LRUCache
-when the hit ratio of the cache is high (> 75%), and may be
-faster under other scenarios on multi-cpu systems.
+Starting with Solr 9.0 the default cache implementation used is CaffeineCache.
 -->
 
 <!-- Filter Cache
 
@@ -393,20 +385,17 @@
 unordered sets of *all* documents that match a query. When a
 new searcher is opened, its caches may be prepopulated or
 "autowarmed" using data from caches in the old searcher.
-autowarmCount is the number of items to prepopulate. For
-LRUCache, the autowarmed items will be the most recently
-accessed items.
+autowarmCount is the number of items to prepopulate.
 
 Parameters:
-  class - the SolrCache implementation LRUCache or
-          (LRUCache or FastLRUCache)
+  class - the SolrCache implementation
   size - the maximum number of entries in the cache
   initialSize - the initial capacity (number of entries) of
   the cache. (see java.util.HashMap)
   autowarmCount - the number of entries to prepopulate from
   and old cache.
 -->
-<filterCache class="solr.FastLRUCache"
+<filterCache class="solr.CaffeineCache"
    size="512"
    initialSize="512"
    autowarmCount="0"/>
 
@@ -416,7 +405,7 @@
 Caches results of searches - ordered lists of document ids
 (DocList) based on a query, a sort, and the range of documents requested.
 -->
-<queryResultCache class="solr.LRUCache"
+<queryResultCache class="solr.CaffeineCache"
    size="512"
    initialSize="512"
    autowarmCount="0"/>
 
@@ -427,14 +416,14 @@
 document). Since Lucene internal document ids are transient,
 this cache will not be autowarmed.
 -->
-<documentCache class="solr.LRUCache"
+<documentCache class="solr.CaffeineCache"
    size="512"
    initialSize="512"
    autowarmCount="0"/>
 
 <!-- custom cache currently used by block join -->
 <cache name="perSegFilter"
-   class="solr.search.LRUCache"
+   class="solr.search.CaffeineCache"
    size="10"
    initialSize="0"
    autowarmCount="10"
 
@@ -447,7 +436,7 @@
 even if not configured here.
 -->
 <!--
-<fieldValueCache class="solr.FastLRUCache"
+<fieldValueCache class="solr.CaffeineCache"
    size="512"
    autowarmCount="128"
    showItems="32" />
 
@@ -464,7 +453,7 @@
 -->
 <!--
 <cache name="myUserCache"
-   class="solr.LRUCache"
+   class="solr.CaffeineCache"
    size="4096"
    initialSize="1024"
    autowarmCount="1024"
@@ -373,15 +373,7 @@
 
 
 <!-- Solr Internal Query Caches
 
-There are two implementations of cache available for Solr,
-LRUCache, based on a synchronized LinkedHashMap, and
-FastLRUCache, based on a ConcurrentHashMap.
-
-FastLRUCache has faster gets and slower puts in single
-threaded operation and thus is generally faster than LRUCache
-when the hit ratio of the cache is high (> 75%), and may be
-faster under other scenarios on multi-cpu systems.
+Starting with Solr 9.0 the default cache implementation used is CaffeineCache.
 -->
 
 <!-- Filter Cache
 
@@ -390,20 +382,17 @@
 unordered sets of *all* documents that match a query. When a
 new searcher is opened, its caches may be prepopulated or
 "autowarmed" using data from caches in the old searcher.
-autowarmCount is the number of items to prepopulate. For
-LRUCache, the autowarmed items will be the most recently
-accessed items.
+autowarmCount is the number of items to prepopulate.
 
 Parameters:
-  class - the SolrCache implementation LRUCache or
-          (LRUCache or FastLRUCache)
+  class - the SolrCache implementation
   size - the maximum number of entries in the cache
   initialSize - the initial capacity (number of entries) of
   the cache. (see java.util.HashMap)
   autowarmCount - the number of entries to prepopulate from
   and old cache.
 -->
-<filterCache class="solr.FastLRUCache"
+<filterCache class="solr.CaffeineCache"
    size="512"
    initialSize="512"
    autowarmCount="0"/>
 
@@ -413,7 +402,7 @@
 Caches results of searches - ordered lists of document ids
 (DocList) based on a query, a sort, and the range of documents requested.
 -->
-<queryResultCache class="solr.LRUCache"
+<queryResultCache class="solr.CaffeineCache"
    size="512"
    initialSize="512"
    autowarmCount="0"/>
 
@@ -424,14 +413,14 @@
 document). Since Lucene internal document ids are transient,
 this cache will not be autowarmed.
 -->
-<documentCache class="solr.LRUCache"
+<documentCache class="solr.CaffeineCache"
    size="512"
    initialSize="512"
    autowarmCount="0"/>
 
 <!-- custom cache currently used by block join -->
 <cache name="perSegFilter"
-   class="solr.search.LRUCache"
+   class="solr.search.CaffeineCache"
    size="10"
    initialSize="0"
    autowarmCount="10"
 
@@ -444,7 +433,7 @@
 even if not configured here.
 -->
 <!--
-<fieldValueCache class="solr.FastLRUCache"
+<fieldValueCache class="solr.CaffeineCache"
    size="512"
    autowarmCount="128"
    showItems="32" />
 
@@ -461,7 +450,7 @@
 -->
 <!--
 <cache name="myUserCache"
-   class="solr.LRUCache"
+   class="solr.CaffeineCache"
    size="4096"
    initialSize="1024"
    autowarmCount="1024"
@@ -374,15 +374,7 @@
 
 
 <!-- Solr Internal Query Caches
 
-There are two implementations of cache available for Solr,
-LRUCache, based on a synchronized LinkedHashMap, and
-FastLRUCache, based on a ConcurrentHashMap.
-
-FastLRUCache has faster gets and slower puts in single
-threaded operation and thus is generally faster than LRUCache
-when the hit ratio of the cache is high (> 75%), and may be
-faster under other scenarios on multi-cpu systems.
+Starting with Solr 9.0 the default cache implementation used is CaffeineCache.
 -->
 
 <!-- Filter Cache
 
@@ -391,20 +383,17 @@
 unordered sets of *all* documents that match a query. When a
 new searcher is opened, its caches may be prepopulated or
 "autowarmed" using data from caches in the old searcher.
-autowarmCount is the number of items to prepopulate. For
-LRUCache, the autowarmed items will be the most recently
-accessed items.
+autowarmCount is the number of items to prepopulate.
 
 Parameters:
-  class - the SolrCache implementation LRUCache or
-          (LRUCache or FastLRUCache)
+  class - the SolrCache implementation (CaffeineCache by default)
   size - the maximum number of entries in the cache
   initialSize - the initial capacity (number of entries) of
   the cache. (see java.util.HashMap)
   autowarmCount - the number of entries to prepopulate from
   and old cache.
 -->
-<filterCache class="solr.FastLRUCache"
+<filterCache class="solr.CaffeineCache"
    size="512"
    initialSize="512"
    autowarmCount="0"/>
 
@@ -413,11 +402,11 @@
 
 Caches results of searches - ordered lists of document ids
 (DocList) based on a query, a sort, and the range of documents requested.
-Additional supported parameter by LRUCache:
+Additional supported parameter by CaffeineCache:
 maxRamMB - the maximum amount of RAM (in MB) that this cache is allowed
 to occupy
 -->
-<queryResultCache class="solr.LRUCache"
+<queryResultCache class="solr.CaffeineCache"
    size="512"
    initialSize="512"
    autowarmCount="0"/>
 
@@ -428,7 +417,7 @@
 document). Since Lucene internal document ids are transient,
 this cache will not be autowarmed.
 -->
-<documentCache class="solr.LRUCache"
+<documentCache class="solr.CaffeineCache"
    size="512"
    initialSize="512"
    autowarmCount="0"/>
 
@@ -440,7 +429,7 @@
 even if not configured here.
 -->
 <!--
-<fieldValueCache class="solr.FastLRUCache"
+<fieldValueCache class="solr.CaffeineCache"
    size="512"
    autowarmCount="128"
    showItems="32" />
 
@@ -457,7 +446,7 @@
 -->
 <!--
 <cache name="myUserCache"
-   class="solr.LRUCache"
+   class="solr.CaffeineCache"
    size="4096"
    initialSize="1024"
    autowarmCount="1024"
@@ -391,15 +391,7 @@
 <maxBooleanClauses>${solr.max.booleanClauses:1024}</maxBooleanClauses>
 
 <!-- Solr Internal Query Caches
 
-There are two implementations of cache available for Solr,
-LRUCache, based on a synchronized LinkedHashMap, and
-FastLRUCache, based on a ConcurrentHashMap.
-
-FastLRUCache has faster gets and slower puts in single
-threaded operation and thus is generally faster than LRUCache
-when the hit ratio of the cache is high (> 75%), and may be
-faster under other scenarios on multi-cpu systems.
+Starting with Solr 9.0 the default cache implementation used is CaffeineCache.
 -->
 
 <!-- Filter Cache
 
@@ -409,12 +401,11 @@
 new searcher is opened, its caches may be prepopulated or
 "autowarmed" using data from caches in the old searcher.
 autowarmCount is the number of items to prepopulate. For
-LRUCache, the autowarmed items will be the most recently
+CaffeineCache, the autowarmed items will be the most recently
 accessed items.
 
 Parameters:
-  class - the SolrCache implementation LRUCache or
-          (LRUCache or FastLRUCache)
+  class - the SolrCache implementation (CaffeineCache by default)
   size - the maximum number of entries in the cache
   initialSize - the initial capacity (number of entries) of
   the cache. (see java.util.HashMap)
 
@@ -424,7 +415,7 @@
 to occupy. Note that when this option is specified, the size
 and initialSize parameters are ignored.
 -->
-<filterCache class="solr.FastLRUCache"
+<filterCache class="solr.CaffeineCache"
    size="512"
    initialSize="512"
    autowarmCount="0"/>
 
@@ -433,11 +424,11 @@
 
 Caches results of searches - ordered lists of document ids
 (DocList) based on a query, a sort, and the range of documents requested.
-Additional supported parameter by LRUCache:
+Additional supported parameter by CaffeineCache:
 maxRamMB - the maximum amount of RAM (in MB) that this cache is allowed
 to occupy
 -->
-<queryResultCache class="solr.LRUCache"
+<queryResultCache class="solr.CaffeineCache"
    size="512"
    initialSize="512"
    autowarmCount="0"/>
 
@@ -448,14 +439,14 @@
 document). Since Lucene internal document ids are transient,
 this cache will not be autowarmed.
 -->
-<documentCache class="solr.LRUCache"
+<documentCache class="solr.CaffeineCache"
    size="512"
    initialSize="512"
    autowarmCount="0"/>
 
 <!-- custom cache currently used by block join -->
 <cache name="perSegFilter"
-   class="solr.search.LRUCache"
+   class="solr.search.CaffeineCache"
    size="10"
    initialSize="0"
    autowarmCount="10"
 
@@ -468,7 +459,7 @@
 even if not configured here.
 -->
 <!--
-<fieldValueCache class="solr.FastLRUCache"
+<fieldValueCache class="solr.CaffeineCache"
    size="512"
    autowarmCount="128"
    showItems="32" />
 
@@ -485,7 +476,7 @@
 -->
 <!--
 <cache name="myUserCache"
-   class="solr.LRUCache"
+   class="solr.CaffeineCache"
    size="4096"
    initialSize="1024"
    autowarmCount="1024"
@@ -406,15 +406,7 @@
 
 
 <!-- Solr Internal Query Caches
 
-There are two implementations of cache available for Solr,
-LRUCache, based on a synchronized LinkedHashMap, and
-FastLRUCache, based on a ConcurrentHashMap.
-
-FastLRUCache has faster gets and slower puts in single
-threaded operation and thus is generally faster than LRUCache
-when the hit ratio of the cache is high (> 75%), and may be
-faster under other scenarios on multi-cpu systems.
+Starting with Solr 9.0 the default cache implementation used is CaffeineCache.
 -->
 
 <!-- Filter Cache
 
@@ -423,13 +415,10 @@
 unordered sets of *all* documents that match a query. When a
 new searcher is opened, its caches may be prepopulated or
 "autowarmed" using data from caches in the old searcher.
-autowarmCount is the number of items to prepopulate. For
-LRUCache, the autowarmed items will be the most recently
-accessed items.
+autowarmCount is the number of items to prepopulate.
 
 Parameters:
-  class - the SolrCache implementation LRUCache or
-          (LRUCache or FastLRUCache)
+  class - the SolrCache implementation
   size - the maximum number of entries in the cache
   initialSize - the initial capacity (number of entries) of
   the cache. (see java.util.HashMap)
 
@@ -439,7 +428,7 @@
 to occupy. Note that when this option is specified, the size
 and initialSize parameters are ignored.
 -->
-<filterCache class="solr.FastLRUCache"
+<filterCache class="solr.CaffeineCache"
    size="512"
    initialSize="512"
    autowarmCount="0"/>
 
@@ -448,11 +437,11 @@
 
 Caches results of searches - ordered lists of document ids
 (DocList) based on a query, a sort, and the range of documents requested.
-Additional supported parameter by LRUCache:
+Additional supported parameter by CaffeineCache:
 maxRamMB - the maximum amount of RAM (in MB) that this cache is allowed
 to occupy
 -->
-<queryResultCache class="solr.LRUCache"
+<queryResultCache class="solr.CaffeineCache"
    size="512"
    initialSize="512"
    autowarmCount="0"/>
 
@@ -463,14 +452,14 @@
 document). Since Lucene internal document ids are transient,
 this cache will not be autowarmed.
 -->
-<documentCache class="solr.LRUCache"
+<documentCache class="solr.CaffeineCache"
    size="512"
    initialSize="512"
    autowarmCount="0"/>
 
 <!-- custom cache currently used by block join -->
 <cache name="perSegFilter"
-   class="solr.search.LRUCache"
+   class="solr.search.CaffeineCache"
    size="10"
    initialSize="0"
    autowarmCount="10"
 
@@ -483,7 +472,7 @@
 even if not configured here.
 -->
 <!--
-<fieldValueCache class="solr.FastLRUCache"
+<fieldValueCache class="solr.CaffeineCache"
    size="512"
    autowarmCount="128"
    showItems="32" />
 
@@ -500,7 +489,7 @@
 https://lucene.apache.org/solr/guide/learning-to-rank.html
 -->
 <cache enable="${solr.ltr.enabled:false}" name="QUERY_DOC_FV"
-   class="solr.search.LRUCache"
+   class="solr.search.CaffeineCache"
    size="4096"
    initialSize="2048"
    autowarmCount="4096"
 
@@ -517,7 +506,7 @@
 -->
 <!--
 <cache name="myUserCache"
-   class="solr.LRUCache"
+   class="solr.CaffeineCache"
    size="4096"
    initialSize="1024"
    autowarmCount="1024"
@@ -402,7 +402,7 @@ Learning-To-Rank is a contrib module and therefore its plugins must be configure
 [source,xml]
 ----
 <cache name="QUERY_DOC_FV"
-      class="solr.search.LRUCache"
+      class="solr.search.CaffeineCache"
       size="4096"
       initialSize="2048"
       autowarmCount="4096"
@@ -35,27 +35,19 @@ When a new searcher is opened, the current searcher continues servicing requests
 
 === Cache Implementations
 
-In Solr, the following cache implementations are available: the recommended `solr.search.CaffeineCache`, and the legacy implementations `solr.search.LRUCache`, `solr.search.FastLRUCache`, and `solr.search.LFUCache`.
+Solr comes with a default `SolrCache` implementation that is used for different types of caches.
 
 The `CaffeineCache` is an implementation backed by the https://github.com/ben-manes/caffeine[Caffeine caching library]. By default it uses a Window TinyLFU (W-TinyLFU) eviction policy, which allows eviction based on both frequency and recency of use in O(1) time with a small footprint. This cache implementation is generally recommended over the legacy caches, as it usually offers a lower memory footprint, a higher hit ratio, and better multi-threaded performance.
 
-The acronym LRU stands for Least Recently Used. When an LRU cache fills up, the entry with the oldest last-accessed timestamp is evicted to make room for the new entry. The net effect is that entries that are accessed frequently tend to stay in the cache, while those that are not accessed frequently tend to drop out and will be re-fetched from the index if needed again.
-
-The `FastLRUCache`, which was introduced in Solr 1.4, is designed to be lock-free, so it is well suited for caches which are hit several times in a request.
-
-`CaffeineCache`, `LRUCache` and `FastLRUCache` use an auto-warm count that supports both integers and percentages, which get evaluated relative to the current size of the cache when warming happens.
-
-The `LFUCache` refers to the Least Frequently Used cache. This works in a way similar to the LRU cache, except that when the cache fills up, the entry that has been used the least is evicted.
+`CaffeineCache` uses an auto-warm count that supports both integers and percentages, which get evaluated relative to the current size of the cache when warming happens.
 
 The Statistics page in the Solr Admin UI will display information about the performance of all the active caches. This information can help you fine-tune the sizes of the various caches appropriately for your particular application. When a Searcher terminates, a summary of its cache usage is also written to the log.
 
-Each cache has settings to define its initial size (`initialSize`), maximum size (`size`) and the number of items to use for warming (`autowarmCount`). The Caffeine, LRU and FastLRU cache implementations can take a percentage instead of an absolute value for `autowarmCount`.
+Each cache has settings to define its initial size (`initialSize`), maximum size (`size`) and the number of items to use for warming (`autowarmCount`). `autowarmCount` can also be expressed as a percentage instead of an absolute value.
 
-Each cache implementation also supports a `maxIdleTime` attribute that controls the automatic eviction of entries that haven't been used for a while. This attribute is expressed in seconds, with the default value of `0` meaning no entries are automatically evicted due to exceeded idle time. Smaller values of this attribute will cause older entries to be evicted quickly, which will reduce cache memory usage but may instead cause thrashing due to a repeating eviction-lookup-miss-insertion cycle of the same entries. Larger values will cause entries to stay around longer, waiting to be reused, at the cost of increased memory usage. Reasonable values, depending on the query volume and patterns, may lie somewhere between 60-3600. Please note that this condition is evaluated synchronously and before other eviction conditions on every entry insertion.
+A `maxIdleTime` attribute controls the automatic eviction of entries that haven't been used for a while. This attribute is expressed in seconds, with the default value of `0` meaning no entries are automatically evicted due to exceeded idle time. Smaller values of this attribute will cause older entries to be evicted quickly, which will reduce cache memory usage but may instead cause thrashing due to a repeating eviction-lookup-miss-insertion cycle of the same entries. Larger values will cause entries to stay around longer, waiting to be reused, at the cost of increased memory usage. Reasonable values, depending on the query volume and patterns, may lie somewhere between 60-3600.
 
-`CaffeineCache`, `LRUCache` and `FastLRUCache` support a `maxRamMB` attribute that limits the maximum amount of memory a cache may consume. When both `size` and `maxRamMB` limits are specified the behavior differs among implementations: in `CaffeineCache` the `maxRamMB` limit takes precedence and the `size` limit is ignored, while in `LRUCache` and `FastLRUCache` both limits are observed, with entries being evicted whenever either limit is reached.
-
-`FastLRUCache` and `LFUCache` support a `showItems` attribute. This is the number of cache items to display in the stats page for the cache. It is for debugging.
+The `maxRamMB` attribute limits the maximum amount of memory a cache may consume. When both `size` and `maxRamMB` limits are specified, the `maxRamMB` limit takes precedence and the `size` limit is ignored.
 
 Details of each cache are described below.
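For illustration only (this sketch is not part of the commit's diff), a cache declaration combining the attributes described above might look as follows; the values are arbitrary, and because `maxRamMB` is present the `size` limit would be ignored:

[source,xml]
----
<!-- illustrative sketch: arbitrary values -->
<filterCache class="solr.CaffeineCache"
             autowarmCount="10%"
             maxIdleTime="600"
             maxRamMB="64"/>
----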
@@ -67,21 +59,19 @@ The most typical way Solr uses the `filterCache` is to cache results of each `fq
 
 Solr also uses this cache for faceting when the configuration parameter `facet.method` is set to `fc`. For a discussion of faceting, see <<searching.adoc#searching,Searching>>.
 
-The filter cache uses a specialized cache named FastLRUCache, which is optimized for fast concurrent access with the trade-off that writes and evictions are costlier than in the LRUCache used for the query result cache and document cache.
-
 [source,xml]
 ----
-<filterCache class="solr.FastLRUCache"
+<filterCache class="solr.CaffeineCache"
              size="512"
             initialSize="512"
             autowarmCount="128"/>
 ----
 
-The FastLRUCache used for the filter cache also supports a `maxRamMB` parameter which restricts the maximum amount of heap used by this cache. The FastLRUCache only supports evictions by either heap usage or size but not both. Therefore, the `size` parameter is ignored if `maxRamMB` is specified.
+The cache supports a `maxRamMB` parameter which restricts the maximum amount of heap used by this cache. The `CaffeineCache` only supports evictions by either heap usage or size but not both. Therefore, the `size` parameter is ignored if `maxRamMB` is specified.
 
 [source,xml]
 ----
-<filterCache class="solr.FastLRUCache"
+<filterCache class="solr.CaffeineCache"
              maxRamMB="1000"
             autowarmCount="128"/>
 ----
@@ -90,15 +80,14 @@
 
 This cache holds the results of previous searches: ordered lists of document IDs (DocList) based on a query, a sort, and the range of documents requested.
 
-The `queryResultCache` has an additional (optional) setting to limit the maximum amount of RAM used (`maxRamMB`). This lets you specify the maximum heap size, in megabytes, used by the contents of this cache. When the cache grows beyond this size, the oldest accessed queries will be evicted until the heap usage of the cache decreases below the specified limit. If a `size` is specified in addition to `maxRamMB` then both heap usage and maximum size limits are respected.
+The `queryResultCache` has an optional setting to limit the maximum amount of RAM used (`maxRamMB`). This lets you specify the maximum heap size, in megabytes, used by the contents of this cache. When the cache grows beyond this size, the oldest accessed queries will be evicted until the heap usage of the cache decreases below the specified limit. If a `size` is specified in addition to `maxRamMB` then only the heap usage limit is respected.
 
 [source,xml]
 ----
-<queryResultCache class="solr.LRUCache"
+<queryResultCache class="solr.CaffeineCache"
                   size="512"
                  initialSize="512"
-                 autowarmCount="128"
-                 maxRamMB="1000"/>
+                 autowarmCount="128"/>
 ----
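As an illustrative sketch (not part of this commit's diff), a `queryResultCache` relying on the heap limit alone, per the precedence rule described above, could be declared like this:

[source,xml]
----
<!-- illustrative sketch: size is omitted since maxRamMB takes precedence -->
<queryResultCache class="solr.CaffeineCache"
                  maxRamMB="512"
                  autowarmCount="128"/>
----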
 
 === documentCache
@@ -107,7 +96,7 @@ This cache holds Lucene Document objects (the stored fields for each document).
 
 [source,xml]
 ----
-<documentCache class="solr.LRUCache"
+<documentCache class="solr.CaffeineCache"
                size="512"
                initialSize="512"
                autowarmCount="0"/>
@@ -119,7 +108,7 @@ You can also define named caches for your own application code to use. You can l
 
 [source,xml]
 ----
-<cache name="myUserCache" class="solr.LRUCache"
+<cache name="myUserCache" class="solr.CaffeineCache"
       size="4096"
       initialSize="1024"
       autowarmCount="1024"
@@ -374,7 +374,7 @@ An optional in-memory cache can be defined in `solrconfig.xml`, which should be
 [source,xml]
 ----
 <cache name="perSegSpatialFieldCache_geom"
-      class="solr.LRUCache"
+      class="solr.CaffeineCache"
       size="256"
       initialSize="0"
       autowarmCount="100%"