Karl Wright 2017-07-26 10:35:46 -04:00
commit e95b48e12d
184 changed files with 2681 additions and 3256 deletions

View File

@ -858,11 +858,11 @@ def testSolrExample(unpackPath, javaPath, isSrc):
run('sh ./exampledocs/test_utf8.sh http://localhost:8983/solr/techproducts', 'utf8.log')
print(' run query...')
s = load('http://localhost:8983/solr/techproducts/select/?q=video')
if s.find('<result name="response" numFound="3" start="0">') == -1:
if s.find('"numFound":3,"start":0') == -1:
print('FAILED: response is:\n%s' % s)
raise RuntimeError('query on solr example instance failed')
s = load('http://localhost:8983/v2/cores')
if s.find('"responseHeader":{"status":0') == -1:
if s.find('"status":0,') == -1:
print('FAILED: response is:\n%s' % s)
raise RuntimeError('query api v2 on solr example instance failed')
finally:

View File

@ -73,6 +73,8 @@ Bug Fixes
* SOLR-11011: Assign.buildCoreName can lead to error in creating a new core when legacyCloud=false (Cao Manh Dat)
* SOLR-10944: Get expression fails to return EOF tuple (Susheel Kumar, Joel Bernstein)
Optimizations
----------------------
@ -106,6 +108,12 @@ Other Changes
* SOLR-10338: Configure SecureRandom non blocking for tests. (Mihaly Toth, hossman, Ishan Chattopadhyaya, via Mark Miller)
* SOLR-10916: Convert tests that extend LuceneTestCase and use MiniSolrCloudCluster
to instead extend SolrCloudTestCase. (Steve Rowe)
* SOLR-11131: Document 'assert' as a command option in bin/solr, and bin/solr.cmd scripts.
(Jason Gerlowski via Anshum Gupta)
================== 7.0.0 ==================
Versions of Major Components
@ -120,6 +128,11 @@ Jetty 9.3.14.v20161028
Upgrading from Solr 6.x
----------------------
* The default response type is now JSON ("wt=json") instead of XML, and line indentation is now on by default
("indent=on"). If you expect the responses to your queries to be returned in the previous format (XML
format, no indentation), you must now explicitly pass in "wt=xml" and "indent=off" as query
parameters, or configure them as defaults on your request handlers. See SOLR-10494 for more
details, and the sketch after this list.
* The cluster property 'legacyCloud' is set to false from 7.0. This means 'zookeeper is the truth' by
default. If an entry for a replica does not exist in the state.json, that replica cannot get
registered. This may affect users who use that feature where they bring up replicas and they are
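A minimal SolrJ sketch of the wt/indent workaround from the first note above (the demo class is hypothetical; only the two parameter names come from this change):
```java
import org.apache.solr.client.solrj.SolrQuery;

public class PreSolr7ResponseDefaults {
  public static void main(String[] args) {
    SolrQuery q = new SolrQuery("video");
    q.set("wt", "xml");      // restore the 6.x XML default (see SOLR-10494)
    q.set("indent", "off");  // responses are now indented unless told otherwise
    System.out.println(q);   // q=video&wt=xml&indent=off
  }
}
```
The same two parameters can instead be configured once as defaults on a request handler, as the note says.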
@ -305,6 +318,8 @@ New Features
* SOLR-10282: bin/solr support for enabling Kerberos authentication (Ishan Chattopadhyaya, Jason Gerlowski)
* SOLR-11093: Add support for PointFields for {!graph} query. (yonik)
Bug Fixes
----------------------
* SOLR-9262: Connection and read timeouts are being ignored by UpdateShardHandler after SOLR-4509.
@ -372,6 +387,11 @@ Bug Fixes
* SOLR-11057: Fix overflow in point range queries when querying the type limits
(e.g. q=field_i:{Integer.MAX_VALUE TO Integer.MAX_VALUE]) (Tomás Fernández Löbbe)
* SOLR-11136: Fix solrj XMLResponseParser when nested docs transformer is used with indented XML (hossman)
* SOLR-11130: V2Request in SolrJ should return the correct collection name so that the request is forwarded to the
correct node (noble)
Optimizations
----------------------
@ -531,6 +551,9 @@ Other Changes
* SOLR-11119: Switch from Trie to Points field types in the .system collection schema. (Steve Rowe)
* SOLR-10494: Make default response format JSON (wt=json), and also indent text responses formats
(indent=on) by default (Trey Grainger & Cassandra Targett via hossman)
================== 6.7.0 ==================
Consult the LUCENE_CHANGES.txt file for additional, low level, changes in this release.
@ -2311,6 +2334,7 @@ Bug Fixes
* SOLR-9391: LBHttpSolrClient.request now correctly returns Rsp.server when
previously skipped servers were successfully tried. (Christine Poerschke)
Optimizations
----------------------

View File

@ -295,7 +295,7 @@ function print_usage() {
if [ -z "$CMD" ]; then
echo ""
echo "Usage: solr COMMAND OPTIONS"
echo " where COMMAND is one of: start, stop, restart, status, healthcheck, create, create_core, create_collection, delete, version, zk, auth"
echo " where COMMAND is one of: start, stop, restart, status, healthcheck, create, create_core, create_collection, delete, version, zk, auth, assert"
echo ""
echo " Standalone server example (start Solr running in the background on port 8984):"
echo ""

View File

@ -268,7 +268,7 @@ goto done
:script_usage
@echo.
@echo Usage: solr COMMAND OPTIONS
@echo where COMMAND is one of: start, stop, restart, healthcheck, create, create_core, create_collection, delete, version, zk, auth
@echo where COMMAND is one of: start, stop, restart, healthcheck, create, create_core, create_collection, delete, version, zk, auth, assert
@echo.
@echo Standalone server example (start Solr running in the background on port 8984):
@echo.

View File

@ -160,9 +160,9 @@ public class TestHierarchicalDocBuilder extends AbstractDataImportHandlerTestCas
int totalDocsNum = parentsNum + childrenNum + grandChildrenNum;
String resp = runFullImport(THREE_LEVEL_HIERARCHY_CONFIG);
String xpath = "//arr[@name='documents']/lst/arr[@name='id' and .='"+parentId1+"']/../"+
"arr[@name='_childDocuments_']/lst/arr[@name='id' and .='"+childId+"']/../"+
"arr[@name='_childDocuments_']/lst/arr[@name='id' and .='"+grandChildrenIds.get(0)+"']";
String xpath = "//arr[@name='documents']/lst[arr[@name='id']/str='"+parentId1+"']/"+
"arr[@name='_childDocuments_']/lst[arr[@name='id']/str='"+childId+"']/"+
"arr[@name='_childDocuments_']/lst[arr[@name='id']/str='"+grandChildrenIds.get(0)+"']";
String results = TestHarness.validateXPath(resp,
xpath);
assertTrue("Debug documents does not contain child documents\n"+resp+"\n"+ xpath+

View File

@ -8,7 +8,7 @@ deploy that model to Solr and use it to rerank your top X search results.
# Getting Started With Solr Learning To Rank
For information on how to get started with solr ltr please see:
* [Solr Reference Guide's section on Learning To Rank](https://cwiki.apache.org/confluence/display/solr/Learning+To+Rank)
* [Solr Reference Guide's section on Learning To Rank](https://lucene.apache.org/solr/guide/learning-to-rank.html)
# Getting Started With Solr
@ -21,4 +21,3 @@ For information on how to get started with solr please see:
For information on how to contribute see:
* http://wiki.apache.org/lucene-java/HowToContribute
* http://wiki.apache.org/solr/HowToContribute

View File

@ -1,6 +1,6 @@
This README file is only about this example directory's content.
Please refer to the Solr Reference Guide's [Learning To Rank](https://cwiki.apache.org/confluence/display/solr/Learning+To+Rank) section for broader information on Learning to Rank (LTR) with Apache Solr.
Please refer to the Solr Reference Guide's [Learning To Rank](https://lucene.apache.org/solr/guide/learning-to-rank.html) section for broader information on Learning to Rank (LTR) with Apache Solr.
# Start Solr with the LTR plugin enabled
@ -29,7 +29,7 @@ Please refer to the Solr Reference Guide's section on [Learning To Rank](https:/
4. Search and rerank the results using the trained model
```
http://localhost:8983/solr/techproducts/query?indent=on&q=test&wt=json&rq={!ltr%20model=exampleModel%20reRankDocs=25%20efi.user_query=%27test%27}&fl=price,score,name
http://localhost:8983/solr/techproducts/query?q=test&rq={!ltr%20model=exampleModel%20reRankDocs=25%20efi.user_query=%27test%27}&fl=price,score,name
```
# Assemble training data
@ -101,8 +101,8 @@ hard drive|6H500F0 |0|CLICK_LOGS
hard drive|F8V7067-APL-KIT|0|CLICK_LOGS
```
This is a really trivial way to generate a training dataset, and in many settings
it might not produce great results. Indeed, it is a well known fact that
This is a really trivial way to generate a training dataset, and in many settings
it might not produce great results. Indeed, it is a well known fact that
clicks are *biased*: users tend to click on the first
result proposed for a query, even if it is not relevant. A click on a document in position
five could be considered more important than a click on a document in position one, because
@ -128,5 +128,3 @@ Usually a human worker visualizes a query together with a list of results and th
consists in assigning a relevance label to each document (e.g., Perfect, Excellent, Good, Fair, Not relevant).
Training data can then be obtained by translating the labels into numeric scores
(e.g., Perfect = 4, Excellent = 3, Good = 2, Fair = 1, Not relevant = 0).

View File

@ -48,7 +48,6 @@ import org.apache.solr.ltr.model.LTRScoringModel;
import org.apache.solr.ltr.model.TestLinearModel;
import org.apache.solr.ltr.norm.IdentityNormalizer;
import org.apache.solr.ltr.norm.Normalizer;
import org.junit.Ignore;
import org.junit.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@ -60,7 +59,12 @@ public class TestLTRReRankingPipeline extends LuceneTestCase {
private static final SolrResourceLoader solrResourceLoader = new SolrResourceLoader();
private IndexSearcher getSearcher(IndexReader r) {
final IndexSearcher searcher = newSearcher(r);
// 'yes' to maybe wrapping in general
final boolean maybeWrap = true;
final boolean wrapWithAssertions = false;
// 'no' to asserting wrap because lucene AssertingWeight
// cannot be cast to solr LTRScoringQuery$ModelWeight
final IndexSearcher searcher = newSearcher(r, maybeWrap, wrapWithAssertions);
return searcher;
}
@ -102,7 +106,6 @@ public class TestLTRReRankingPipeline extends LuceneTestCase {
}
@Ignore
@Test
public void testRescorer() throws IOException {
final Directory dir = newDirectory();
@ -112,7 +115,7 @@ public class TestLTRReRankingPipeline extends LuceneTestCase {
doc.add(newStringField("id", "0", Field.Store.YES));
doc.add(newTextField("field", "wizard the the the the the oz",
Field.Store.NO));
doc.add(new FloatDocValuesField("final-score", 1.0f));
doc.add(newStringField("final-score", "F", Field.Store.YES)); // TODO: change to numeric field
w.addDocument(doc);
doc = new Document();
@ -120,7 +123,7 @@ public class TestLTRReRankingPipeline extends LuceneTestCase {
// 1 extra token, but wizard and oz are close;
doc.add(newTextField("field", "wizard oz the the the the the the",
Field.Store.NO));
doc.add(new FloatDocValuesField("final-score", 2.0f));
doc.add(newStringField("final-score", "T", Field.Store.YES)); // TODO: change to numeric field
w.addDocument(doc);
final IndexReader r = w.getReader();
@ -145,7 +148,7 @@ public class TestLTRReRankingPipeline extends LuceneTestCase {
final List<Feature> allFeatures = makeFieldValueFeatures(new int[] {0, 1,
2, 3, 4, 5, 6, 7, 8, 9}, "final-score");
final LTRScoringModel ltrScoringModel = TestLinearModel.createLinearModel("test",
features, norms, "test", allFeatures, null);
features, norms, "test", allFeatures, TestLinearModel.makeFeatureWeights(features));
final LTRRescorer rescorer = new LTRRescorer(new LTRScoringQuery(ltrScoringModel));
hits = rescorer.rescore(searcher, hits, 2);
@ -159,7 +162,7 @@ public class TestLTRReRankingPipeline extends LuceneTestCase {
}
@Ignore
@AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/SOLR-11134")
@Test
public void testDifferentTopN() throws IOException {
final Directory dir = newDirectory();
@ -221,7 +224,7 @@ public class TestLTRReRankingPipeline extends LuceneTestCase {
final List<Feature> allFeatures = makeFieldValueFeatures(new int[] {0, 1,
2, 3, 4, 5, 6, 7, 8, 9}, "final-score");
final LTRScoringModel ltrScoringModel = TestLinearModel.createLinearModel("test",
features, norms, "test", allFeatures, null);
features, norms, "test", allFeatures, TestLinearModel.makeFeatureWeights(features));
final LTRRescorer rescorer = new LTRRescorer(new LTRScoringQuery(ltrScoringModel));

View File

@ -82,7 +82,7 @@ public class V2HttpCall extends HttpSolrCall {
api = new Api(null) {
@Override
public void call(SolrQueryRequest req, SolrQueryResponse rsp) {
rsp.add("documentation", "https://cwiki.apache.org/confluence/display/solr/v2+API");
rsp.add("documentation", "https://lucene.apache.org/solr/guide/v2-api.html");
rsp.add("description", "V2 API root path");
}
};

View File

@ -104,7 +104,7 @@ public class AddReplicaCmd implements OverseerCollectionMessageHandler.Cmd {
throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, "Node: " + node + " is not live");
}
if (coreName == null) {
coreName = Assign.buildCoreName(ocmh.zkStateReader.getZkClient(), coll.getName(), shard, replicaType);
coreName = Assign.buildCoreName(ocmh.zkStateReader.getZkClient(), coll, shard, replicaType);
} else if (!skipCreateReplicaInClusterState) {
//Validate that the core name is unique in that collection
for (Slice slice : coll.getSlices()) {

View File

@ -62,13 +62,14 @@ import static org.apache.solr.cloud.OverseerCollectionMessageHandler.CREATE_NODE
import static org.apache.solr.cloud.OverseerCollectionMessageHandler.CREATE_NODE_SET_SHUFFLE;
import static org.apache.solr.cloud.OverseerCollectionMessageHandler.CREATE_NODE_SET_SHUFFLE_DEFAULT;
import static org.apache.solr.common.cloud.DocCollection.SNITCH;
import static org.apache.solr.common.cloud.ZkStateReader.CORE_NAME_PROP;
import static org.apache.solr.common.cloud.ZkStateReader.MAX_SHARDS_PER_NODE;
import static org.apache.solr.common.cloud.ZkStateReader.SOLR_AUTOSCALING_CONF_PATH;
public class Assign {
public static int incAndGetId(SolrZkClient zkClient, String collection) {
public static int incAndGetId(SolrZkClient zkClient, String collection, int defaultValue) {
String path = "/collections/"+collection;
try {
if (!zkClient.exists(path, true)) {
@ -81,7 +82,7 @@ public class Assign {
path += "/counter";
if (!zkClient.exists(path, true)) {
try {
zkClient.create(path, NumberUtils.intToBytes(0), CreateMode.PERSISTENT, true);
zkClient.create(path, NumberUtils.intToBytes(defaultValue), CreateMode.PERSISTENT, true);
} catch (KeeperException.NodeExistsException e) {
// it's okay if another beats us creating the node
}
@ -112,8 +113,16 @@ public class Assign {
}
}
public static String assignNode(SolrZkClient client, String collection) {
return "core_node" + incAndGetId(client, collection);
public static String assignNode(SolrZkClient client, DocCollection collection) {
// for backward compatibility;
int numReplicas = collection.getReplicas().size();
String coreNodeName = "core_node" + incAndGetId(client, collection.getName(), numReplicas * 20);
while (collection.getReplica(coreNodeName) != null) {
// there is a small chance that the new coreNodeName is not totally unique,
// but this will be guaranteed unique for new collections
coreNodeName = "core_node" + incAndGetId(client, collection.getName(), numReplicas * 20);
}
return coreNodeName;
}
/**
@ -160,15 +169,33 @@ public class Assign {
return returnShardId;
}
public static String buildCoreName(String collectionName, String shard, Replica.Type type, int replicaNum) {
private static String buildCoreName(String collectionName, String shard, Replica.Type type, int replicaNum) {
// TODO: Adding the suffix is great for debugging, but may be an issue if at some point we want to support a way to change replica type
return String.format(Locale.ROOT, "%s_%s_replica_%s%s", collectionName, shard, type.name().substring(0,1).toLowerCase(Locale.ROOT), replicaNum);
}
public static String buildCoreName(SolrZkClient zkClient, String collection, String shard, Replica.Type type) {
int replicaNum = incAndGetId(zkClient, collection);
return buildCoreName(collection, shard, type, replicaNum);
public static String buildCoreName(SolrZkClient zkClient, DocCollection collection, String shard, Replica.Type type) {
Slice slice = collection.getSlice(shard);
int numReplicas = collection.getReplicas().size();
int replicaNum = incAndGetId(zkClient, collection.getName(), numReplicas * 20);
String coreName = buildCoreName(collection.getName(), shard, type, replicaNum);
while (existCoreName(coreName, slice)) {
replicaNum = incAndGetId(zkClient, collection.getName(), numReplicas * 20);
coreName = buildCoreName(collection.getName(), shard, type, replicaNum);
}
return coreName;
}
private static boolean existCoreName(String coreName, Slice slice) {
if (slice == null) return false;
for (Replica replica : slice.getReplicas()) {
if (coreName.equals(replica.getStr(CORE_NAME_PROP))) {
return true;
}
}
return false;
}
public static List<String> getLiveOrLiveAndCreateNodeSetList(final Set<String> liveNodes, final ZkNodeProps message, final Random random) {
// TODO: add smarter options that look at the current number of cores per
// node?

View File

@ -211,7 +211,8 @@ public class CreateCollectionCmd implements Cmd {
Map<String,ShardRequest> coresToCreate = new LinkedHashMap<>();
for (ReplicaPosition replicaPosition : replicaPositions) {
String nodeName = replicaPosition.node;
String coreName = Assign.buildCoreName(collectionName, replicaPosition.shard, replicaPosition.type, replicaPosition.index + 1);
String coreName = Assign.buildCoreName(ocmh.zkStateReader.getZkClient(), zkStateReader.getClusterState().getCollection(collectionName),
replicaPosition.shard, replicaPosition.type);
log.debug(formatString("Creating core {0} as part of shard {1} of collection {2} on {3}"
, coreName, replicaPosition.shard, collectionName, nodeName));

View File

@ -88,23 +88,19 @@ public class CreateShardCmd implements Cmd {
int createdNrtReplicas = 0, createdTlogReplicas = 0, createdPullReplicas = 0;
CountDownLatch countDownLatch = new CountDownLatch(totalReplicas);
for (int j = 1; j <= totalReplicas; j++) {
int coreNameNumber;
Replica.Type typeToCreate;
if (createdNrtReplicas < numNrtReplicas) {
createdNrtReplicas++;
coreNameNumber = createdNrtReplicas;
typeToCreate = Replica.Type.NRT;
} else if (createdTlogReplicas < numTlogReplicas) {
createdTlogReplicas++;
coreNameNumber = createdTlogReplicas;
typeToCreate = Replica.Type.TLOG;
} else {
createdPullReplicas++;
coreNameNumber = createdPullReplicas;
typeToCreate = Replica.Type.PULL;
}
String nodeName = sortedNodeList.get(((j - 1)) % sortedNodeList.size()).nodeName;
String coreName = Assign.buildCoreName(collectionName, sliceName, typeToCreate, coreNameNumber);
String coreName = Assign.buildCoreName(ocmh.zkStateReader.getZkClient(), collection, sliceName, typeToCreate);
// String coreName = collectionName + "_" + sliceName + "_replica" + j;
log.info("Creating replica " + coreName + " as part of slice " + sliceName + " of collection " + collectionName
+ " on " + nodeName);

View File

@ -224,7 +224,7 @@ public class MigrateCmd implements OverseerCollectionMessageHandler.Cmd {
Slice tempSourceSlice = clusterState.getCollection(tempSourceCollectionName).getSlices().iterator().next();
Replica tempSourceLeader = zkStateReader.getLeaderRetry(tempSourceCollectionName, tempSourceSlice.getName(), 120000);
String tempCollectionReplica1 = Assign.buildCoreName(tempSourceCollectionName, tempSourceSlice.getName(), Replica.Type.NRT, 1);
String tempCollectionReplica1 = tempSourceLeader.getCoreName();
String coreNodeName = ocmh.waitForCoreNodeName(tempSourceCollectionName,
sourceLeader.getNodeName(), tempCollectionReplica1);
// wait for the replicas to be seen as active on temp source leader
@ -257,7 +257,8 @@ public class MigrateCmd implements OverseerCollectionMessageHandler.Cmd {
log.info("Creating a replica of temporary collection: {} on the target leader node: {}",
tempSourceCollectionName, targetLeader.getNodeName());
String tempCollectionReplica2 = Assign.buildCoreName(tempSourceCollectionName, tempSourceSlice.getName(), Replica.Type.NRT, 2);
String tempCollectionReplica2 = Assign.buildCoreName(ocmh.zkStateReader.getZkClient(),
zkStateReader.getClusterState().getCollection(tempSourceCollectionName), tempSourceSlice.getName(), Replica.Type.NRT);
props = new HashMap<>();
props.put(Overseer.QUEUE_OPERATION, ADDREPLICA.toLower());
props.put(COLLECTION_PROP, tempSourceCollectionName);

View File

@ -184,7 +184,7 @@ public class MoveReplicaCmd implements Cmd{
private void moveNormalReplica(ClusterState clusterState, NamedList results, String targetNode, String async,
DocCollection coll, Replica replica, Slice slice, int timeout) throws Exception {
String newCoreName = Assign.buildCoreName(ocmh.zkStateReader.getZkClient(), coll.getName(), slice.getName(), replica.getType());
String newCoreName = Assign.buildCoreName(ocmh.zkStateReader.getZkClient(), coll, slice.getName(), replica.getType());
ZkNodeProps addReplicasProps = new ZkNodeProps(
COLLECTION_PROP, coll.getName(),
SHARD_ID_PROP, slice.getName(),

View File

@ -205,7 +205,7 @@ public class SplitShardCmd implements Cmd {
for (int i = 0; i < subRanges.size(); i++) {
String subSlice = slice + "_" + i;
subSlices.add(subSlice);
String subShardName = Assign.buildCoreName(collectionName, subSlice, Replica.Type.NRT, 1);
String subShardName = Assign.buildCoreName(ocmh.zkStateReader.getZkClient(), collection, subSlice, Replica.Type.NRT);
subShardNames.add(subShardName);
}

View File

@ -240,7 +240,7 @@ public class ReplicaMutator {
log.debug("node=" + coreNodeName + " is already registered");
} else {
// if coreNodeName is null, auto assign one
coreNodeName = Assign.assignNode(zkStateReader.getZkClient(), collection.getName());
coreNodeName = Assign.assignNode(zkStateReader.getZkClient(), collection);
}
message.getProperties().put(ZkStateReader.CORE_NODE_NAME_PROP,
coreNodeName);

View File

@ -70,7 +70,7 @@ public class SliceMutator {
if (message.getStr(ZkStateReader.CORE_NODE_NAME_PROP) != null) {
coreNodeName = message.getStr(ZkStateReader.CORE_NODE_NAME_PROP);
} else {
coreNodeName = Assign.assignNode(zkStateReader.getZkClient(), collection.getName());
coreNodeName = Assign.assignNode(zkStateReader.getZkClient(), collection);
}
Replica replica = new Replica(coreNodeName,
makeMap(

View File

@ -2565,8 +2565,8 @@ public final class SolrCore implements SolrInfoBean, SolrMetricProducer, Closeab
static{
HashMap<String, QueryResponseWriter> m= new HashMap<>(15, 1);
m.put("xml", new XMLResponseWriter());
m.put("standard", m.get("xml"));
m.put(CommonParams.JSON, new JSONResponseWriter());
m.put("standard", m.get(CommonParams.JSON));
m.put("geojson", new GeoJSONResponseWriter());
m.put("graphml", new GraphMLResponseWriter());
m.put("python", new PythonResponseWriter());

View File

@ -84,7 +84,7 @@ public abstract class TextResponseWriter implements PushWriter {
this.req = req;
this.rsp = rsp;
String indent = req.getParams().get("indent");
if (indent != null && !"".equals(indent) && !"off".equals(indent)) {
if (null == indent || !("off".equals(indent) || "false".equals(indent))){
doIndent=true;
}
returnFields = rsp.getReturnFields();
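A standalone sketch of the new predicate (hypothetical demo class; the condition is copied from the change above). Indentation is now on unless the client explicitly passes indent=off or indent=false:
```java
public class IndentDefaultDemo {
  // Mirrors the rewritten TextResponseWriter check.
  static boolean doIndent(String indent) {
    return null == indent || !("off".equals(indent) || "false".equals(indent));
  }

  public static void main(String[] args) {
    System.out.println(doIndent(null));    // true  -- the new default
    System.out.println(doIndent("on"));    // true
    System.out.println(doIndent("off"));   // false
    System.out.println(doIndent("false")); // false
  }
}
```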

View File

@ -29,6 +29,8 @@ import org.apache.lucene.index.SortedNumericDocValues;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.solr.common.util.SimpleOrderedMap;
import org.apache.solr.schema.SchemaField;
import org.apache.solr.util.LongIterator;
import org.apache.solr.util.LongSet;
public class UniqueAgg extends StrAggValueSource {
public static String UNIQUE = "unique";
@ -122,75 +124,6 @@ public class UniqueAgg extends StrAggValueSource {
}
static class LongSet {
static final float LOAD_FACTOR = 0.7f;
long[] vals;
int cardinality;
int mask;
int threshold;
int zeroCount; // 1 if a 0 was collected
/** sz must be a power of two */
LongSet(int sz) {
vals = new long[sz];
mask = sz - 1;
threshold = (int) (sz * LOAD_FACTOR);
}
void add(long val) {
if (val == 0) {
zeroCount = 1;
return;
}
if (cardinality >= threshold) {
rehash();
}
// For floats: exponent bits start at bit 23 for single precision,
// and bit 52 for double precision.
// Many values will only have significant bits just to the right of that,
// and the leftmost bits will all be zero.
// For now, let's just settle to get the first 8 significant mantissa bits of double or float in the lowest bits of our hash
// The upper bits of our hash will be irrelevant.
int h = (int) (val + (val >>> 44) + (val >>> 15));
for (int slot = h & mask; ;slot = (slot + 1) & mask) {
long v = vals[slot];
if (v == 0) {
vals[slot] = val;
cardinality++;
break;
} else if (v == val) {
// val is already in the set
break;
}
}
}
private void rehash() {
long[] oldVals = vals;
int newCapacity = vals.length << 1;
vals = new long[newCapacity];
mask = newCapacity - 1;
threshold = (int) (newCapacity * LOAD_FACTOR);
cardinality = 0;
for (long val : oldVals) {
if (val != 0) {
add(val);
}
}
}
int cardinality() {
return cardinality + zeroCount;
}
}
static abstract class BaseNumericAcc extends SlotAcc {
SchemaField sf;
LongSet[] sets;
@ -259,16 +192,11 @@ public class UniqueAgg extends StrAggValueSource {
if (unique <= maxExplicit) {
List lst = new ArrayList( Math.min(unique, maxExplicit) );
if (set != null) {
if (set.zeroCount > 0) {
lst.add(0);
}
for (long val : set.vals) {
if (val != 0) {
lst.add(val);
}
LongIterator iter = set.iterator();
while (iter.hasNext()) {
lst.add( iter.next() );
}
}
map.add("vals", lst);
}

View File

@ -25,16 +25,15 @@ import java.util.TreeSet;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.AutomatonQuery;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.search.DocValuesFieldExistsQuery;
import org.apache.lucene.search.Explanation;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.Scorer;
import org.apache.lucene.search.TermInSetQuery;
import org.apache.lucene.search.Weight;
import org.apache.lucene.search.WildcardQuery;
import org.apache.lucene.util.BytesRef;
@ -42,6 +41,7 @@ import org.apache.lucene.util.BytesRefHash;
import org.apache.lucene.util.FixedBitSet;
import org.apache.lucene.util.automaton.Automaton;
import org.apache.lucene.util.automaton.DaciukMihovAutomatonBuilder;
import org.apache.solr.schema.SchemaField;
import org.apache.solr.search.BitDocSet;
import org.apache.solr.search.DocSet;
import org.apache.solr.search.Filter;
@ -130,17 +130,22 @@ public class GraphQuery extends Query {
protected class GraphQueryWeight extends Weight {
final SolrIndexSearcher fromSearcher;
private final float boost;
private int frontierSize = 0;
private int currentDepth = -1;
private Filter filter;
private DocSet resultSet;
SchemaField fromSchemaField;
SchemaField toSchemaField;
public GraphQueryWeight(SolrIndexSearcher searcher, float boost) {
// Grab the searcher so we can run additional searches.
super(null);
this.fromSearcher = searcher;
this.boost = boost;
this.fromSchemaField = searcher.getSchema().getField(fromField);
this.toSchemaField = searcher.getSchema().getField(toField);
}
GraphQuery getGraphQuery() {
return GraphQuery.this;
}
@Override
@ -175,7 +180,7 @@ public class GraphQuery extends Query {
// the initial query for the frontier for the first query
Query frontierQuery = q;
// Find all documents in this graph that are leaf nodes to speed traversal
DocSet leafNodes = resolveLeafNodes(toField);
DocSet leafNodes = resolveLeafNodes();
// Start the breadth first graph traversal.
do {
@ -187,25 +192,17 @@ public class GraphQuery extends Query {
// if we've reached the max depth, don't worry about collecting edges.
fromSet = fromSearcher.getDocSetBits(frontierQuery);
// explicitly the frontier size is zero now so we can break
frontierSize = 0;
frontierQuery = null;
} else {
// when we're not at the max depth level, we need to collect edges
// Create the graph result collector for this level
GraphTermsCollector graphResultCollector = new GraphTermsCollector(toField,capacity, resultBits, leafNodes);
GraphEdgeCollector graphResultCollector = toSchemaField.getType().isPointField()
? new GraphPointsCollector(this, capacity, resultBits, leafNodes)
: new GraphTermsCollector(this, capacity, resultBits, leafNodes);
fromSearcher.search(frontierQuery, graphResultCollector);
fromSet = graphResultCollector.getDocSet();
// All edge ids on the frontier.
BytesRefHash collectorTerms = graphResultCollector.getCollectorTerms();
frontierSize = collectorTerms.size();
// The resulting doc set from the frontier.
FrontierQuery fq = buildFrontierQuery(collectorTerms, frontierSize);
if (fq == null) {
// in case we get null back, make sure we know we're done at this level.
frontierSize = 0;
} else {
frontierQuery = fq.getQuery();
frontierSize = fq.getFrontierSize();
}
frontierQuery = graphResultCollector.getFrontierQuery();
}
if (currentDepth == 0 && !returnRoot) {
// grab a copy of the root bits but only if we need it.
@ -217,7 +214,7 @@ public class GraphQuery extends Query {
if ((maxDepth != -1 && currentDepth >= maxDepth)) {
break;
}
} while (frontierSize > 0);
} while (frontierQuery != null);
// helper bit set operations on the final result set
if (!returnRoot) {
resultBits.andNot(rootBits);
@ -232,9 +229,10 @@ public class GraphQuery extends Query {
}
}
private DocSet resolveLeafNodes(String field) throws IOException {
private DocSet resolveLeafNodes() throws IOException {
String field = toSchemaField.getName();
BooleanQuery.Builder leafNodeQuery = new BooleanQuery.Builder();
WildcardQuery edgeQuery = new WildcardQuery(new Term(field, "*"));
Query edgeQuery = toSchemaField.hasDocValues() ? new DocValuesFieldExistsQuery(field) : new WildcardQuery(new Term(field, "*"));
leafNodeQuery.add(edgeQuery, Occur.MUST_NOT);
DocSet leafNodes = fromSearcher.getDocSet(leafNodeQuery.build());
return leafNodes;
@ -252,50 +250,7 @@ public class GraphQuery extends Query {
final Automaton a = DaciukMihovAutomatonBuilder.build(terms);
return a;
}
/**
* This return a query that represents the documents that match the next hop in the query.
*
* collectorTerms - the terms that represent the edge ids for the current frontier.
* frontierSize - the size of the frontier query (number of unique edges)
*
*/
public FrontierQuery buildFrontierQuery(BytesRefHash collectorTerms, Integer frontierSize) {
if (collectorTerms == null || collectorTerms.size() == 0) {
// return null if there are no terms (edges) to traverse.
return null;
} else {
// Create a query
Query q = null;
// TODO: see if we should dynamically select this based on the frontier size.
if (useAutn) {
// build an automaton based query for the frontier.
Automaton autn = buildAutomaton(collectorTerms);
AutomatonQuery autnQuery = new AutomatonQuery(new Term(fromField), autn);
q = autnQuery;
} else {
List<BytesRef> termList = new ArrayList<>(collectorTerms.size());
for (int i = 0 ; i < collectorTerms.size(); i++) {
BytesRef ref = new BytesRef();
collectorTerms.get(i, ref);
termList.add(ref);
}
q = new TermInSetQuery(fromField, termList);
}
// If there is a filter to be used while crawling the graph, add that.
if (traversalFilter != null) {
BooleanQuery.Builder builder = new BooleanQuery.Builder();
builder.add(q, Occur.MUST);
builder.add(traversalFilter, Occur.MUST);
q = builder.build();
}
// return the new query.
FrontierQuery frontier = new FrontierQuery(q, frontierSize);
return frontier;
}
}
@Override
public Scorer scorer(LeafReaderContext context) throws IOException {

View File

@ -41,6 +41,7 @@ public class GraphQueryParser extends QParser {
String traversalFilterS = localParams.get("traversalFilter");
Query traversalFilter = traversalFilterS == null ? null : subQuery(traversalFilterS, null).getQuery();
// NOTE: the from/to are reversed from {!join}
String fromField = localParams.get("from", "node_id");
String toField = localParams.get("to", "edge_ids");

View File

@ -17,19 +17,40 @@
package org.apache.solr.search.join;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.TreeSet;
import org.apache.lucene.document.DoublePoint;
import org.apache.lucene.document.FloatPoint;
import org.apache.lucene.document.IntPoint;
import org.apache.lucene.document.LongPoint;
import org.apache.lucene.index.DocValues;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.SortedNumericDocValues;
import org.apache.lucene.index.SortedSetDocValues;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.AutomatonQuery;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Collector;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.SimpleCollector;
import org.apache.lucene.search.TermInSetQuery;
import org.apache.lucene.util.BitSet;
import org.apache.lucene.util.Bits;
import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.BytesRefHash;
import org.apache.lucene.util.FixedBitSet;
import org.apache.lucene.util.NumericUtils;
import org.apache.lucene.util.automaton.Automaton;
import org.apache.lucene.util.automaton.DaciukMihovAutomatonBuilder;
import org.apache.solr.schema.NumberType;
import org.apache.solr.schema.SchemaField;
import org.apache.solr.search.BitDocSet;
import org.apache.solr.search.DocSet;
import org.apache.solr.util.LongIterator;
import org.apache.solr.util.LongSet;
/**
* A graph hit collector. This accumulates the edges for a given graph traversal.
@ -37,17 +58,14 @@ import org.apache.solr.search.DocSet;
* already traversed.
* @lucene.internal
*/
class GraphTermsCollector extends SimpleCollector implements Collector {
// the field to collect edge ids from
private String field;
// all the collected terms
private BytesRefHash collectorTerms;
private SortedSetDocValues docTermOrds;
abstract class GraphEdgeCollector extends SimpleCollector implements Collector {
GraphQuery.GraphQueryWeight weight;
// the result set that is being collected.
private Bits currentResult;
Bits currentResult;
// known leaf nodes
private DocSet leafNodes;
DocSet leafNodes;
// number of hits discovered at this level.
int numHits=0;
BitSet bits;
@ -56,11 +74,10 @@ class GraphTermsCollector extends SimpleCollector implements Collector {
int baseInParent;
// if we care to track this.
boolean hasCycles = false;
GraphTermsCollector(String field,int maxDoc, Bits currentResult, DocSet leafNodes) {
this.field = field;
GraphEdgeCollector(GraphQuery.GraphQueryWeight weight, int maxDoc, Bits currentResult, DocSet leafNodes) {
this.weight = weight;
this.maxDoc = maxDoc;
this.collectorTerms = new BytesRefHash();
this.currentResult = currentResult;
this.leafNodes = leafNodes;
if (bits==null) {
@ -80,29 +97,14 @@ class GraphTermsCollector extends SimpleCollector implements Collector {
// collect the docs
addDocToResult(doc);
// Optimization to not look up edges for a document that is a leaf node
if (!leafNodes.exists(doc)) {
if (leafNodes == null || !leafNodes.exists(doc)) {
addEdgeIdsToResult(doc-base);
}
// Note: tracking inbound links for each result would be a huge memory hog... so not implementing at this time.
}
private void addEdgeIdsToResult(int doc) throws IOException {
// set the doc to pull the edges ids for.
if (doc > docTermOrds.docID()) {
docTermOrds.advance(doc);
}
if (doc == docTermOrds.docID()) {
BytesRef edgeValue = new BytesRef();
long ord;
while ((ord = docTermOrds.nextOrd()) != SortedSetDocValues.NO_MORE_ORDS) {
// TODO: handle non string type fields.
edgeValue = docTermOrds.lookupOrd(ord);
// add the edge id to the collector terms.
collectorTerms.add(edgeValue);
}
}
}
abstract void addEdgeIdsToResult(int doc) throws IOException;
private void addDocToResult(int docWithBase) {
// this document is part of the traversal. mark it in our bitmap.
@ -121,14 +123,25 @@ class GraphTermsCollector extends SimpleCollector implements Collector {
@Override
public void doSetNextReader(LeafReaderContext context) throws IOException {
// Grab the updated doc values.
docTermOrds = DocValues.getSortedSet(context.reader(), field);
base = context.docBase;
baseInParent = context.docBaseInParent;
}
public BytesRefHash getCollectorTerms() {
return collectorTerms;
protected abstract Query getResultQuery();
public Query getFrontierQuery() {
Query q = getResultQuery();
if (q == null) return null;
// If there is a filter to be used while crawling the graph, add that.
if (weight.getGraphQuery().getTraversalFilter() != null) {
BooleanQuery.Builder builder = new BooleanQuery.Builder();
builder.add(q, BooleanClause.Occur.MUST);
builder.add(weight.getGraphQuery().getTraversalFilter(), BooleanClause.Occur.MUST);
q = builder.build();
}
return q;
}
@Override
@ -137,3 +150,180 @@ class GraphTermsCollector extends SimpleCollector implements Collector {
}
}
class GraphTermsCollector extends GraphEdgeCollector {
// all the collected terms
private BytesRefHash collectorTerms;
private SortedSetDocValues docTermOrds;
GraphTermsCollector(GraphQuery.GraphQueryWeight weight, int maxDoc, Bits currentResult, DocSet leafNodes) {
super(weight, maxDoc, currentResult, leafNodes);
this.collectorTerms = new BytesRefHash();
}
@Override
public void doSetNextReader(LeafReaderContext context) throws IOException {
super.doSetNextReader(context);
// Grab the updated doc values.
docTermOrds = DocValues.getSortedSet(context.reader(), weight.getGraphQuery().getToField());
}
@Override
void addEdgeIdsToResult(int doc) throws IOException {
// set the doc to pull the edges ids for.
if (doc > docTermOrds.docID()) {
docTermOrds.advance(doc);
}
if (doc == docTermOrds.docID()) {
BytesRef edgeValue = new BytesRef();
long ord;
while ((ord = docTermOrds.nextOrd()) != SortedSetDocValues.NO_MORE_ORDS) {
edgeValue = docTermOrds.lookupOrd(ord);
// add the edge id to the collector terms.
collectorTerms.add(edgeValue);
}
}
}
@Override
protected Query getResultQuery() {
if (collectorTerms == null || collectorTerms.size() == 0) {
// return null if there are no terms (edges) to traverse.
return null;
} else {
// Create a query
Query q = null;
GraphQuery gq = weight.getGraphQuery();
// TODO: see if we should dynamically select this based on the frontier size.
if (gq.isUseAutn()) {
// build an automaton based query for the frontier.
Automaton autn = buildAutomaton(collectorTerms);
AutomatonQuery autnQuery = new AutomatonQuery(new Term(gq.getFromField()), autn);
q = autnQuery;
} else {
List<BytesRef> termList = new ArrayList<>(collectorTerms.size());
for (int i = 0 ; i < collectorTerms.size(); i++) {
BytesRef ref = new BytesRef();
collectorTerms.get(i, ref);
termList.add(ref);
}
q = new TermInSetQuery(gq.getFromField(), termList);
}
return q;
}
}
/** Build an automaton to represent the frontier query */
private Automaton buildAutomaton(BytesRefHash termBytesHash) {
// need to pass a sorted set of terms to the automaton builder (maybe a better way to avoid this?)
final TreeSet<BytesRef> terms = new TreeSet<BytesRef>();
for (int i = 0 ; i < termBytesHash.size(); i++) {
BytesRef ref = new BytesRef();
termBytesHash.get(i, ref);
terms.add(ref);
}
final Automaton a = DaciukMihovAutomatonBuilder.build(terms);
return a;
}
}
class GraphPointsCollector extends GraphEdgeCollector {
final LongSet set = new LongSet(256);
SortedNumericDocValues values = null;
GraphPointsCollector(GraphQuery.GraphQueryWeight weight, int maxDoc, Bits currentResult, DocSet leafNodes) {
super(weight, maxDoc, currentResult, leafNodes);
}
@Override
public void doSetNextReader(LeafReaderContext context) throws IOException {
super.doSetNextReader(context);
values = DocValues.getSortedNumeric(context.reader(), weight.getGraphQuery().getToField());
}
@Override
void addEdgeIdsToResult(int doc) throws IOException {
// set the doc to pull the edges ids for.
int valuesDoc = values.docID();
if (valuesDoc < doc) {
valuesDoc = values.advance(doc);
}
if (valuesDoc == doc) {
int count = values.docValueCount();
for (int i = 0; i < count; i++) {
long v = values.nextValue();
set.add(v);
}
}
}
@Override
protected Query getResultQuery() {
if (set.cardinality() == 0) return null;
Query q = null;
SchemaField sfield = weight.fromSchemaField;
NumberType ntype = sfield.getType().getNumberType();
boolean multiValued = sfield.multiValued();
if (ntype == NumberType.LONG || ntype == NumberType.DATE) {
long[] vals = new long[set.cardinality()];
int i = 0;
for (LongIterator iter = set.iterator(); iter.hasNext(); ) {
long bits = iter.next();
long v = bits;
vals[i++] = v;
}
q = LongPoint.newSetQuery(sfield.getName(), vals);
} else if (ntype == NumberType.INTEGER) {
int[] vals = new int[set.cardinality()];
int i = 0;
for (LongIterator iter = set.iterator(); iter.hasNext(); ) {
long bits = iter.next();
int v = (int)bits;
vals[i++] = v;
}
q = IntPoint.newSetQuery(sfield.getName(), vals);
} else if (ntype == NumberType.DOUBLE) {
double[] vals = new double[set.cardinality()];
int i = 0;
for (LongIterator iter = set.iterator(); iter.hasNext(); ) {
long bits = iter.next();
double v = multiValued ? NumericUtils.sortableLongToDouble(bits) : Double.longBitsToDouble(bits);
vals[i++] = v;
}
q = DoublePoint.newSetQuery(sfield.getName(), vals);
} else if (ntype == NumberType.FLOAT) {
float[] vals = new float[set.cardinality()];
int i = 0;
for (LongIterator iter = set.iterator(); iter.hasNext(); ) {
long bits = iter.next();
float v = multiValued ? NumericUtils.sortableIntToFloat((int) bits) : Float.intBitsToFloat((int) bits);
vals[i++] = v;
}
q = FloatPoint.newSetQuery(sfield.getName(), vals);
}
return q;
}
/** Build an automaton to represent the frontier query */
private Automaton buildAutomaton(BytesRefHash termBytesHash) {
// need to pass a sorted set of terms to the automaton builder (maybe a better way to avoid this?)
final TreeSet<BytesRef> terms = new TreeSet<BytesRef>();
for (int i = 0 ; i < termBytesHash.size(); i++) {
BytesRef ref = new BytesRef();
termBytesHash.get(i, ref);
terms.add(ref);
}
final Automaton a = DaciukMihovAutomatonBuilder.build(terms);
return a;
}
}
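The multiValued branches in getResultQuery above decode sortable bits (the docValues encoding used for multi-valued point fields), while single-valued fields hold raw IEEE bits. A small sketch of the two decodings, assuming only the Lucene core classpath:
```java
import org.apache.lucene.util.NumericUtils;

public class SortableBitsDemo {
  public static void main(String[] args) {
    double v = 3.5;
    long sortable = NumericUtils.doubleToSortableLong(v); // docValues encoding
    long raw = Double.doubleToLongBits(v);                // plain IEEE-754 bits
    System.out.println(NumericUtils.sortableLongToDouble(sortable)); // 3.5
    System.out.println(Double.longBitsToDouble(raw));                // 3.5
  }
}
```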

View File

@ -63,6 +63,7 @@ public class DirectSolrConnection
* For example:
*
* String json = solr.request( "/select?qt=dismax&amp;wt=json&amp;q=...", null );
* String xml = solr.request( "/select?qt=dismax&amp;wt=xml&amp;q=...", null );
* String xml = solr.request( "/update", "&lt;add&gt;&lt;doc&gt;&lt;field ..." );
*/
public String request( String pathAndParams, String body ) throws Exception

View File

@ -14,13 +14,13 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.solr.util.hll;
package org.apache.solr.util;
/**
* A <code>long</code>-based iterator. This is not <i>is-a</i> {@link java.util.Iterator}
* to prevent autoboxing between <code>Long</code> and <code>long</code>.
*/
interface LongIterator {
public interface LongIterator {
/**
* @return <code>true</code> if and only if there are more elements to
* iterate over. <code>false</code> otherwise.
@ -28,7 +28,7 @@ interface LongIterator {
boolean hasNext();
/**
* @return the next <code>long</code> in the collection.
* @return the next <code>long</code> in the collection. Only valid after hasNext() has been called and returns true.
*/
long next();
}

View File

@ -0,0 +1,135 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.solr.util;
/** Collects long values in a hash set (closed hashing on power-of-two sized long[])
* @lucene.internal
*/
public class LongSet {
private static final float LOAD_FACTOR = 0.7f;
private long[] vals;
private int cardinality;
private int mask;
private int threshold;
private int zeroCount; // 1 if a 0 was collected
public LongSet(int sz) {
sz = Math.max(org.apache.lucene.util.BitUtil.nextHighestPowerOfTwo(sz), 2);
vals = new long[sz];
mask = sz - 1;
threshold = (int) (sz * LOAD_FACTOR);
}
/** Returns the long[] array that has entries filled in with values or "0" for empty.
* To see if "0" itself is in the set, call containsZero()
*/
public long[] getBackingArray() {
return vals;
}
public boolean containsZero() {
return zeroCount != 0;
}
/** Adds an additional value to the set */
public void add(long val) {
if (val == 0) {
zeroCount = 1;
return;
}
if (cardinality >= threshold) {
rehash();
}
// For floats: exponent bits start at bit 23 for single precision,
// and bit 52 for double precision.
// Many values will only have significant bits just to the right of that.
// For now, let's just settle to get the first 8 significant mantissa bits of double or float in the lowest bits of our hash
// The upper bits of our hash will be irrelevant.
int h = (int) (val + (val >>> 44) + (val >>> 15));
for (int slot = h & mask; ; slot = (slot + 1) & mask) {
long v = vals[slot];
if (v == 0) {
vals[slot] = val;
cardinality++;
break;
} else if (v == val) {
// val is already in the set
break;
}
}
}
private void rehash() {
long[] oldVals = vals;
int newCapacity = vals.length << 1;
vals = new long[newCapacity];
mask = newCapacity - 1;
threshold = (int) (newCapacity * LOAD_FACTOR);
cardinality = 0;
for (long val : oldVals) {
if (val != 0) {
add(val);
}
}
}
/** The number of values in the set */
public int cardinality() {
return cardinality + zeroCount;
}
/** Returns an iterator over the values in the set.
* hasNext() must return true for next() to return a valid value.
*/
public LongIterator iterator() {
return new LongIterator() {
private boolean hasNext = zeroCount > 0;
private int i = -1;
private long value = 0;
@Override
public boolean hasNext() {
if (hasNext) {
// this is only executed the first time for the special case 0 value
return true;
}
while (++i < vals.length) {
value = vals[i];
if (value != 0) {
return hasNext = true;
}
}
return false;
}
@Override
public long next() {
hasNext = false;
return value;
}
};
}
}
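A brief usage sketch for the newly extracted class (the demo class is hypothetical; the API is exactly the one defined above):
```java
import org.apache.solr.util.LongIterator;
import org.apache.solr.util.LongSet;

public class LongSetDemo {
  public static void main(String[] args) {
    LongSet set = new LongSet(3); // capacity rounds up to a power of two (4)
    set.add(42L);
    set.add(42L);                 // duplicates are ignored
    set.add(0L);                  // zero is tracked separately via zeroCount
    System.out.println(set.cardinality()); // 2
    LongIterator iter = set.iterator();
    while (iter.hasNext()) {      // next() is valid only after hasNext() returns true
      System.out.println(iter.next());     // prints 0 first, then 42
    }
  }
}
```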

View File

@ -16,6 +16,8 @@
*/
package org.apache.solr.util.hll;
import org.apache.solr.util.LongIterator;
/**
* A vector (array) of bits that is accessed in units ("registers") of <code>width</code>
* bits which are stored as 64bit "words" (<code>long</code>s). In this context

View File

@ -22,6 +22,7 @@ import com.carrotsearch.hppc.IntByteHashMap;
import com.carrotsearch.hppc.LongHashSet;
import com.carrotsearch.hppc.cursors.IntByteCursor;
import com.carrotsearch.hppc.cursors.LongCursor;
import org.apache.solr.util.LongIterator;
/**
* A probabilistic set of hashed <code>long</code> elements. Useful for computing

View File

@ -240,6 +240,16 @@
<dynamicField name="*_dtdS" type="date" indexed="true" stored="true" docValues="true"/>
<dynamicField name="*_dtdsS" type="date" indexed="true" stored="true" multiValued="true" docValues="true"/>
<!-- explicit points with docValues (since they can't be uninverted with FieldCache) -->
<dynamicField name="*_ip" type="pint" indexed="true" stored="true" docValues="true" multiValued="false"/>
<dynamicField name="*_ips" type="pint" indexed="true" stored="true" docValues="true" multiValued="true"/>
<dynamicField name="*_lp" type="plong" indexed="true" stored="true" docValues="true" multiValued="false"/>
<dynamicField name="*_lps" type="plong" indexed="true" stored="true" docValues="true" multiValued="true"/>
<dynamicField name="*_fp" type="pfloat" indexed="true" stored="true" docValues="true" multiValued="false"/>
<dynamicField name="*_fps" type="pfloat" indexed="true" stored="true" docValues="true" multiValued="true"/>
<dynamicField name="*_dp" type="pdouble" indexed="true" stored="true" docValues="true" multiValued="false"/>
<dynamicField name="*_dps" type="pdouble" indexed="true" stored="true" docValues="true" multiValued="true"/>
<dynamicField name="*_b" type="boolean" indexed="true" stored="true"/>
<dynamicField name="*_bs" type="boolean" indexed="true" stored="true" multiValued="true"/>
@ -354,6 +364,13 @@
field first in an ascending sort and last in a descending sort.
-->
<!-- Point Fields -->
<fieldType name="pint" class="solr.IntPointField" docValues="true"/>
<fieldType name="plong" class="solr.LongPointField" docValues="true"/>
<fieldType name="pdouble" class="solr.DoublePointField" docValues="true"/>
<fieldType name="pfloat" class="solr.FloatPointField" docValues="true"/>
<fieldType name="pdate" class="solr.DatePointField" docValues="true"/>
<!--
Default numeric field types. For faster range queries, consider the tint/tfloat/tlong/tdouble types.

View File

@ -16,9 +16,9 @@
limitations under the License.
-->
<!--
<!--
For more details about configurations options that may appear in
this file, see http://wiki.apache.org/solr/SolrConfigXml.
this file, see http://wiki.apache.org/solr/SolrConfigXml.
-->
<config>
<!-- In all configuration below, a prefix of "solr." for class names
@ -46,19 +46,19 @@
instanceDir.
Please note that <lib/> directives are processed in the order
that they appear in your solrconfig.xml file, and are "stacked"
on top of each other when building a ClassLoader - so if you have
plugin jars with dependencies on other jars, the "lower level"
that they appear in your solrconfig.xml file, and are "stacked"
on top of each other when building a ClassLoader - so if you have
plugin jars with dependencies on other jars, the "lower level"
dependency jars should be loaded first.
If a "./lib" directory exists in your instanceDir, all files
found in it are included as if you had used the following
syntax...
<lib dir="./lib" />
-->
<!-- A 'dir' option by itself adds any files found in the directory
<!-- A 'dir' option by itself adds any files found in the directory
to the classpath, this is useful for including all jars in a
directory.
@ -69,7 +69,7 @@
If a 'dir' option (with or without a regex) is used and nothing
is found that matches, a warning will be logged.
The examples below can be used to load some solr-contribs along
The examples below can be used to load some solr-contribs along
with their external dependencies.
-->
<lib dir="${solr.install.dir:../../../..}/contrib/extraction/lib" regex=".*\.jar" />
@ -83,12 +83,12 @@
<lib dir="${solr.install.dir:../../../..}/contrib/velocity/lib" regex=".*\.jar" />
<lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-velocity-\d.*\.jar" />
<!-- an exact 'path' can be used instead of a 'dir' to specify a
specific jar file. This will cause a serious error to be logged
<!-- an exact 'path' can be used instead of a 'dir' to specify a
specific jar file. This will cause a serious error to be logged
if it can't be loaded.
-->
<!--
<lib path="../a-jar-that-does-not-exist.jar" />
<lib path="../a-jar-that-does-not-exist.jar" />
-->
<!-- Data Directory
@ -102,7 +102,7 @@
<!-- The DirectoryFactory to use for indexes.
solr.StandardDirectoryFactory is filesystem
based and tries to pick the best implementation for the current
JVM and platform. solr.NRTCachingDirectoryFactory, the default,
@ -125,7 +125,7 @@
are experimental, so if you choose to customize the index format, it's a good
idea to convert back to the official format e.g. via IndexWriter.addIndexes(IndexReader)
before upgrading to a newer version to avoid unnecessary reindexing.
A "compressionMode" string element can be added to <codecFactory> to choose
A "compressionMode" string element can be added to <codecFactory> to choose
between the existing compression modes in the default codec: "BEST_SPEED" (default)
or "BEST_COMPRESSION".
-->
@ -135,19 +135,19 @@
Index Config - These settings control low-level behavior of indexing
Most example settings here show the default value, but are commented
out, to more easily see where customizations have been made.
Note: This replaces <indexDefaults> and <mainIndex> from older versions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
<indexConfig>
<!-- maxFieldLength was removed in 4.0. To get similar behavior, include a
LimitTokenCountFilterFactory in your fieldType definition. E.g.
<!-- maxFieldLength was removed in 4.0. To get similar behavior, include a
LimitTokenCountFilterFactory in your fieldType definition. E.g.
<filter class="solr.LimitTokenCountFilterFactory" maxTokenCount="10000"/>
-->
<!-- Maximum time to wait for a write lock (ms) for an IndexWriter. Default: 1000 -->
<!-- <writeLockTimeout>1000</writeLockTimeout> -->
<!-- Expert: Enabling compound file will use fewer files for the index,
using fewer file descriptors at the expense of decreased performance.
<!-- Expert: Enabling compound file will use fewer files for the index,
using fewer file descriptors at the expense of decreased performance.
Default in Lucene is "true". Default in Solr is "false" (since 3.6) -->
<!-- <useCompoundFile>false</useCompoundFile> -->
@ -161,7 +161,7 @@
<!-- <ramBufferSizeMB>100</ramBufferSizeMB> -->
<!-- <maxBufferedDocs>1000</maxBufferedDocs> -->
<!-- Expert: Merge Policy
<!-- Expert: Merge Policy
The Merge Policy in Lucene controls how merging of segments is done.
The default since Solr/Lucene 3.3 is TieredMergePolicy.
The default since Lucene 2.3 was the LogByteSizeMergePolicy,
@ -181,15 +181,15 @@
can perform merges in the background using separate threads.
The SerialMergeScheduler (Lucene 2.2 default) does not.
-->
<!--
<!--
<mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler"/>
-->
<!-- LockFactory
<!-- LockFactory
This option specifies which Lucene LockFactory implementation
to use.
single = SingleInstanceLockFactory - suggested for a
read-only index or when there is no possibility of
another process trying to modify the index.
@ -213,11 +213,11 @@
The default Solr IndexDeletionPolicy implementation supports
deleting index commit points on number of commits, age of
commit point and optimized status.
The latest commit point should always be preserved regardless
of the criteria.
-->
<!--
<!--
<deletionPolicy class="solr.SolrDeletionPolicy">
-->
<!-- The number of commit points to be kept -->
@ -232,12 +232,12 @@
<str name="maxCommitAge">30MINUTES</str>
<str name="maxCommitAge">1DAY</str>
-->
<!--
<!--
</deletionPolicy>
-->
<!-- Lucene Infostream
To aid in advanced debugging, Lucene provides an "InfoStream"
of detailed information when indexing.
@ -249,7 +249,7 @@
<!-- JMX
This example enables JMX if and only if an existing MBeanServer
is found, use this if you want to configure JMX through JVM
parameters. Remove this to disable exposing Solr configuration
@ -259,7 +259,7 @@
-->
<jmx />
<!-- If you want to connect to a particular server, specify the
agentId
agentId
-->
<!-- <jmx agentId="myAgent" /> -->
<!-- If you want to start a new MBeanServer, specify the serviceUrl -->
@ -291,7 +291,7 @@
Perform a hard commit automatically under certain conditions.
Instead of enabling autoCommit, consider using "commitWithin"
when adding documents.
when adding documents.
http://wiki.apache.org/solr/UpdateXmlMessages
@ -300,7 +300,7 @@
maxTime - Maximum amount of time in ms that is allowed to pass
since a document was added before automatically
triggering a new commit.
triggering a new commit.
openSearcher - if false, the commit causes recent index changes
to be flushed to stable storage, but does not cause a new
searcher to be opened to make those changes visible.
@ -324,7 +324,7 @@
</autoSoftCommit>
<!-- Update Related Event Listeners
Various IndexWriter related events can trigger Listeners to
take actions.
@ -333,10 +333,10 @@
-->
<!-- The RunExecutableListener executes an external command from a
hook such as postCommit or postOptimize.
exe - the name of the executable to run
dir - dir to use as the current working directory. (default=".")
wait - the calling thread waits until the executable returns.
wait - the calling thread waits until the executable returns.
(default="true")
args - the arguments to pass to the program. (default is none)
env - environment variables to set. (default is none)
@ -401,7 +401,7 @@
There are two implementations of cache available for Solr,
LRUCache, based on a synchronized LinkedHashMap, and
FastLRUCache, based on a ConcurrentHashMap.
FastLRUCache, based on a ConcurrentHashMap.
FastLRUCache has faster gets and slower puts in single
threaded operation and thus is generally faster than LRUCache
@ -437,7 +437,7 @@
autowarmCount="0"/>
<!-- Query Result Cache
Caches results of searches - ordered lists of document ids
(DocList) based on a query, a sort, and the range of documents requested.
Additional supported parameter by LRUCache:
@ -469,7 +469,7 @@
regenerator="solr.NoOpRegenerator" />
<!-- Field Value Cache
Cache used to hold field values that are quickly accessible
by document id. The fieldValueCache is created by default
even if not configured here.
@ -487,8 +487,8 @@
name through SolrIndexSearcher.getCache(),cacheLookup(), and
cacheInsert(). The purpose is to enable easy caching of
user/application level data. The regenerator argument should
be specified as an implementation of solr.CacheRegenerator
if autowarming is desired.
-->
<!--
<cache name="myUserCache"
@ -512,14 +512,14 @@
<enableLazyFieldLoading>true</enableLazyFieldLoading>
<!-- Use Filter For Sorted Query
A possible optimization that attempts to use a filter to
satisfy a search. If the requested sort does not include
score, then the filterCache will be checked for a filter
matching the query. If found, the filter will be used as the
source of document ids, and then the sort will be applied to
that.
For most situations, this will not be useful unless you
frequently get the same search repeatedly with different sort
options, and none of them ever use "score"
@ -529,39 +529,39 @@
-->
<!-- Result Window Size
An optimization for use with the queryResultCache. When a search
is requested, a superset of the requested number of document ids
are collected. For example, if a search for a particular query
requests matching documents 10 through 19, and queryResultWindowSize is 50,
then documents 0 through 49 will be collected and cached. Any further
requests in that range can be satisfied via the cache.
-->
<queryResultWindowSize>20</queryResultWindowSize>
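To make the window concrete, a hedged SolrJ sketch (the client URL and the
"techproducts" collection are illustrative assumptions): with queryResultWindowSize
set to 50, the second page below can be served from the window cached by the first query.

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class ResultWindowSketch {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      SolrQuery q = new SolrQuery("video");
      q.setStart(10).setRows(10);   // documents 10-19; Solr collects and caches 0-49
      client.query("techproducts", q);
      q.setStart(30);               // documents 30-39, satisfiable from the cached window
      client.query("techproducts", q);
    }
  }
}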
<!-- Maximum number of documents to cache for any entry in the
queryResultCache.
-->
<queryResultMaxDocsCached>200</queryResultMaxDocsCached>
<!-- Query Related Event Listeners
Various IndexSearcher related events can trigger Listeners to
take actions.
newSearcher - fired whenever a new searcher is being prepared
and there is a current searcher handling requests (aka
registered). It can be used to prime certain caches to
prevent long request times for certain requests.
firstSearcher - fired whenever a new searcher is being
prepared but there is no current registered searcher to handle
requests or to gain autowarming data from.
-->
<!-- QuerySenderListener takes an array of NamedList and executes a
local query request for each NamedList in sequence.
-->
<listener event="newSearcher" class="solr.QuerySenderListener">
<arr name="queries">
@ -611,19 +611,19 @@
multipartUploadLimitInKB - specifies the max size (in KiB) of
Multipart File Uploads that Solr will allow in a Request.
formdataUploadLimitInKB - specifies the max size (in KiB) of
form data (application/x-www-form-urlencoded) sent via
POST. You can use POST to pass request parameters not
fitting into the URL.
addHttpRequestToContext - if set to true, it will instruct
the requestParsers to include the original HttpServletRequest
object in the context map of the SolrQueryRequest under the
key "httpRequest". It will not be used by any of the existing
Solr components, but may be useful when developing custom
plugins.
*** WARNING ***
Before enabling remote streaming, you should make sure your
system has authentication enabled.
@ -645,21 +645,21 @@
<!-- If you include a <cacheControl> directive, it will be used to
generate a Cache-Control header (as well as an Expires header
if the value contains "max-age=")
By default, no Cache-Control header is generated.
You can use the <cacheControl> option even if you have set
never304="true"
-->
<!--
<httpCaching never304="true" >
<cacheControl>max-age=30, public</cacheControl>
</httpCaching>
-->
<!-- To enable Solr to respond with automatically generated HTTP
Caching headers, and to respond to Cache Validation requests
correctly, set the value of never304="false"
This will cause Solr to generate Last-Modified and ETag
headers based on the properties of the Index.
@ -684,12 +684,12 @@
<!--
<httpCaching lastModifiedFrom="openTime"
etagSeed="Solr">
<cacheControl>max-age=30, public</cacheControl>
</httpCaching>
-->
</requestDispatcher>
<!-- Request Handlers
http://wiki.apache.org/solr/SolrRequestHandler
@ -716,7 +716,12 @@
<lst name="defaults">
<str name="echoParams">explicit</str>
<int name="rows">10</int>
<!-- <str name="df">text</str> -->
<!-- Default search field
<str name="df">text</str>
-->
<!-- Change from JSON to XML format (the default prior to Solr 7.0)
<str name="wt">xml</str>
-->
</lst>
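Since SolrJ normally controls "wt" through its response parser, a per-request parser
override is the reliable way to get the old XML format from client code; a minimal
sketch (the client URL and collection name are assumptions for the example):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.impl.XMLResponseParser;
import org.apache.solr.client.solrj.request.QueryRequest;

public class XmlFormatSketch {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      QueryRequest req = new QueryRequest(new SolrQuery("*:*"));
      req.setResponseParser(new XMLResponseParser()); // forces wt=xml for this request only
      req.process(client, "techproducts");
    }
  }
}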
<!-- In addition to defaults, "appends" params can be specified
to identify values which should be appended to the list of
@ -783,7 +788,7 @@
<!-- A Robust Example
This example SearchHandler declaration shows off usage of the
SearchHandler with many defaults declared
@ -805,7 +810,7 @@
<!-- Solr Cell Update Request Handler
http://wiki.apache.org/solr/ExtractingRequestHandler
-->
<requestHandler name="/update/extract"
@ -820,18 +825,18 @@
<!-- Search Components
Search components are registered to SolrCore and used by
instances of SearchHandler (which can access them by name)
By default, the following components are available:
<searchComponent name="query" class="solr.QueryComponent" />
<searchComponent name="facet" class="solr.FacetComponent" />
<searchComponent name="mlt" class="solr.MoreLikeThisComponent" />
<searchComponent name="highlight" class="solr.HighlightComponent" />
<searchComponent name="stats" class="solr.StatsComponent" />
<searchComponent name="debug" class="solr.DebugComponent" />
Default configuration in a requestHandler would look like:
<arr name="components">
@ -843,28 +848,28 @@
<str>debug</str>
</arr>
If you register a searchComponent to one of the standard names,
that will be used instead of the default.
To insert components before or after the 'standard' components, use:
<arr name="first-components">
<str>myFirstComponentName</str>
</arr>
<arr name="last-components">
<str>myLastComponentName</str>
</arr>
NOTE: The component registered with the name "debug" will
always be executed after the "last-components"
-->
<!-- Spell Check
The spell check component can return a list of alternative spelling
suggestions.
http://wiki.apache.org/solr/SpellCheckComponent
-->
@ -913,7 +918,7 @@
-->
</searchComponent>
<!-- A request handler for demonstrating the spellcheck component.
NOTE: This is purely an example. The whole purpose of the
SpellCheckComponent is to hook it into the request handler that
@ -922,7 +927,7 @@
IN OTHER WORDS, THERE IS A REALLY GOOD CHANCE THAT THE SETUP BELOW IS
NOT WHAT YOU WANT FOR YOUR PRODUCTION SYSTEM!
See http://wiki.apache.org/solr/SpellCheckComponent for details
on the request parameters.
-->
@ -958,8 +963,8 @@
This is purely an example.
In reality you will likely want to add the component to your
already specified request handlers.
-->
<requestHandler name="/tvrh" class="solr.SearchHandler" startup="lazy">
<lst name="defaults">
@ -1032,8 +1037,8 @@
</lst>
</fragmenter>
<!-- A regular-expression-based fragmenter
(for sentence extraction)
-->
<fragmenter name="regex"
class="solr.highlight.RegexFragmenter">
@ -1078,7 +1083,7 @@
<fragmentsBuilder name="default"
default="true"
class="solr.highlight.ScoreOrderFragmentsBuilder">
<!--
<lst name="defaults">
<str name="hl.multiValuedSeparatorChar">/</str>
</lst>
@ -1131,19 +1136,19 @@
http://wiki.apache.org/solr/UpdateRequestProcessor
-->
<!-- Add unknown fields to the schema
Field type guessing update processors that will
attempt to parse string-typed field values as Booleans, Longs,
Doubles, or Dates, and then add schema fields with the guessed
field types. Text content will be indexed as "text_general" as
well as a copy to a plain string version in *_str.
These require that the schema is both managed and mutable, by
declaring schemaFactory as ManagedIndexSchemaFactory, with
mutable specified as true.
See http://wiki.apache.org/solr/GuessingFieldTypes
-->
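A hedged sketch of what the guessing chain enables, assuming a collection created with
the default managed, mutable schema (the collection and field names are illustrative):

import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class FieldGuessingSketch {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "guess-1");
      doc.addField("first_seen_price", 12.99); // unknown field: a numeric type is guessed and added to the schema
      client.add("gettingstarted", doc);
      client.commit("gettingstarted");
    }
  }
}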
<updateProcessor class="solr.UUIDUpdateProcessorFactory" name="uuid"/>
@ -1220,8 +1225,8 @@
on the fly based on the hash code of some other fields. This
example has overwriteDupes set to false since we are using the
id field as the signatureField and Solr will maintain
uniqueness based on that anyway.
-->
<!--
<updateRequestProcessorChain name="dedupe">
@ -1292,7 +1297,7 @@
overridden...
-->
<!--
<queryResponseWriter name="xml"
default="true"
class="solr.XMLResponseWriter" />
<queryResponseWriter name="json" class="solr.JSONResponseWriter"/>
@ -1323,7 +1328,7 @@
<!-- XSLT response writer transforms the XML output by any xslt file found
in Solr's conf/xslt directory. Changes to xslt files are checked for
every xsltCacheLifetimeSeconds.
-->
<queryResponseWriter name="xslt" class="solr.XSLTResponseWriter">
<int name="xsltCacheLifetimeSeconds">5</int>
@ -1331,7 +1336,7 @@
<!-- Query Parsers
https://cwiki.apache.org/confluence/display/solr/Query+Syntax+and+Parsing
https://lucene.apache.org/solr/guide/query-syntax-and-parsing.html
Multiple QParserPlugins can be registered by name, and then
used in either the "defType" param for the QueryComponent (used
@ -1351,7 +1356,7 @@
-->
<!-- example of registering a custom function parser -->
<!--
<valueSourceParser name="myfunc"
class="com.mycompany.MyValueSourceParser" />
-->
@ -1364,12 +1369,12 @@
<transformer name="db" class="com.mycompany.LoadFromDatabaseTransformer" >
<int name="connection">jdbc://....</int>
</transformer>
To add a constant value to all docs, use:
<transformer name="mytrans2" class="org.apache.solr.response.transform.ValueAugmenterFactory" >
<int name="value">5</int>
</transformer>
If you want the user to still be able to change it with _value:something_ use this:
<transformer name="mytrans3" class="org.apache.solr.response.transform.ValueAugmenterFactory" >
<double name="defaultValue">5</double>
View File
@ -21,6 +21,7 @@ import java.util.Collections;
import com.google.common.collect.ImmutableMap;
import org.apache.solr.common.SolrException.ErrorCode;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.core.CoreContainer;
import org.apache.solr.core.SolrCore;
import org.apache.solr.request.LocalSolrQueryRequest;
@ -109,8 +110,14 @@ public class TestCrossCoreJoin extends SolrTestCaseJ4 {
public String query(SolrCore core, SolrQueryRequest req) throws Exception {
String handler = "standard";
if (req.getParams().get("qt") != null)
if (req.getParams().get("qt") != null) {
handler = req.getParams().get("qt");
}
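// Solr 7.0 made JSON the default response format (SOLR-10494); force XML here since this test asserts against XML output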
if (req.getParams().get("wt") == null){
ModifiableSolrParams params = new ModifiableSolrParams(req.getParams());
params.set("wt", "xml");
req.setParams(params);
}
SolrQueryResponse rsp = new SolrQueryResponse();
SolrRequestInfo.setRequestInfo(new SolrRequestInfo(req, rsp));
core.execute(core.getRequestHandler(handler), req, rsp);
View File
@ -0,0 +1,101 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.solr.cloud;
import java.io.IOException;
import java.lang.invoke.MethodHandles;
import java.util.HashSet;
import java.util.Set;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;
import org.apache.solr.client.solrj.response.CollectionAdminResponse;
import org.apache.solr.common.cloud.ClusterStateUtil;
import org.apache.solr.common.cloud.DocCollection;
import org.apache.solr.common.cloud.Replica;
import org.apache.zookeeper.KeeperException;
import org.junit.BeforeClass;
import org.junit.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
* Test for backward compatibility when users upgrade from 6.x or 7.0 to 7.1
* and the collection's counter node does not yet exist in ZooKeeper.
* TODO Remove in Solr 9.0
*/
public class AssignBackwardCompatibilityTest extends SolrCloudTestCase {
private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
private static final String COLLECTION = "collection1";
@BeforeClass
public static void setupCluster() throws Exception {
configureCluster(4)
.addConfig("conf1", TEST_PATH().resolve("configsets").resolve("cloud-dynamic").resolve("conf"))
.configure();
CollectionAdminRequest.createCollection(COLLECTION, 1, 4)
.setMaxShardsPerNode(1000)
.process(cluster.getSolrClient());
}
@Test
public void test() throws IOException, SolrServerException, KeeperException, InterruptedException {
Set<String> coreNames = new HashSet<>();
Set<String> coreNodeNames = new HashSet<>();
int numOperations = random().nextInt(4 * 15);
int numLiveReplicas = 4;
boolean clearedCounter = false;
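// randomly interleave replica adds and deletes, clearing the ZK counter node at most once along the way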
for (int i = 0; i < numOperations; i++) {
boolean deleteReplica = random().nextBoolean() && numLiveReplicas > 1;
// No need to clear counter more than one time
if (random().nextInt(30) < 5 && !clearedCounter) {
// clear counter
cluster.getZkClient().delete("/collections/"+COLLECTION+"/counter", -1, true);
clearedCounter = true;
}
if (deleteReplica) {
assertTrue(ClusterStateUtil.waitForLiveAndActiveReplicaCount(
cluster.getSolrClient().getZkStateReader(), COLLECTION, numLiveReplicas, 30000));
DocCollection dc = getCollectionState(COLLECTION);
Replica replica = getRandomReplica(dc.getSlice("shard1"), (r) -> r.getState() == Replica.State.ACTIVE);
CollectionAdminRequest.deleteReplica(COLLECTION, "shard1", replica.getName()).process(cluster.getSolrClient());
numLiveReplicas--;
} else {
CollectionAdminResponse response = CollectionAdminRequest.addReplicaToShard(COLLECTION, "shard1")
.process(cluster.getSolrClient());
assertTrue(response.isSuccess());
String coreName = response.getCollectionCoresStatus()
.keySet().iterator().next();
assertFalse("Core name is not unique coreName=" + coreName + " " + coreNames, coreNames.contains(coreName));
coreNames.add(coreName);
Replica newReplica = getCollectionState(COLLECTION).getReplicas().stream()
.filter(r -> r.getCoreName().equals(coreName))
.findAny().get();
String coreNodeName = newReplica.getName();
assertFalse("Core node name is not unique", coreNodeNames.contains(coreName));
coreNodeNames.add(coreNodeName);
numLiveReplicas++;
}
}
}
}
View File
@ -16,6 +16,7 @@
*/
package org.apache.solr.cloud;
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
@ -25,9 +26,13 @@ import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import org.apache.solr.SolrTestCaseJ4;
import org.apache.solr.common.cloud.DocCollection;
import org.apache.solr.common.cloud.DocRouter;
import org.apache.solr.common.cloud.Replica;
import org.apache.solr.common.cloud.Slice;
import org.apache.solr.common.cloud.SolrZkClient;
import org.apache.solr.common.util.ExecutorUtil;
import org.apache.zookeeper.KeeperException;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
@ -65,11 +70,11 @@ public class AssignTest extends SolrTestCaseJ4 {
);
when(zkClient.getData(anyString(), any(), any(), anyBoolean())).then(invocation ->
zkClientData.get(invocation.getArgument(0)));
String nodeName = Assign.assignNode(zkClient, "collection1");
String nodeName = Assign.assignNode(zkClient, new DocCollection("collection1", new HashMap<>(), new HashMap<>(), DocRouter.DEFAULT));
assertEquals("core_node1", nodeName);
nodeName = Assign.assignNode(zkClient, "collection2");
nodeName = Assign.assignNode(zkClient, new DocCollection("collection2", new HashMap<>(), new HashMap<>(), DocRouter.DEFAULT));
assertEquals("core_node1", nodeName);
nodeName = Assign.assignNode(zkClient, "collection1");
nodeName = Assign.assignNode(zkClient, new DocCollection("collection1", new HashMap<>(), new HashMap<>(), DocRouter.DEFAULT));
assertEquals("core_node2", nodeName);
}
@ -98,7 +103,7 @@ public class AssignTest extends SolrTestCaseJ4 {
for (int i = 0; i < 1000; i++) {
futures.add(executor.submit(() -> {
String collection = collections[random().nextInt(collections.length)];
int id = Assign.incAndGetId(zkClient, collection);
int id = Assign.incAndGetId(zkClient, collection, 0);
Object val = collectionUniqueIds.get(collection).put(id, fixedValue);
if (val != null) {
fail("ZkController do not generate unique id for " + collection);
@ -120,9 +125,22 @@ public class AssignTest extends SolrTestCaseJ4 {
@Test
public void testBuildCoreName() {
assertEquals("Core name pattern changed", "collection1_shard1_replica_n1", Assign.buildCoreName("collection1", "shard1", Replica.Type.NRT, 1));
assertEquals("Core name pattern changed", "collection1_shard2_replica_p2", Assign.buildCoreName("collection1", "shard2", Replica.Type.PULL,2));
public void testBuildCoreName() throws IOException, InterruptedException, KeeperException {
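// Assign.buildCoreName now reads its per-collection counter from ZooKeeper, so the test stands up a real ZkTestServer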
String zkDir = createTempDir("zkData").toFile().getAbsolutePath();
ZkTestServer server = new ZkTestServer(zkDir);
server.run();
try (SolrZkClient zkClient = new SolrZkClient(server.getZkAddress(), 10000)) {
zkClient.makePath("/", true);
Map<String, Slice> slices = new HashMap<>();
slices.put("shard1", new Slice("shard1", new HashMap<>(), null));
slices.put("shard2", new Slice("shard2", new HashMap<>(), null));
DocCollection docCollection = new DocCollection("collection1", slices, null, DocRouter.DEFAULT);
assertEquals("Core name pattern changed", "collection1_shard1_replica_n1", Assign.buildCoreName(zkClient, docCollection, "shard1", Replica.Type.NRT));
assertEquals("Core name pattern changed", "collection1_shard2_replica_p2", Assign.buildCoreName(zkClient, docCollection, "shard2", Replica.Type.PULL));
} finally {
server.shutdown();
}
}
}
View File
@ -56,6 +56,7 @@ public class BasicDistributedZk2Test extends AbstractFullDistribZkTestBase {
private static final String ONE_NODE_COLLECTION = "onenodecollection";
private final boolean onlyLeaderIndexes = random().nextBoolean();
public BasicDistributedZk2Test() {
super();
// we need DVs on point fields to compute stats & facets
View File
@ -95,7 +95,7 @@ public class ClusterStateUpdateTest extends SolrCloudTestCase {
assertEquals(1, shards.size());
// assert this is core of container1
Replica zkProps = shards.get("core_node1");
Replica zkProps = shards.values().iterator().next();
assertNotNull(zkProps);
View File
@ -284,7 +284,7 @@ public class CollectionsAPIDistributedZkTest extends SolrCloudTestCase {
.process(cluster.getSolrClient()).getStatus());
assertTrue(CollectionAdminRequest.addReplicaToShard("halfcollectionblocker", "shard1")
.setNode(cluster.getJettySolrRunner(0).getNodeName())
.setCoreName(Assign.buildCoreName("halfcollection", "shard1", Replica.Type.NRT, 1))
.setCoreName("halfcollection_shard1_replica_n1")
.process(cluster.getSolrClient()).isSuccess());
assertEquals(0, CollectionAdminRequest.createCollection("halfcollectionblocker2", "conf",1, 1)
@ -292,7 +292,7 @@ public class CollectionsAPIDistributedZkTest extends SolrCloudTestCase {
.process(cluster.getSolrClient()).getStatus());
assertTrue(CollectionAdminRequest.addReplicaToShard("halfcollectionblocker2", "shard1")
.setNode(cluster.getJettySolrRunner(1).getNodeName())
.setCoreName(Assign.buildCoreName("halfcollection", "shard1", Replica.Type.NRT, 1))
.setCoreName("halfcollection_shard1_replica_n1")
.process(cluster.getSolrClient()).isSuccess());
String nn1 = cluster.getJettySolrRunner(0).getNodeName();
View File
@ -70,8 +70,8 @@ public class CollectionsAPISolrJTest extends SolrCloudTestCase {
assertTrue(response.isSuccess());
Map<String, NamedList<Integer>> coresStatus = response.getCollectionCoresStatus();
assertEquals(4, coresStatus.size());
for (int i=0; i<4; i++) {
NamedList<Integer> status = coresStatus.get(Assign.buildCoreName(collectionName, "shard" + (i/2+1), Replica.Type.NRT, (i%2+1)));
for (String coreName : coresStatus.keySet()) {
NamedList<Integer> status = coresStatus.get(coreName);
assertEquals(0, (int)status.get("status"));
assertTrue(status.get("QTime") > 0);
}
@ -98,8 +98,8 @@ public class CollectionsAPISolrJTest extends SolrCloudTestCase {
assertTrue(response.isSuccess());
Map<String, NamedList<Integer>> coresStatus = response.getCollectionCoresStatus();
assertEquals(4, coresStatus.size());
for (int i=0; i<4; i++) {
NamedList<Integer> status = coresStatus.get(Assign.buildCoreName(collectionName, "shard" + (i/2+1), Replica.Type.NRT, (i%2+1)));
for (String coreName : coresStatus.keySet()) {
NamedList<Integer> status = coresStatus.get(coreName);
assertEquals(0, (int)status.get("status"));
assertTrue(status.get("QTime") > 0);
}
@ -168,9 +168,18 @@ public class CollectionsAPISolrJTest extends SolrCloudTestCase {
assertTrue(response.isSuccess());
coresStatus = response.getCollectionCoresStatus();
assertEquals(3, coresStatus.size());
assertEquals(0, (int) coresStatus.get(Assign.buildCoreName(collectionName, "shardC", Replica.Type.NRT, 1)).get("status"));
assertEquals(0, (int) coresStatus.get(Assign.buildCoreName(collectionName, "shardC", Replica.Type.TLOG, 1)).get("status"));
assertEquals(0, (int) coresStatus.get(Assign.buildCoreName(collectionName, "shardC", Replica.Type.PULL, 1)).get("status"));
int replicaTlog = 0;
int replicaNrt = 0;
int replicaPull = 0;
for (String coreName : coresStatus.keySet()) {
assertEquals(0, (int) coresStatus.get(coreName).get("status"));
if (coreName.contains("shardC_replica_t")) replicaTlog++;
else if (coreName.contains("shardC_replica_n")) replicaNrt++;
else replicaPull++;
}
assertEquals(1, replicaNrt);
assertEquals(1, replicaTlog);
assertEquals(1, replicaPull);
response = CollectionAdminRequest.deleteShard(collectionName, "shardC").process(cluster.getSolrClient());
@ -208,8 +217,15 @@ public class CollectionsAPISolrJTest extends SolrCloudTestCase {
assertEquals(0, response.getStatus());
assertTrue(response.isSuccess());
Map<String, NamedList<Integer>> coresStatus = response.getCollectionCoresStatus();
assertEquals(0, (int) coresStatus.get(Assign.buildCoreName(collectionName, "shard1_0" , Replica.Type.NRT, 1)).get("status"));
assertEquals(0, (int) coresStatus.get(Assign.buildCoreName(collectionName, "shard1_1" , Replica.Type.NRT, 1)).get("status"));
int shard10 = 0;
int shard11 = 0;
for (String coreName : coresStatus.keySet()) {
assertEquals(0, (int) coresStatus.get(coreName).get("status"));
if (coreName.contains("_shard1_0")) shard10++;
else shard11++;
}
assertEquals(1, shard10);
assertEquals(1, shard11);
waitForState("Expected all shards to be active and parent shard to be removed", collectionName, (n, c) -> {
if (c.getSlice("shard1").getState() == Slice.State.ACTIVE)
@ -254,7 +270,7 @@ public class CollectionsAPISolrJTest extends SolrCloudTestCase {
DocCollection testCollection = getCollectionState(collectionName);
Replica replica1 = testCollection.getReplica("core_node1");
Replica replica1 = testCollection.getReplicas().iterator().next();
CoreStatus coreStatus = getCoreStatus(replica1);
assertEquals(Paths.get(coreStatus.getDataDirectory()).toString(), dataDir.toString());
View File
@ -237,6 +237,14 @@ public class OverseerCollectionConfigSetProcessorTest extends SolrTestCaseJ4 {
});
when(clusterStateMock.getLiveNodes()).thenReturn(liveNodes);
Map<String, byte[]> zkClientData = new HashMap<>();
when(solrZkClientMock.setData(anyString(), any(), anyInt(), anyBoolean())).then(invocation -> {
zkClientData.put(invocation.getArgument(0), invocation.getArgument(1));
return null;
}
);
when(solrZkClientMock.getData(anyString(), any(), any(), anyBoolean())).then(invocation ->
zkClientData.get(invocation.getArgument(0)));
when(solrZkClientMock.create(any(), any(), any(), anyBoolean())).thenAnswer(invocation -> {
String key = invocation.getArgument(0);
zkMap.put(key, null);
@ -376,9 +384,7 @@ public class OverseerCollectionConfigSetProcessorTest extends SolrTestCaseJ4 {
assertEquals(numberOfSlices * numberOfReplica, coreNames.size());
for (int i = 1; i <= numberOfSlices; i++) {
for (int j = 1; j <= numberOfReplica; j++) {
String coreName = Assign.buildCoreName(COLLECTION_NAME, "shard" + i, Replica.Type.NRT, j);
assertTrue("Shard " + coreName + " was not created",
coreNames.contains(coreName));
String coreName = coreNames.get((i-1) * numberOfReplica + (j-1));
if (dontShuffleCreateNodeSet) {
final String expectedNodeName = nodeUrlWithoutProtocolPartForLiveNodes.get((numberOfReplica * (i - 1) + (j - 1)) % nodeUrlWithoutProtocolPartForLiveNodes.size());
View File
@ -92,8 +92,8 @@ class SegmentTerminateEarlyTestState {
}
void queryTimestampDescending(CloudSolrClient cloudSolrClient) throws Exception {
TestMiniSolrCloudCluster.assertFalse(maxTimestampDocKeys.isEmpty());
TestMiniSolrCloudCluster.assertTrue("numDocs="+numDocs+" is not even", (numDocs%2)==0);
TestSegmentSorting.assertFalse(maxTimestampDocKeys.isEmpty());
TestSegmentSorting.assertTrue("numDocs="+numDocs+" is not even", (numDocs%2)==0);
final Long oddFieldValue = new Long(maxTimestampDocKeys.iterator().next().intValue()%2);
final SolrQuery query = new SolrQuery(ODD_FIELD +":"+oddFieldValue);
query.setSort(TIMESTAMP_FIELD, SolrQuery.ORDER.desc);
@ -102,24 +102,24 @@ class SegmentTerminateEarlyTestState {
// CommonParams.SEGMENT_TERMINATE_EARLY parameter intentionally absent
final QueryResponse rsp = cloudSolrClient.query(query);
// check correctness of the results count
TestMiniSolrCloudCluster.assertEquals("numFound", numDocs/2, rsp.getResults().getNumFound());
TestSegmentSorting.assertEquals("numFound", numDocs/2, rsp.getResults().getNumFound());
// check correctness of the first result
if (rsp.getResults().getNumFound() > 0) {
final SolrDocument solrDocument0 = rsp.getResults().get(0);
final Integer idAsInt = Integer.parseInt(solrDocument0.getFieldValue(KEY_FIELD).toString());
TestMiniSolrCloudCluster.assertTrue
TestSegmentSorting.assertTrue
(KEY_FIELD +"="+idAsInt+" of ("+solrDocument0+") is not in maxTimestampDocKeys("+maxTimestampDocKeys+")",
maxTimestampDocKeys.contains(idAsInt));
TestMiniSolrCloudCluster.assertEquals(ODD_FIELD, oddFieldValue, solrDocument0.getFieldValue(ODD_FIELD));
TestSegmentSorting.assertEquals(ODD_FIELD, oddFieldValue, solrDocument0.getFieldValue(ODD_FIELD));
}
// check segmentTerminatedEarly flag
TestMiniSolrCloudCluster.assertNull("responseHeader.segmentTerminatedEarly present in "+rsp.getResponseHeader(),
TestSegmentSorting.assertNull("responseHeader.segmentTerminatedEarly present in "+rsp.getResponseHeader(),
rsp.getResponseHeader().get(SolrQueryResponse.RESPONSE_HEADER_SEGMENT_TERMINATED_EARLY_KEY));
}
void queryTimestampDescendingSegmentTerminateEarlyYes(CloudSolrClient cloudSolrClient) throws Exception {
TestMiniSolrCloudCluster.assertFalse(maxTimestampDocKeys.isEmpty());
TestMiniSolrCloudCluster.assertTrue("numDocs="+numDocs+" is not even", (numDocs%2)==0);
TestSegmentSorting.assertFalse(maxTimestampDocKeys.isEmpty());
TestSegmentSorting.assertTrue("numDocs="+numDocs+" is not even", (numDocs%2)==0);
final Long oddFieldValue = new Long(maxTimestampDocKeys.iterator().next().intValue()%2);
final SolrQuery query = new SolrQuery(ODD_FIELD +":"+oddFieldValue);
query.setSort(TIMESTAMP_FIELD, SolrQuery.ORDER.desc);
@ -133,28 +133,28 @@ class SegmentTerminateEarlyTestState {
query.set(CommonParams.SEGMENT_TERMINATE_EARLY, true);
final QueryResponse rsp = cloudSolrClient.query(query);
// check correctness of the results count
TestMiniSolrCloudCluster.assertTrue("numFound", rowsWanted <= rsp.getResults().getNumFound());
TestMiniSolrCloudCluster.assertTrue("numFound", rsp.getResults().getNumFound() <= numDocs/2);
TestSegmentSorting.assertTrue("numFound", rowsWanted <= rsp.getResults().getNumFound());
TestSegmentSorting.assertTrue("numFound", rsp.getResults().getNumFound() <= numDocs/2);
// check correctness of the first result
if (rsp.getResults().getNumFound() > 0) {
final SolrDocument solrDocument0 = rsp.getResults().get(0);
final Integer idAsInt = Integer.parseInt(solrDocument0.getFieldValue(KEY_FIELD).toString());
TestMiniSolrCloudCluster.assertTrue
TestSegmentSorting.assertTrue
(KEY_FIELD +"="+idAsInt+" of ("+solrDocument0+") is not in maxTimestampDocKeys("+maxTimestampDocKeys+")",
maxTimestampDocKeys.contains(idAsInt));
TestMiniSolrCloudCluster.assertEquals(ODD_FIELD, oddFieldValue, rsp.getResults().get(0).getFieldValue(ODD_FIELD));
TestSegmentSorting.assertEquals(ODD_FIELD, oddFieldValue, rsp.getResults().get(0).getFieldValue(ODD_FIELD));
}
// check segmentTerminatedEarly flag
TestMiniSolrCloudCluster.assertNotNull("responseHeader.segmentTerminatedEarly missing in "+rsp.getResponseHeader(),
TestSegmentSorting.assertNotNull("responseHeader.segmentTerminatedEarly missing in "+rsp.getResponseHeader(),
rsp.getResponseHeader().get(SolrQueryResponse.RESPONSE_HEADER_SEGMENT_TERMINATED_EARLY_KEY));
TestMiniSolrCloudCluster.assertTrue("responseHeader.segmentTerminatedEarly missing/false in "+rsp.getResponseHeader(),
TestSegmentSorting.assertTrue("responseHeader.segmentTerminatedEarly missing/false in "+rsp.getResponseHeader(),
Boolean.TRUE.equals(rsp.getResponseHeader().get(SolrQueryResponse.RESPONSE_HEADER_SEGMENT_TERMINATED_EARLY_KEY)));
// check shards info
final Object shardsInfo = rsp.getResponse().get(ShardParams.SHARDS_INFO);
if (!Boolean.TRUE.equals(shardsInfoWanted)) {
TestMiniSolrCloudCluster.assertNull(ShardParams.SHARDS_INFO, shardsInfo);
TestSegmentSorting.assertNull(ShardParams.SHARDS_INFO, shardsInfo);
} else {
TestMiniSolrCloudCluster.assertNotNull(ShardParams.SHARDS_INFO, shardsInfo);
TestSegmentSorting.assertNotNull(ShardParams.SHARDS_INFO, shardsInfo);
int segmentTerminatedEarlyShardsCount = 0;
for (Map.Entry<String, ?> si : (SimpleOrderedMap<?>)shardsInfo) {
if (Boolean.TRUE.equals(((SimpleOrderedMap)si.getValue()).get(SolrQueryResponse.RESPONSE_HEADER_SEGMENT_TERMINATED_EARLY_KEY))) {
@ -162,14 +162,14 @@ class SegmentTerminateEarlyTestState {
}
}
// check segmentTerminatedEarly flag within shards info
TestMiniSolrCloudCluster.assertTrue(segmentTerminatedEarlyShardsCount+" shards reported "+SolrQueryResponse.RESPONSE_HEADER_SEGMENT_TERMINATED_EARLY_KEY,
TestSegmentSorting.assertTrue(segmentTerminatedEarlyShardsCount+" shards reported "+SolrQueryResponse.RESPONSE_HEADER_SEGMENT_TERMINATED_EARLY_KEY,
(0<segmentTerminatedEarlyShardsCount));
}
}
void queryTimestampDescendingSegmentTerminateEarlyNo(CloudSolrClient cloudSolrClient) throws Exception {
TestMiniSolrCloudCluster.assertFalse(maxTimestampDocKeys.isEmpty());
TestMiniSolrCloudCluster.assertTrue("numDocs="+numDocs+" is not even", (numDocs%2)==0);
TestSegmentSorting.assertFalse(maxTimestampDocKeys.isEmpty());
TestSegmentSorting.assertTrue("numDocs="+numDocs+" is not even", (numDocs%2)==0);
final Long oddFieldValue = new Long(maxTimestampDocKeys.iterator().next().intValue()%2);
final SolrQuery query = new SolrQuery(ODD_FIELD +":"+oddFieldValue);
query.setSort(TIMESTAMP_FIELD, SolrQuery.ORDER.desc);
@ -182,71 +182,71 @@ class SegmentTerminateEarlyTestState {
query.set(CommonParams.SEGMENT_TERMINATE_EARLY, false);
final QueryResponse rsp = cloudSolrClient.query(query);
// check correctness of the results count
TestMiniSolrCloudCluster.assertEquals("numFound", numDocs/2, rsp.getResults().getNumFound());
TestSegmentSorting.assertEquals("numFound", numDocs/2, rsp.getResults().getNumFound());
// check correctness of the first result
if (rsp.getResults().getNumFound() > 0) {
final SolrDocument solrDocument0 = rsp.getResults().get(0);
final Integer idAsInt = Integer.parseInt(solrDocument0.getFieldValue(KEY_FIELD).toString());
TestMiniSolrCloudCluster.assertTrue
TestSegmentSorting.assertTrue
(KEY_FIELD +"="+idAsInt+" of ("+solrDocument0+") is not in maxTimestampDocKeys("+maxTimestampDocKeys+")",
maxTimestampDocKeys.contains(idAsInt));
TestMiniSolrCloudCluster.assertEquals(ODD_FIELD, oddFieldValue, rsp.getResults().get(0).getFieldValue(ODD_FIELD));
TestSegmentSorting.assertEquals(ODD_FIELD, oddFieldValue, rsp.getResults().get(0).getFieldValue(ODD_FIELD));
}
// check segmentTerminatedEarly flag
TestMiniSolrCloudCluster.assertNull("responseHeader.segmentTerminatedEarly present in "+rsp.getResponseHeader(),
TestSegmentSorting.assertNull("responseHeader.segmentTerminatedEarly present in "+rsp.getResponseHeader(),
rsp.getResponseHeader().get(SolrQueryResponse.RESPONSE_HEADER_SEGMENT_TERMINATED_EARLY_KEY));
TestMiniSolrCloudCluster.assertFalse("responseHeader.segmentTerminatedEarly present/true in "+rsp.getResponseHeader(),
TestSegmentSorting.assertFalse("responseHeader.segmentTerminatedEarly present/true in "+rsp.getResponseHeader(),
Boolean.TRUE.equals(rsp.getResponseHeader().get(SolrQueryResponse.RESPONSE_HEADER_SEGMENT_TERMINATED_EARLY_KEY)));
// check shards info
final Object shardsInfo = rsp.getResponse().get(ShardParams.SHARDS_INFO);
if (!Boolean.TRUE.equals(shardsInfoWanted)) {
TestMiniSolrCloudCluster.assertNull(ShardParams.SHARDS_INFO, shardsInfo);
TestSegmentSorting.assertNull(ShardParams.SHARDS_INFO, shardsInfo);
} else {
TestMiniSolrCloudCluster.assertNotNull(ShardParams.SHARDS_INFO, shardsInfo);
TestSegmentSorting.assertNotNull(ShardParams.SHARDS_INFO, shardsInfo);
int segmentTerminatedEarlyShardsCount = 0;
for (Map.Entry<String, ?> si : (SimpleOrderedMap<?>)shardsInfo) {
if (Boolean.TRUE.equals(((SimpleOrderedMap)si.getValue()).get(SolrQueryResponse.RESPONSE_HEADER_SEGMENT_TERMINATED_EARLY_KEY))) {
segmentTerminatedEarlyShardsCount += 1;
}
}
TestMiniSolrCloudCluster.assertEquals("shards reporting "+SolrQueryResponse.RESPONSE_HEADER_SEGMENT_TERMINATED_EARLY_KEY,
TestSegmentSorting.assertEquals("shards reporting "+SolrQueryResponse.RESPONSE_HEADER_SEGMENT_TERMINATED_EARLY_KEY,
0, segmentTerminatedEarlyShardsCount);
}
}
void queryTimestampDescendingSegmentTerminateEarlyYesGrouped(CloudSolrClient cloudSolrClient) throws Exception {
TestMiniSolrCloudCluster.assertFalse(maxTimestampDocKeys.isEmpty());
TestMiniSolrCloudCluster.assertTrue("numDocs="+numDocs+" is not even", (numDocs%2)==0);
TestSegmentSorting.assertFalse(maxTimestampDocKeys.isEmpty());
TestSegmentSorting.assertTrue("numDocs="+numDocs+" is not even", (numDocs%2)==0);
final Long oddFieldValue = new Long(maxTimestampDocKeys.iterator().next().intValue()%2);
final SolrQuery query = new SolrQuery(ODD_FIELD +":"+oddFieldValue);
query.setSort(TIMESTAMP_FIELD, SolrQuery.ORDER.desc);
query.setFields(KEY_FIELD, ODD_FIELD, TIMESTAMP_FIELD);
query.setRows(1);
query.set(CommonParams.SEGMENT_TERMINATE_EARLY, true);
TestMiniSolrCloudCluster.assertTrue("numDocs="+numDocs+" is not quad-able", (numDocs%4)==0);
TestSegmentSorting.assertTrue("numDocs="+numDocs+" is not quad-able", (numDocs%4)==0);
query.add("group.field", QUAD_FIELD);
query.set("group", true);
final QueryResponse rsp = cloudSolrClient.query(query);
// check correctness of the results count
TestMiniSolrCloudCluster.assertEquals("matches", numDocs/2, rsp.getGroupResponse().getValues().get(0).getMatches());
TestSegmentSorting.assertEquals("matches", numDocs/2, rsp.getGroupResponse().getValues().get(0).getMatches());
// check correctness of the first result
if (rsp.getGroupResponse().getValues().get(0).getMatches() > 0) {
final SolrDocument solrDocument = rsp.getGroupResponse().getValues().get(0).getValues().get(0).getResult().get(0);
final Integer idAsInt = Integer.parseInt(solrDocument.getFieldValue(KEY_FIELD).toString());
TestMiniSolrCloudCluster.assertTrue
TestSegmentSorting.assertTrue
(KEY_FIELD +"="+idAsInt+" of ("+solrDocument+") is not in maxTimestampDocKeys("+maxTimestampDocKeys+")",
maxTimestampDocKeys.contains(idAsInt));
TestMiniSolrCloudCluster.assertEquals(ODD_FIELD, oddFieldValue, solrDocument.getFieldValue(ODD_FIELD));
TestSegmentSorting.assertEquals(ODD_FIELD, oddFieldValue, solrDocument.getFieldValue(ODD_FIELD));
}
// check segmentTerminatedEarly flag
// at present segmentTerminateEarly cannot be used with grouped queries
TestMiniSolrCloudCluster.assertFalse("responseHeader.segmentTerminatedEarly present/true in "+rsp.getResponseHeader(),
TestSegmentSorting.assertFalse("responseHeader.segmentTerminatedEarly present/true in "+rsp.getResponseHeader(),
Boolean.TRUE.equals(rsp.getResponseHeader().get(SolrQueryResponse.RESPONSE_HEADER_SEGMENT_TERMINATED_EARLY_KEY)));
}
void queryTimestampAscendingSegmentTerminateEarlyYes(CloudSolrClient cloudSolrClient) throws Exception {
TestMiniSolrCloudCluster.assertFalse(minTimestampDocKeys.isEmpty());
TestMiniSolrCloudCluster.assertTrue("numDocs="+numDocs+" is not even", (numDocs%2)==0);
TestSegmentSorting.assertFalse(minTimestampDocKeys.isEmpty());
TestSegmentSorting.assertTrue("numDocs="+numDocs+" is not even", (numDocs%2)==0);
final Long oddFieldValue = new Long(minTimestampDocKeys.iterator().next().intValue()%2);
final SolrQuery query = new SolrQuery(ODD_FIELD +":"+oddFieldValue);
query.setSort(TIMESTAMP_FIELD, SolrQuery.ORDER.asc); // a sort order that is _not_ compatible with the merge sort order
@ -255,21 +255,21 @@ class SegmentTerminateEarlyTestState {
query.set(CommonParams.SEGMENT_TERMINATE_EARLY, true);
final QueryResponse rsp = cloudSolrClient.query(query);
// check correctness of the results count
TestMiniSolrCloudCluster.assertEquals("numFound", numDocs/2, rsp.getResults().getNumFound());
TestSegmentSorting.assertEquals("numFound", numDocs/2, rsp.getResults().getNumFound());
// check correctness of the first result
if (rsp.getResults().getNumFound() > 0) {
final SolrDocument solrDocument0 = rsp.getResults().get(0);
final Integer idAsInt = Integer.parseInt(solrDocument0.getFieldValue(KEY_FIELD).toString());
TestMiniSolrCloudCluster.assertTrue
TestSegmentSorting.assertTrue
(KEY_FIELD +"="+idAsInt+" of ("+solrDocument0+") is not in minTimestampDocKeys("+minTimestampDocKeys+")",
minTimestampDocKeys.contains(idAsInt));
TestMiniSolrCloudCluster.assertEquals(ODD_FIELD, oddFieldValue, solrDocument0.getFieldValue(ODD_FIELD));
TestSegmentSorting.assertEquals(ODD_FIELD, oddFieldValue, solrDocument0.getFieldValue(ODD_FIELD));
}
// check segmentTerminatedEarly flag
TestMiniSolrCloudCluster.assertNotNull("responseHeader.segmentTerminatedEarly missing in "+rsp.getResponseHeader(),
TestSegmentSorting.assertNotNull("responseHeader.segmentTerminatedEarly missing in "+rsp.getResponseHeader(),
rsp.getResponseHeader().get(SolrQueryResponse.RESPONSE_HEADER_SEGMENT_TERMINATED_EARLY_KEY));
// segmentTerminateEarly cannot be used with incompatible sort orders
TestMiniSolrCloudCluster.assertTrue("responseHeader.segmentTerminatedEarly missing/true in "+rsp.getResponseHeader(),
TestSegmentSorting.assertTrue("responseHeader.segmentTerminatedEarly missing/true in "+rsp.getResponseHeader(),
Boolean.FALSE.equals(rsp.getResponseHeader().get(SolrQueryResponse.RESPONSE_HEADER_SEGMENT_TERMINATED_EARLY_KEY)));
}
}
View File
@ -265,7 +265,7 @@ public class SolrCloudExampleTest extends AbstractFullDistribZkTestBase {
DocCollection coll = cloudClient.getZkStateReader().getClusterState().getCollection(collection);
for (Slice slice : coll.getActiveSlices()) {
for (Replica replica : slice.getReplicas()) {
String uri = "" + replica.get(ZkStateReader.BASE_URL_PROP) + "/" + replica.get(ZkStateReader.CORE_NAME_PROP) + "/config?wt=json";
String uri = "" + replica.get(ZkStateReader.BASE_URL_PROP) + "/" + replica.get(ZkStateReader.CORE_NAME_PROP) + "/config";
Map respMap = getAsMap(cloudClient, uri);
Long maxTime = (Long) (getObjectByPath(respMap, true, asList("config", "updateHandler", "autoSoftCommit", "maxTime")));
ret.put(replica.getCoreName(), maxTime);
View File
@ -21,42 +21,21 @@ import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.lang.invoke.MethodHandles;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule;
import org.apache.http.HttpException;
import org.apache.http.HttpRequest;
import org.apache.http.HttpRequestInterceptor;
import org.apache.http.protocol.HttpContext;
import org.apache.lucene.util.LuceneTestCase;
import org.apache.lucene.util.LuceneTestCase.SuppressSysoutChecks;
import org.apache.solr.SolrTestCaseJ4;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.embedded.JettyConfig;
import org.apache.solr.client.solrj.embedded.JettySolrRunner;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.impl.HttpClientUtil;
import org.apache.solr.client.solrj.impl.SolrHttpClientBuilder;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrInputDocument;
import org.apache.solr.common.cloud.ZkStateReader;
import org.apache.solr.core.CoreDescriptor;
import org.apache.solr.index.TieredMergePolicyFactory;
import org.apache.solr.security.AuthenticationPlugin;
import org.apache.solr.security.HttpClientBuilderPlugin;
import org.apache.solr.util.RevertDefaultThreadHandlerRule;
import org.junit.After;
import org.junit.Before;
import org.junit.ClassRule;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.RuleChain;
import org.junit.rules.TestRule;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@ -64,30 +43,23 @@ import org.slf4j.LoggerFactory;
* Test of the MiniSolrCloudCluster functionality with authentication enabled.
*/
@LuceneTestCase.Slow
@SuppressSysoutChecks(bugUrl = "Solr logs to JUL")
public class TestAuthenticationFramework extends LuceneTestCase {
public class TestAuthenticationFramework extends SolrCloudTestCase {
private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
private int NUM_SERVERS = 5;
private int NUM_SHARDS = 2;
private int REPLICATION_FACTOR = 2;
private static final int numShards = 2;
private static final int numReplicas = 2;
private static final int maxShardsPerNode = 2;
private static final int nodeCount = (numShards*numReplicas + (maxShardsPerNode-1))/maxShardsPerNode;
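// just enough nodes to host numShards * numReplicas cores given maxShardsPerNode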
private static final String configName = "solrCloudCollectionConfig";
private static final String collectionName = "testcollection";
static String requestUsername = MockAuthenticationPlugin.expectedUsername;
static String requestPassword = MockAuthenticationPlugin.expectedPassword;
@Rule
public TestRule solrTestRules = RuleChain
.outerRule(new SystemPropertiesRestoreRule());
@ClassRule
public static TestRule solrClassRules = RuleChain.outerRule(
new SystemPropertiesRestoreRule()).around(
new RevertDefaultThreadHandlerRule());
@Before
@Override
public void setUp() throws Exception {
SolrTestCaseJ4.randomizeNumericTypesProperties(); // SOLR-10916
setupAuthenticationPlugin();
configureCluster(nodeCount).addConfig(configName, configset("cloud-minimal")).configure();
super.setUp();
}
@ -99,120 +71,67 @@ public class TestAuthenticationFramework extends LuceneTestCase {
@Test
public void testBasics() throws Exception {
collectionCreateSearchDeleteTwice();
MiniSolrCloudCluster miniCluster = createMiniSolrCloudCluster();
MockAuthenticationPlugin.expectedUsername = "solr";
MockAuthenticationPlugin.expectedPassword = "s0lrRocks";
// Should fail with 401
try {
// Should pass
collectionCreateSearchDelete(miniCluster);
MockAuthenticationPlugin.expectedUsername = "solr";
MockAuthenticationPlugin.expectedPassword = "s0lrRocks";
// Should fail with 401
try {
collectionCreateSearchDelete(miniCluster);
collectionCreateSearchDeleteTwice();
fail("Should've returned a 401 error");
} catch (Exception ex) {
if (!ex.getMessage().contains("Error 401")) {
fail("Should've returned a 401 error");
} catch (Exception ex) {
if (!ex.getMessage().contains("Error 401")) {
fail("Should've returned a 401 error");
}
} finally {
MockAuthenticationPlugin.expectedUsername = null;
MockAuthenticationPlugin.expectedPassword = null;
}
} finally {
miniCluster.shutdown();
MockAuthenticationPlugin.expectedUsername = null;
MockAuthenticationPlugin.expectedPassword = null;
}
}
@After
@Override
public void tearDown() throws Exception {
SolrTestCaseJ4.clearNumericTypesProperties(); // SOLR-10916
System.clearProperty("authenticationPlugin");
super.tearDown();
}
private MiniSolrCloudCluster createMiniSolrCloudCluster() throws Exception {
JettyConfig.Builder jettyConfig = JettyConfig.builder();
jettyConfig.waitForLoadingCoresToFinish(null);
return new MiniSolrCloudCluster(NUM_SERVERS, createTempDir(), jettyConfig.build());
}
private void createCollection(MiniSolrCloudCluster miniCluster, String collectionName, String asyncId)
private void createCollection(String collectionName)
throws Exception {
String configName = "solrCloudCollectionConfig";
miniCluster.uploadConfigSet(SolrTestCaseJ4.TEST_PATH().resolve("collection1/conf"), configName);
final boolean persistIndex = random().nextBoolean();
Map<String, String> collectionProperties = new HashMap<>();
collectionProperties.putIfAbsent(CoreDescriptor.CORE_CONFIG, "solrconfig-tlog.xml");
collectionProperties.putIfAbsent("solr.tests.maxBufferedDocs", "100000");
collectionProperties.putIfAbsent("solr.tests.ramBufferSizeMB", "100");
// use non-test classes so RandomizedRunner isn't necessary
collectionProperties.putIfAbsent(SolrTestCaseJ4.SYSTEM_PROPERTY_SOLR_TESTS_MERGEPOLICYFACTORY, TieredMergePolicyFactory.class.getName());
collectionProperties.putIfAbsent("solr.tests.mergeScheduler", "org.apache.lucene.index.ConcurrentMergeScheduler");
collectionProperties.putIfAbsent("solr.directoryFactory", (persistIndex ? "solr.StandardDirectoryFactory" : "solr.RAMDirectoryFactory"));
if (asyncId == null) {
CollectionAdminRequest.createCollection(collectionName, configName, NUM_SHARDS, REPLICATION_FACTOR)
.setProperties(collectionProperties)
.process(miniCluster.getSolrClient());
if (random().nextBoolean()) { // process asynchronously
CollectionAdminRequest.createCollection(collectionName, configName, numShards, numReplicas)
.setMaxShardsPerNode(maxShardsPerNode)
.processAndWait(cluster.getSolrClient(), 90);
}
else {
CollectionAdminRequest.createCollection(collectionName, configName, NUM_SHARDS, REPLICATION_FACTOR)
.setProperties(collectionProperties)
.processAndWait(miniCluster.getSolrClient(), 30);
CollectionAdminRequest.createCollection(collectionName, configName, numShards, numReplicas)
.setMaxShardsPerNode(maxShardsPerNode)
.process(cluster.getSolrClient());
}
AbstractDistribZkTestBase.waitForRecoveriesToFinish
(collectionName, cluster.getSolrClient().getZkStateReader(), true, true, 330);
}
public void collectionCreateSearchDelete(MiniSolrCloudCluster miniCluster) throws Exception {
public void collectionCreateSearchDeleteTwice() throws Exception {
final CloudSolrClient client = cluster.getSolrClient();
final String collectionName = "testcollection";
for (int i = 0 ; i < 2 ; ++i) {
// create collection
createCollection(collectionName);
final CloudSolrClient cloudSolrClient = miniCluster.getSolrClient();
// check that there's no left-over state
assertEquals(0, client.query(collectionName, new SolrQuery("*:*")).getResults().getNumFound());
assertNotNull(miniCluster.getZkServer());
List<JettySolrRunner> jettys = miniCluster.getJettySolrRunners();
assertEquals(NUM_SERVERS, jettys.size());
for (JettySolrRunner jetty : jettys) {
assertTrue(jetty.isRunning());
// modify/query collection
new UpdateRequest().add("id", "1").commit(client, collectionName);
QueryResponse rsp = client.query(collectionName, new SolrQuery("*:*"));
assertEquals(1, rsp.getResults().getNumFound());
// delete the collection
CollectionAdminRequest.deleteCollection(collectionName).process(client);
AbstractDistribZkTestBase.waitForCollectionToDisappear
(collectionName, client.getZkStateReader(), true, true, 330);
}
// create collection
log.info("#### Creating a collection");
final String asyncId = (random().nextBoolean() ? null : "asyncId("+collectionName+".create)="+random().nextInt());
createCollection(miniCluster, collectionName, asyncId);
ZkStateReader zkStateReader = miniCluster.getSolrClient().getZkStateReader();
AbstractDistribZkTestBase.waitForRecoveriesToFinish(collectionName, zkStateReader, true, true, 330);
// modify/query collection
log.info("#### updating a querying collection");
cloudSolrClient.setDefaultCollection(collectionName);
SolrInputDocument doc = new SolrInputDocument();
doc.setField("id", "1");
cloudSolrClient.add(doc);
cloudSolrClient.commit();
SolrQuery query = new SolrQuery();
query.setQuery("*:*");
QueryResponse rsp = cloudSolrClient.query(query);
assertEquals(1, rsp.getResults().getNumFound());
// delete the collection we created earlier
CollectionAdminRequest.deleteCollection(collectionName).process(miniCluster.getSolrClient());
// create it again
String asyncId2 = (random().nextBoolean() ? null : "asyncId("+collectionName+".create)="+random().nextInt());
createCollection(miniCluster, collectionName, asyncId2);
AbstractDistribZkTestBase.waitForRecoveriesToFinish(collectionName, zkStateReader, true, true, 330);
// check that there's no left-over state
assertEquals(0, cloudSolrClient.query(new SolrQuery("*:*")).getResults().getNumFound());
cloudSolrClient.add(doc);
cloudSolrClient.commit();
assertEquals(1, cloudSolrClient.query(new SolrQuery("*:*")).getResults().getNumFound());
}
public static class MockAuthenticationPlugin extends AuthenticationPlugin implements HttpClientBuilderPlugin {
@ -245,12 +164,9 @@ public class TestAuthenticationFramework extends LuceneTestCase {
@Override
public SolrHttpClientBuilder getHttpClientBuilder(SolrHttpClientBuilder httpClientBuilder) {
interceptor = new HttpRequestInterceptor() {
@Override
public void process(HttpRequest req, HttpContext rsp) throws HttpException, IOException {
req.addHeader("username", requestUsername);
req.addHeader("password", requestPassword);
}
interceptor = (req, rsp) -> {
req.addHeader("username", requestUsername);
req.addHeader("password", requestPassword);
};
HttpClientUtil.addRequestInterceptor(interceptor);
View File
@ -0,0 +1,295 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.solr.cloud;
import java.lang.invoke.MethodHandles;
import java.net.URL;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import org.apache.lucene.util.LuceneTestCase;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.embedded.JettySolrRunner;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrInputDocument;
import org.apache.solr.common.cloud.ClusterState;
import org.apache.solr.common.cloud.DocCollection;
import org.apache.solr.common.cloud.Replica;
import org.apache.solr.common.cloud.Slice;
import org.apache.solr.common.cloud.ZkStateReader;
import org.junit.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
* Test of the Collections API with the MiniSolrCloudCluster.
*/
@LuceneTestCase.Slow
public class TestCollectionsAPIViaSolrCloudCluster extends SolrCloudTestCase {
private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
private static final int numShards = 2;
private static final int numReplicas = 2;
private static final int maxShardsPerNode = 1;
private static final int nodeCount = 5;
private static final String configName = "solrCloudCollectionConfig";
private static final Map<String,String> collectionProperties // ensure indexes survive core shutdown
= Collections.singletonMap("solr.directoryFactory", "solr.StandardDirectoryFactory");
@Override
public void setUp() throws Exception {
configureCluster(nodeCount).addConfig(configName, configset("cloud-minimal")).configure();
super.setUp();
}
@Override
public void tearDown() throws Exception {
cluster.shutdown();
super.tearDown();
}
private void createCollection(String collectionName, String createNodeSet) throws Exception {
if (random().nextBoolean()) { // process asynchronously
CollectionAdminRequest.createCollection(collectionName, configName, numShards, numReplicas)
.setMaxShardsPerNode(maxShardsPerNode)
.setCreateNodeSet(createNodeSet)
.setProperties(collectionProperties)
.processAndWait(cluster.getSolrClient(), 30);
}
else {
CollectionAdminRequest.createCollection(collectionName, configName, numShards, numReplicas)
.setMaxShardsPerNode(maxShardsPerNode)
.setCreateNodeSet(createNodeSet)
.setProperties(collectionProperties)
.process(cluster.getSolrClient());
}
AbstractDistribZkTestBase.waitForRecoveriesToFinish
(collectionName, cluster.getSolrClient().getZkStateReader(), true, true, 330);
}
@Test
public void testCollectionCreateSearchDelete() throws Exception {
final CloudSolrClient client = cluster.getSolrClient();
final String collectionName = "testcollection";
assertNotNull(cluster.getZkServer());
List<JettySolrRunner> jettys = cluster.getJettySolrRunners();
assertEquals(nodeCount, jettys.size());
for (JettySolrRunner jetty : jettys) {
assertTrue(jetty.isRunning());
}
// shut down a server
JettySolrRunner stoppedServer = cluster.stopJettySolrRunner(0);
assertTrue(stoppedServer.isStopped());
assertEquals(nodeCount - 1, cluster.getJettySolrRunners().size());
// create a server
JettySolrRunner startedServer = cluster.startJettySolrRunner();
assertTrue(startedServer.isRunning());
assertEquals(nodeCount, cluster.getJettySolrRunners().size());
// create collection
createCollection(collectionName, null);
// modify/query collection
new UpdateRequest().add("id", "1").commit(client, collectionName);
QueryResponse rsp = client.query(collectionName, new SolrQuery("*:*"));
assertEquals(1, rsp.getResults().getNumFound());
// remove a server not hosting any replicas
ZkStateReader zkStateReader = client.getZkStateReader();
zkStateReader.forceUpdateCollection(collectionName);
ClusterState clusterState = zkStateReader.getClusterState();
Map<String,JettySolrRunner> jettyMap = new HashMap<>();
for (JettySolrRunner jetty : cluster.getJettySolrRunners()) {
String key = jetty.getBaseUrl().toString().substring((jetty.getBaseUrl().getProtocol() + "://").length());
jettyMap.put(key, jetty);
}
Collection<Slice> slices = clusterState.getCollection(collectionName).getSlices();
// remove the servers that host replicas; any still left in jettyMap host none
for (Slice slice : slices) {
jettyMap.remove(slice.getLeader().getNodeName().replace("_solr", "/solr"));
for (Replica replica : slice.getReplicas()) {
jettyMap.remove(replica.getNodeName().replace("_solr", "/solr"));
}
}
assertTrue("Expected to find a node without a replica", jettyMap.size() > 0);
JettySolrRunner jettyToStop = jettyMap.entrySet().iterator().next().getValue();
jettys = cluster.getJettySolrRunners();
for (int i = 0; i < jettys.size(); ++i) {
if (jettys.get(i).equals(jettyToStop)) {
cluster.stopJettySolrRunner(i);
assertEquals(nodeCount - 1, cluster.getJettySolrRunners().size());
}
}
// re-create a server (to restore original nodeCount count)
startedServer = cluster.startJettySolrRunner(jettyToStop);
assertTrue(startedServer.isRunning());
assertEquals(nodeCount, cluster.getJettySolrRunners().size());
CollectionAdminRequest.deleteCollection(collectionName).process(client);
AbstractDistribZkTestBase.waitForCollectionToDisappear
(collectionName, client.getZkStateReader(), true, true, 330);
// create it again
createCollection(collectionName, null);
// check that there's no left-over state
assertEquals(0, client.query(collectionName, new SolrQuery("*:*")).getResults().getNumFound());
// modify/query collection
new UpdateRequest().add("id", "1").commit(client, collectionName);
assertEquals(1, client.query(collectionName, new SolrQuery("*:*")).getResults().getNumFound());
}
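// Illustrative note (not part of the original change): the replace("_solr", "/solr") calls above
// bridge two naming schemes. A node name stored in ZooKeeper looks like "127.0.0.1:8983_solr",
// while a jetty base URL with its protocol stripped looks like "127.0.0.1:8983/solr"; normalizing
// one into the other is what lets the test match replicas to the jettys hosting them.
// A minimal helper (hypothetical name) for that conversion:
private static String nodeNameToBaseUrlKey(String nodeName) {
return nodeName.replace("_solr", "/solr"); // e.g. "127.0.0.1:8983_solr" -> "127.0.0.1:8983/solr"
}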
@Test
public void testCollectionCreateWithoutCoresThenDelete() throws Exception {
final String collectionName = "testSolrCloudCollectionWithoutCores";
final CloudSolrClient client = cluster.getSolrClient();
assertNotNull(cluster.getZkServer());
assertFalse(cluster.getJettySolrRunners().isEmpty());
// create collection
createCollection(collectionName, OverseerCollectionMessageHandler.CREATE_NODE_SET_EMPTY);
// check the collection's corelessness
int coreCount = 0;
DocCollection docCollection = client.getZkStateReader().getClusterState().getCollection(collectionName);
for (Map.Entry<String,Slice> entry : docCollection.getSlicesMap().entrySet()) {
coreCount += entry.getValue().getReplicasMap().entrySet().size();
}
assertEquals(0, coreCount);
// delete the collection
CollectionAdminRequest.deleteCollection(collectionName).process(client);
AbstractDistribZkTestBase.waitForCollectionToDisappear
(collectionName, client.getZkStateReader(), true, true, 330);
}
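// Illustrative sketch (not part of the original change): a collection created with
// CREATE_NODE_SET_EMPTY has shards in the cluster state but no cores backing them; replicas can
// be placed later. Assuming the default naming of the first shard ("shard1"), that follow-up
// step might look like:
private void addReplicaToCorelessCollection(String collectionName) throws Exception {
CollectionAdminRequest.addReplicaToShard(collectionName, "shard1")
.process(cluster.getSolrClient()); // places one replica of shard1 on some live node
}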
@Test
public void testStopAllStartAll() throws Exception {
final String collectionName = "testStopAllStartAllCollection";
final CloudSolrClient client = cluster.getSolrClient();
assertNotNull(cluster.getZkServer());
List<JettySolrRunner> jettys = cluster.getJettySolrRunners();
assertEquals(nodeCount, jettys.size());
for (JettySolrRunner jetty : jettys) {
assertTrue(jetty.isRunning());
}
final SolrQuery query = new SolrQuery("*:*");
final SolrInputDocument doc = new SolrInputDocument();
// create collection
createCollection(collectionName, null);
ZkStateReader zkStateReader = client.getZkStateReader();
// modify collection
final int numDocs = 1 + random().nextInt(10);
for (int ii = 1; ii <= numDocs; ++ii) {
doc.setField("id", ""+ii);
client.add(collectionName, doc);
if (ii*2 == numDocs) client.commit(collectionName);
}
client.commit(collectionName);
// query collection
assertEquals(numDocs, client.query(collectionName, query).getResults().getNumFound());
// the test itself
zkStateReader.forceUpdateCollection(collectionName);
final ClusterState clusterState = zkStateReader.getClusterState();
final Set<Integer> leaderIndices = new HashSet<>();
final Set<Integer> followerIndices = new HashSet<>();
{
final Map<String,Boolean> shardLeaderMap = new HashMap<>();
for (final Slice slice : clusterState.getCollection(collectionName).getSlices()) {
for (final Replica replica : slice.getReplicas()) {
shardLeaderMap.put(replica.getNodeName().replace("_solr", "/solr"), Boolean.FALSE);
}
shardLeaderMap.put(slice.getLeader().getNodeName().replace("_solr", "/solr"), Boolean.TRUE);
}
for (int ii = 0; ii < jettys.size(); ++ii) {
final URL jettyBaseUrl = jettys.get(ii).getBaseUrl();
final String jettyBaseUrlString = jettyBaseUrl.toString().substring((jettyBaseUrl.getProtocol() + "://").length());
final Boolean isLeader = shardLeaderMap.get(jettyBaseUrlString);
if (Boolean.TRUE.equals(isLeader)) {
leaderIndices.add(ii);
} else if (Boolean.FALSE.equals(isLeader)) {
followerIndices.add(ii);
} // else neither leader nor follower i.e. node without a replica (for our collection)
}
}
final List<Integer> leaderIndicesList = new ArrayList<>(leaderIndices);
final List<Integer> followerIndicesList = new ArrayList<>(followerIndices);
// first stop the followers (in no particular order)
Collections.shuffle(followerIndicesList, random());
for (Integer ii : followerIndicesList) {
if (!leaderIndices.contains(ii)) {
cluster.stopJettySolrRunner(jettys.get(ii));
}
}
// then stop the leaders (again in no particular order)
Collections.shuffle(leaderIndicesList, random());
for (Integer ii : leaderIndicesList) {
cluster.stopJettySolrRunner(jettys.get(ii));
}
// calculate restart order
final List<Integer> restartIndicesList = new ArrayList<>();
Collections.shuffle(leaderIndicesList, random());
restartIndicesList.addAll(leaderIndicesList);
Collections.shuffle(followerIndicesList, random());
restartIndicesList.addAll(followerIndicesList);
if (random().nextBoolean()) Collections.shuffle(restartIndicesList, random());
// and then restart jettys in that order
for (Integer ii : restartIndicesList) {
final JettySolrRunner jetty = jettys.get(ii);
if (!jetty.isRunning()) {
cluster.startJettySolrRunner(jetty);
assertTrue(jetty.isRunning());
}
}
AbstractDistribZkTestBase.waitForRecoveriesToFinish(collectionName, zkStateReader, true, true, 330);
zkStateReader.forceUpdateCollection(collectionName);
// re-query collection
assertEquals(numDocs, client.query(collectionName, query).getResults().getNumFound());
}
}

View File

@@ -286,7 +286,7 @@ public class TestConfigSetsAPI extends SolrTestCaseJ4 {
// Checking error when no configuration name is specified in request
Map map = postDataAndGetResponse(solrCluster.getSolrClient(),
solrCluster.getJettySolrRunners().get(0).getBaseUrl().toString()
+ "/admin/configs?action=UPLOAD&wt=json", emptyData, null, null);
+ "/admin/configs?action=UPLOAD", emptyData, null, null);
assertNotNull(map);
long statusCode = (long) getObjectByPath(map, false,
Arrays.asList("responseHeader", "status"));
@@ -305,7 +305,7 @@ public class TestConfigSetsAPI extends SolrTestCaseJ4 {
// Checking error when configuration name specified already exists
map = postDataAndGetResponse(solrCluster.getSolrClient(),
solrCluster.getJettySolrRunners().get(0).getBaseUrl().toString()
+ "/admin/configs?action=UPLOAD&wt=json&name=myconf", emptyData, null, null);
+ "/admin/configs?action=UPLOAD&name=myconf", emptyData, null, null);
assertNotNull(map);
statusCode = (long) getObjectByPath(map, false,
Arrays.asList("responseHeader", "status"));
@@ -416,7 +416,7 @@ public class TestConfigSetsAPI extends SolrTestCaseJ4 {
assertFalse(configManager.configExists(configSetName+suffix));
Map map = postDataAndGetResponse(solrCluster.getSolrClient(),
solrCluster.getJettySolrRunners().get(0).getBaseUrl().toString() + "/admin/configs?action=UPLOAD&wt=json&name="+configSetName+suffix,
solrCluster.getJettySolrRunners().get(0).getBaseUrl().toString() + "/admin/configs?action=UPLOAD&name="+configSetName+suffix,
sampleZippedConfig, username, password);
assertNotNull(map);
long statusCode = (long) getObjectByPath(map, false, Arrays.asList("responseHeader", "status"));

View File

@@ -124,11 +124,11 @@ public class TestCryptoKeys extends AbstractFullDistribZkTestBase {
"'create-requesthandler' : { 'name' : '/runtime', 'class': 'org.apache.solr.core.RuntimeLibReqHandler' , 'runtimeLib':true }" +
"}";
RestTestHarness client = restTestHarnesses.get(random().nextInt(restTestHarnesses.size()));
TestSolrConfigHandler.runConfigCommand(client, "/config?wt=json", payload);
TestSolrConfigHandler.runConfigCommand(client, "/config", payload);
TestSolrConfigHandler.testForResponseElement(client,
null,
"/config/overlay?wt=json",
"/config/overlay",
null,
Arrays.asList("overlay", "requestHandler", "/runtime", "class"),
"org.apache.solr.core.RuntimeLibReqHandler", 10);
@@ -138,15 +138,15 @@ public class TestCryptoKeys extends AbstractFullDistribZkTestBase {
"'add-runtimelib' : { 'name' : 'signedjar' ,'version':1}\n" +
"}";
client = restTestHarnesses.get(random().nextInt(restTestHarnesses.size()));
TestSolrConfigHandler.runConfigCommand(client, "/config?wt=json", payload);
TestSolrConfigHandler.runConfigCommand(client, "/config", payload);
TestSolrConfigHandler.testForResponseElement(client,
null,
"/config/overlay?wt=json",
"/config/overlay",
null,
Arrays.asList("overlay", "runtimeLib", blobName, "version"),
1l, 10);
Map map = TestSolrConfigHandler.getRespMap("/runtime?wt=json", client);
Map map = TestSolrConfigHandler.getRespMap("/runtime", client);
String s = (String) Utils.getObjectByPath(map, false, Arrays.asList("error", "msg"));
assertNotNull(TestBlobHandler.getAsString(map), s);
assertTrue(TestBlobHandler.getAsString(map), s.contains("should be signed with one of the keys in ZK /keys/exe"));
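// Illustrative sketch (not part of the original change): the 'sig' values exchanged in this test
// are base64-encoded RSA signatures over the jar bytes, validated against public keys stored
// under /keys/exe in ZooKeeper. In plain JDK terms (assuming SHA1withRSA, the digest/cipher pair
// produced by the openssl signing command in the Solr documentation) verification looks roughly like:
private static boolean verifyJarSignature(byte[] jarBytes, String base64Sig, java.security.PublicKey publicKey) throws Exception {
java.security.Signature verifier = java.security.Signature.getInstance("SHA1withRSA");
verifier.initVerify(publicKey); // the key that was uploaded to /keys/exe
verifier.update(jarBytes);
return verifier.verify(java.util.Base64.getDecoder().decode(base64Sig));
}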
@@ -157,15 +157,15 @@ public class TestCryptoKeys extends AbstractFullDistribZkTestBase {
"'update-runtimelib' : { 'name' : 'signedjar' ,'version':1, 'sig': 'QKqHtd37QN02iMW9UEgvAO9g9qOOuG5vEBNkbUsN7noc2hhXKic/ABFIOYJA9PKw61mNX2EmNFXOcO3WClYdSw=='}\n" +
"}";
client = restTestHarnesses.get(random().nextInt(restTestHarnesses.size()));
TestSolrConfigHandler.runConfigCommand(client, "/config?wt=json", payload);
TestSolrConfigHandler.runConfigCommand(client, "/config", payload);
TestSolrConfigHandler.testForResponseElement(client,
null,
"/config/overlay?wt=json",
"/config/overlay",
null,
Arrays.asList("overlay", "runtimeLib", blobName, "sig"),
wrongSig, 10);
map = TestSolrConfigHandler.getRespMap("/runtime?wt=json", client);
map = TestSolrConfigHandler.getRespMap("/runtime", client);
s = (String) Utils.getObjectByPath(map, false, Arrays.asList("error", "msg"));
assertNotNull(TestBlobHandler.getAsString(map), s);//No key matched signature for jar
assertTrue(TestBlobHandler.getAsString(map), s.contains("No key matched signature for jar"));
@@ -176,17 +176,17 @@ public class TestCryptoKeys extends AbstractFullDistribZkTestBase {
"'update-runtimelib' : { 'name' : 'signedjar' ,'version':1, 'sig': 'YkTQgOtvcM/H/5EQdABGl3wjjrPhonAGlouIx59vppBy2cZEofX3qX1yZu5sPNRmJisNXEuhHN2149dxeUmk2Q=='}\n" +
"}";
client = restTestHarnesses.get(random().nextInt(restTestHarnesses.size()));
TestSolrConfigHandler.runConfigCommand(client, "/config?wt=json", payload);
TestSolrConfigHandler.runConfigCommand(client, "/config", payload);
TestSolrConfigHandler.testForResponseElement(client,
null,
"/config/overlay?wt=json",
"/config/overlay",
null,
Arrays.asList("overlay", "runtimeLib", blobName, "sig"),
rightSig, 10);
map = TestSolrConfigHandler.testForResponseElement(client,
null,
"/runtime?wt=json",
"/runtime",
null,
Arrays.asList("class"),
"org.apache.solr.core.RuntimeLibReqHandler", 10);
@@ -197,17 +197,17 @@ public class TestCryptoKeys extends AbstractFullDistribZkTestBase {
"'update-runtimelib' : { 'name' : 'signedjar' ,'version':1, 'sig': 'VJPMTxDf8Km3IBj2B5HWkIOqeM/o+HHNobOYCNA3WjrEVfOMZbMMqS1Lo7uLUUp//RZwOGkOhrUhuPNY1z2CGEIKX2/m8VGH64L14d52oSvFiwhoTDDuuyjW1TFGu35D'}\n" +
"}";
client = restTestHarnesses.get(random().nextInt(restTestHarnesses.size()));
TestSolrConfigHandler.runConfigCommand(client, "/config?wt=json", payload);
TestSolrConfigHandler.runConfigCommand(client, "/config", payload);
TestSolrConfigHandler.testForResponseElement(client,
null,
"/config/overlay?wt=json",
"/config/overlay",
null,
Arrays.asList("overlay", "runtimeLib", blobName, "sig"),
rightSig, 10);
map = TestSolrConfigHandler.testForResponseElement(client,
null,
"/runtime?wt=json",
"/runtime",
null,
Arrays.asList("class"),
"org.apache.solr.core.RuntimeLibReqHandler", 10);

View File

@@ -1,388 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.solr.cloud;
import java.lang.invoke.MethodHandles;
import java.net.URL;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule;
import org.apache.lucene.util.LuceneTestCase;
import org.apache.lucene.util.LuceneTestCase.SuppressSysoutChecks;
import org.apache.solr.SolrTestCaseJ4;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.embedded.JettyConfig;
import org.apache.solr.client.solrj.embedded.JettyConfig.Builder;
import org.apache.solr.client.solrj.embedded.JettySolrRunner;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrInputDocument;
import org.apache.solr.common.cloud.ClusterState;
import org.apache.solr.common.cloud.Replica;
import org.apache.solr.common.cloud.Slice;
import org.apache.solr.common.cloud.SolrZkClient;
import org.apache.solr.common.cloud.ZkStateReader;
import org.apache.solr.core.CoreDescriptor;
import org.apache.solr.index.TieredMergePolicyFactory;
import org.apache.solr.util.RevertDefaultThreadHandlerRule;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.RuleChain;
import org.junit.rules.TestRule;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
* Test of the MiniSolrCloudCluster functionality. Keep in mind,
* MiniSolrCloudCluster is designed to be used outside of the Lucene test
* hierarchy.
*/
@SuppressSysoutChecks(bugUrl = "Solr logs to JUL")
public class TestMiniSolrCloudCluster extends LuceneTestCase {
private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
protected int NUM_SERVERS = 5;
protected int NUM_SHARDS = 2;
protected int REPLICATION_FACTOR = 2;
public TestMiniSolrCloudCluster () {
NUM_SERVERS = 5;
NUM_SHARDS = 2;
REPLICATION_FACTOR = 2;
}
@BeforeClass
public static void setupHackNumerics() { // SOLR-10916
SolrTestCaseJ4.randomizeNumericTypesProperties();
}
@AfterClass
public static void clearHackNumerics() { // SOLR-10916
SolrTestCaseJ4.clearNumericTypesProperties();
}
@Rule
public TestRule solrTestRules = RuleChain
.outerRule(new SystemPropertiesRestoreRule());
@ClassRule
public static TestRule solrClassRules = RuleChain.outerRule(
new SystemPropertiesRestoreRule()).around(
new RevertDefaultThreadHandlerRule());
private MiniSolrCloudCluster createMiniSolrCloudCluster() throws Exception {
Builder jettyConfig = JettyConfig.builder();
jettyConfig.waitForLoadingCoresToFinish(null);
return new MiniSolrCloudCluster(NUM_SERVERS, createTempDir(), jettyConfig.build());
}
private void createCollection(MiniSolrCloudCluster miniCluster, String collectionName, String createNodeSet, String asyncId,
Boolean indexToPersist, Map<String,String> collectionProperties) throws Exception {
String configName = "solrCloudCollectionConfig";
miniCluster.uploadConfigSet(SolrTestCaseJ4.TEST_PATH().resolve("collection1").resolve("conf"), configName);
final boolean persistIndex = (indexToPersist != null ? indexToPersist.booleanValue() : random().nextBoolean());
if (collectionProperties == null) {
collectionProperties = new HashMap<>();
}
collectionProperties.putIfAbsent(CoreDescriptor.CORE_CONFIG, "solrconfig-tlog.xml");
collectionProperties.putIfAbsent("solr.tests.maxBufferedDocs", "100000");
collectionProperties.putIfAbsent("solr.tests.ramBufferSizeMB", "100");
// use non-test classes so RandomizedRunner isn't necessary
collectionProperties.putIfAbsent(SolrTestCaseJ4.SYSTEM_PROPERTY_SOLR_TESTS_MERGEPOLICYFACTORY, TieredMergePolicyFactory.class.getName());
collectionProperties.putIfAbsent("solr.tests.mergeScheduler", "org.apache.lucene.index.ConcurrentMergeScheduler");
collectionProperties.putIfAbsent("solr.directoryFactory", (persistIndex ? "solr.StandardDirectoryFactory" : "solr.RAMDirectoryFactory"));
if (asyncId == null) {
CollectionAdminRequest.createCollection(collectionName, configName, NUM_SHARDS, REPLICATION_FACTOR)
.setCreateNodeSet(createNodeSet)
.setProperties(collectionProperties)
.process(miniCluster.getSolrClient());
}
else {
CollectionAdminRequest.createCollection(collectionName, configName, NUM_SHARDS, REPLICATION_FACTOR)
.setCreateNodeSet(createNodeSet)
.setProperties(collectionProperties)
.processAndWait(miniCluster.getSolrClient(), 30);
}
}
@Test
public void testCollectionCreateSearchDelete() throws Exception {
final String collectionName = "testcollection";
MiniSolrCloudCluster miniCluster = createMiniSolrCloudCluster();
final CloudSolrClient cloudSolrClient = miniCluster.getSolrClient();
try {
assertNotNull(miniCluster.getZkServer());
List<JettySolrRunner> jettys = miniCluster.getJettySolrRunners();
assertEquals(NUM_SERVERS, jettys.size());
for (JettySolrRunner jetty : jettys) {
assertTrue(jetty.isRunning());
}
// shut down a server
log.info("#### Stopping a server");
JettySolrRunner stoppedServer = miniCluster.stopJettySolrRunner(0);
assertTrue(stoppedServer.isStopped());
assertEquals(NUM_SERVERS - 1, miniCluster.getJettySolrRunners().size());
// create a server
log.info("#### Starting a server");
JettySolrRunner startedServer = miniCluster.startJettySolrRunner();
assertTrue(startedServer.isRunning());
assertEquals(NUM_SERVERS, miniCluster.getJettySolrRunners().size());
// create collection
log.info("#### Creating a collection");
final String asyncId = (random().nextBoolean() ? null : "asyncId("+collectionName+".create)="+random().nextInt());
createCollection(miniCluster, collectionName, null, asyncId, null, null);
ZkStateReader zkStateReader = miniCluster.getSolrClient().getZkStateReader();
AbstractDistribZkTestBase.waitForRecoveriesToFinish(collectionName, zkStateReader, true, true, 330);
// modify/query collection
log.info("#### updating a querying collection");
cloudSolrClient.setDefaultCollection(collectionName);
SolrInputDocument doc = new SolrInputDocument();
doc.setField("id", "1");
cloudSolrClient.add(doc);
cloudSolrClient.commit();
SolrQuery query = new SolrQuery();
query.setQuery("*:*");
QueryResponse rsp = cloudSolrClient.query(query);
assertEquals(1, rsp.getResults().getNumFound());
// remove a server not hosting any replicas
zkStateReader.forceUpdateCollection(collectionName);
ClusterState clusterState = zkStateReader.getClusterState();
HashMap<String, JettySolrRunner> jettyMap = new HashMap<String, JettySolrRunner>();
for (JettySolrRunner jetty : miniCluster.getJettySolrRunners()) {
String key = jetty.getBaseUrl().toString().substring((jetty.getBaseUrl().getProtocol() + "://").length());
jettyMap.put(key, jetty);
}
Collection<Slice> slices = clusterState.getSlices(collectionName);
// track the servers not hosting replicas
for (Slice slice : slices) {
jettyMap.remove(slice.getLeader().getNodeName().replace("_solr", "/solr"));
for (Replica replica : slice.getReplicas()) {
jettyMap.remove(replica.getNodeName().replace("_solr", "/solr"));
}
}
assertTrue("Expected to find a node without a replica", jettyMap.size() > 0);
log.info("#### Stopping a server");
JettySolrRunner jettyToStop = jettyMap.entrySet().iterator().next().getValue();
jettys = miniCluster.getJettySolrRunners();
for (int i = 0; i < jettys.size(); ++i) {
if (jettys.get(i).equals(jettyToStop)) {
miniCluster.stopJettySolrRunner(i);
assertEquals(NUM_SERVERS - 1, miniCluster.getJettySolrRunners().size());
}
}
// re-create a server (to restore original NUM_SERVERS count)
log.info("#### Starting a server");
startedServer = miniCluster.startJettySolrRunner(jettyToStop);
assertTrue(startedServer.isRunning());
assertEquals(NUM_SERVERS, miniCluster.getJettySolrRunners().size());
CollectionAdminRequest.deleteCollection(collectionName).process(miniCluster.getSolrClient());
// create it again
String asyncId2 = (random().nextBoolean() ? null : "asyncId("+collectionName+".create)="+random().nextInt());
createCollection(miniCluster, collectionName, null, asyncId2, null, null);
AbstractDistribZkTestBase.waitForRecoveriesToFinish(collectionName, zkStateReader, true, true, 330);
// check that there's no left-over state
assertEquals(0, cloudSolrClient.query(new SolrQuery("*:*")).getResults().getNumFound());
cloudSolrClient.add(doc);
cloudSolrClient.commit();
assertEquals(1, cloudSolrClient.query(new SolrQuery("*:*")).getResults().getNumFound());
}
finally {
miniCluster.shutdown();
}
}
@Test
public void testCollectionCreateWithoutCoresThenDelete() throws Exception {
final String collectionName = "testSolrCloudCollectionWithoutCores";
final MiniSolrCloudCluster miniCluster = createMiniSolrCloudCluster();
final CloudSolrClient cloudSolrClient = miniCluster.getSolrClient();
try {
assertNotNull(miniCluster.getZkServer());
assertFalse(miniCluster.getJettySolrRunners().isEmpty());
// create collection
final String asyncId = (random().nextBoolean() ? null : "asyncId("+collectionName+".create)="+random().nextInt());
createCollection(miniCluster, collectionName, OverseerCollectionMessageHandler.CREATE_NODE_SET_EMPTY, asyncId, null, null);
try (SolrZkClient zkClient = new SolrZkClient
(miniCluster.getZkServer().getZkAddress(), AbstractZkTestCase.TIMEOUT, AbstractZkTestCase.TIMEOUT, null);
ZkStateReader zkStateReader = new ZkStateReader(zkClient)) {
zkStateReader.createClusterStateWatchersAndUpdate();
// wait for collection to appear
AbstractDistribZkTestBase.waitForRecoveriesToFinish(collectionName, zkStateReader, true, true, 330);
// check the collection's corelessness
{
int coreCount = 0;
for (Map.Entry<String,Slice> entry : zkStateReader.getClusterState().getSlicesMap(collectionName).entrySet()) {
coreCount += entry.getValue().getReplicasMap().entrySet().size();
}
assertEquals(0, coreCount);
}
}
}
finally {
miniCluster.shutdown();
}
}
@Test
public void testStopAllStartAll() throws Exception {
final String collectionName = "testStopAllStartAllCollection";
final MiniSolrCloudCluster miniCluster = createMiniSolrCloudCluster();
try {
assertNotNull(miniCluster.getZkServer());
List<JettySolrRunner> jettys = miniCluster.getJettySolrRunners();
assertEquals(NUM_SERVERS, jettys.size());
for (JettySolrRunner jetty : jettys) {
assertTrue(jetty.isRunning());
}
createCollection(miniCluster, collectionName, null, null, Boolean.TRUE, null);
final CloudSolrClient cloudSolrClient = miniCluster.getSolrClient();
cloudSolrClient.setDefaultCollection(collectionName);
final SolrQuery query = new SolrQuery("*:*");
final SolrInputDocument doc = new SolrInputDocument();
try (SolrZkClient zkClient = new SolrZkClient
(miniCluster.getZkServer().getZkAddress(), AbstractZkTestCase.TIMEOUT, AbstractZkTestCase.TIMEOUT, null);
ZkStateReader zkStateReader = new ZkStateReader(zkClient)) {
zkStateReader.createClusterStateWatchersAndUpdate();
AbstractDistribZkTestBase.waitForRecoveriesToFinish(collectionName, zkStateReader, true, true, 330);
// modify collection
final int numDocs = 1 + random().nextInt(10);
for (int ii = 1; ii <= numDocs; ++ii) {
doc.setField("id", ""+ii);
cloudSolrClient.add(doc);
if (ii*2 == numDocs) cloudSolrClient.commit();
}
cloudSolrClient.commit();
// query collection
{
final QueryResponse rsp = cloudSolrClient.query(query);
assertEquals(numDocs, rsp.getResults().getNumFound());
}
// the test itself
zkStateReader.forceUpdateCollection(collectionName);
final ClusterState clusterState = zkStateReader.getClusterState();
final HashSet<Integer> leaderIndices = new HashSet<Integer>();
final HashSet<Integer> followerIndices = new HashSet<Integer>();
{
final HashMap<String,Boolean> shardLeaderMap = new HashMap<String,Boolean>();
for (final Slice slice : clusterState.getSlices(collectionName)) {
for (final Replica replica : slice.getReplicas()) {
shardLeaderMap.put(replica.getNodeName().replace("_solr", "/solr"), Boolean.FALSE);
}
shardLeaderMap.put(slice.getLeader().getNodeName().replace("_solr", "/solr"), Boolean.TRUE);
}
for (int ii = 0; ii < jettys.size(); ++ii) {
final URL jettyBaseUrl = jettys.get(ii).getBaseUrl();
final String jettyBaseUrlString = jettyBaseUrl.toString().substring((jettyBaseUrl.getProtocol() + "://").length());
final Boolean isLeader = shardLeaderMap.get(jettyBaseUrlString);
if (Boolean.TRUE.equals(isLeader)) {
leaderIndices.add(new Integer(ii));
} else if (Boolean.FALSE.equals(isLeader)) {
followerIndices.add(new Integer(ii));
} // else neither leader nor follower i.e. node without a replica (for our collection)
}
}
final List<Integer> leaderIndicesList = new ArrayList<Integer>(leaderIndices);
final List<Integer> followerIndicesList = new ArrayList<Integer>(followerIndices);
// first stop the followers (in no particular order)
Collections.shuffle(followerIndicesList, random());
for (Integer ii : followerIndicesList) {
if (!leaderIndices.contains(ii)) {
miniCluster.stopJettySolrRunner(jettys.get(ii.intValue()));
}
}
// then stop the leaders (again in no particular order)
Collections.shuffle(leaderIndicesList, random());
for (Integer ii : leaderIndicesList) {
miniCluster.stopJettySolrRunner(jettys.get(ii.intValue()));
}
// calculate restart order
final List<Integer> restartIndicesList = new ArrayList<Integer>();
Collections.shuffle(leaderIndicesList, random());
restartIndicesList.addAll(leaderIndicesList);
Collections.shuffle(followerIndicesList, random());
restartIndicesList.addAll(followerIndicesList);
if (random().nextBoolean()) Collections.shuffle(restartIndicesList, random());
// and then restart jettys in that order
for (Integer ii : restartIndicesList) {
final JettySolrRunner jetty = jettys.get(ii.intValue());
if (!jetty.isRunning()) {
miniCluster.startJettySolrRunner(jetty);
assertTrue(jetty.isRunning());
}
}
AbstractDistribZkTestBase.waitForRecoveriesToFinish(collectionName, zkStateReader, true, true, 330);
zkStateReader.forceUpdateCollection(collectionName);
// re-query collection
{
final QueryResponse rsp = cloudSolrClient.query(query);
assertEquals(numDocs, rsp.getResults().getNumFound());
}
}
}
finally {
miniCluster.shutdown();
}
}
}

View File

@@ -1,141 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.solr.cloud;
import java.io.File;
import java.nio.charset.StandardCharsets;
import com.carrotsearch.randomizedtesting.annotations.ThreadLeakFilters;
import com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule;
import org.apache.commons.io.FileUtils;
import org.apache.lucene.util.LuceneTestCase;
import org.apache.lucene.util.LuceneTestCase.SuppressSysoutChecks;
import org.apache.solr.util.BadZookeeperThreadsFilter;
import org.apache.solr.util.RevertDefaultThreadHandlerRule;
import org.junit.ClassRule;
import org.junit.Ignore;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.RuleChain;
import org.junit.rules.TestRule;
/**
* Test a 5-node Solr cluster with the Kerberos plugin enabled.
* This test is Ignored right now as Mini KDC has a known bug that
* doesn't allow us to run multiple nodes on the same host.
* https://issues.apache.org/jira/browse/HADOOP-9893
*/
@ThreadLeakFilters(defaultFilters = true, filters = {
BadZookeeperThreadsFilter.class // Zookeeper login leaks TGT renewal threads
})
@Ignore
@LuceneTestCase.Slow
@SuppressSysoutChecks(bugUrl = "Solr logs to JUL")
public class TestMiniSolrCloudClusterKerberos extends TestMiniSolrCloudCluster {
public TestMiniSolrCloudClusterKerberos () {
NUM_SERVERS = 5;
NUM_SHARDS = 2;
REPLICATION_FACTOR = 2;
}
private KerberosTestServices kerberosTestServices;
@Rule
public TestRule solrTestRules = RuleChain
.outerRule(new SystemPropertiesRestoreRule());
@ClassRule
public static TestRule solrClassRules = RuleChain.outerRule(
new SystemPropertiesRestoreRule()).around(
new RevertDefaultThreadHandlerRule());
@Override
public void setUp() throws Exception {
super.setUp();
setupMiniKdc();
}
private void setupMiniKdc() throws Exception {
String kdcDir = createTempDir()+File.separator+"minikdc";
File keytabFile = new File(kdcDir, "keytabs");
String principal = "HTTP/127.0.0.1";
String zkServerPrincipal = "zookeeper/127.0.0.1";
KerberosTestServices kerberosTestServices = KerberosTestServices.builder()
.withKdc(new File(kdcDir))
.withJaasConfiguration(principal, keytabFile, zkServerPrincipal, keytabFile)
.build();
kerberosTestServices.start();
kerberosTestServices.getKdc().createPrincipal(keytabFile, principal, zkServerPrincipal);
String jaas = "Client {\n"
+ " com.sun.security.auth.module.Krb5LoginModule required\n"
+ " useKeyTab=true\n"
+ " keyTab=\""+keytabFile.getAbsolutePath()+"\"\n"
+ " storeKey=true\n"
+ " useTicketCache=false\n"
+ " doNotPrompt=true\n"
+ " debug=true\n"
+ " principal=\""+principal+"\";\n"
+ "};\n"
+ "Server {\n"
+ " com.sun.security.auth.module.Krb5LoginModule required\n"
+ " useKeyTab=true\n"
+ " keyTab=\""+keytabFile.getAbsolutePath()+"\"\n"
+ " storeKey=true\n"
+ " doNotPrompt=true\n"
+ " useTicketCache=false\n"
+ " debug=true\n"
+ " principal=\""+zkServerPrincipal+"\";\n"
+ "};\n";
String jaasFilePath = kdcDir+File.separator + "jaas-client.conf";
FileUtils.write(new File(jaasFilePath), jaas, StandardCharsets.UTF_8);
System.setProperty("java.security.auth.login.config", jaasFilePath);
System.setProperty("solr.kerberos.cookie.domain", "127.0.0.1");
System.setProperty("solr.kerberos.principal", principal);
System.setProperty("solr.kerberos.keytab", keytabFile.getAbsolutePath());
System.setProperty("authenticationPlugin", "org.apache.solr.security.KerberosPlugin");
// more debugging, if needed
/*System.setProperty("sun.security.jgss.debug", "true");
System.setProperty("sun.security.krb5.debug", "true");
System.setProperty("sun.security.jgss.debug", "true");
System.setProperty("java.security.debug", "logincontext,policy,scl,gssloginconfig");*/
}
@AwaitsFix(bugUrl="https://issues.apache.org/jira/browse/HADOOP-9893")
@Test
@Override
public void testCollectionCreateSearchDelete() throws Exception {
super.testCollectionCreateSearchDelete();
}
@Override
public void tearDown() throws Exception {
System.clearProperty("java.security.auth.login.config");
System.clearProperty("cookie.domain");
System.clearProperty("kerberos.principal");
System.clearProperty("kerberos.keytab");
System.clearProperty("authenticationPlugin");
kerberosTestServices.stop();
super.tearDown();
}
}

View File

@@ -19,38 +19,22 @@ package org.apache.solr.cloud;
import java.io.File;
import java.lang.invoke.MethodHandles;
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.Properties;
import com.carrotsearch.randomizedtesting.annotations.ThreadLeakFilters;
import org.apache.commons.io.FileUtils;
import org.apache.lucene.util.Constants;
import org.apache.lucene.util.LuceneTestCase;
import org.apache.solr.SolrTestCaseJ4;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.embedded.JettyConfig;
import org.apache.solr.client.solrj.embedded.JettySolrRunner;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrInputDocument;
import org.apache.solr.common.cloud.SolrZkClient;
import org.apache.solr.common.cloud.ZkStateReader;
import org.apache.solr.core.CoreDescriptor;
import org.apache.solr.index.TieredMergePolicyFactory;
import org.apache.solr.util.BadZookeeperThreadsFilter;
import org.apache.solr.util.RevertDefaultThreadHandlerRule;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.RuleChain;
import org.junit.rules.TestRule;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.carrotsearch.randomizedtesting.annotations.ThreadLeakFilters;
import com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule;
/**
* Test a 5-node Solr cluster with the Kerberos plugin enabled.
* This test is Ignored right now as Mini KDC has a known bug that
@@ -62,31 +46,19 @@ import com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule;
})
@LuceneTestCase.Slow
@LuceneTestCase.SuppressSysoutChecks(bugUrl = "Solr logs to JUL")
public class TestSolrCloudWithKerberosAlt extends LuceneTestCase {
public class TestSolrCloudWithKerberosAlt extends SolrCloudTestCase {
private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
protected final int NUM_SERVERS;
protected final int NUM_SHARDS;
protected final int REPLICATION_FACTOR;
public TestSolrCloudWithKerberosAlt () {
NUM_SERVERS = 1;
NUM_SHARDS = 1;
REPLICATION_FACTOR = 1;
}
private static final int numShards = 1;
private static final int numReplicas = 1;
private static final int maxShardsPerNode = 1;
private static final int nodeCount = (numShards*numReplicas + (maxShardsPerNode-1))/maxShardsPerNode;
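// Illustrative note (not part of the original change): the nodeCount expression is integer
// ceiling division, i.e. ceil((numShards * numReplicas) / maxShardsPerNode). With the values
// above it evaluates to (1*1 + (1-1)) / 1 = 1, so this test brings up a single node.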
private static final String configName = "solrCloudCollectionConfig";
private static final String collectionName = "testkerberoscollection";
private KerberosTestServices kerberosTestServices;
@Rule
public TestRule solrTestRules = RuleChain
.outerRule(new SystemPropertiesRestoreRule());
@ClassRule
public static TestRule solrClassRules = RuleChain.outerRule(
new SystemPropertiesRestoreRule()).around(
new RevertDefaultThreadHandlerRule());
@BeforeClass
public static void betterNotBeJava9() {
assumeFalse("FIXME: SOLR-8182: This test fails under Java 9", Constants.JRE_IS_MINIMUM_JAVA9);
@@ -94,9 +66,9 @@ public class TestSolrCloudWithKerberosAlt extends LuceneTestCase {
@Override
public void setUp() throws Exception {
SolrTestCaseJ4.randomizeNumericTypesProperties(); // SOLR-10916
super.setUp();
setupMiniKdc();
configureCluster(nodeCount).addConfig(configName, configset("cloud-minimal")).configure();
}
private void setupMiniKdc() throws Exception {
@@ -141,10 +113,10 @@ public class TestSolrCloudWithKerberosAlt extends LuceneTestCase {
);
// more debugging, if needed
/*System.setProperty("sun.security.jgss.debug", "true");
System.setProperty("sun.security.krb5.debug", "true");
System.setProperty("sun.security.jgss.debug", "true");
System.setProperty("java.security.debug", "logincontext,policy,scl,gssloginconfig");*/
// System.setProperty("sun.security.jgss.debug", "true");
// System.setProperty("sun.security.krb5.debug", "true");
// System.setProperty("sun.security.jgss.debug", "true");
// System.setProperty("java.security.debug", "logincontext,policy,scl,gssloginconfig");
}
@Test
@@ -154,79 +126,47 @@ public class TestSolrCloudWithKerberosAlt extends LuceneTestCase {
if (random().nextBoolean()) testCollectionCreateSearchDelete();
}
protected void testCollectionCreateSearchDelete() throws Exception {
String collectionName = "testkerberoscollection";
private void testCollectionCreateSearchDelete() throws Exception {
CloudSolrClient client = cluster.getSolrClient();
CollectionAdminRequest.createCollection(collectionName, configName, numShards, numReplicas)
.setMaxShardsPerNode(maxShardsPerNode)
.process(client);
MiniSolrCloudCluster miniCluster
= new MiniSolrCloudCluster(NUM_SERVERS, createTempDir(), JettyConfig.builder().setContext("/solr").build());
CloudSolrClient cloudSolrClient = miniCluster.getSolrClient();
cloudSolrClient.setDefaultCollection(collectionName);
try {
assertNotNull(miniCluster.getZkServer());
List<JettySolrRunner> jettys = miniCluster.getJettySolrRunners();
assertEquals(NUM_SERVERS, jettys.size());
for (JettySolrRunner jetty : jettys) {
assertTrue(jetty.isRunning());
}
AbstractDistribZkTestBase.waitForRecoveriesToFinish
(collectionName, client.getZkStateReader(), true, true, 330);
// create collection
String configName = "solrCloudCollectionConfig";
miniCluster.uploadConfigSet(SolrTestCaseJ4.TEST_PATH().resolve("collection1/conf"), configName);
// modify/query collection
CollectionAdminRequest.Create createRequest = CollectionAdminRequest.createCollection(collectionName, configName, NUM_SHARDS,REPLICATION_FACTOR);
Properties properties = new Properties();
properties.put(CoreDescriptor.CORE_CONFIG, "solrconfig-tlog.xml");
properties.put("solr.tests.maxBufferedDocs", "100000");
properties.put("solr.tests.ramBufferSizeMB", "100");
// use non-test classes so RandomizedRunner isn't necessary
properties.put(SolrTestCaseJ4.SYSTEM_PROPERTY_SOLR_TESTS_MERGEPOLICYFACTORY, TieredMergePolicyFactory.class.getName());
properties.put("solr.tests.mergeScheduler", "org.apache.lucene.index.ConcurrentMergeScheduler");
properties.put("solr.directoryFactory", "solr.RAMDirectoryFactory");
createRequest.setProperties(properties);
createRequest.process(cloudSolrClient);
try (SolrZkClient zkClient = new SolrZkClient
(miniCluster.getZkServer().getZkAddress(), AbstractZkTestCase.TIMEOUT, AbstractZkTestCase.TIMEOUT, null);
ZkStateReader zkStateReader = new ZkStateReader(zkClient)) {
zkStateReader.createClusterStateWatchersAndUpdate();
AbstractDistribZkTestBase.waitForRecoveriesToFinish(collectionName, zkStateReader, true, true, 330);
// modify/query collection
new UpdateRequest().add("id", "1").commit(client, collectionName);
QueryResponse rsp = client.query(collectionName, new SolrQuery("*:*"));
assertEquals(1, rsp.getResults().getNumFound());
SolrInputDocument doc = new SolrInputDocument();
doc.setField("id", "1");
cloudSolrClient.add(doc);
cloudSolrClient.commit();
SolrQuery query = new SolrQuery();
query.setQuery("*:*");
QueryResponse rsp = cloudSolrClient.query(query);
assertEquals(1, rsp.getResults().getNumFound());
// delete the collection we created earlier
CollectionAdminRequest.deleteCollection(collectionName).process(client);
// delete the collection we created earlier
CollectionAdminRequest.deleteCollection(collectionName).process(cloudSolrClient);
AbstractDistribZkTestBase.waitForCollectionToDisappear(collectionName, zkStateReader, true, true, 330);
}
}
finally {
cloudSolrClient.close();
miniCluster.shutdown();
}
AbstractDistribZkTestBase.waitForCollectionToDisappear
(collectionName, client.getZkStateReader(), true, true, 330);
}
@Override
public void tearDown() throws Exception {
System.clearProperty("java.security.auth.login.config");
System.clearProperty("cookie.domain");
System.clearProperty("kerberos.principal");
System.clearProperty("kerberos.keytab");
System.clearProperty("authenticationPlugin");
System.clearProperty("solr.kerberos.name.rules");
System.clearProperty("solr.jaas.debug");
System.clearProperty("java.security.auth.login.config");
System.clearProperty("solr.kerberos.jaas.appname");
System.clearProperty("solr.kerberos.cookie.domain");
System.clearProperty("solr.kerberos.principal");
System.clearProperty("solr.kerberos.keytab");
System.clearProperty("authenticationPlugin");
System.clearProperty("solr.kerberos.delegation.token.enabled");
System.clearProperty("solr.kerberos.name.rules");
// more debugging, if needed
// System.clearProperty("sun.security.jgss.debug");
// System.clearProperty("sun.security.krb5.debug");
// System.clearProperty("sun.security.jgss.debug");
// System.clearProperty("java.security.debug");
kerberosTestServices.stop();
SolrTestCaseJ4.clearNumericTypesProperties(); // SOLR-10916
super.tearDown();
}
}

View File

@@ -73,7 +73,7 @@ public class TestConfigSetImmutable extends RestTestBase {
String payload = "{\n" +
"'create-requesthandler' : { 'name' : '/x', 'class': 'org.apache.solr.handler.DumpRequestHandler' , 'startup' : 'lazy'}\n" +
"}";
String uri = "/config?wt=json";
String uri = "/config";
String response = restTestHarness.post(uri, SolrTestCaseJ4.json(payload));
Map map = (Map) ObjectBuilder.getVal(new JSONParser(new StringReader(response)));
assertNotNull(map.get("error"));
@@ -91,7 +91,7 @@ public class TestConfigSetImmutable extends RestTestBase {
" },\n" +
" }";
String response = restTestHarness.post("/schema?wt=json", json(payload));
String response = restTestHarness.post("/schema", json(payload));
Map map = (Map) ObjectBuilder.getVal(new JSONParser(new StringReader(response)));
assertNotNull(map.get("errors"));
assertTrue(map.get("errors").toString().contains("immutable"));
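// Illustrative sketch (not part of the original change): both checks above look for the same
// error shape in the response, so a shared helper (hypothetical, reusing the noggit classes this
// test already imports) could express the assertion once:
private static boolean isImmutableConfigSetError(String jsonResponse) throws Exception {
Map map = (Map) ObjectBuilder.getVal(new JSONParser(new StringReader(jsonResponse)));
Object err = map.get("error") != null ? map.get("error") : map.get("errors");
return err != null && err.toString().contains("immutable");
}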

View File

@@ -74,10 +74,10 @@ public class TestCustomStream extends AbstractFullDistribZkTestBase {
"}";
RestTestHarness client = restTestHarnesses.get(random().nextInt(restTestHarnesses.size()));
TestSolrConfigHandler.runConfigCommand(client,"/config?wt=json",payload);
TestSolrConfigHandler.runConfigCommand(client,"/config",payload);
TestSolrConfigHandler.testForResponseElement(client,
null,
"/config/overlay?wt=json",
"/config/overlay",
null,
Arrays.asList("overlay", "expressible", "hello", "class"),
"org.apache.solr.core.HelloStream",10);

View File

@@ -79,10 +79,10 @@ public class TestDynamicLoading extends AbstractFullDistribZkTestBase {
"'add-runtimelib' : { 'name' : 'colltest' ,'version':1}\n" +
"}";
RestTestHarness client = restTestHarnesses.get(random().nextInt(restTestHarnesses.size()));
TestSolrConfigHandler.runConfigCommand(client, "/config?wt=json", payload);
TestSolrConfigHandler.runConfigCommand(client, "/config", payload);
TestSolrConfigHandler.testForResponseElement(client,
null,
"/config/overlay?wt=json",
"/config/overlay",
null,
Arrays.asList("overlay", "runtimeLib", blobName, "version"),
1l, 10);
@@ -93,15 +93,15 @@ public class TestDynamicLoading extends AbstractFullDistribZkTestBase {
"}";
client = restTestHarnesses.get(random().nextInt(restTestHarnesses.size()));
TestSolrConfigHandler.runConfigCommand(client,"/config?wt=json",payload);
TestSolrConfigHandler.runConfigCommand(client,"/config",payload);
TestSolrConfigHandler.testForResponseElement(client,
null,
"/config/overlay?wt=json",
"/config/overlay",
null,
Arrays.asList("overlay", "requestHandler", "/test1", "class"),
"org.apache.solr.core.BlobStoreTestRequestHandler",10);
Map map = TestSolrConfigHandler.getRespMap("/test1?wt=json", client);
Map map = TestSolrConfigHandler.getRespMap("/test1", client);
assertNotNull(TestBlobHandler.getAsString(map), map = (Map) map.get("error"));
assertTrue(TestBlobHandler.getAsString(map), map.get("msg").toString().contains(".system collection not available"));
@@ -110,7 +110,7 @@ public class TestDynamicLoading extends AbstractFullDistribZkTestBase {
TestBlobHandler.createSystemCollection(getHttpSolrClient(baseURL, randomClient.getHttpClient()));
waitForRecoveriesToFinish(".system", true);
map = TestSolrConfigHandler.getRespMap("/test1?wt=json", client);
map = TestSolrConfigHandler.getRespMap("/test1", client);
assertNotNull(map = (Map) map.get("error"));
@@ -122,11 +122,11 @@ public class TestDynamicLoading extends AbstractFullDistribZkTestBase {
" }\n" +
" }";
TestSolrConfigHandler.runConfigCommand(client,"/config/params?wt=json",payload);
TestSolrConfigHandler.runConfigCommand(client,"/config/params",payload);
TestSolrConfigHandler.testForResponseElement(
client,
null,
"/config/params?wt=json",
"/config/params",
cloudClient,
Arrays.asList("response", "params", "watched", "x"),
"X val",
@@ -136,7 +136,7 @@ public class TestDynamicLoading extends AbstractFullDistribZkTestBase {
for(int i=0;i<100;i++) {
map = TestSolrConfigHandler.getRespMap("/test1?wt=json", client);
map = TestSolrConfigHandler.getRespMap("/test1", client);
if("X val".equals(map.get("x"))){
success = true;
break;
@@ -157,11 +157,11 @@ public class TestDynamicLoading extends AbstractFullDistribZkTestBase {
"'create-queryResponseWriter' : { 'name' : 'json1', 'class': 'org.apache.solr.core.RuntimeLibResponseWriter' , 'runtimeLib':true }" +
"}";
client = restTestHarnesses.get(random().nextInt(restTestHarnesses.size()));
TestSolrConfigHandler.runConfigCommand(client, "/config?wt=json", payload);
TestSolrConfigHandler.runConfigCommand(client, "/config", payload);
Map result = TestSolrConfigHandler.testForResponseElement(client,
null,
"/config/overlay?wt=json",
"/config/overlay",
null,
Arrays.asList("overlay", "requestHandler", "/runtime", "class"),
"org.apache.solr.core.RuntimeLibReqHandler", 10);
@@ -170,7 +170,7 @@ public class TestDynamicLoading extends AbstractFullDistribZkTestBase {
result = TestSolrConfigHandler.testForResponseElement(client,
null,
"/runtime?wt=json",
"/runtime",
null,
Arrays.asList("class"),
"org.apache.solr.core.RuntimeLibReqHandler", 10);
@@ -198,10 +198,10 @@ public class TestDynamicLoading extends AbstractFullDistribZkTestBase {
"'update-runtimelib' : { 'name' : 'colltest' ,'version':2}\n" +
"}";
client = restTestHarnesses.get(random().nextInt(restTestHarnesses.size()));
TestSolrConfigHandler.runConfigCommand(client, "/config?wt=json", payload);
TestSolrConfigHandler.runConfigCommand(client, "/config", payload);
TestSolrConfigHandler.testForResponseElement(client,
null,
"/config/overlay?wt=json",
"/config/overlay",
null,
Arrays.asList("overlay", "runtimeLib", blobName, "version"),
2l, 10);
@@ -221,11 +221,11 @@ public class TestDynamicLoading extends AbstractFullDistribZkTestBase {
" }\n" +
" }";
TestSolrConfigHandler.runConfigCommand(client,"/config/params?wt=json",payload);
TestSolrConfigHandler.runConfigCommand(client,"/config/params",payload);
TestSolrConfigHandler.testForResponseElement(
client,
null,
"/config/params?wt=json",
"/config/params",
cloudClient,
Arrays.asList("response", "params", "watched", "x"),
"X val",
@@ -233,7 +233,7 @@ public class TestDynamicLoading extends AbstractFullDistribZkTestBase {
result = TestSolrConfigHandler.testForResponseElement(
client,
null,
"/test1?wt=json",
"/test1",
cloudClient,
Arrays.asList("x"),
"X val",
@@ -246,11 +246,11 @@ public class TestDynamicLoading extends AbstractFullDistribZkTestBase {
" }\n" +
" }";
TestSolrConfigHandler.runConfigCommand(client,"/config/params?wt=json",payload);
TestSolrConfigHandler.runConfigCommand(client,"/config/params",payload);
result = TestSolrConfigHandler.testForResponseElement(
client,
null,
"/test1?wt=json",
"/test1",
cloudClient,
Arrays.asList("x"),
"X val changed",

View File

@@ -23,7 +23,6 @@ import org.apache.solr.metrics.reporters.SolrJmxReporter;
import org.apache.solr.util.AbstractSolrTestCase;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Ignore;
import org.junit.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -156,7 +155,7 @@ public class TestJmxIntegration extends AbstractSolrTestCase {
numDocs > oldNumDocs);
}
@Test @Ignore("timing problem? https://issues.apache.org/jira/browse/SOLR-2715")
@Test @AwaitsFix(bugUrl="https://issues.apache.org/jira/browse/SOLR-2715") // timing problem?
public void testJmxOnCoreReload() throws Exception {
String coreName = h.getCore().getName();
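// Illustrative note (not part of the original change): unlike JUnit's @Ignore, which always skips
// the test, LuceneTestCase's @AwaitsFix keeps the test compiled and runnable on demand (typically
// enabled with -Dtests.awaitsfix=true in the Lucene/Solr build) while recording the blocking JIRA.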

View File

@@ -106,7 +106,7 @@ public class TestSolrConfigHandler extends RestTestBase {
public void testProperty() throws Exception {
RestTestHarness harness = restTestHarness;
Map confMap = getRespMap("/config?wt=json", harness);
Map confMap = getRespMap("/config", harness);
assertNotNull(getObjectByPath(confMap, false, Arrays.asList("config", "requestHandler", "/admin/luke")));
assertNotNull(getObjectByPath(confMap, false, Arrays.asList("config", "requestHandler", "/admin/system")));
assertNotNull(getObjectByPath(confMap, false, Arrays.asList("config", "requestHandler", "/admin/mbeans")));
@@ -120,20 +120,20 @@ public class TestSolrConfigHandler extends RestTestBase {
String payload = "{\n" +
" 'set-property' : { 'updateHandler.autoCommit.maxDocs':100, 'updateHandler.autoCommit.maxTime':10 , 'requestDispatcher.requestParsers.addHttpRequestToContext':true} \n" +
" }";
runConfigCommand(harness, "/config?wt=json", payload);
runConfigCommand(harness, "/config", payload);
Map m = (Map) getRespMap("/config/overlay?wt=json", harness).get("overlay");
Map m = (Map) getRespMap("/config/overlay", harness).get("overlay");
Map props = (Map) m.get("props");
assertNotNull(props);
assertEquals("100", String.valueOf(getObjectByPath(props, true, ImmutableList.of("updateHandler", "autoCommit", "maxDocs"))));
assertEquals("10", String.valueOf(getObjectByPath(props, true, ImmutableList.of("updateHandler", "autoCommit", "maxTime"))));
m = getRespMap("/config/updateHandler?wt=json", harness);
m = getRespMap("/config/updateHandler", harness);
assertNotNull(getObjectByPath(m, true, ImmutableList.of("config","updateHandler", "commitWithin", "softCommit")));
assertNotNull(getObjectByPath(m, true, ImmutableList.of("config","updateHandler", "autoCommit", "maxDocs")));
assertNotNull(getObjectByPath(m, true, ImmutableList.of("config","updateHandler", "autoCommit", "maxTime")));
m = (Map) getRespMap("/config?wt=json", harness).get("config");
m = (Map) getRespMap("/config", harness).get("config");
assertNotNull(m);
assertEquals("100", String.valueOf(getObjectByPath(m, true, ImmutableList.of("updateHandler", "autoCommit", "maxDocs"))));
@@ -142,9 +142,9 @@ public class TestSolrConfigHandler extends RestTestBase {
payload = "{\n" +
" 'unset-property' : 'updateHandler.autoCommit.maxDocs'} \n" +
" }";
runConfigCommand(harness, "/config?wt=json", payload);
runConfigCommand(harness, "/config", payload);
m = (Map) getRespMap("/config/overlay?wt=json", harness).get("overlay");
m = (Map) getRespMap("/config/overlay", harness).get("overlay");
props = (Map) m.get("props");
assertNotNull(props);
assertNull(getObjectByPath(props, true, ImmutableList.of("updateHandler", "autoCommit", "maxDocs")));
@@ -157,15 +157,15 @@ public class TestSolrConfigHandler extends RestTestBase {
" 'set-user-property' : { 'my.custom.variable.a':'MODIFIEDA'," +
" 'my.custom.variable.b':'MODIFIEDB' } \n" +
" }";
runConfigCommand(harness, "/config?wt=json", payload);
runConfigCommand(harness, "/config", payload);
Map m = (Map) getRespMap("/config/overlay?wt=json", harness).get("overlay");
Map m = (Map) getRespMap("/config/overlay", harness).get("overlay");
Map props = (Map) m.get("userProps");
assertNotNull(props);
assertEquals(props.get("my.custom.variable.a"), "MODIFIEDA");
assertEquals(props.get("my.custom.variable.b"), "MODIFIEDB");
m = (Map) getRespMap("/dump?wt=json&json.nl=map&initArgs=true", harness).get("initArgs");
m = (Map) getRespMap("/dump?json.nl=map&initArgs=true", harness).get("initArgs");
m = (Map) m.get(PluginInfo.DEFAULTS);
assertEquals("MODIFIEDA", m.get("a"));
@@ -191,11 +191,11 @@ public class TestSolrConfigHandler extends RestTestBase {
String payload = "{\n" +
"'create-requesthandler' : { 'name' : '/x', 'class': 'org.apache.solr.handler.DumpRequestHandler' , 'startup' : 'lazy'}\n" +
"}";
runConfigCommand(writeHarness, "/config?wt=json", payload);
runConfigCommand(writeHarness, "/config", payload);
testForResponseElement(writeHarness,
testServerBaseUrl,
"/config/overlay?wt=json",
"/config/overlay",
cloudSolrClient,
Arrays.asList("overlay", "requestHandler", "/x", "startup"),
"lazy",
@@ -205,11 +205,11 @@ public class TestSolrConfigHandler extends RestTestBase {
"'update-requesthandler' : { 'name' : '/x', 'class': 'org.apache.solr.handler.DumpRequestHandler' ,registerPath :'/solr,/v2', " +
" 'startup' : 'lazy' , 'a':'b' , 'defaults': {'def_a':'def A val', 'multival':['a','b','c']}}\n" +
"}";
runConfigCommand(writeHarness, "/config?wt=json", payload);
runConfigCommand(writeHarness, "/config", payload);
testForResponseElement(writeHarness,
testServerBaseUrl,
"/config/overlay?wt=json",
"/config/overlay",
cloudSolrClient,
Arrays.asList("overlay", "requestHandler", "/x", "a"),
"b",
@@ -222,10 +222,10 @@ public class TestSolrConfigHandler extends RestTestBase {
" 'defaults': {'a':'A','b':'B','c':'C'}}\n" +
"}";
runConfigCommand(writeHarness, "/config?wt=json", payload);
runConfigCommand(writeHarness, "/config", payload);
testForResponseElement(writeHarness,
testServerBaseUrl,
"/config/overlay?wt=json",
"/config/overlay",
cloudSolrClient,
Arrays.asList("overlay", "requestHandler", "/dump", "defaults", "c" ),
"C",
@@ -233,7 +233,7 @@ public class TestSolrConfigHandler extends RestTestBase {
testForResponseElement(writeHarness,
testServerBaseUrl,
"/x?wt=json&getdefaults=true&json.nl=map",
"/x?getdefaults=true&json.nl=map",
cloudSolrClient,
Arrays.asList("getdefaults", "def_a"),
"def A val",
@@ -241,7 +241,7 @@ public class TestSolrConfigHandler extends RestTestBase {
testForResponseElement(writeHarness,
testServerBaseUrl,
"/x?wt=json&param=multival&json.nl=map",
"/x?param=multival&json.nl=map",
cloudSolrClient,
Arrays.asList("params", "multival"),
Arrays.asList("a", "b", "c"),
@@ -250,12 +250,12 @@ public class TestSolrConfigHandler extends RestTestBase {
payload = "{\n" +
"'delete-requesthandler' : '/x'" +
"}";
runConfigCommand(writeHarness, "/config?wt=json", payload);
runConfigCommand(writeHarness, "/config", payload);
boolean success = false;
long startTime = System.nanoTime();
int maxTimeoutSeconds = 10;
while (TimeUnit.SECONDS.convert(System.nanoTime() - startTime, TimeUnit.NANOSECONDS) < maxTimeoutSeconds) {
String uri = "/config/overlay?wt=json";
String uri = "/config/overlay";
Map m = testServerBaseUrl == null ? getRespMap(uri, writeHarness) : TestSolrConfigHandlerConcurrent.getAsMap(testServerBaseUrl + uri, cloudSolrClient);
if (null == Utils.getObjectByPath(m, true, Arrays.asList("overlay", "requestHandler", "/x", "a"))) {
success = true;
@@ -269,10 +269,10 @@ public class TestSolrConfigHandler extends RestTestBase {
payload = "{\n" +
"'create-queryconverter' : { 'name' : 'qc', 'class': 'org.apache.solr.spelling.SpellingQueryConverter'}\n" +
"}";
runConfigCommand(writeHarness, "/config?wt=json", payload);
runConfigCommand(writeHarness, "/config", payload);
testForResponseElement(writeHarness,
testServerBaseUrl,
"/config?wt=json",
"/config",
cloudSolrClient,
Arrays.asList("config", "queryConverter", "qc", "class"),
"org.apache.solr.spelling.SpellingQueryConverter",
@@ -280,10 +280,10 @@ public class TestSolrConfigHandler extends RestTestBase {
payload = "{\n" +
"'update-queryconverter' : { 'name' : 'qc', 'class': 'org.apache.solr.spelling.SuggestQueryConverter'}\n" +
"}";
runConfigCommand(writeHarness, "/config?wt=json", payload);
runConfigCommand(writeHarness, "/config", payload);
testForResponseElement(writeHarness,
testServerBaseUrl,
"/config?wt=json",
"/config",
cloudSolrClient,
Arrays.asList("config", "queryConverter", "qc", "class"),
"org.apache.solr.spelling.SuggestQueryConverter",
@@ -292,10 +292,10 @@ public class TestSolrConfigHandler extends RestTestBase {
payload = "{\n" +
"'delete-queryconverter' : 'qc'" +
"}";
runConfigCommand(writeHarness, "/config?wt=json", payload);
runConfigCommand(writeHarness, "/config", payload);
testForResponseElement(writeHarness,
testServerBaseUrl,
"/config?wt=json",
"/config",
cloudSolrClient,
Arrays.asList("config", "queryConverter", "qc"),
null,
@@ -304,10 +304,10 @@ public class TestSolrConfigHandler extends RestTestBase {
payload = "{\n" +
"'create-searchcomponent' : { 'name' : 'tc', 'class': 'org.apache.solr.handler.component.TermsComponent'}\n" +
"}";
runConfigCommand(writeHarness, "/config?wt=json", payload);
runConfigCommand(writeHarness, "/config", payload);
testForResponseElement(writeHarness,
testServerBaseUrl,
"/config?wt=json",
"/config",
cloudSolrClient,
Arrays.asList("config", "searchComponent", "tc", "class"),
"org.apache.solr.handler.component.TermsComponent",
@@ -315,10 +315,10 @@ public class TestSolrConfigHandler extends RestTestBase {
payload = "{\n" +
"'update-searchcomponent' : { 'name' : 'tc', 'class': 'org.apache.solr.handler.component.TermVectorComponent' }\n" +
"}";
runConfigCommand(writeHarness, "/config?wt=json", payload);
runConfigCommand(writeHarness, "/config", payload);
testForResponseElement(writeHarness,
testServerBaseUrl,
"/config?wt=json",
"/config",
cloudSolrClient,
Arrays.asList("config", "searchComponent", "tc", "class"),
"org.apache.solr.handler.component.TermVectorComponent",
@@ -327,10 +327,10 @@ public class TestSolrConfigHandler extends RestTestBase {
payload = "{\n" +
"'delete-searchcomponent' : 'tc'" +
"}";
runConfigCommand(writeHarness, "/config?wt=json", payload);
runConfigCommand(writeHarness, "/config", payload);
testForResponseElement(writeHarness,
testServerBaseUrl,
"/config?wt=json",
"/config",
cloudSolrClient,
Arrays.asList("config", "searchComponent", "tc"),
null,
@@ -339,10 +339,10 @@ public class TestSolrConfigHandler extends RestTestBase {
payload = "{\n" +
"'create-valuesourceparser' : { 'name' : 'cu', 'class': 'org.apache.solr.core.CountUsageValueSourceParser'}\n" +
"}";
runConfigCommand(writeHarness, "/config?wt=json", payload);
runConfigCommand(writeHarness, "/config", payload);
testForResponseElement(writeHarness,
testServerBaseUrl,
"/config?wt=json",
"/config",
cloudSolrClient,
Arrays.asList("config", "valueSourceParser", "cu", "class"),
"org.apache.solr.core.CountUsageValueSourceParser",
@ -353,10 +353,10 @@ public class TestSolrConfigHandler extends RestTestBase {
payload = "{\n" +
"'update-valuesourceparser' : { 'name' : 'cu', 'class': 'org.apache.solr.search.function.NvlValueSourceParser'}\n" +
"}";
runConfigCommand(writeHarness, "/config?wt=json", payload);
runConfigCommand(writeHarness, "/config", payload);
testForResponseElement(writeHarness,
testServerBaseUrl,
"/config?wt=json",
"/config",
cloudSolrClient,
Arrays.asList("config", "valueSourceParser", "cu", "class"),
"org.apache.solr.search.function.NvlValueSourceParser",
@ -365,10 +365,10 @@ public class TestSolrConfigHandler extends RestTestBase {
payload = "{\n" +
"'delete-valuesourceparser' : 'cu'" +
"}";
runConfigCommand(writeHarness, "/config?wt=json", payload);
runConfigCommand(writeHarness, "/config", payload);
testForResponseElement(writeHarness,
testServerBaseUrl,
"/config?wt=json",
"/config",
cloudSolrClient,
Arrays.asList("config", "valueSourceParser", "cu"),
null,
@ -379,10 +379,10 @@ public class TestSolrConfigHandler extends RestTestBase {
payload = "{\n" +
"'create-transformer' : { 'name' : 'mytrans', 'class': 'org.apache.solr.response.transform.ValueAugmenterFactory', 'value':'5'}\n" +
"}";
runConfigCommand(writeHarness, "/config?wt=json", payload);
runConfigCommand(writeHarness, "/config", payload);
testForResponseElement(writeHarness,
testServerBaseUrl,
"/config?wt=json",
"/config",
cloudSolrClient,
Arrays.asList("config", "transformer", "mytrans", "class"),
"org.apache.solr.response.transform.ValueAugmenterFactory",
@ -391,10 +391,10 @@ public class TestSolrConfigHandler extends RestTestBase {
payload = "{\n" +
"'update-transformer' : { 'name' : 'mytrans', 'class': 'org.apache.solr.response.transform.ValueAugmenterFactory', 'value':'6'}\n" +
"}";
runConfigCommand(writeHarness, "/config?wt=json", payload);
runConfigCommand(writeHarness, "/config", payload);
testForResponseElement(writeHarness,
testServerBaseUrl,
"/config?wt=json",
"/config",
cloudSolrClient,
Arrays.asList("config", "transformer", "mytrans", "value"),
"6",
@ -404,10 +404,10 @@ public class TestSolrConfigHandler extends RestTestBase {
"'delete-transformer' : 'mytrans'," +
"'create-initparams' : { 'name' : 'hello', 'key':'val'}\n" +
"}";
runConfigCommand(writeHarness, "/config?wt=json", payload);
runConfigCommand(writeHarness, "/config", payload);
Map map = testForResponseElement(writeHarness,
testServerBaseUrl,
"/config?wt=json",
"/config",
cloudSolrClient,
Arrays.asList("config", "transformer", "mytrans"),
null,
@ -431,10 +431,10 @@ public class TestSolrConfigHandler extends RestTestBase {
" }\n" +
" }\n" +
"}";
runConfigCommand(writeHarness, "/config?wt=json", payload);
runConfigCommand(writeHarness, "/config", payload);
map = testForResponseElement(writeHarness,
testServerBaseUrl,
"/config?wt=json",
"/config",
cloudSolrClient,
Arrays.asList("config", "searchComponent","myspellcheck", "spellchecker", "class"),
"solr.DirectSolrSpellChecker",
@ -449,16 +449,16 @@ public class TestSolrConfigHandler extends RestTestBase {
" {name: s2,lookupImpl: FuzzyLookupFactory , dictionaryImpl : DocumentExpressionDictionaryFactory}]" +
" }\n" +
"}";
runConfigCommand(writeHarness, "/config?wt=json", payload);
runConfigCommand(writeHarness, "/config", payload);
map = testForResponseElement(writeHarness,
testServerBaseUrl,
"/config?wt=json",
"/config",
cloudSolrClient,
Arrays.asList("config", "requestHandler","/dump100", "class"),
"org.apache.solr.handler.DumpRequestHandler",
10);
map = getRespMap("/dump100?wt=json&json.nl=arrmap&initArgs=true", writeHarness);
map = getRespMap("/dump100?json.nl=arrmap&initArgs=true", writeHarness);
List initArgs = (List) map.get("initArgs");
assertNotNull(initArgs);
assertTrue(initArgs.size() >= 2);
@ -471,11 +471,11 @@ public class TestSolrConfigHandler extends RestTestBase {
" registerPath :'/solr,/v2'"+
", 'startup' : 'lazy'}\n" +
"}";
runConfigCommand(writeHarness, "/config?wt=json", payload);
runConfigCommand(writeHarness, "/config", payload);
testForResponseElement(writeHarness,
testServerBaseUrl,
"/config/overlay?wt=json",
"/config/overlay",
cloudSolrClient,
Arrays.asList("overlay", "requestHandler", "/dump101", "startup"),
"lazy",
@ -484,18 +484,18 @@ public class TestSolrConfigHandler extends RestTestBase {
payload = "{\n" +
"'add-cache' : {name:'lfuCacheDecayFalse', class:'solr.search.LFUCache', size:10 ,initialSize:9 , timeDecay:false }," +
"'add-cache' : {name: 'perSegFilter', class: 'solr.search.LRUCache', size:10, initialSize:0 , autowarmCount:10}}";
runConfigCommand(writeHarness, "/config?wt=json", payload);
runConfigCommand(writeHarness, "/config", payload);
map = testForResponseElement(writeHarness,
testServerBaseUrl,
"/config/overlay?wt=json",
"/config/overlay",
cloudSolrClient,
Arrays.asList("overlay", "cache", "lfuCacheDecayFalse", "class"),
"solr.search.LFUCache",
10);
assertEquals("solr.search.LRUCache",getObjectByPath(map, true, ImmutableList.of("overlay", "cache", "perSegFilter", "class")));
map = getRespMap("/dump101?cacheNames=lfuCacheDecayFalse&cacheNames=perSegFilter&wt=json", writeHarness);
map = getRespMap("/dump101?cacheNames=lfuCacheDecayFalse&cacheNames=perSegFilter", writeHarness);
assertEquals("Actual output "+ Utils.toJSONString(map), "org.apache.solr.search.LRUCache",getObjectByPath(map, true, ImmutableList.of( "caches", "perSegFilter")));
assertEquals("Actual output "+ Utils.toJSONString(map), "org.apache.solr.search.LFUCache",getObjectByPath(map, true, ImmutableList.of( "caches", "lfuCacheDecayFalse")));
@ -569,12 +569,12 @@ public class TestSolrConfigHandler extends RestTestBase {
" }";
TestSolrConfigHandler.runConfigCommand(harness, "/config/params?wt=json", payload);
TestSolrConfigHandler.runConfigCommand(harness, "/config/params", payload);
TestSolrConfigHandler.testForResponseElement(
harness,
null,
"/config/params?wt=json",
"/config/params",
null,
Arrays.asList("response", "params", "x", "a"),
"A val",
@ -583,7 +583,7 @@ public class TestSolrConfigHandler extends RestTestBase {
TestSolrConfigHandler.testForResponseElement(
harness,
null,
"/config/params?wt=json",
"/config/params",
null,
Arrays.asList("response", "params", "x", "b"),
"B val",
@ -593,12 +593,12 @@ public class TestSolrConfigHandler extends RestTestBase {
"'create-requesthandler' : { 'name' : '/d', registerPath :'/solr,/v2' , 'class': 'org.apache.solr.handler.DumpRequestHandler' }\n" +
"}";
TestSolrConfigHandler.runConfigCommand(harness, "/config?wt=json", payload);
TestSolrConfigHandler.runConfigCommand(harness, "/config", payload);
TestSolrConfigHandler.testForResponseElement(
harness,
null,
"/config/overlay?wt=json",
"/config/overlay",
null,
Arrays.asList("overlay", "requestHandler", "/d", "name"),
"/d",
@ -606,14 +606,14 @@ public class TestSolrConfigHandler extends RestTestBase {
TestSolrConfigHandler.testForResponseElement(harness,
null,
"/d?wt=json&useParams=x",
"/d?useParams=x",
null,
Arrays.asList("params", "a"),
"A val",
5);
TestSolrConfigHandler.testForResponseElement(harness,
null,
"/d?wt=json&useParams=x&a=fomrequest",
"/d?useParams=x&a=fomrequest",
null,
Arrays.asList("params", "a"),
"fomrequest",
@ -623,11 +623,11 @@ public class TestSolrConfigHandler extends RestTestBase {
"'create-requesthandler' : { 'name' : '/dump1', registerPath :'/solr,/v2' , 'class': 'org.apache.solr.handler.DumpRequestHandler', 'useParams':'x' }\n" +
"}";
TestSolrConfigHandler.runConfigCommand(harness, "/config?wt=json", payload);
TestSolrConfigHandler.runConfigCommand(harness, "/config", payload);
TestSolrConfigHandler.testForResponseElement(harness,
null,
"/config/overlay?wt=json",
"/config/overlay",
null,
Arrays.asList("overlay", "requestHandler", "/dump1", "name"),
"/dump1",
@ -636,7 +636,7 @@ public class TestSolrConfigHandler extends RestTestBase {
TestSolrConfigHandler.testForResponseElement(
harness,
null,
"/dump1?wt=json",
"/dump1",
null,
Arrays.asList("params", "a"),
"A val",
@ -652,12 +652,12 @@ public class TestSolrConfigHandler extends RestTestBase {
" }";
TestSolrConfigHandler.runConfigCommand(harness, "/config/params?wt=json", payload);
TestSolrConfigHandler.runConfigCommand(harness, "/config/params", payload);
TestSolrConfigHandler.testForResponseElement(
harness,
null,
"/config/params?wt=json",
"/config/params",
null,
Arrays.asList("response", "params", "y", "c"),
"CY val",
@ -665,7 +665,7 @@ public class TestSolrConfigHandler extends RestTestBase {
TestSolrConfigHandler.testForResponseElement(harness,
null,
"/dump1?wt=json&useParams=y",
"/dump1?useParams=y",
null,
Arrays.asList("params", "c"),
"CY val",
@ -675,7 +675,7 @@ public class TestSolrConfigHandler extends RestTestBase {
TestSolrConfigHandler.testForResponseElement(
harness,
null,
"/dump1?wt=json&useParams=y",
"/dump1?useParams=y",
null,
Arrays.asList("params", "b"),
"BY val",
@ -684,7 +684,7 @@ public class TestSolrConfigHandler extends RestTestBase {
TestSolrConfigHandler.testForResponseElement(
harness,
null,
"/dump1?wt=json&useParams=y",
"/dump1?useParams=y",
null,
Arrays.asList("params", "a"),
"A val",
@ -693,7 +693,7 @@ public class TestSolrConfigHandler extends RestTestBase {
TestSolrConfigHandler.testForResponseElement(
harness,
null,
"/dump1?wt=json&useParams=y",
"/dump1?useParams=y",
null,
Arrays.asList("params", "d"),
Arrays.asList("val 1", "val 2"),
@ -709,12 +709,12 @@ public class TestSolrConfigHandler extends RestTestBase {
" }";
TestSolrConfigHandler.runConfigCommand(harness, "/config/params?wt=json", payload);
TestSolrConfigHandler.runConfigCommand(harness, "/config/params", payload);
TestSolrConfigHandler.testForResponseElement(
harness,
null,
"/config/params?wt=json",
"/config/params",
null,
Arrays.asList("response", "params", "y", "c"),
"CY val modified",
@ -723,7 +723,7 @@ public class TestSolrConfigHandler extends RestTestBase {
TestSolrConfigHandler.testForResponseElement(
harness,
null,
"/config/params?wt=json",
"/config/params",
null,
Arrays.asList("response", "params", "y", "e"),
"EY val",
@ -738,11 +738,11 @@ public class TestSolrConfigHandler extends RestTestBase {
" }";
TestSolrConfigHandler.runConfigCommand(harness, "/config/params?wt=json", payload);
TestSolrConfigHandler.runConfigCommand(harness, "/config/params", payload);
TestSolrConfigHandler.testForResponseElement(
harness,
null,
"/config/params?wt=json",
"/config/params",
null,
Arrays.asList("response", "params", "y", "p"),
"P val",
@ -751,17 +751,17 @@ public class TestSolrConfigHandler extends RestTestBase {
TestSolrConfigHandler.testForResponseElement(
harness,
null,
"/config/params?wt=json",
"/config/params",
null,
Arrays.asList("response", "params", "y", "c"),
null,
10);
payload = " {'delete' : 'y'}";
TestSolrConfigHandler.runConfigCommand(harness, "/config/params?wt=json", payload);
TestSolrConfigHandler.runConfigCommand(harness, "/config/params", payload);
TestSolrConfigHandler.testForResponseElement(
harness,
null,
"/config/params?wt=json",
"/config/params",
null,
Arrays.asList("response", "params", "y", "p"),
null,
@ -786,10 +786,10 @@ public class TestSolrConfigHandler extends RestTestBase {
" }\n" +
"}";
TestSolrConfigHandler.runConfigCommand(harness, "/config?wt=json", payload);
TestSolrConfigHandler.runConfigCommand(harness, "/config", payload);
TestSolrConfigHandler.testForResponseElement(harness,
null,
"/config/overlay?wt=json",
"/config/overlay",
null,
Arrays.asList("overlay", "requestHandler", "aRequestHandler", "class"),
"org.apache.solr.handler.DumpRequestHandler",

View File

@ -46,7 +46,7 @@ public class CheckBackupStatus extends SolrTestCaseJ4 {
}
public void fetchStatus() throws IOException {
String masterUrl = client.getBaseURL() + "/" + coreName + ReplicationHandler.PATH + "?command=" + ReplicationHandler.CMD_DETAILS;
String masterUrl = client.getBaseURL() + "/" + coreName + ReplicationHandler.PATH + "?wt=xml&command=" + ReplicationHandler.CMD_DETAILS;
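// JSON is the default response format now, so ask for XML explicitly; the check below pattern-matches the raw response text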
response = client.getHttpClient().execute(new HttpGet(masterUrl), new BasicResponseHandler());
if(pException.matcher(response).find()) {
fail("Failed to create backup");

View File

@ -121,7 +121,7 @@ public class TestConfigReload extends AbstractFullDistribZkTestBase {
while ( TimeUnit.SECONDS.convert(System.nanoTime() - startTime, TimeUnit.NANOSECONDS) < maxTimeoutSeconds){
Thread.sleep(50);
for (String url : urls) {
Map respMap = getAsMap(url+uri+"?wt=json");
Map respMap = getAsMap(url+uri);
if(String.valueOf(newVersion).equals(String.valueOf( getObjectByPath(respMap, true, asList(name, "znodeVersion"))))){
succeeded.add(url);
}

View File

@ -267,7 +267,7 @@ public class TestReplicationHandlerBackup extends SolrJettyTestBase {
public static void runBackupCommand(JettySolrRunner masterJetty, String cmd, String params) throws IOException {
String masterUrl = buildUrl(masterJetty.getLocalPort(), context) + "/" + DEFAULT_TEST_CORENAME
+ ReplicationHandler.PATH+"?command=" + cmd + params;
+ ReplicationHandler.PATH+"?wt=xml&command=" + cmd + params;
InputStream stream = null;
try {
URL url = new URL(masterUrl);
@ -290,7 +290,7 @@ public class TestReplicationHandlerBackup extends SolrJettyTestBase {
}
public boolean fetchStatus() throws IOException {
String masterUrl = buildUrl(masterJetty.getLocalPort(), context) + "/" + DEFAULT_TEST_CORENAME + ReplicationHandler.PATH + "?command=" + ReplicationHandler.CMD_DETAILS;
String masterUrl = buildUrl(masterJetty.getLocalPort(), context) + "/" + DEFAULT_TEST_CORENAME + ReplicationHandler.PATH + "?wt=xml&command=" + ReplicationHandler.CMD_DETAILS;
URL url;
InputStream stream = null;
try {

View File

@ -94,12 +94,12 @@ public class TestReqParamsAPI extends SolrCloudTestCase {
"'create-requesthandler' : { 'name' : '/dump0', 'class': 'org.apache.solr.handler.DumpRequestHandler' }\n" +
"}";
TestSolrConfigHandler.runConfigCommand(writeHarness, "/config?wt=json", payload);
TestSolrConfigHandler.runConfigCommand(writeHarness, "/config", payload);
payload = "{\n" +
"'create-requesthandler' : { 'name' : '/dump1', 'class': 'org.apache.solr.handler.DumpRequestHandler', 'useParams':'x' }\n" +
"}";
TestSolrConfigHandler.runConfigCommand(writeHarness, "/config?wt=json", payload);
TestSolrConfigHandler.runConfigCommand(writeHarness, "/config", payload);
AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(COLL_NAME, cloudClient.getZkStateReader(), false, true, 90);
@ -110,11 +110,11 @@ public class TestReqParamsAPI extends SolrCloudTestCase {
" }\n" +
" }";
TestSolrConfigHandler.runConfigCommand(writeHarness, "/config/params?wt=json", payload);
TestSolrConfigHandler.runConfigCommand(writeHarness, "/config/params", payload);
Map result = TestSolrConfigHandler.testForResponseElement(null,
urls.get(random().nextInt(urls.size())),
"/config/params?wt=json",
"/config/params",
cloudClient,
asList("response", "params", "x", "a"),
"A val",
@ -123,7 +123,7 @@ public class TestReqParamsAPI extends SolrCloudTestCase {
TestSolrConfigHandler.testForResponseElement(null,
urls.get(random().nextInt(urls.size())),
"/config/overlay?wt=json",
"/config/overlay",
cloudClient,
asList("overlay", "requestHandler", "/dump0", "name"),
"/dump0",
@ -131,7 +131,7 @@ public class TestReqParamsAPI extends SolrCloudTestCase {
result = TestSolrConfigHandler.testForResponseElement(null,
urls.get(random().nextInt(urls.size())),
"/dump0?wt=json&useParams=x",
"/dump0?useParams=x",
cloudClient,
asList("params", "a"),
"A val",
@ -140,7 +140,7 @@ public class TestReqParamsAPI extends SolrCloudTestCase {
TestSolrConfigHandler.testForResponseElement(null,
urls.get(random().nextInt(urls.size())),
"/dump0?wt=json&useParams=x&a=fomrequest",
"/dump0?useParams=x&a=fomrequest",
cloudClient,
asList("params", "a"),
"fomrequest",
@ -148,7 +148,7 @@ public class TestReqParamsAPI extends SolrCloudTestCase {
result = TestSolrConfigHandler.testForResponseElement(null,
urls.get(random().nextInt(urls.size())),
"/config/overlay?wt=json",
"/config/overlay",
cloudClient,
asList("overlay", "requestHandler", "/dump1", "name"),
"/dump1",
@ -156,7 +156,7 @@ public class TestReqParamsAPI extends SolrCloudTestCase {
result = TestSolrConfigHandler.testForResponseElement(null,
urls.get(random().nextInt(urls.size())),
"/dump1?wt=json",
"/dump1",
cloudClient,
asList("params", "a"),
"A val",
@ -174,12 +174,12 @@ public class TestReqParamsAPI extends SolrCloudTestCase {
" }";
TestSolrConfigHandler.runConfigCommand(writeHarness, "/config/params?wt=json", payload);
TestSolrConfigHandler.runConfigCommand(writeHarness, "/config/params", payload);
result = TestSolrConfigHandler.testForResponseElement(
null,
urls.get(random().nextInt(urls.size())),
"/config/params?wt=json",
"/config/params",
cloudClient,
asList("response", "params", "y", "c"),
"CY val",
@ -190,7 +190,7 @@ public class TestReqParamsAPI extends SolrCloudTestCase {
result = TestSolrConfigHandler.testForResponseElement(null,
urls.get(random().nextInt(urls.size())),
"/dump1?wt=json&useParams=y",
"/dump1?useParams=y",
cloudClient,
asList("params", "c"),
"CY val",
@ -202,7 +202,7 @@ public class TestReqParamsAPI extends SolrCloudTestCase {
result = TestSolrConfigHandler.testForResponseElement(null,
urls.get(random().nextInt(urls.size())),
"/config/requestHandler?componentName=/dump1&expandParams=true&wt=json&useParams=y&c=CC",
"/config/requestHandler?componentName=/dump1&expandParams=true&useParams=y&c=CC",
cloudClient,
asList("config", "requestHandler","/dump1","_useParamsExpanded_","x", "a"),
"A val",
@ -224,12 +224,12 @@ public class TestReqParamsAPI extends SolrCloudTestCase {
" }";
TestSolrConfigHandler.runConfigCommand(writeHarness, "/config/params?wt=json", payload);
TestSolrConfigHandler.runConfigCommand(writeHarness, "/config/params", payload);
result = TestSolrConfigHandler.testForResponseElement(
null,
urls.get(random().nextInt(urls.size())),
"/config/params?wt=json",
"/config/params",
cloudClient,
asList("response", "params", "y", "c"),
"CY val modified",
@ -246,11 +246,11 @@ public class TestReqParamsAPI extends SolrCloudTestCase {
" }";
TestSolrConfigHandler.runConfigCommand(writeHarness, "/config/params?wt=json", payload);
TestSolrConfigHandler.runConfigCommand(writeHarness, "/config/params", payload);
result = TestSolrConfigHandler.testForResponseElement(
null,
urls.get(random().nextInt(urls.size())),
"/config/params?wt=json",
"/config/params",
cloudClient,
asList("response", "params", "y", "p"),
"P val",
@ -260,12 +260,12 @@ public class TestReqParamsAPI extends SolrCloudTestCase {
compareValues(result, 0l, asList("response", "params", "x", "","v"));
payload = "{update :{x : {_appends_ :{ add : 'first' }, _invariants_ : {fixed: f }}}}";
TestSolrConfigHandler.runConfigCommand(writeHarness, "/config/params?wt=json", payload);
TestSolrConfigHandler.runConfigCommand(writeHarness, "/config/params", payload);
result = TestSolrConfigHandler.testForResponseElement(
null,
urls.get(random().nextInt(urls.size())),
"/config/params?wt=json",
"/config/params",
cloudClient,
asList("response", "params", "x", "_appends_", "add"),
"first",
@ -275,7 +275,7 @@ public class TestReqParamsAPI extends SolrCloudTestCase {
result = TestSolrConfigHandler.testForResponseElement(null,
urls.get(random().nextInt(urls.size())),
"/dump1?wt=json&fixed=changeit&add=second",
"/dump1?fixed=changeit&add=second",
cloudClient,
asList("params", "fixed"),
"f",
@ -289,11 +289,11 @@ public class TestReqParamsAPI extends SolrCloudTestCase {
}, asList("params", "add"));
payload = " {'delete' : 'y'}";
TestSolrConfigHandler.runConfigCommand(writeHarness, "/config/params?wt=json", payload);
TestSolrConfigHandler.runConfigCommand(writeHarness, "/config/params", payload);
TestSolrConfigHandler.testForResponseElement(
null,
urls.get(random().nextInt(urls.size())),
"/config/params?wt=json",
"/config/params",
cloudClient,
asList("response", "params", "y", "p"),
null,

View File

@ -222,7 +222,7 @@ public class TestRestoreCore extends SolrJettyTestBase {
public static boolean fetchRestoreStatus (String baseUrl, String coreName) throws IOException {
String masterUrl = baseUrl + "/" + coreName +
ReplicationHandler.PATH + "?command=" + ReplicationHandler.CMD_RESTORE_STATUS;
ReplicationHandler.PATH + "?wt=xml&command=" + ReplicationHandler.CMD_RESTORE_STATUS;
final Pattern pException = Pattern.compile("<str name=\"exception\">(.*?)</str>");
InputStream stream = null;

View File

@ -2554,12 +2554,13 @@ public class TestSQLHandler extends AbstractFullDistribZkTestBase {
public void assertResponseContains(SolrClient server, SolrParams requestParams, String json) throws IOException, SolrServerException {
String p = requestParams.get("qt");
ModifiableSolrParams modifiableSolrParams = (ModifiableSolrParams) requestParams;
modifiableSolrParams.set("indent", modifiableSolrParams.get("indent", "off"));
if(p != null) {
ModifiableSolrParams modifiableSolrParams = (ModifiableSolrParams) requestParams;
modifiableSolrParams.remove("qt");
}
QueryRequest query = new QueryRequest( requestParams );
QueryRequest query = new QueryRequest( modifiableSolrParams );
query.setPath(p);
query.setResponseParser(new InputStreamResponseParser("json"));
query.setMethod(SolrRequest.METHOD.POST);

View File

@ -76,12 +76,12 @@ public class TestSolrConfigHandlerCloud extends AbstractFullDistribZkTestBase {
"'create-requesthandler' : { 'name' : '/admin/luke', " +
"'class': 'org.apache.solr.handler.DumpRequestHandler'}}";
TestSolrConfigHandler.runConfigCommand(writeHarness, "/config?wt=json", payload);
TestSolrConfigHandler.runConfigCommand(writeHarness, "/config", payload);
TestSolrConfigHandler.testForResponseElement(writeHarness,
testServerBaseUrl,
"/config/overlay?wt=json",
"/config/overlay",
cloudClient,
Arrays.asList("overlay", "requestHandler", "/admin/luke", "class"),
"org.apache.solr.handler.DumpRequestHandler",
@ -124,11 +124,11 @@ public class TestSolrConfigHandlerCloud extends AbstractFullDistribZkTestBase {
" }";
TestSolrConfigHandler.runConfigCommand(writeHarness,"/config/params?wt=json", payload);
TestSolrConfigHandler.runConfigCommand(writeHarness,"/config/params", payload);
Map result = TestSolrConfigHandler.testForResponseElement(null,
urls.get(random().nextInt(urls.size())),
"/config/params?wt=json",
"/config/params",
cloudClient,
asList("response", "params", "x", "a"),
"A val",
@ -139,11 +139,11 @@ public class TestSolrConfigHandlerCloud extends AbstractFullDistribZkTestBase {
"'update-requesthandler' : { 'name' : '/dump', 'class': 'org.apache.solr.handler.DumpRequestHandler' }\n" +
"}";
TestSolrConfigHandler.runConfigCommand(writeHarness, "/config?wt=json", payload);
TestSolrConfigHandler.runConfigCommand(writeHarness, "/config", payload);
TestSolrConfigHandler.testForResponseElement(null,
urls.get(random().nextInt(urls.size())),
"/config/overlay?wt=json",
"/config/overlay",
cloudClient,
asList("overlay", "requestHandler", "/dump", "name"),
"/dump",
@ -151,7 +151,7 @@ public class TestSolrConfigHandlerCloud extends AbstractFullDistribZkTestBase {
result = TestSolrConfigHandler.testForResponseElement(null,
urls.get(random().nextInt(urls.size())),
"/dump?wt=json&useParams=x",
"/dump?useParams=x",
cloudClient,
asList("params", "a"),
"A val",
@ -160,7 +160,7 @@ public class TestSolrConfigHandlerCloud extends AbstractFullDistribZkTestBase {
TestSolrConfigHandler.testForResponseElement(null,
urls.get(random().nextInt(urls.size())),
"/dump?wt=json&useParams=x&a=fomrequest",
"/dump?useParams=x&a=fomrequest",
cloudClient,
asList("params", "a"),
"fomrequest",
@ -170,11 +170,11 @@ public class TestSolrConfigHandlerCloud extends AbstractFullDistribZkTestBase {
"'create-requesthandler' : { 'name' : '/dump1', 'class': 'org.apache.solr.handler.DumpRequestHandler', 'useParams':'x' }\n" +
"}";
TestSolrConfigHandler.runConfigCommand(writeHarness,"/config?wt=json", payload);
TestSolrConfigHandler.runConfigCommand(writeHarness,"/config", payload);
result = TestSolrConfigHandler.testForResponseElement(null,
urls.get(random().nextInt(urls.size())),
"/config/overlay?wt=json",
"/config/overlay",
cloudClient,
asList("overlay", "requestHandler", "/dump1", "name"),
"/dump1",
@ -182,7 +182,7 @@ public class TestSolrConfigHandlerCloud extends AbstractFullDistribZkTestBase {
result = TestSolrConfigHandler.testForResponseElement(null,
urls.get(random().nextInt(urls.size())),
"/dump1?wt=json",
"/dump1",
cloudClient,
asList("params", "a"),
"A val",
@ -201,12 +201,12 @@ public class TestSolrConfigHandlerCloud extends AbstractFullDistribZkTestBase {
" }";
TestSolrConfigHandler.runConfigCommand(writeHarness,"/config/params?wt=json", payload);
TestSolrConfigHandler.runConfigCommand(writeHarness,"/config/params", payload);
result = TestSolrConfigHandler.testForResponseElement(
null,
urls.get(random().nextInt(urls.size())),
"/config/params?wt=json",
"/config/params",
cloudClient,
asList("response", "params", "y", "c"),
"CY val",
@ -216,7 +216,7 @@ public class TestSolrConfigHandlerCloud extends AbstractFullDistribZkTestBase {
result = TestSolrConfigHandler.testForResponseElement(null,
urls.get(random().nextInt(urls.size())),
"/dump?wt=json&useParams=y",
"/dump?useParams=y",
cloudClient,
asList("params", "c"),
"CY val",
@ -235,12 +235,12 @@ public class TestSolrConfigHandlerCloud extends AbstractFullDistribZkTestBase {
" }";
TestSolrConfigHandler.runConfigCommand(writeHarness,"/config/params?wt=json", payload);
TestSolrConfigHandler.runConfigCommand(writeHarness,"/config/params", payload);
result = TestSolrConfigHandler.testForResponseElement(
null,
urls.get(random().nextInt(urls.size())),
"/config/params?wt=json",
"/config/params",
cloudClient,
asList("response", "params", "y", "c"),
"CY val modified",
@ -257,11 +257,11 @@ public class TestSolrConfigHandlerCloud extends AbstractFullDistribZkTestBase {
" }";
TestSolrConfigHandler.runConfigCommand(writeHarness,"/config/params?wt=json", payload);
TestSolrConfigHandler.runConfigCommand(writeHarness,"/config/params", payload);
result = TestSolrConfigHandler.testForResponseElement(
null,
urls.get(random().nextInt(urls.size())),
"/config/params?wt=json",
"/config/params",
cloudClient,
asList("response", "params", "y", "p"),
"P val",
@ -269,11 +269,11 @@ public class TestSolrConfigHandlerCloud extends AbstractFullDistribZkTestBase {
compareValues(result, null, asList("response", "params", "y", "c"));
payload = " {'delete' : 'y'}";
TestSolrConfigHandler.runConfigCommand(writeHarness,"/config/params?wt=json", payload);
TestSolrConfigHandler.runConfigCommand(writeHarness,"/config/params", payload);
TestSolrConfigHandler.testForResponseElement(
null,
urls.get(random().nextInt(urls.size())),
"/config/params?wt=json",
"/config/params",
cloudClient,
asList("response", "params", "y", "p"),
null,

View File

@ -143,7 +143,7 @@ public class TestSolrConfigHandlerConcurrent extends AbstractFullDistribZkTestBa
val3 = String.valueOf(10 * i + 3);
payload = payload.replace("CACHEVAL3", val3);
response = publisher.post("/config?wt=json", SolrTestCaseJ4.json(payload));
response = publisher.post("/config", SolrTestCaseJ4.json(payload));
} finally {
publisher.close();
}
@ -171,7 +171,7 @@ public class TestSolrConfigHandlerConcurrent extends AbstractFullDistribZkTestBa
while ( TimeUnit.SECONDS.convert(System.nanoTime() - startTime, TimeUnit.NANOSECONDS) < maxTimeoutSeconds) {
Thread.sleep(100);
errmessages.clear();
Map respMap = getAsMap(url+"/config/overlay?wt=json", cloudClient);
Map respMap = getAsMap(url+"/config/overlay", cloudClient);
Map m = (Map) respMap.get("overlay");
if(m!= null) m = (Map) m.get("props");
if(m == null) {

View File

@ -51,7 +51,7 @@ public class JSONWriterTest extends SolrTestCaseJ4 {
@Test
public void testTypes() throws IOException {
SolrQueryRequest req = req("dummy");
SolrQueryRequest req = req("q", "dummy", "indent","off");
SolrQueryResponse rsp = new SolrQueryResponse();
QueryResponseWriter w = new PythonResponseWriter();
@ -90,7 +90,7 @@ public class JSONWriterTest extends SolrTestCaseJ4 {
}
private void implTestJSON(final String namedListStyle) throws IOException {
SolrQueryRequest req = req("wt","json","json.nl",namedListStyle);
SolrQueryRequest req = req("wt","json","json.nl",namedListStyle, "indent", "off");
SolrQueryResponse rsp = new SolrQueryResponse();
JSONResponseWriter w = new JSONResponseWriter();

View File

@ -23,7 +23,6 @@ import java.util.HashSet;
import java.util.List;
import java.util.Locale;
import java.util.Set;
import org.apache.solr.SolrTestCaseJ4;
import org.apache.solr.common.SolrInputDocument;
import org.apache.solr.common.util.SuppressForbidden;
@ -257,6 +256,7 @@ public class TestExportWriter extends SolrTestCaseJ4 {
@Test
@SuppressForbidden(reason="using new Date(time) to create random dates")
public void testRandomNumerics() throws Exception {
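// dates are formatted to strings up front so the multi-valued Set below enforces uniqueness on the exact value that gets indexed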
SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'", Locale.ROOT);
assertU(delQ("*:*"));
assertU(commit());
List<String> trieFields = new ArrayList<String>();
@ -297,20 +297,20 @@ public class TestExportWriter extends SolrTestCaseJ4 {
addLong(doc, random().nextLong(), false);
addFloat(doc, random().nextFloat() * 3000 * (random().nextBoolean()?1:-1), false);
addDouble(doc, random().nextDouble() * 3000 * (random().nextBoolean()?1:-1), false);
addDate(doc, new Date(), false);
addDate(doc, dateFormat.format(new Date()), false);
// MV need to be unique in order to be the same in Trie vs Points
Set<Integer> ints = new HashSet<>();
Set<Long> longs = new HashSet<>();
Set<Float> floats = new HashSet<>();
Set<Double> doubles = new HashSet<>();
Set<Date> dates = new HashSet<>();
Set<String> dates = new HashSet<>();
for (int j=0; j < random().nextInt(20); j++) {
ints.add(random().nextInt());
longs.add(random().nextLong());
floats.add(random().nextFloat() * 3000 * (random().nextBoolean()?1:-1));
doubles.add(random().nextDouble() * 3000 * (random().nextBoolean()?1:-1));
dates.add(new Date(System.currentTimeMillis() + random().nextInt()));
dates.add(dateFormat.format(new Date(System.currentTimeMillis() + random().nextInt())));
}
ints.stream().forEach((val)->addInt(doc, val, true));
longs.stream().forEach((val)->addLong(doc, val, true));
@ -356,8 +356,8 @@ public class TestExportWriter extends SolrTestCaseJ4 {
addField(doc, "i", String.valueOf(value), mv);
}
private void addDate(SolrInputDocument doc, Date value, boolean mv) {
addField(doc, "dt", new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'", Locale.ROOT).format(value), mv);
private void addDate(SolrInputDocument doc, String value, boolean mv) {
addField(doc, "dt", value, mv);
}
private void addField(SolrInputDocument doc, String type, String value, boolean mv) {

View File

@ -138,7 +138,12 @@ public class TestRawResponseWriter extends SolrTestCaseJ4 {
// check response against each writer
// xml & none (default behavior same as XML)
String xml = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<response>\n<str name=\"content\">test</str><str name=\"foo\">bar</str>\n</response>\n";
String xml = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n" +
"<response>\n" +
"\n" +
"<str name=\"content\">test</str>\n" +
"<str name=\"foo\">bar</str>\n" +
"</response>\n";
StringWriter xmlSout = new StringWriter();
writerXmlBase.write(xmlSout, req(), rsp);
assertEquals(xml, xmlSout.toString());
@ -154,7 +159,9 @@ public class TestRawResponseWriter extends SolrTestCaseJ4 {
assertEquals(xml, noneBout.toString(StandardCharsets.UTF_8.toString()));
// json
String json = "{\"content\":\"test\",\"foo\":\"bar\"}\n";
String json = "{\n" +
" \"content\":\"test\",\n" +
" \"foo\":\"bar\"}\n";
StringWriter jsonSout = new StringWriter();
writerJsonBase.write(jsonSout, req(), rsp);
assertEquals(json, jsonSout.toString());

View File

@ -107,7 +107,7 @@ public class TestBulkSchemaAPI extends RestTestBase {
" }\n" +
" }";
String response = restTestHarness.post("/schema?wt=json", json(payload));
String response = restTestHarness.post("/schema", json(payload));
Map map = (Map) ObjectBuilder.getVal(new JSONParser(new StringReader(response)));
List l = (List) map.get("errors");
assertNotNull("No errors", l);
@ -142,7 +142,7 @@ public class TestBulkSchemaAPI extends RestTestBase {
" }\n"+
"}}";
String response = restTestHarness.post("/schema?wt=json",
String response = restTestHarness.post("/schema",
json(addFieldTypeAnalyzerWithClass + ',' + charFilters + tokenizer + filters + suffix));
Map map = (Map)ObjectBuilder.getVal(new JSONParser(new StringReader(response)));
List list = (List)map.get("errors");
@ -151,7 +151,7 @@ public class TestBulkSchemaAPI extends RestTestBase {
assertTrue (((String)errorList.get(0)).contains
("An analyzer with a class property may not define any char filters!"));
response = restTestHarness.post("/schema?wt=json",
response = restTestHarness.post("/schema",
json(addFieldTypeAnalyzerWithClass + ',' + tokenizer + filters + suffix));
map = (Map)ObjectBuilder.getVal(new JSONParser(new StringReader(response)));
list = (List)map.get("errors");
@ -160,7 +160,7 @@ public class TestBulkSchemaAPI extends RestTestBase {
assertTrue (((String)errorList.get(0)).contains
("An analyzer with a class property may not define a tokenizer!"));
response = restTestHarness.post("/schema?wt=json",
response = restTestHarness.post("/schema",
json(addFieldTypeAnalyzerWithClass + ',' + filters + suffix));
map = (Map)ObjectBuilder.getVal(new JSONParser(new StringReader(response)));
list = (List)map.get("errors");
@ -169,7 +169,7 @@ public class TestBulkSchemaAPI extends RestTestBase {
assertTrue (((String)errorList.get(0)).contains
("An analyzer with a class property may not define any filters!"));
response = restTestHarness.post("/schema?wt=json", json(addFieldTypeAnalyzerWithClass + suffix));
response = restTestHarness.post("/schema", json(addFieldTypeAnalyzerWithClass + suffix));
map = (Map)ObjectBuilder.getVal(new JSONParser(new StringReader(response)));
assertNull(response, map.get("errors"));
@ -203,7 +203,7 @@ public class TestBulkSchemaAPI extends RestTestBase {
" }\n" +
" }";
String response = harness.post("/schema?wt=json", json(payload));
String response = harness.post("/schema", json(payload));
map = (Map)ObjectBuilder.getVal(new JSONParser(new StringReader(response)));
assertNull(response, map.get("errors"));
@ -226,7 +226,7 @@ public class TestBulkSchemaAPI extends RestTestBase {
" }\n" +
" }";
String response = harness.post("/schema?wt=json", json(payload));
String response = harness.post("/schema", json(payload));
Map map = (Map)ObjectBuilder.getVal(new JSONParser(new StringReader(response)));
assertNotNull(response, map.get("errors"));
@ -249,7 +249,7 @@ public class TestBulkSchemaAPI extends RestTestBase {
" }\n" +
"}";
String response = harness.post("/schema?wt=json", json(payload));
String response = harness.post("/schema", json(payload));
Map map = (Map)ObjectBuilder.getVal(new JSONParser(new StringReader(response)));
assertNotNull(response, map.get("errors"));
@ -271,7 +271,7 @@ public class TestBulkSchemaAPI extends RestTestBase {
" }\n" +
"}";
response = harness.post("/schema?wt=json", json(payload));
response = harness.post("/schema", json(payload));
map = (Map)ObjectBuilder.getVal(new JSONParser(new StringReader(response)));
assertNotNull(response, map.get("errors"));
}
@ -302,7 +302,7 @@ public class TestBulkSchemaAPI extends RestTestBase {
" }\n" +
"}";
String response = harness.post("/schema?wt=json", json(payload));
String response = harness.post("/schema", json(payload));
map = (Map)ObjectBuilder.getVal(new JSONParser(new StringReader(response)));
assertNull(response, map.get("errors"));
@ -319,7 +319,7 @@ public class TestBulkSchemaAPI extends RestTestBase {
" }\n" +
" }";
response = harness.post("/schema?wt=json", json(payload));
response = harness.post("/schema", json(payload));
map = (Map)ObjectBuilder.getVal(new JSONParser(new StringReader(response)));
assertNull(response, map.get("errors"));
@ -499,7 +499,7 @@ public class TestBulkSchemaAPI extends RestTestBase {
" }\n" +
" }\n";
String response = harness.post("/schema?wt=json", json(payload));
String response = harness.post("/schema", json(payload));
Map map = (Map) ObjectBuilder.getVal(new JSONParser(new StringReader(response)));
assertNull(response, map.get("errors"));
@ -636,7 +636,7 @@ public class TestBulkSchemaAPI extends RestTestBase {
" {'source':'NewField4', 'dest':['NewField1'], maxChars: 3333 }]\n" +
"}\n";
String response = harness.post("/schema?wt=json", json(cmds));
String response = harness.post("/schema", json(cmds));
map = (Map)ObjectBuilder.getVal(new JSONParser(new StringReader(response)));
assertNull(response, map.get("errors"));
@ -691,14 +691,14 @@ public class TestBulkSchemaAPI extends RestTestBase {
assertNull(map.get("NewField3"));
cmds = "{'delete-field-type' : {'name':'NewFieldType'}}";
response = harness.post("/schema?wt=json", json(cmds));
response = harness.post("/schema", json(cmds));
map = (Map)ObjectBuilder.getVal(new JSONParser(new StringReader(response)));
Object errors = map.get("errors");
assertNotNull(errors);
assertTrue(errors.toString().contains("Can't delete 'NewFieldType' because it's the field type of "));
cmds = "{'delete-field' : {'name':'NewField1'}}";
response = harness.post("/schema?wt=json", json(cmds));
response = harness.post("/schema", json(cmds));
map = (Map)ObjectBuilder.getVal(new JSONParser(new StringReader(response)));
errors = map.get("errors");
assertNotNull(errors);
@ -706,7 +706,7 @@ public class TestBulkSchemaAPI extends RestTestBase {
("Can't delete field 'NewField1' because it's referred to by at least one copy field directive"));
cmds = "{'delete-field' : {'name':'NewField2'}}";
response = harness.post("/schema?wt=json", json(cmds));
response = harness.post("/schema", json(cmds));
map = (Map)ObjectBuilder.getVal(new JSONParser(new StringReader(response)));
errors = map.get("errors");
assertNotNull(errors);
@ -714,7 +714,7 @@ public class TestBulkSchemaAPI extends RestTestBase {
("Can't delete field 'NewField2' because it's referred to by at least one copy field directive"));
cmds = "{'replace-field' : {'name':'NewField1', 'type':'string'}}";
response = harness.post("/schema?wt=json", json(cmds));
response = harness.post("/schema", json(cmds));
map = (Map)ObjectBuilder.getVal(new JSONParser(new StringReader(response)));
assertNull(map.get("errors"));
// Make sure the copy field directives with source NewField1 are preserved
@ -728,7 +728,7 @@ public class TestBulkSchemaAPI extends RestTestBase {
assertTrue(set.contains("NewDynamicField1A"));
cmds = "{'delete-dynamic-field' : {'name':'NewDynamicField1*'}}";
response = harness.post("/schema?wt=json", json(cmds));
response = harness.post("/schema", json(cmds));
map = (Map)ObjectBuilder.getVal(new JSONParser(new StringReader(response)));
errors = map.get("errors");
assertNotNull(errors);
@ -736,7 +736,7 @@ public class TestBulkSchemaAPI extends RestTestBase {
("copyField dest :'NewDynamicField1A' is not an explicit field and doesn't match a dynamicField."));
cmds = "{'replace-field' : {'name':'NewField2', 'type':'string'}}";
response = harness.post("/schema?wt=json", json(cmds));
response = harness.post("/schema", json(cmds));
map = (Map)ObjectBuilder.getVal(new JSONParser(new StringReader(response)));
errors = map.get("errors");
assertNull(errors);
@ -753,7 +753,7 @@ public class TestBulkSchemaAPI extends RestTestBase {
assertTrue(set.contains("NewDynamicField2*"));
cmds = "{'replace-dynamic-field' : {'name':'NewDynamicField2*', 'type':'string'}}";
response = harness.post("/schema?wt=json", json(cmds));
response = harness.post("/schema", json(cmds));
map = (Map)ObjectBuilder.getVal(new JSONParser(new StringReader(response)));
errors = map.get("errors");
assertNull(errors);
@ -763,7 +763,7 @@ public class TestBulkSchemaAPI extends RestTestBase {
assertEquals("NewField2", ((Map) list.get(0)).get("dest"));
cmds = "{'replace-dynamic-field' : {'name':'NewDynamicField1*', 'type':'string'}}";
response = harness.post("/schema?wt=json", json(cmds));
response = harness.post("/schema", json(cmds));
map = (Map)ObjectBuilder.getVal(new JSONParser(new StringReader(response)));
errors = map.get("errors");
assertNull(errors);
@ -773,7 +773,7 @@ public class TestBulkSchemaAPI extends RestTestBase {
assertEquals("NewField1", ((Map) list.get(0)).get("source"));
cmds = "{'replace-field-type': {'name':'NewFieldType', 'class':'solr.BinaryField'}}";
response = harness.post("/schema?wt=json", json(cmds));
response = harness.post("/schema", json(cmds));
map = (Map)ObjectBuilder.getVal(new JSONParser(new StringReader(response)));
assertNull(map.get("errors"));
// Make sure the copy field directives with sources and destinations of type NewFieldType are preserved
@ -793,7 +793,7 @@ public class TestBulkSchemaAPI extends RestTestBase {
" {'source':'NewDynamicField3*', 'dest':'NewField3' },\n" +
" {'source':'NewField4', 'dest':['NewField1', 'NewField2', 'NewField3']}]\n" +
"}\n";
response = harness.post("/schema?wt=json", json(cmds));
response = harness.post("/schema", json(cmds));
map = (Map)ObjectBuilder.getVal(new JSONParser(new StringReader(response)));
assertNull(map.get("errors"));
list = getSourceCopyFields(harness, "NewField1");
@ -808,7 +808,7 @@ public class TestBulkSchemaAPI extends RestTestBase {
assertEquals(0, list.size());
cmds = "{'delete-field': [{'name':'NewField1'},{'name':'NewField2'},{'name':'NewField3'},{'name':'NewField4'}]}";
response = harness.post("/schema?wt=json", json(cmds));
response = harness.post("/schema", json(cmds));
map = (Map)ObjectBuilder.getVal(new JSONParser(new StringReader(response)));
assertNull(map.get("errors"));
@ -816,12 +816,12 @@ public class TestBulkSchemaAPI extends RestTestBase {
" {'name':'NewDynamicField2*'},\n" +
" {'name':'NewDynamicField3*'}]\n" +
"}\n";
response = harness.post("/schema?wt=json", json(cmds));
response = harness.post("/schema", json(cmds));
map = (Map)ObjectBuilder.getVal(new JSONParser(new StringReader(response)));
assertNull(map.get("errors"));
cmds = "{'delete-field-type':{'name':'NewFieldType'}}";
response = harness.post("/schema?wt=json", json(cmds));
response = harness.post("/schema", json(cmds));
map = (Map)ObjectBuilder.getVal(new JSONParser(new StringReader(response)));
assertNull(map.get("errors"));
}
@ -849,7 +849,7 @@ public class TestBulkSchemaAPI extends RestTestBase {
" }\n" +
"}\n";
String response = harness.post("/schema?wt=json&indent=on", json(payload));
String response = harness.post("/schema", json(payload));
Map map = (Map) ObjectBuilder.getVal(new JSONParser(new StringReader(response)));
assertNull(response, map.get("errors"));
@ -876,7 +876,7 @@ public class TestBulkSchemaAPI extends RestTestBase {
" }\n"+
"}\n";
response = harness.post("/schema?wt=json&indent=on", json(payload));
response = harness.post("/schema", json(payload));
map = (Map)ObjectBuilder.getVal(new JSONParser(new StringReader(response)));
assertNull(response, map.get("errors"));
@ -900,7 +900,7 @@ public class TestBulkSchemaAPI extends RestTestBase {
}
public static Map getRespMap(RestTestHarness restHarness) throws Exception {
return getAsMap("/schema?wt=json", restHarness);
return getAsMap("/schema", restHarness);
}
public static Map getAsMap(String uri, RestTestHarness restHarness) throws Exception {

View File

@ -1,76 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.solr.rest.schema;
import org.apache.lucene.util.LuceneTestCase.AwaitsFix;
import org.apache.solr.util.RestTestBase;
import org.eclipse.jetty.servlet.ServletHolder;
import org.junit.BeforeClass;
import org.junit.Test;
import org.restlet.ext.servlet.ServerServlet;
import java.util.SortedMap;
import java.util.TreeMap;
@AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/SOLR-10142")
public class TestClassNameShortening extends RestTestBase {
@BeforeClass
public static void init() throws Exception {
final SortedMap<ServletHolder,String> extraServlets = new TreeMap<>();
final ServletHolder solrRestApi = new ServletHolder("SolrSchemaRestApi", ServerServlet.class);
solrRestApi.setInitParameter("org.restlet.application", "org.apache.solr.rest.SolrSchemaRestApi");
extraServlets.put(solrRestApi, "/schema/*"); // '/schema/*' matches '/schema', '/schema/', and '/schema/whatever...'
createJettyAndHarness(TEST_HOME(), "solrconfig-minimal.xml", "schema-class-name-shortening-on-serialization.xml",
"/solr", true, extraServlets);
}
@Test
public void testClassNamesNotShortened() throws Exception {
assertQ("/schema/fieldtypes/fullClassNames?indent=on&wt=xml&showDefaults=true",
"count(/response/lst[@name='fieldType']) = 1",
"/response/lst[@name='fieldType']/str[@name='class'] = 'org.apache.solr.schema.TextField'",
"//arr[@name='charFilters']/lst/str[@name='class'] = 'org.apache.solr.analysis.MockCharFilterFactory'",
"//lst[@name='tokenizer']/str[@name='class'] = 'org.apache.solr.analysis.MockTokenizerFactory'",
"//arr[@name='filters']/lst/str[@name='class'] = 'org.apache.solr.analysis.MockTokenFilterFactory'",
"/response/lst[@name='fieldType']/lst[@name='similarity']/str[@name='class'] = 'org.apache.lucene.misc.SweetSpotSimilarity'");
}
/**
* See {@link TestSchemaSimilarityResource#testGetSchemaSimilarity} for where the long class name
* is verified when the config doesn't specify a sim at all
*/
@Test
public void testShortenedGlobalSimilarityStaysShortened() throws Exception {
assertQ("/schema/similarity?indent=on&wt=xml",
"count(/response/lst[@name='similarity']) = 1",
"/response/lst[@name='similarity']/str[@name='class'][.='solr.SchemaSimilarityFactory']");
}
@Test
public void testShortenedClassNamesStayShortened() throws Exception {
assertQ("/schema/fieldtypes/shortenedClassNames?indent=on&wt=xml&showDefaults=true",
"count(/response/lst[@name='fieldType']) = 1",
"/response/lst[@name='fieldType']/str[@name='class'] = 'solr.TextField'",
"//arr[@name='charFilters']/lst/str[@name='class'] = 'solr.MockCharFilterFactory'",
"//lst[@name='tokenizer']/str[@name='class'] = 'solr.MockTokenizerFactory'",
"//arr[@name='filters']/lst/str[@name='class'] = 'solr.MockTokenFilterFactory'",
"/response/lst[@name='fieldType']/lst[@name='similarity']/str[@name='class'] = 'solr.SweetSpotSimilarityFactory'");
}
}

View File

@ -20,7 +20,7 @@ import org.junit.Test;
public class TestCopyFieldCollectionResource extends SolrRestletTestBase {
@Test
public void testGetAllCopyFields() throws Exception {
public void testXMLGetAllCopyFields() throws Exception {
assertQ("/schema/copyfields?indent=on&wt=xml",
"/response/arr[@name='copyFields']/lst[ str[@name='source'][.='src_sub_no_ast_i']"
+" and str[@name='dest'][.='title']]",
@ -79,8 +79,8 @@ public class TestCopyFieldCollectionResource extends SolrRestletTestBase {
}
@Test
public void testJsonGetAllCopyFields() throws Exception {
assertJQ("/schema/copyfields?indent=on&wt=json",
public void testGetAllCopyFields() throws Exception {
assertJQ("/schema/copyfields",
"/copyFields/[1]=={'source':'src_sub_no_ast_i','dest':'title'}",
"/copyFields/[7]=={'source':'title','dest':'dest_sub_no_ast_s'}",
@ -102,7 +102,7 @@ public class TestCopyFieldCollectionResource extends SolrRestletTestBase {
@Test
public void testRestrictSource() throws Exception {
assertQ("/schema/copyfields/?indent=on&wt=xml&source.fl=title,*_i,*_src_sub_i,src_sub_no_ast_i",
assertQ("/schema/copyfields/?wt=xml&source.fl=title,*_i,*_src_sub_i,src_sub_no_ast_i",
"count(/response/arr[@name='copyFields']/lst) = 16", // 4 + 4 + 4 + 4
"count(/response/arr[@name='copyFields']/lst/str[@name='source'][.='title']) = 4",
"count(/response/arr[@name='copyFields']/lst/str[@name='source'][.='*_i']) = 4",
@ -112,7 +112,7 @@ public class TestCopyFieldCollectionResource extends SolrRestletTestBase {
@Test
public void testRestrictDest() throws Exception {
assertQ("/schema/copyfields/?indent=on&wt=xml&dest.fl=title,*_s,*_dest_sub_s,dest_sub_no_ast_s",
assertQ("/schema/copyfields/?wt=xml&dest.fl=title,*_s,*_dest_sub_s,dest_sub_no_ast_s",
"count(/response/arr[@name='copyFields']/lst) = 16", // 3 + 4 + 4 + 5
"count(/response/arr[@name='copyFields']/lst/str[@name='dest'][.='title']) = 3",
"count(/response/arr[@name='copyFields']/lst/str[@name='dest'][.='*_s']) = 4",
@ -122,7 +122,7 @@ public class TestCopyFieldCollectionResource extends SolrRestletTestBase {
@Test
public void testRestrictSourceAndDest() throws Exception {
assertQ("/schema/copyfields/?indent=on&wt=xml&source.fl=title,*_i&dest.fl=title,dest_sub_no_ast_s",
assertQ("/schema/copyfields/?wt=xml&source.fl=title,*_i&dest.fl=title,dest_sub_no_ast_s",
"count(/response/arr[@name='copyFields']/lst) = 3",
"/response/arr[@name='copyFields']/lst[ str[@name='source'][.='title']"

View File

@ -23,8 +23,8 @@ import org.junit.Test;
public class TestFieldCollectionResource extends SolrRestletTestBase {
@Test
public void testGetAllFields() throws Exception {
assertQ("/schema/fields?indent=on&wt=xml",
public void testXMLGetAllFields() throws Exception {
assertQ("/schema/fields?wt=xml",
"(/response/arr[@name='fields']/lst/str[@name='name'])[1] = 'HTMLstandardtok'",
"(/response/arr[@name='fields']/lst/str[@name='name'])[2] = 'HTMLwhitetok'",
"(/response/arr[@name='fields']/lst/str[@name='name'])[3] = '_version_'");
@ -32,25 +32,25 @@ public class TestFieldCollectionResource extends SolrRestletTestBase {
@Test
public void testJsonGetAllFields() throws Exception {
assertJQ("/schema/fields?indent=on",
public void testGetAllFields() throws Exception {
assertJQ("/schema/fields",
"/fields/[0]/name=='HTMLstandardtok'",
"/fields/[1]/name=='HTMLwhitetok'",
"/fields/[2]/name=='_version_'");
}
@Test
public void testGetThreeFieldsDontIncludeDynamic() throws IOException {
public void testXMLGetThreeFieldsDontIncludeDynamic() throws IOException {
//
assertQ("/schema/fields?indent=on&wt=xml&fl=id,_version_,price_i",
assertQ("/schema/fields?wt=xml&fl=id,_version_,price_i",
"count(/response/arr[@name='fields']/lst/str[@name='name']) = 2",
"(/response/arr[@name='fields']/lst/str[@name='name'])[1] = 'id'",
"(/response/arr[@name='fields']/lst/str[@name='name'])[2] = '_version_'");
}
@Test
public void testGetThreeFieldsIncludeDynamic() throws IOException {
assertQ("/schema/fields?indent=on&wt=xml&fl=id,_version_,price_i&includeDynamic=on",
public void testXMLGetThreeFieldsIncludeDynamic() throws IOException {
assertQ("/schema/fields?wt=xml&fl=id,_version_,price_i&includeDynamic=on",
"count(/response/arr[@name='fields']/lst/str[@name='name']) = 3",
@ -64,16 +64,16 @@ public class TestFieldCollectionResource extends SolrRestletTestBase {
+" and str[@name='dynamicBase']='*_i']");
}
@Test
public void testNotFoundFields() throws IOException {
assertQ("/schema/fields?indent=on&wt=xml&fl=not_in_there,this_one_either",
public void testXMLNotFoundFields() throws IOException {
assertQ("/schema/fields?&wt=xml&fl=not_in_there,this_one_either",
"count(/response/arr[@name='fields']) = 1",
"count(/response/arr[@name='fields']/lst/str[@name='name']) = 0");
}
@Test
public void testJsonGetAllFieldsIncludeDynamic() throws Exception {
assertJQ("/schema/fields?indent=on&includeDynamic=true",
public void testGetAllFieldsIncludeDynamic() throws Exception {
assertJQ("/schema/fields?includeDynamic=true",
"/fields/[0]/name=='HTMLstandardtok'",
"/fields/[1]/name=='HTMLwhitetok'",
"/fields/[2]/name=='_version_'",

View File

@ -21,10 +21,10 @@ import org.junit.Test;
public class TestFieldTypeResource extends SolrRestletTestBase {
@Test
public void testGetFieldType() throws Exception {
public void testXMLGetFieldType() throws Exception {
final String expectedFloatClass = RANDOMIZED_NUMERIC_FIELDTYPES.get(Float.class);
final boolean expectedDocValues = Boolean.getBoolean(NUMERIC_DOCVALUES_SYSPROP);
assertQ("/schema/fieldtypes/float?indent=on&wt=xml&showDefaults=true",
assertQ("/schema/fieldtypes/float?wt=xml&showDefaults=true",
"count(/response/lst[@name='fieldType']) = 1",
"count(/response/lst[@name='fieldType']/*) = 17",
"/response/lst[@name='fieldType']/str[@name='name'] = 'float'",
@ -46,8 +46,8 @@ public class TestFieldTypeResource extends SolrRestletTestBase {
}
@Test
public void testGetNotFoundFieldType() throws Exception {
assertQ("/schema/fieldtypes/not_in_there?indent=on&wt=xml",
public void testXMLGetNotFoundFieldType() throws Exception {
assertQ("/schema/fieldtypes/not_in_there?wt=xml",
"count(/response/lst[@name='fieldtypes']) = 0",
"/response/lst[@name='responseHeader']/int[@name='status'] = '404'",
"/response/lst[@name='error']/int[@name='code'] = '404'");
@ -57,7 +57,7 @@ public class TestFieldTypeResource extends SolrRestletTestBase {
public void testJsonGetFieldType() throws Exception {
final String expectedFloatClass = RANDOMIZED_NUMERIC_FIELDTYPES.get(Float.class);
final boolean expectedDocValues = Boolean.getBoolean(NUMERIC_DOCVALUES_SYSPROP);
assertJQ("/schema/fieldtypes/float?indent=on&showDefaults=on", // assertJQ will add "&wt=json"
assertJQ("/schema/fieldtypes/float?showDefaults=on",
"/fieldType/name=='float'",
"/fieldType/class=='"+expectedFloatClass+"'",
"/fieldType/precisionStep=='0'",
@ -76,8 +76,8 @@ public class TestFieldTypeResource extends SolrRestletTestBase {
}
@Test
public void testGetFieldTypeDontShowDefaults() throws Exception {
assertQ("/schema/fieldtypes/teststop?wt=xml&indent=on",
public void testXMLGetFieldTypeDontShowDefaults() throws Exception {
assertQ("/schema/fieldtypes/teststop?wt=xml",
"count(/response/lst[@name='fieldType']/*) = 3",
"/response/lst[@name='fieldType']/str[@name='name'] = 'teststop'",
"/response/lst[@name='fieldType']/str[@name='class'] = 'solr.TextField'",

View File

@ -21,7 +21,7 @@ import org.junit.Test;
public class TestSchemaNameResource extends SolrRestletTestBase {
@Test
public void testGetSchemaName() throws Exception {
assertQ("/schema/name?indent=on&wt=xml",
assertQ("/schema/name?wt=xml",
"count(/response/str[@name='name']) = 1",
"/response/str[@name='name'][.='test-rest']");
}

View File

@ -103,7 +103,7 @@ public class TestSchemaResource extends SolrRestletTestBase {
@Test
public void testJSONResponse() throws Exception {
assertJQ("/schema?wt=json", // Should work with or without a trailing slash
assertJQ("/schema", // Should work with or without a trailing slash
"/schema/name=='test-rest'",
"/schema/version==1.6",

View File

@ -27,7 +27,7 @@ public class TestSchemaSimilarityResource extends SolrRestletTestBase {
*/
@Test
public void testGetSchemaSimilarity() throws Exception {
assertQ("/schema/similarity?indent=on&wt=xml",
assertQ("/schema/similarity?wt=xml",
"count(/response/lst[@name='similarity']) = 1",
"/response/lst[@name='similarity']/str[@name='class'][.='org.apache.solr.search.similarities.SchemaSimilarityFactory']");
}

View File

@ -149,7 +149,7 @@ public class TestBulkSchemaConcurrent extends AbstractFullDistribZkTestBase {
payload = payload.replace("myNewFieldTypeName", newFieldTypeName);
RestTestHarness publisher = restTestHarnesses.get(r.nextInt(restTestHarnesses.size()));
String response = publisher.post("/schema?wt=json", SolrTestCaseJ4.json(payload));
String response = publisher.post("/schema", SolrTestCaseJ4.json(payload));
Map map = (Map) ObjectBuilder.getVal(new JSONParser(new StringReader(response)));
Object errors = map.get("errors");
if (errors != null) {
@ -219,7 +219,7 @@ public class TestBulkSchemaConcurrent extends AbstractFullDistribZkTestBase {
payload = payload.replace("myNewFieldTypeName", newFieldTypeName);
RestTestHarness publisher = restTestHarnesses.get(r.nextInt(restTestHarnesses.size()));
String response = publisher.post("/schema?wt=json", SolrTestCaseJ4.json(payload));
String response = publisher.post("/schema", SolrTestCaseJ4.json(payload));
Map map = (Map) ObjectBuilder.getVal(new JSONParser(new StringReader(response)));
Object errors = map.get("errors");
if (errors != null) {
@ -281,7 +281,7 @@ public class TestBulkSchemaConcurrent extends AbstractFullDistribZkTestBase {
payload = payload.replace("myNewFieldTypeName", newFieldTypeName);
RestTestHarness publisher = restTestHarnesses.get(r.nextInt(restTestHarnesses.size()));
String response = publisher.post("/schema?wt=json", SolrTestCaseJ4.json(payload));
String response = publisher.post("/schema", SolrTestCaseJ4.json(payload));
Map map = (Map) ObjectBuilder.getVal(new JSONParser(new StringReader(response)));
Object errors = map.get("errors");
if (errors != null) {

View File

@ -1,717 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.solr.schema;
import java.lang.invoke.MethodHandles;
import java.util.ArrayList;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;
import java.util.concurrent.TimeUnit;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.embedded.JettySolrRunner;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.cloud.AbstractFullDistribZkTestBase;
import org.apache.solr.common.cloud.ClusterState;
import org.apache.solr.common.cloud.Replica;
import org.apache.solr.common.cloud.Slice;
import org.apache.solr.common.cloud.SolrZkClient;
import org.apache.solr.common.cloud.ZkCoreNodeProps;
import org.apache.solr.util.BaseTestHarness;
import org.apache.solr.util.RestTestHarness;
import org.apache.zookeeper.data.Stat;
import org.eclipse.jetty.servlet.ServletHolder;
import org.junit.BeforeClass;
import org.junit.Ignore;
import org.junit.Test;
import org.restlet.ext.servlet.ServerServlet;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@Ignore
public class TestCloudManagedSchemaConcurrent extends AbstractFullDistribZkTestBase {
private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
private static final String SUCCESS_XPATH = "/response/lst[@name='responseHeader']/int[@name='status'][.='0']";
private static final String PUT_DYNAMIC_FIELDNAME = "newdynamicfieldPut";
private static final String POST_DYNAMIC_FIELDNAME = "newdynamicfieldPost";
private static final String PUT_FIELDNAME = "newfieldPut";
private static final String POST_FIELDNAME = "newfieldPost";
private static final String PUT_FIELDTYPE = "newfieldtypePut";
private static final String POST_FIELDTYPE = "newfieldtypePost";
public TestCloudManagedSchemaConcurrent() {
super();
sliceCount = 4;
}
@BeforeClass
public static void initSysProperties() {
System.setProperty("managed.schema.mutable", "true");
System.setProperty("enable.update.log", "true");
}
@Override
public void distribTearDown() throws Exception {
super.distribTearDown();
for (RestTestHarness h : restTestHarnesses) {
h.close();
}
}
@Override
protected String getCloudSolrConfig() {
return "solrconfig-managed-schema.xml";
}
@Override
public SortedMap<ServletHolder,String> getExtraServlets() {
final SortedMap<ServletHolder,String> extraServlets = new TreeMap<>();
final ServletHolder solrRestApi = new ServletHolder("SolrSchemaRestApi", ServerServlet.class);
solrRestApi.setInitParameter("org.restlet.application", "org.apache.solr.rest.SolrSchemaRestApi");
extraServlets.put(solrRestApi, "/schema/*"); // '/schema/*' matches '/schema', '/schema/', and '/schema/whatever...'
return extraServlets;
}
private List<RestTestHarness> restTestHarnesses = new ArrayList<>();
private void setupHarnesses() {
for (final SolrClient client : clients) {
RestTestHarness harness = new RestTestHarness(() -> ((HttpSolrClient)client).getBaseURL());
restTestHarnesses.add(harness);
}
}
private static void verifySuccess(String request, String response) throws Exception {
String result = BaseTestHarness.validateXPath(response, SUCCESS_XPATH);
if (null != result) {
String msg = "QUERY FAILED: xpath=" + result + " request=" + request + " response=" + response;
log.error(msg);
fail(msg);
}
}
private static void addFieldPut(RestTestHarness publisher, String fieldName, int updateTimeoutSecs) throws Exception {
final String content = "{\"type\":\"text\",\"stored\":\"false\"}";
String request = "/schema/fields/" + fieldName + "?wt=xml";
if (updateTimeoutSecs > 0)
request += "&updateTimeoutSecs="+updateTimeoutSecs;
String response = publisher.put(request, content);
verifySuccess(request, response);
}
private static void addFieldPost(RestTestHarness publisher, String fieldName, int updateTimeoutSecs) throws Exception {
final String content = "[{\"name\":\""+fieldName+"\",\"type\":\"text\",\"stored\":\"false\"}]";
String request = "/schema/fields/?wt=xml";
if (updateTimeoutSecs > 0)
request += "&updateTimeoutSecs="+updateTimeoutSecs;
String response = publisher.post(request, content);
verifySuccess(request, response);
}
private static void addDynamicFieldPut(RestTestHarness publisher, String dynamicFieldPattern, int updateTimeoutSecs) throws Exception {
final String content = "{\"type\":\"text\",\"stored\":\"false\"}";
String request = "/schema/dynamicfields/" + dynamicFieldPattern + "?wt=xml";
if (updateTimeoutSecs > 0)
request += "&updateTimeoutSecs="+updateTimeoutSecs;
String response = publisher.put(request, content);
verifySuccess(request, response);
}
private static void addDynamicFieldPost(RestTestHarness publisher, String dynamicFieldPattern, int updateTimeoutSecs) throws Exception {
final String content = "[{\"name\":\""+dynamicFieldPattern+"\",\"type\":\"text\",\"stored\":\"false\"}]";
String request = "/schema/dynamicfields/?wt=xml";
if (updateTimeoutSecs > 0)
request += "&updateTimeoutSecs="+updateTimeoutSecs;
String response = publisher.post(request, content);
verifySuccess(request, response);
}
private static void copyField(RestTestHarness publisher, String source, String dest, int updateTimeoutSecs) throws Exception {
final String content = "[{\"source\":\""+source+"\",\"dest\":[\""+dest+"\"]}]";
String request = "/schema/copyfields/?wt=xml";
if (updateTimeoutSecs > 0)
request += "&updateTimeoutSecs="+updateTimeoutSecs;
String response = publisher.post(request, content);
verifySuccess(request, response);
}
private static void addFieldTypePut(RestTestHarness publisher, String typeName, int updateTimeoutSecs) throws Exception {
final String content = "{\"class\":\""+RANDOMIZED_NUMERIC_FIELDTYPES.get(Integer.class)+"\"}";
String request = "/schema/fieldtypes/" + typeName + "?wt=xml";
if (updateTimeoutSecs > 0)
request += "&updateTimeoutSecs="+updateTimeoutSecs;
String response = publisher.put(request, content);
verifySuccess(request, response);
}
private static void addFieldTypePost(RestTestHarness publisher, String typeName, int updateTimeoutSecs) throws Exception {
final String content = "[{\"name\":\""+typeName+"\",\"class\":\""+RANDOMIZED_NUMERIC_FIELDTYPES.get(Integer.class)+"\"}]";
String request = "/schema/fieldtypes/?wt=xml";
if (updateTimeoutSecs > 0)
request += "&updateTimeoutSecs="+updateTimeoutSecs;
String response = publisher.post(request, content);
verifySuccess(request, response);
}
private String[] getExpectedFieldResponses(Info info) {
String[] expectedAddFields = new String[1 + info.numAddFieldPuts + info.numAddFieldPosts];
expectedAddFields[0] = SUCCESS_XPATH;
for (int i = 0; i < info.numAddFieldPuts; ++i) {
String newFieldName = PUT_FIELDNAME + info.fieldNameSuffix + i;
expectedAddFields[1 + i]
= "/response/arr[@name='fields']/lst/str[@name='name'][.='" + newFieldName + "']";
}
for (int i = 0; i < info.numAddFieldPosts; ++i) {
String newFieldName = POST_FIELDNAME + info.fieldNameSuffix + i;
expectedAddFields[1 + info.numAddFieldPuts + i]
= "/response/arr[@name='fields']/lst/str[@name='name'][.='" + newFieldName + "']";
}
return expectedAddFields;
}
private String[] getExpectedDynamicFieldResponses(Info info) {
String[] expectedAddDynamicFields = new String[1 + info.numAddDynamicFieldPuts + info.numAddDynamicFieldPosts];
expectedAddDynamicFields[0] = SUCCESS_XPATH;
for (int i = 0; i < info.numAddDynamicFieldPuts; ++i) {
String newDynamicFieldPattern = PUT_DYNAMIC_FIELDNAME + info.fieldNameSuffix + i + "_*";
expectedAddDynamicFields[1 + i]
= "/response/arr[@name='dynamicFields']/lst/str[@name='name'][.='" + newDynamicFieldPattern + "']";
}
for (int i = 0; i < info.numAddDynamicFieldPosts; ++i) {
String newDynamicFieldPattern = POST_DYNAMIC_FIELDNAME + info.fieldNameSuffix + i + "_*";
expectedAddDynamicFields[1 + info.numAddDynamicFieldPuts + i]
= "/response/arr[@name='dynamicFields']/lst/str[@name='name'][.='" + newDynamicFieldPattern + "']";
}
return expectedAddDynamicFields;
}
private String[] getExpectedCopyFieldResponses(Info info) {
ArrayList<String> expectedCopyFields = new ArrayList<>();
expectedCopyFields.add(SUCCESS_XPATH);
for (CopyFieldInfo cpi : info.copyFields) {
String expectedSourceName = cpi.getSourceField();
expectedCopyFields.add
("/response/arr[@name='copyFields']/lst/str[@name='source'][.='" + expectedSourceName + "']");
String expectedDestName = cpi.getDestField();
expectedCopyFields.add
("/response/arr[@name='copyFields']/lst/str[@name='dest'][.='" + expectedDestName + "']");
}
return expectedCopyFields.toArray(new String[expectedCopyFields.size()]);
}
private String[] getExpectedFieldTypeResponses(Info info) {
String[] expectedAddFieldTypes = new String[1 + info.numAddFieldTypePuts + info.numAddFieldTypePosts];
expectedAddFieldTypes[0] = SUCCESS_XPATH;
for (int i = 0; i < info.numAddFieldTypePuts; ++i) {
String newFieldTypeName = PUT_FIELDTYPE + info.fieldNameSuffix + i;
expectedAddFieldTypes[1 + i]
= "/response/arr[@name='fieldTypes']/lst/str[@name='name'][.='" + newFieldTypeName + "']";
}
for (int i = 0; i < info.numAddFieldTypePosts; ++i) {
String newFieldTypeName = POST_FIELDTYPE + info.fieldNameSuffix + i;
expectedAddFieldTypes[1 + info.numAddFieldTypePuts + i]
= "/response/arr[@name='fieldTypes']/lst/str[@name='name'][.='" + newFieldTypeName + "']";
}
return expectedAddFieldTypes;
}
@Test
@ShardsFixed(num = 8)
public void test() throws Exception {
verifyWaitForSchemaUpdateToPropagate();
setupHarnesses();
concurrentOperationsTest();
schemaLockTest();
}
private static class Info {
int numAddFieldPuts = 0;
int numAddFieldPosts = 0;
int numAddDynamicFieldPuts = 0;
int numAddDynamicFieldPosts = 0;
int numAddFieldTypePuts = 0;
int numAddFieldTypePosts = 0;
public String fieldNameSuffix;
List<CopyFieldInfo> copyFields = new ArrayList<>();
public Info(String fieldNameSuffix) {
this.fieldNameSuffix = fieldNameSuffix;
}
}
private enum Operation {
PUT_AddField {
@Override public void execute(RestTestHarness publisher, int fieldNum, Info info) throws Exception {
String fieldname = PUT_FIELDNAME + info.numAddFieldPuts++;
addFieldPut(publisher, fieldname, 15);
}
},
POST_AddField {
@Override public void execute(RestTestHarness publisher, int fieldNum, Info info) throws Exception {
String fieldname = POST_FIELDNAME + info.numAddFieldPosts++;
addFieldPost(publisher, fieldname, 15);
}
},
PUT_AddDynamicField {
@Override public void execute(RestTestHarness publisher, int fieldNum, Info info) throws Exception {
addDynamicFieldPut(publisher, PUT_DYNAMIC_FIELDNAME + info.numAddDynamicFieldPuts++ + "_*", 15);
}
},
POST_AddDynamicField {
@Override public void execute(RestTestHarness publisher, int fieldNum, Info info) throws Exception {
addDynamicFieldPost(publisher, POST_DYNAMIC_FIELDNAME + info.numAddDynamicFieldPosts++ + "_*", 15);
}
},
POST_AddCopyField {
@Override public void execute(RestTestHarness publisher, int fieldNum, Info info) throws Exception {
String sourceField = null;
String destField = null;
int sourceType = random().nextInt(3);
if (sourceType == 0) { // existing
sourceField = "name";
} else if (sourceType == 1) { // newly created
sourceField = "copySource" + fieldNum;
addFieldPut(publisher, sourceField, 15);
} else { // dynamic
sourceField = "*_dynamicSource" + fieldNum + "_t";
// * only supported if both src and dst use it
destField = "*_dynamicDest" + fieldNum + "_t";
}
if (destField == null) {
int destType = random().nextInt(2);
if (destType == 0) { // existing
destField = "title";
} else { // newly created
destField = "copyDest" + fieldNum;
addFieldPut(publisher, destField, 15);
}
}
copyField(publisher, sourceField, destField, 15);
info.copyFields.add(new CopyFieldInfo(sourceField, destField));
}
},
PUT_AddFieldType {
@Override public void execute(RestTestHarness publisher, int fieldNum, Info info) throws Exception {
String typeName = PUT_FIELDTYPE + info.numAddFieldTypePuts++;
addFieldTypePut(publisher, typeName, 15);
}
},
POST_AddFieldType {
@Override public void execute(RestTestHarness publisher, int fieldNum, Info info) throws Exception {
String typeName = POST_FIELDTYPE + info.numAddFieldTypePosts++;
addFieldTypePost(publisher, typeName, 15);
}
};
public abstract void execute(RestTestHarness publisher, int fieldNum, Info info) throws Exception;
private static final Operation[] VALUES = values();
public static Operation randomOperation() {
return VALUES[r.nextInt(VALUES.length)];
}
}
private void verifyWaitForSchemaUpdateToPropagate() throws Exception {
String testCollectionName = "collection1";
ClusterState clusterState = cloudClient.getZkStateReader().getClusterState();
Replica shard1Leader = clusterState.getLeader(testCollectionName, "shard1");
final String coreUrl = (new ZkCoreNodeProps(shard1Leader)).getCoreUrl();
assertNotNull(coreUrl);
RestTestHarness harness = new RestTestHarness(() -> coreUrl.endsWith("/") ? coreUrl.substring(0, coreUrl.length()-1) : coreUrl);
try {
addFieldTypePut(harness, "fooInt", 15);
} finally {
harness.close();
}
// go into ZK to get the version of the managed schema after the update
SolrZkClient zkClient = cloudClient.getZkStateReader().getZkClient();
Stat stat = new Stat();
String znodePath = "/configs/conf1/managed-schema";
byte[] managedSchemaBytes = zkClient.getData(znodePath, null, stat, false);
int schemaZkVersion = stat.getVersion();
// now loop over all replicas and verify each has the same schema version
Replica randomReplicaNotLeader = null;
for (Slice slice : clusterState.getActiveSlices(testCollectionName)) {
for (Replica replica : slice.getReplicas()) {
validateZkVersion(replica, schemaZkVersion, 0, false);
// save a random replica to test zk watcher behavior
if (randomReplicaNotLeader == null && !replica.getName().equals(shard1Leader.getName()))
randomReplicaNotLeader = replica;
}
}
assertNotNull(randomReplicaNotLeader);
// now update the data and then verify the znode watcher fires correctly
// before and after a zk session expiration (see SOLR-6249)
zkClient.setData(znodePath, managedSchemaBytes, schemaZkVersion, false);
stat = new Stat();
managedSchemaBytes = zkClient.getData(znodePath, null, stat, false);
int updatedSchemaZkVersion = stat.getVersion();
assertTrue(updatedSchemaZkVersion > schemaZkVersion);
validateZkVersion(randomReplicaNotLeader, updatedSchemaZkVersion, 2, true);
// ok - looks like the watcher fired correctly on the replica
// now, expire that replica's zk session and then verify the watcher fires again (after reconnect)
JettySolrRunner randomReplicaJetty =
getJettyOnPort(getReplicaPort(randomReplicaNotLeader));
assertNotNull(randomReplicaJetty);
chaosMonkey.expireSession(randomReplicaJetty);
// update the data again to cause watchers to fire
zkClient.setData(znodePath, managedSchemaBytes, updatedSchemaZkVersion, false);
stat = new Stat();
managedSchemaBytes = zkClient.getData(znodePath, null, stat, false);
updatedSchemaZkVersion = stat.getVersion();
// give up to 10 secs for the replica to recover after zk session loss and see the update
validateZkVersion(randomReplicaNotLeader, updatedSchemaZkVersion, 10, true);
}
/**
* Sends a GET request to get the zk schema version from a specific replica.
*/
protected void validateZkVersion(Replica replica, int schemaZkVersion, int waitSecs, boolean retry) throws Exception {
final String replicaUrl = (new ZkCoreNodeProps(replica)).getCoreUrl();
RestTestHarness testHarness = new RestTestHarness(() -> replicaUrl.endsWith("/") ? replicaUrl.substring(0, replicaUrl.length()-1) : replicaUrl);
try {
long waitMs = waitSecs * 1000L;
if (waitMs > 0) Thread.sleep(waitMs); // wait a moment for the zk watcher to fire
try {
testHarness.validateQuery("/schema/zkversion?wt=xml", "//zkversion=" + schemaZkVersion);
} catch (Exception exc) {
if (retry) {
// brief wait before retrying
Thread.sleep(waitMs > 0 ? waitMs : 2000L);
testHarness.validateQuery("/schema/zkversion?wt=xml", "//zkversion=" + schemaZkVersion);
} else {
throw exc;
}
}
} finally {
testHarness.close();
}
}
private void concurrentOperationsTest() throws Exception {
// First, add a bunch of fields and dynamic fields via PUT and POST, as well as copyFields,
// but do it fast enough and verify shards' schemas after all of them are added
int numFields = 100;
Info info = new Info("");
for (int fieldNum = 0; fieldNum <= numFields ; ++fieldNum) {
RestTestHarness publisher = restTestHarnesses.get(r.nextInt(restTestHarnesses.size()));
Operation.randomOperation().execute(publisher, fieldNum, info);
}
String[] expectedAddFields = getExpectedFieldResponses(info);
String[] expectedAddDynamicFields = getExpectedDynamicFieldResponses(info);
String[] expectedCopyFields = getExpectedCopyFieldResponses(info);
String[] expectedAddFieldTypes = getExpectedFieldTypeResponses(info);
boolean success = false;
long maxTimeoutMillis = 100000;
long startTime = System.nanoTime();
String request = null;
String response = null;
String result = null;
while ( ! success
&& TimeUnit.MILLISECONDS.convert(System.nanoTime() - startTime, TimeUnit.NANOSECONDS) < maxTimeoutMillis) {
Thread.sleep(100);
for (RestTestHarness client : restTestHarnesses) {
// verify addFieldTypePuts and addFieldTypePosts
request = "/schema/fieldtypes?wt=xml";
response = client.query(request);
result = BaseTestHarness.validateXPath(response, expectedAddFieldTypes);
if (result != null) {
break;
}
// verify addFieldPuts and addFieldPosts
request = "/schema/fields?wt=xml";
response = client.query(request);
result = BaseTestHarness.validateXPath(response, expectedAddFields);
if (result != null) {
break;
}
// verify addDynamicFieldPuts and addDynamicFieldPosts
request = "/schema/dynamicfields?wt=xml";
response = client.query(request);
result = BaseTestHarness.validateXPath(response, expectedAddDynamicFields);
if (result != null) {
break;
}
// verify copyFields
request = "/schema/copyfields?wt=xml";
response = client.query(request);
result = BaseTestHarness.validateXPath(response, expectedCopyFields);
if (result != null) {
break;
}
}
success = (result == null);
}
if ( ! success) {
String msg = "QUERY FAILED: xpath=" + result + " request=" + request + " response=" + response;
log.error(msg);
fail(msg);
}
}
private abstract class PutPostThread extends Thread {
RestTestHarness harness;
Info info;
public String fieldName;
public PutPostThread(RestTestHarness harness, Info info) {
this.harness = harness;
this.info = info;
}
public abstract void run();
}
private class PutFieldThread extends PutPostThread {
public PutFieldThread(RestTestHarness harness, Info info) {
super(harness, info);
fieldName = PUT_FIELDNAME + "Thread" + info.numAddFieldPuts++;
}
public void run() {
try {
// don't have the client side wait for all replicas to see the update or that defeats the purpose
// of testing the locking support on the server-side
addFieldPut(harness, fieldName, -1);
} catch (Exception e) {
// log.error("###ACTUAL FAILURE!");
throw new RuntimeException(e);
}
}
}
private class PostFieldThread extends PutPostThread {
public PostFieldThread(RestTestHarness harness, Info info) {
super(harness, info);
fieldName = POST_FIELDNAME + "Thread" + info.numAddFieldPosts++;
}
public void run() {
try {
addFieldPost(harness, fieldName, -1);
} catch (Exception e) {
// log.error("###ACTUAL FAILURE!");
throw new RuntimeException(e);
}
}
}
private class PutFieldTypeThread extends PutPostThread {
public PutFieldTypeThread(RestTestHarness harness, Info info) {
super(harness, info);
fieldName = PUT_FIELDTYPE + "Thread" + info.numAddFieldTypePuts++;
}
public void run() {
try {
addFieldTypePut(harness, fieldName, -1);
} catch (Exception e) {
// log.error("###ACTUAL FAILURE!");
throw new RuntimeException(e);
}
}
}
private class PostFieldTypeThread extends PutPostThread {
public PostFieldTypeThread(RestTestHarness harness, Info info) {
super(harness, info);
fieldName = POST_FIELDTYPE + "Thread" + info.numAddFieldTypePosts++;
}
public void run() {
try {
addFieldTypePost(harness, fieldName, -1);
} catch (Exception e) {
// log.error("###ACTUAL FAILURE!");
throw new RuntimeException(e);
}
}
}
private class PutDynamicFieldThread extends PutPostThread {
public PutDynamicFieldThread(RestTestHarness harness, Info info) {
super(harness, info);
fieldName = PUT_FIELDNAME + "Thread" + info.numAddFieldPuts++;
}
public void run() {
try {
addFieldPut(harness, fieldName, -1);
} catch (Exception e) {
// log.error("###ACTUAL FAILURE!");
throw new RuntimeException(e);
}
}
}
private class PostDynamicFieldThread extends PutPostThread {
public PostDynamicFieldThread(RestTestHarness harness, Info info) {
super(harness, info);
fieldName = POST_FIELDNAME + "Thread" + info.numAddFieldPosts++;
}
public void run() {
try {
addFieldPost(harness, fieldName, -1);
} catch (Exception e) {
// log.error("###ACTUAL FAILURE!");
throw new RuntimeException(e);
}
}
}
private void schemaLockTest() throws Exception {
// First, add a bunch of fields via PUT and POST, as well as copyFields,
// but do it fast enough and verify shards' schemas after all of them are added
int numFields = 5;
Info info = new Info("Thread");
for (int i = 0; i <= numFields ; ++i) {
// System.err.println("###ITERATION: " + i);
RestTestHarness publisher = restTestHarnesses.get(r.nextInt(restTestHarnesses.size()));
PostFieldThread postFieldThread = new PostFieldThread(publisher, info);
postFieldThread.start();
publisher = restTestHarnesses.get(r.nextInt(restTestHarnesses.size()));
PutFieldThread putFieldThread = new PutFieldThread(publisher, info);
putFieldThread.start();
publisher = restTestHarnesses.get(r.nextInt(restTestHarnesses.size()));
PostDynamicFieldThread postDynamicFieldThread = new PostDynamicFieldThread(publisher, info);
postDynamicFieldThread.start();
publisher = restTestHarnesses.get(r.nextInt(restTestHarnesses.size()));
PutDynamicFieldThread putDynamicFieldThread = new PutDynamicFieldThread(publisher, info);
putDynamicFieldThread.start();
publisher = restTestHarnesses.get(r.nextInt(restTestHarnesses.size()));
PostFieldTypeThread postFieldTypeThread = new PostFieldTypeThread(publisher, info);
postFieldTypeThread.start();
publisher = restTestHarnesses.get(r.nextInt(restTestHarnesses.size()));
PutFieldTypeThread putFieldTypeThread = new PutFieldTypeThread(publisher, info);
putFieldTypeThread.start();
postFieldThread.join();
putFieldThread.join();
postDynamicFieldThread.join();
putDynamicFieldThread.join();
postFieldTypeThread.join();
putFieldTypeThread.join();
String[] expectedAddFields = getExpectedFieldResponses(info);
String[] expectedAddFieldTypes = getExpectedFieldTypeResponses(info);
String[] expectedAddDynamicFields = getExpectedDynamicFieldResponses(info);
boolean success = false;
long maxTimeoutMillis = 100000;
long startTime = System.nanoTime();
String request = null;
String response = null;
String result = null;
while ( ! success
&& TimeUnit.MILLISECONDS.convert(System.nanoTime() - startTime, TimeUnit.NANOSECONDS) < maxTimeoutMillis) {
Thread.sleep(10);
// int j = 0;
for (RestTestHarness client : restTestHarnesses) {
// System.err.println("###CHECKING HARNESS: " + j++ + " for iteration: " + i);
// verify addFieldPuts and addFieldPosts
request = "/schema/fields?wt=xml";
response = client.query(request);
//System.err.println("###RESPONSE: " + response);
result = BaseTestHarness.validateXPath(response, expectedAddFields);
if (result != null) {
// System.err.println("###FAILURE!");
break;
}
// verify addDynamicFieldPuts and addDynamicFieldPosts
request = "/schema/dynamicfields?wt=xml";
response = client.query(request);
//System.err.println("###RESPONSE: " + response);
result = BaseTestHarness.validateXPath(response, expectedAddDynamicFields);
if (result != null) {
// System.err.println("###FAILURE!");
break;
}
request = "/schema/fieldtypes?wt=xml";
response = client.query(request);
//System.err.println("###RESPONSE: " + response);
result = BaseTestHarness.validateXPath(response, expectedAddFieldTypes);
if (result != null) {
// System.err.println("###FAILURE!");
break;
}
}
success = (result == null);
}
if ( ! success) {
String msg = "QUERY FAILED: xpath=" + result + " request=" + request + " response=" + response;
log.error(msg);
fail(msg);
}
}
}
private static class CopyFieldInfo {
private String sourceField;
private String destField;
public CopyFieldInfo(String sourceField, String destField) {
this.sourceField = sourceField;
this.destField = destField;
}
public String getSourceField() { return sourceField; }
public String getDestField() { return destField; }
}
}

View File

@ -88,7 +88,7 @@ public class TestUseDocValuesAsStored2 extends RestTestBase {
" }\n" +
" }\n";
String response = harness.post("/schema?wt=json", json(payload));
String response = harness.post("/schema", json(payload));
Map m = (Map) ObjectBuilder.getVal(new JSONParser(new StringReader(response)));
assertNull(response, m.get("errors"));
@ -140,7 +140,7 @@ public class TestUseDocValuesAsStored2 extends RestTestBase {
" 'docValues':true,\n" +
" 'indexed':false\n" +
" }}";
response = harness.post("/schema?wt=json", json(payload));
response = harness.post("/schema", json(payload));
m = TestBulkSchemaAPI.getObj(harness, "a1", "fields");
assertNotNull("field a1 doesn't exist any more", m);
assertEquals(Boolean.FALSE, m.get("useDocValuesAsStored"));
@ -155,7 +155,7 @@ public class TestUseDocValuesAsStored2 extends RestTestBase {
" 'docValues':true,\n" +
" 'indexed':false\n" +
" }}";
response = harness.post("/schema?wt=json", json(payload));
response = harness.post("/schema", json(payload));
m = TestBulkSchemaAPI.getObj(harness, "a1", "fields");
assertNotNull("field a1 doesn't exist any more", m);
assertEquals(Boolean.TRUE, m.get("useDocValuesAsStored"));
@ -169,7 +169,7 @@ public class TestUseDocValuesAsStored2 extends RestTestBase {
" 'docValues':true,\n" +
" 'indexed':true\n" +
" }}";
response = harness.post("/schema?wt=json", json(payload));
response = harness.post("/schema", json(payload));
m = TestBulkSchemaAPI.getObj(harness, "a4", "fields");
assertNotNull("field a4 not found", m);
assertEquals(Boolean.TRUE, m.get("useDocValuesAsStored"));
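The raw JSON posts above no longer need a wt parameter. For comparison, a hedged SolrJ sketch (core and field names are hypothetical) performing the same kind of add-field call through the typed Schema API:

import java.util.LinkedHashMap;
import java.util.Map;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.schema.SchemaRequest;
import org.apache.solr.client.solrj.response.schema.SchemaResponse;

public class AddFieldExample {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build()) {
      Map<String, Object> field = new LinkedHashMap<>();
      field.put("name", "a1");
      field.put("type", "string");
      field.put("stored", true);
      field.put("docValues", true);
      field.put("useDocValuesAsStored", false);
      // Equivalent to POSTing {"add-field":{...}} to /schema.
      SchemaResponse.UpdateResponse rsp = new SchemaRequest.AddField(field).process(client);
      System.out.println("status=" + rsp.getStatus());
    }
  }
}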

View File

@ -84,6 +84,7 @@ public class TestHashQParserPlugin extends SolrTestCaseJ4 {
params.add("fq", "{!hash worker=0 workers=3 cost="+getCost(random)+"}");
params.add("partitionKeys", "a_s");
params.add("rows","50");
params.add("wt", "xml");
HashSet set1 = new HashSet();
String response = h.query(req(params));
@ -102,6 +103,7 @@ public class TestHashQParserPlugin extends SolrTestCaseJ4 {
params.add("fq", "{!hash worker=1 workers=3 cost="+getCost(random)+"}");
params.add("partitionKeys", "a_s");
params.add("rows","50");
params.add("wt", "xml");
HashSet set2 = new HashSet();
response = h.query(req(params));
@ -121,6 +123,7 @@ public class TestHashQParserPlugin extends SolrTestCaseJ4 {
params.add("fq", "{!hash worker=2 workers=3 cost="+getCost(random)+"}");
params.add("partitionKeys", "a_s");
params.add("rows","50");
params.add("wt", "xml");
HashSet set3 = new HashSet();
response = h.query(req(params));
@ -151,6 +154,7 @@ public class TestHashQParserPlugin extends SolrTestCaseJ4 {
params.add("fq", "{!hash worker=0 workers=2 cost="+getCost(random)+"}");
params.add("partitionKeys", "a_i");
params.add("rows","50");
params.add("wt", "xml");
set1 = new HashSet();
response = h.query(req(params));
@ -169,6 +173,7 @@ public class TestHashQParserPlugin extends SolrTestCaseJ4 {
params.add("fq", "{!hash worker=1 workers=2 cost="+getCost(random)+"}");
params.add("partitionKeys", "a_i");
params.add("rows","50");
params.add("wt", "xml");
set2 = new HashSet();
response = h.query(req(params));
@ -196,6 +201,7 @@ public class TestHashQParserPlugin extends SolrTestCaseJ4 {
params.add("fq", "{!hash worker=0 workers=2 cost="+getCost(random)+"}");
params.add("partitionKeys", "a_s, a_i, a_l");
params.add("rows","50");
params.add("wt", "xml");
set1 = new HashSet();
response = h.query(req(params));
@ -214,6 +220,7 @@ public class TestHashQParserPlugin extends SolrTestCaseJ4 {
params.add("fq", "{!hash worker=1 workers=2 cost="+getCost(random)+"}");
params.add("partitionKeys", "a_s, a_i, a_l");
params.add("rows","50");
params.add("wt", "xml");
set2 = new HashSet();
response = h.query(req(params));
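The wt=xml additions are needed because this test scrapes the XML response, which is no longer the default. As an aside, a hedged client-side sketch (core and field names hypothetical) of the same {!hash} idea, where each worker filters to a disjoint slice of the results:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class HashPartitionExample {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build()) {
      for (int worker = 0; worker < 2; worker++) {
        SolrQuery q = new SolrQuery("*:*");
        // Each worker's filter hashes partitionKeys into a disjoint bucket.
        q.addFilterQuery("{!hash worker=" + worker + " workers=2}");
        q.set("partitionKeys", "a_s");
        q.setRows(50);
        QueryResponse rsp = client.query(q);
        System.out.println("worker " + worker + ": " + rsp.getResults().getNumFound() + " docs");
      }
    }
  }
}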

View File

@ -31,17 +31,23 @@ public class GraphQueryTest extends SolrTestCaseJ4 {
@Test
public void testGraph() throws Exception {
// normal strings
doGraph( params("node_id","node_s", "edge_id","edge_ss") );
// TODO: try with numeric fields... doGraph( params("node_id","node_i", "edge_id","edge_is") );
// point based fields with docvalues
doGraph( params("node_id","node_ip", "edge_id","edge_ips") );
doGraph( params("node_id","node_lp", "edge_id","edge_lps") );
doGraph( params("node_id","node_fp", "edge_id","edge_fps") );
doGraph( params("node_id","node_dp", "edge_id","edge_dps") );
}
public void doGraph(SolrParams p) throws Exception {
String node_id = p.get("node_id");
String edge_id = p.get("edge_id");
// 1 -> 2 -> 3 -> ( 4 5 )
// 7 -> 1
// 8 -> ( 1 2 )
// NOTE: from/to fields are reversed from {!join}... values are looked up in the "toField" and then matched on the "fromField"
// 1->2->(3,9)->(4,5)->7
// 8->(1,2)->...
assertU(adoc("id", "doc_1", node_id, "1", edge_id, "2", "text", "foo", "title", "foo10" ));
assertU(adoc("id", "doc_2", node_id, "2", edge_id, "3", "text", "foo" ));
assertU(commit());
@ -71,6 +77,13 @@ public class GraphQueryTest extends SolrTestCaseJ4 {
assertJQ(req(p, "q","{!graph from=${node_id} to=${edge_id}}id:doc_1")
, "/response/numFound==7"
);
// reverse the order to test single/multi-valued on the opposite fields
// start with doc1, look up node_id (1) and match to edge_id (docs 7 and 8)
assertJQ(req(p, "q","{!graph from=${edge_id} to=${node_id} maxDepth=1}id:doc_1")
, "/response/numFound==3"
);
assertJQ(req(p, "q","{!graph from=${node_id} to=${edge_id} returnRoot=true returnOnlyLeaf=false}id:doc_8")
, "/response/numFound==8"
);
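A hedged SolrJ sketch of the same traversal from a client, assuming the string-typed node_s/edge_ss fields used above (core name hypothetical):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class GraphQueryExample {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build()) {
      // Per the note in the test: from/to are reversed relative to {!join},
      // i.e. values are looked up in the "to" field and matched on the "from"
      // field, so this walks the 1->2->(3,9)->(4,5)->7 graph starting at doc_1.
      SolrQuery q = new SolrQuery("{!graph from=node_s to=edge_ss}id:doc_1");
      QueryResponse rsp = client.query(q);
      System.out.println("reachable docs: " + rsp.getResults().getNumFound());
    }
  }
}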

View File

@ -203,7 +203,7 @@ public class BasicAuthIntegrationTest extends SolrCloudTestCase {
executeCommand(baseUrl + authcPrefix, cl, "{set-property : { blockUnknown: true}}", "harry", "HarryIsUberCool");
verifySecurityStatus(cl, baseUrl + authcPrefix, "authentication/blockUnknown", "true", 20, "harry", "HarryIsUberCool");
verifySecurityStatus(cl, baseUrl + "/admin/info/key?wt=json", "key", NOT_NULL_PREDICATE, 20);
verifySecurityStatus(cl, baseUrl + "/admin/info/key", "key", NOT_NULL_PREDICATE, 20);
String[] toolArgs = new String[]{
"status", "-solr", baseUrl};

View File

@ -120,7 +120,7 @@ public class HttpSolrCallGetCoreTest extends SolrCloudTestCase {
@Override
public String getQueryString() {
return "wt=json&version=2";
return "version=2";
}
@Override

View File

@ -82,10 +82,10 @@ public class TestNamedUpdateProcessors extends AbstractFullDistribZkTestBase {
"'add-runtimelib' : { 'name' : 'colltest' ,'version':1}\n" +
"}";
RestTestHarness client = restTestHarnesses.get(random().nextInt(restTestHarnesses.size()));
TestSolrConfigHandler.runConfigCommand(client, "/config?wt=json", payload);
TestSolrConfigHandler.runConfigCommand(client, "/config", payload);
TestSolrConfigHandler.testForResponseElement(client,
null,
"/config/overlay?wt=json",
"/config/overlay",
null,
Arrays.asList("overlay", "runtimeLib", blobName, "version"),
1l, 10);
@ -97,11 +97,11 @@ public class TestNamedUpdateProcessors extends AbstractFullDistribZkTestBase {
"}";
client = restTestHarnesses.get(random().nextInt(restTestHarnesses.size()));
TestSolrConfigHandler.runConfigCommand(client, "/config?wt=json", payload);
TestSolrConfigHandler.runConfigCommand(client, "/config", payload);
for (RestTestHarness restTestHarness : restTestHarnesses) {
TestSolrConfigHandler.testForResponseElement(restTestHarness,
null,
"/config/overlay?wt=json",
"/config/overlay",
null,
Arrays.asList("overlay", "updateProcessor", "firstFld", "fieldName"),
"test_s", 10);

View File

@ -19,6 +19,7 @@ package org.apache.solr.util.hll;
import java.util.Locale;
import org.apache.lucene.util.LuceneTestCase;
import org.apache.solr.util.LongIterator;
import org.junit.Test;
/**

View File

@ -17,6 +17,7 @@
package org.apache.solr.util.hll;
import org.apache.lucene.util.LuceneTestCase;
import org.apache.solr.util.LongIterator;
import org.junit.Test;
/**

View File

@ -46,4 +46,4 @@ When Solr is started connect to:
http://localhost:8983/solr/tika/dataimport?command=full-import
Check also the Solr Reference Guide for detailed usage guide:
https://cwiki.apache.org/confluence/display/solr/Uploading+Structured+Data+Store+Data+with+the+Data+Import+Handler
https://lucene.apache.org/solr/guide/uploading-structured-data-store-data-with-the-data-import-handler.html

View File

@ -16,17 +16,17 @@
limitations under the License.
-->
<!--
This is a DEMO configuration, highlighting elements
specifically needed to get this example running
such as libraries and request handler specifics.
It uses defaults or does not define most of production-level settings
such as various caches or auto-commit policies.
See Solr Reference Guide and other examples for
more details on a well configured solrconfig.xml
https://cwiki.apache.org/confluence/display/solr/The+Well-Configured+Solr+Instance
https://lucene.apache.org/solr/guide/the-well-configured-solr-instance.html
-->
<config>
@ -44,6 +44,9 @@
<lst name="defaults">
<str name="echoParams">explicit</str>
<str name="df">text</str>
<!-- Change from JSON to XML format (the default prior to Solr 7.0)
<str name="wt">xml</str>
-->
</lst>
</requestHandler>
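The commented-out wt default above restores the pre-7.0 XML behavior server-side; a client can also request XML per call. A hedged SolrJ sketch (core name hypothetical):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.impl.NoOpResponseParser;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.common.util.NamedList;

public class XmlResponseExample {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr/techproducts").build()) {
      QueryRequest req = new QueryRequest(new SolrQuery("*:*"));
      // Fetch the raw wt=xml response body instead of a parsed NamedList.
      req.setResponseParser(new NoOpResponseParser("xml"));
      NamedList<Object> raw = client.request(req);
      System.out.println((String) raw.get("response"));
    }
  }
}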

View File

@ -16,9 +16,9 @@
limitations under the License.
-->
<!--
For more details about configuration options that may appear in
this file, see http://wiki.apache.org/solr/SolrConfigXml.
-->
<config>
<!-- In all configuration below, a prefix of "solr." for class names
@ -46,19 +46,19 @@
instanceDir.
Please note that <lib/> directives are processed in the order
that they appear in your solrconfig.xml file, and are "stacked"
on top of each other when building a ClassLoader - so if you have
plugin jars with dependencies on other jars, the "lower level"
dependency jars should be loaded first.
If a "./lib" directory exists in your instanceDir, all files
found in it are included as if you had used the following
syntax...
<lib dir="./lib" />
-->
<!-- A 'dir' option by itself adds any files found in the directory
to the classpath; this is useful for including all jars in a
directory.
@ -69,7 +69,7 @@
If a 'dir' option (with or without a regex) is used and nothing
is found that matches, a warning will be logged.
The examples below can be used to load some solr-contribs along
with their external dependencies.
-->
<lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-dataimporthandler-.*\.jar" />
@ -83,14 +83,14 @@
<lib dir="${solr.install.dir:../../../..}/contrib/velocity/lib" regex=".*\.jar" />
<lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-velocity-\d.*\.jar" />
<!-- an exact 'path' can be used instead of a 'dir' to specify a
specific jar file. This will cause a serious error to be logged
if it can't be loaded.
-->
<!--
<lib path="../a-jar-that-does-not-exist.jar" />
-->
<!-- Data Directory
Used to specify an alternate directory to hold all index data
@ -102,7 +102,7 @@
<!-- The DirectoryFactory to use for indexes.
solr.StandardDirectoryFactory is filesystem
based and tries to pick the best implementation for the current
JVM and platform. solr.NRTCachingDirectoryFactory, the default,
@ -114,7 +114,7 @@
solr.RAMDirectoryFactory is memory based and not persistent.
-->
<directoryFactory name="DirectoryFactory"
class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>
<!-- The CodecFactory for defining the format of the inverted index.
@ -132,19 +132,19 @@
Index Config - These settings control low-level behavior of indexing
Most example settings here show the default value, but are commented
out, to more easily see where customizations have been made.
Note: This replaces <indexDefaults> and <mainIndex> from older versions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
<indexConfig>
<!-- maxFieldLength was removed in 4.0. To get similar behavior, include a
LimitTokenCountFilterFactory in your fieldType definition. E.g.
<filter class="solr.LimitTokenCountFilterFactory" maxTokenCount="10000"/>
-->
<!-- Maximum time to wait for a write lock (ms) for an IndexWriter. Default: 1000 -->
<!-- <writeLockTimeout>1000</writeLockTimeout> -->
<!-- Expert: Enabling compound file will use fewer files for the index,
using fewer file descriptors at the expense of a performance decrease.
Default in Lucene is "true". Default in Solr is "false" (since 3.6) -->
<!-- <useCompoundFile>false</useCompoundFile> -->
@ -159,7 +159,7 @@
<!-- <ramBufferSizeMB>100</ramBufferSizeMB> -->
<!-- <maxBufferedDocs>1000</maxBufferedDocs> -->
<!-- Expert: Merge Policy
The Merge Policy in Lucene controls how merging of segments is done.
The default since Solr/Lucene 3.3 is TieredMergePolicy.
The default since Lucene 2.3 was the LogByteSizeMergePolicy,
@ -178,15 +178,15 @@
can perform merges in the background using separate threads.
The SerialMergeScheduler (Lucene 2.2 default) does not.
-->
<!--
<mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler"/>
-->
<!-- LockFactory
This option specifies which Lucene LockFactory implementation
to use.
single = SingleInstanceLockFactory - suggested for a
read-only index or when there is no possibility of
another process trying to modify the index.
@ -210,11 +210,11 @@
The default Solr IndexDeletionPolicy implementation supports
deleting index commit points on number of commits, age of
commit point and optimized status.
The latest commit point should always be preserved regardless
of the criteria.
-->
<!--
<deletionPolicy class="solr.SolrDeletionPolicy">
-->
<!-- The number of commit points to be kept -->
@ -229,12 +229,12 @@
<str name="maxCommitAge">30MINUTES</str>
<str name="maxCommitAge">1DAY</str>
-->
<!--
</deletionPolicy>
-->
<!-- Lucene Infostream
To aid in advanced debugging, Lucene provides an "InfoStream"
of detailed information when indexing.
@ -247,7 +247,7 @@
<!-- JMX
This example enables JMX if and only if an existing MBeanServer
is found, use this if you want to configure JMX through JVM
parameters. Remove this to disable exposing Solr configuration
@ -257,7 +257,7 @@
-->
<jmx />
<!-- If you want to connect to a particular server, specify the
agentId
-->
<!-- <jmx agentId="myAgent" /> -->
<!-- If you want to start a new MBeanServer, specify the serviceUrl -->
@ -272,16 +272,16 @@
uncommitted changes to the index, so use of a hard autoCommit
is recommended (see below).
"dir" - the target directory for transaction logs, defaults to the
solr data directory. -->
<updateLog>
<str name="dir">${solr.ulog.dir:}</str>
</updateLog>
<!-- AutoCommit
Perform a hard commit automatically under certain conditions.
Instead of enabling autoCommit, consider using "commitWithin"
when adding documents.
http://wiki.apache.org/solr/UpdateXmlMessages
@ -290,7 +290,7 @@
maxTime - Maximum amount of time in ms that is allowed to pass
since a document was added before automatically
triggering a new commit.
openSearcher - if false, the commit causes recent index changes
to be flushed to stable storage, but does not cause a new
searcher to be opened to make those changes visible.
@ -298,9 +298,9 @@
If the updateLog is enabled, then it's highly recommended to
have some sort of hard autoCommit to limit the log size.
-->
<autoCommit>
<maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
<openSearcher>false</openSearcher>
</autoCommit>
<!-- softAutoCommit is like autoCommit except it causes a
@ -309,12 +309,12 @@
faster and more near-realtime friendly than a hard commit.
-->
<autoSoftCommit>
<maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
</autoSoftCommit>
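A hedged SolrJ sketch of the commitWithin alternative recommended above (client and document are hypothetical):

import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class CommitWithinExample {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr/techproducts").build()) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "commit-within-demo");
      // Ask Solr to make the doc visible within 10s instead of committing now.
      client.add(doc, 10000);
    }
  }
}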
<!-- Update Related Event Listeners
Various IndexWriter related events can trigger Listeners to
take actions.
@ -323,10 +323,10 @@
-->
<!-- The RunExecutableListener executes an external command from a
hook such as postCommit or postOptimize.
exe - the name of the executable to run
dir - dir to use as the current working directory. (default=".")
wait - the calling thread waits until the executable returns.
(default="true")
args - the arguments to pass to the program. (default is none)
env - environment variables to set. (default is none)
@ -346,7 +346,7 @@
-->
</updateHandler>
<!-- IndexReaderFactory
Use the following format to specify a custom IndexReaderFactory,
@ -385,12 +385,12 @@
is thrown if exceeded.
** WARNING **
This option actually modifies a global Lucene property that
will affect all SolrCores. If multiple solrconfig.xml files
disagree on this property, the value at any given moment will
be based on the last SolrCore to be initialized.
-->
<maxBooleanClauses>1024</maxBooleanClauses>
@ -399,7 +399,7 @@
There are two implementations of cache available for Solr,
LRUCache, based on a synchronized LinkedHashMap, and
FastLRUCache, based on a ConcurrentHashMap.
FastLRUCache has faster gets and slower puts in single
threaded operation and thus is generally faster than LRUCache
@ -424,7 +424,7 @@
initialSize - the initial capacity (number of entries) of
the cache. (see java.util.HashMap)
autowarmCount - the number of entries to prepopulate from
an old cache.
-->
<filterCache class="solr.FastLRUCache"
size="512"
@ -432,27 +432,27 @@
autowarmCount="0"/>
<!-- Query Result Cache
Caches results of searches - ordered lists of document ids
(DocList) based on a query, a sort, and the range of documents requested.
-->
<queryResultCache class="solr.LRUCache"
size="512"
initialSize="512"
autowarmCount="0"/>
<!-- Document Cache
Caches Lucene Document objects (the stored fields for each
document). Since Lucene internal document ids are transient,
this cache will not be autowarmed.
-->
<documentCache class="solr.LRUCache"
size="512"
initialSize="512"
autowarmCount="0"/>
<!-- custom cache currently used by block join -->
<cache name="perSegFilter"
class="solr.search.LRUCache"
size="10"
@ -461,7 +461,7 @@
regenerator="solr.NoOpRegenerator" />
<!-- Field Value Cache
Cache used to hold field values that are quickly accessible
by document id. The fieldValueCache is created by default
even if not configured here.
@ -479,8 +479,8 @@
name through SolrIndexSearcher.getCache(),cacheLookup(), and
cacheInsert(). The purpose is to enable easy caching of
user/application level data. The regenerator argument should
be specified as an implementation of solr.CacheRegenerator
if autowarming is desired.
-->
<!--
<cache name="myUserCache"
@ -527,12 +527,12 @@
are collected. For example, if a search for a particular query
requests matching documents 10 through 19, and queryWindowSize is 50,
then documents 0 through 49 will be collected and cached. Any further
requests in that range can be satisfied via the cache.
-->
<queryResultWindowSize>20</queryResultWindowSize>
<!-- Maximum number of documents to cache for any entry in the
queryResultCache.
-->
<queryResultMaxDocsCached>200</queryResultMaxDocsCached>
@ -550,10 +550,10 @@
prepared but there is no current registered searcher to handle
requests or to gain autowarming data from.
-->
<!-- QuerySenderListener takes an array of NamedList and executes a
local query request for each NamedList in sequence.
-->
<listener event="newSearcher" class="solr.QuerySenderListener">
<arr name="queries">
@ -600,19 +600,19 @@
multipartUploadLimitInKB - specifies the max size (in KiB) of
Multipart File Uploads that Solr will allow in a Request.
formdataUploadLimitInKB - specifies the max size (in KiB) of
form data (application/x-www-form-urlencoded) sent via
POST. You can use POST to pass request parameters not
fitting into the URL.
addHttpRequestToContext - if set to true, it will instruct
the requestParsers to include the original HttpServletRequest
object in the context map of the SolrQueryRequest under the
key "httpRequest". It will not be used by any of the existing
Solr components, but may be useful when developing custom
plugins.
*** WARNING ***
Before enabling remote streaming, you should make sure your
system has authentication enabled.
@ -634,21 +634,21 @@
<!-- If you include a <cacheControl> directive, it will be used to
generate a Cache-Control header (as well as an Expires header
if the value contains "max-age=")
By default, no Cache-Control header is generated.
You can use the <cacheControl> option even if you have set
never304="true"
-->
<!--
<httpCaching never304="true" >
<cacheControl>max-age=30, public</cacheControl>
</httpCaching>
-->
<!-- To enable Solr to respond with automatically generated HTTP
Caching headers, and to respond to Cache Validation requests
correctly, set the value of never304="false"
This will cause Solr to generate Last-Modified and ETag
headers based on the properties of the Index.
@ -673,12 +673,12 @@
<!--
<httpCaching lastModifiedFrom="openTime"
etagSeed="Solr">
<cacheControl>max-age=30, public</cacheControl>
</httpCaching>
-->
</requestDispatcher>
<!-- Request Handlers
http://wiki.apache.org/solr/SolrRequestHandler
@ -713,6 +713,9 @@
<str name="echoParams">explicit</str>
<int name="rows">10</int>
<str name="df">text</str>
<!-- Change from JSON to XML format (the default prior to Solr 7.0)
<str name="wt">xml</str>
-->
</lst>
<!-- In addition to defaults, "appends" params can be specified
to identify values which should be appended to the list of
@ -817,10 +820,10 @@
<!-- Solr Cell Update Request Handler
http://wiki.apache.org/solr/ExtractingRequestHandler
-->
<requestHandler name="/update/extract"
<requestHandler name="/update/extract"
startup="lazy"
class="solr.extraction.ExtractingRequestHandler" >
<lst name="defaults">
@ -836,18 +839,18 @@
<!-- Search Components
Search components are registered to SolrCore and used by
instances of SearchHandler (which can access them by name)
By default, the following components are available:
<searchComponent name="query" class="solr.QueryComponent" />
<searchComponent name="facet" class="solr.FacetComponent" />
<searchComponent name="mlt" class="solr.MoreLikeThisComponent" />
<searchComponent name="highlight" class="solr.HighlightComponent" />
<searchComponent name="stats" class="solr.StatsComponent" />
<searchComponent name="debug" class="solr.DebugComponent" />
Default configuration in a requestHandler would look like:
<arr name="components">
@ -859,28 +862,28 @@
<str>debug</str>
</arr>
If you register a searchComponent to one of the standard names,
that will be used instead of the default.
To insert components before or after the 'standard' components, use:
<arr name="first-components">
<str>myFirstComponentName</str>
</arr>
<arr name="last-components">
<str>myLastComponentName</str>
</arr>
NOTE: The component registered with the name "debug" will
always be executed after the "last-components"
-->
<!-- Spell Check
The spell check component can return a list of alternative spelling
suggestions.
http://wiki.apache.org/solr/SpellCheckComponent
-->
@ -915,11 +918,11 @@
<float name="thresholdTokenFrequency">.01</float>
-->
</lst>
<!-- a spellchecker that can break or combine words. See "/spell" handler below for usage -->
<lst name="spellchecker">
<str name="name">wordbreak</str>
<str name="classname">solr.WordBreakSolrSpellChecker</str>
<str name="classname">solr.WordBreakSolrSpellChecker</str>
<str name="field">name</str>
<str name="combineWords">true</str>
<str name="breakWords">true</str>
@ -938,7 +941,7 @@
</lst>
-->
<!-- a spellchecker that uses an alternate comparator
comparatorClass can be one of:
1. score (default)
@ -964,8 +967,8 @@
</lst>
-->
</searchComponent>
<!-- A request handler for demonstrating the spellcheck component.
NOTE: This is purely as an example. The whole purpose of the
SpellCheckComponent is to hook it into the request handler that
@ -974,7 +977,7 @@
IN OTHER WORDS, THERE IS A REALLY GOOD CHANCE THE SETUP BELOW IS
NOT WHAT YOU WANT FOR YOUR PRODUCTION SYSTEM!
See http://wiki.apache.org/solr/SpellCheckComponent for details
on the request parameters.
-->
@ -988,14 +991,14 @@
<str name="spellcheck.dictionary">default</str>
<str name="spellcheck.dictionary">wordbreak</str>
<str name="spellcheck">on</str>
<str name="spellcheck.extendedResults">true</str>
<str name="spellcheck.extendedResults">true</str>
<str name="spellcheck.count">10</str>
<str name="spellcheck.alternativeTermCount">5</str>
<str name="spellcheck.maxResultsForSuggest">5</str>
<str name="spellcheck.maxResultsForSuggest">5</str>
<str name="spellcheck.collate">true</str>
<str name="spellcheck.collateExtendedResults">true</str>
<str name="spellcheck.collateExtendedResults">true</str>
<str name="spellcheck.maxCollationTries">10</str>
<str name="spellcheck.maxCollations">5</str>
<str name="spellcheck.maxCollations">5</str>
</lst>
<arr name="last-components">
<str>spellcheck</str>
@ -1006,7 +1009,7 @@
<lst name="suggester">
<str name="name">mySuggester</str>
<str name="lookupImpl">FuzzyLookupFactory</str> <!-- org.apache.solr.spelling.suggest.fst -->
<str name="dictionaryImpl">DocumentDictionaryFactory</str> <!-- org.apache.solr.spelling.suggest.HighFrequencyDictionaryFactory -->
<str name="dictionaryImpl">DocumentDictionaryFactory</str> <!-- org.apache.solr.spelling.suggest.HighFrequencyDictionaryFactory -->
<str name="field">cat</str>
<str name="weightField">price</str>
<str name="suggestAnalyzerFieldType">string</str>
@ -1032,8 +1035,8 @@
This is purely as an example.
In reality you will likely want to add the component to your
already specified request handlers.
-->
<requestHandler name="/tvrh" class="solr.SearchHandler" startup="lazy">
<lst name="defaults">
@ -1044,7 +1047,7 @@
<str>tvComponent</str>
</arr>
</requestHandler>
<!-- Terms Component
http://wiki.apache.org/solr/TermsComponent
@ -1059,7 +1062,7 @@
<lst name="defaults">
<bool name="terms">true</bool>
<bool name="distrib">false</bool>
</lst>
<arr name="components">
<str>terms</str>
</arr>
@ -1099,7 +1102,7 @@
<highlighting>
<!-- Configure the standard fragmenter -->
<!-- This could most likely be commented out in the "default" case -->
<fragmenter name="gap"
<fragmenter name="gap"
default="true"
class="solr.highlight.GapFragmenter">
<lst name="defaults">
@ -1107,10 +1110,10 @@
</lst>
</fragmenter>
<!-- A regular-expression-based fragmenter
(for sentence extraction)
-->
<fragmenter name="regex"
<fragmenter name="regex"
class="solr.highlight.RegexFragmenter">
<lst name="defaults">
<!-- slightly smaller fragsizes work better because of slop -->
@ -1123,7 +1126,7 @@
</fragmenter>
<!-- Configure the standard formatter -->
<formatter name="html"
<formatter name="html"
default="true"
class="solr.highlight.HtmlFormatter">
<lst name="defaults">
@ -1133,27 +1136,27 @@
</formatter>
<!-- Configure the standard encoder -->
<encoder name="html"
<encoder name="html"
class="solr.highlight.HtmlEncoder" />
<!-- Configure the standard fragListBuilder -->
<fragListBuilder name="simple"
<fragListBuilder name="simple"
class="solr.highlight.SimpleFragListBuilder"/>
<!-- Configure the single fragListBuilder -->
<fragListBuilder name="single"
<fragListBuilder name="single"
class="solr.highlight.SingleFragListBuilder"/>
<!-- Configure the weighted fragListBuilder -->
<fragListBuilder name="weighted"
<fragListBuilder name="weighted"
default="true"
class="solr.highlight.WeightedFragListBuilder"/>
<!-- default tag FragmentsBuilder -->
<fragmentsBuilder name="default"
<fragmentsBuilder name="default"
default="true"
class="solr.highlight.ScoreOrderFragmentsBuilder">
<!--
<lst name="defaults">
<str name="hl.multiValuedSeparatorChar">/</str>
</lst>
@ -1161,7 +1164,7 @@
</fragmentsBuilder>
<!-- multi-colored tag FragmentsBuilder -->
<fragmentsBuilder name="colored"
<fragmentsBuilder name="colored"
class="solr.highlight.ScoreOrderFragmentsBuilder">
<lst name="defaults">
<str name="hl.tag.pre"><![CDATA[
@ -1173,8 +1176,8 @@
<str name="hl.tag.post"><![CDATA[</b>]]></str>
</lst>
</fragmentsBuilder>
<boundaryScanner name="default"
<boundaryScanner name="default"
default="true"
class="solr.highlight.SimpleBoundaryScanner">
<lst name="defaults">
@ -1182,8 +1185,8 @@
<str name="hl.bs.chars">.,!? &#9;&#10;&#13;</str>
</lst>
</boundaryScanner>
<boundaryScanner name="breakIterator"
<boundaryScanner name="breakIterator"
class="solr.highlight.BreakIteratorBoundaryScanner">
<lst name="defaults">
<!-- type should be one of CHARACTER, WORD(default), LINE and SENTENCE -->
@ -1205,15 +1208,15 @@
http://wiki.apache.org/solr/UpdateRequestProcessor
-->
<!-- Deduplication
An example dedup update processor that creates the "id" field
on the fly based on the hash code of some other fields. This
example has overwriteDupes set to false since we are using the
id field as the signatureField and Solr will maintain
uniqueness based on that anyway.
-->
<!--
<updateRequestProcessorChain name="dedupe">
@ -1228,7 +1231,7 @@
<processor class="solr.RunUpdateProcessorFactory" />
</updateRequestProcessorChain>
-->
<!-- Language identification
This example update chain identifies the language of the incoming
@ -1268,7 +1271,7 @@
<processor class="solr.RunUpdateProcessorFactory" />
</updateRequestProcessorChain>
-->
<!-- Response Writers
http://wiki.apache.org/solr/QueryResponseWriter
@ -1284,7 +1287,7 @@
overridden...
-->
<!--
<queryResponseWriter name="xml"
<queryResponseWriter name="xml"
default="true"
class="solr.XMLResponseWriter" />
<queryResponseWriter name="json" class="solr.JSONResponseWriter"/>
@ -1303,7 +1306,7 @@
-->
<str name="content-type">text/plain; charset=UTF-8</str>
</queryResponseWriter>
<!--
Custom response writers can be declared as needed...
-->
@ -1313,7 +1316,7 @@
<!-- XSLT response writer transforms the XML output by any xslt file found
in Solr's conf/xslt directory. Changes to xslt files are checked for
every xsltCacheLifetimeSeconds.
-->
<queryResponseWriter name="xslt" class="solr.XSLTResponseWriter">
<int name="xsltCacheLifetimeSeconds">5</int>
@ -1321,7 +1324,7 @@
<!-- Query Parsers
https://cwiki.apache.org/confluence/display/solr/Query+Syntax+and+Parsing
https://lucene.apache.org/solr/guide/query-syntax-and-parsing.html
Multiple QParserPlugins can be registered by name, and then
used in either the "defType" param for the QueryComponent (used
@ -1341,11 +1344,11 @@
-->
<!-- example of registering a custom function parser -->
<!--
<valueSourceParser name="myfunc"
<valueSourceParser name="myfunc"
class="com.mycompany.MyValueSourceParser" />
-->
<!-- Document Transformers
http://wiki.apache.org/solr/DocTransformers
-->
@ -1354,12 +1357,12 @@
<transformer name="db" class="com.mycompany.LoadFromDatabaseTransformer" >
<int name="connection">jdbc://....</int>
</transformer>
To add a constant value to all docs, use:
<transformer name="mytrans2" class="org.apache.solr.response.transform.ValueAugmenterFactory" >
<int name="value">5</int>
</transformer>
If you want the user to still be able to change it with _value:something_ use this:
<transformer name="mytrans3" class="org.apache.solr.response.transform.ValueAugmenterFactory" >
<double name="defaultValue">5</double>

View File

@ -16,9 +16,9 @@
limitations under the License.
-->
<!--
For more details about configuration options that may appear in
this file, see http://wiki.apache.org/solr/SolrConfigXml.
-->
<config>
<!-- In all configuration below, a prefix of "solr." for class names
@ -46,19 +46,19 @@
instanceDir.
Please note that <lib/> directives are processed in the order
that they appear in your solrconfig.xml file, and are "stacked"
on top of each other when building a ClassLoader - so if you have
plugin jars with dependencies on other jars, the "lower level"
dependency jars should be loaded first.
If a "./lib" directory exists in your instanceDir, all files
found in it are included as if you had used the following
syntax...
<lib dir="./lib" />
-->
<!-- A 'dir' option by itself adds any files found in the directory
to the classpath, this is useful for including all jars in a
directory.
@ -69,7 +69,7 @@
If a 'dir' option (with or without a regex) is used and nothing
is found that matches, a warning will be logged.
The examples below can be used to load some solr-contribs along
with their external dependencies.
-->
<lib dir="${solr.install.dir:../../../..}/contrib/dataimporthandler/lib/" regex=".*\.jar" />
@ -86,14 +86,14 @@
<lib dir="${solr.install.dir:../../../..}/contrib/velocity/lib" regex=".*\.jar" />
<lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-velocity-\d.*\.jar" />
<!-- an exact 'path' can be used instead of a 'dir' to specify a
specific jar file. This will cause a serious error to be logged
if it can't be loaded.
-->
<!--
<lib path="../a-jar-that-does-not-exist.jar" />
<lib path="../a-jar-that-does-not-exist.jar" />
-->
<!-- Data Directory
Used to specify an alternate directory to hold all index data
@ -105,7 +105,7 @@
<!-- The DirectoryFactory to use for indexes.
solr.StandardDirectoryFactory is filesystem
based and tries to pick the best implementation for the current
JVM and platform. solr.NRTCachingDirectoryFactory, the default,
@ -117,7 +117,7 @@
solr.RAMDirectoryFactory is memory based and not persistent.
-->
<directoryFactory name="DirectoryFactory"
<directoryFactory name="DirectoryFactory"
class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>
<!-- The CodecFactory for defining the format of the inverted index.
@ -135,19 +135,19 @@
Index Config - These settings control low-level behavior of indexing
Most example settings here show the default value, but are commented
out, to more easily see where customizations have been made.
Note: This replaces <indexDefaults> and <mainIndex> from older versions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
<indexConfig>
<!-- maxFieldLength was removed in 4.0. To get similar behavior, include a
LimitTokenCountFilterFactory in your fieldType definition. E.g.
<filter class="solr.LimitTokenCountFilterFactory" maxTokenCount="10000"/>
-->
<!-- Maximum time to wait for a write lock (ms) for an IndexWriter. Default: 1000 -->
<!-- <writeLockTimeout>1000</writeLockTimeout> -->
<!-- Expert: Enabling compound file will use fewer files for the index,
using fewer file descriptors at the expense of a performance decrease.
Default in Lucene is "true". Default in Solr is "false" (since 3.6) -->
<!-- <useCompoundFile>false</useCompoundFile> -->
@ -162,7 +162,7 @@
<!-- <ramBufferSizeMB>100</ramBufferSizeMB> -->
<!-- <maxBufferedDocs>1000</maxBufferedDocs> -->
<!-- Expert: Merge Policy
The Merge Policy in Lucene controls how merging of segments is done.
The default since Solr/Lucene 3.3 is TieredMergePolicy.
The default since Lucene 2.3 was the LogByteSizeMergePolicy,
@ -181,15 +181,15 @@
can perform merges in the background using separate threads.
The SerialMergeScheduler (Lucene 2.2 default) does not.
-->
<!--
<mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler"/>
-->
<!-- LockFactory
This option specifies which Lucene LockFactory implementation
to use.
single = SingleInstanceLockFactory - suggested for a
read-only index or when there is no possibility of
another process trying to modify the index.
@ -213,11 +213,11 @@
The default Solr IndexDeletionPolicy implementation supports
deleting index commit points on number of commits, age of
commit point and optimized status.
The latest commit point should always be preserved regardless
of the criteria.
-->
<!--
<deletionPolicy class="solr.SolrDeletionPolicy">
-->
<!-- The number of commit points to be kept -->
@ -232,12 +232,12 @@
<str name="maxCommitAge">30MINUTES</str>
<str name="maxCommitAge">1DAY</str>
-->
<!--
</deletionPolicy>
-->
<!-- Lucene Infostream
To aid in advanced debugging, Lucene provides an "InfoStream"
of detailed information when indexing.
@ -250,7 +250,7 @@
<!-- JMX
This example enables JMX if and only if an existing MBeanServer
is found, use this if you want to configure JMX through JVM
parameters. Remove this to disable exposing Solr configuration
@ -260,7 +260,7 @@
-->
<jmx />
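The bare <jmx/> above only registers Solr's MBeans when an MBeanServer already exists, for instance one created by standard JDK remote-JMX flags. A hedged sketch (the port is arbitrary, and the -a option for passing extra JVM arguments is assumed to be available in this bin/solr version):

bin/solr start -a "-Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=18983 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false"
# then attach with a JDK tool, e.g.:
jconsole localhost:18983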
<!-- If you want to connect to a particular server, specify the
agentId
-->
<!-- <jmx agentId="myAgent" /> -->
<!-- If you want to start a new MBeanServer, specify the serviceUrl -->
@ -275,16 +275,16 @@
uncommitted changes to the index, so use of a hard autoCommit
is recommended (see below).
"dir" - the target directory for transaction logs, defaults to the
solr data directory. -->
<updateLog>
<str name="dir">${solr.ulog.dir:}</str>
</updateLog>
<!-- AutoCommit
Perform a hard commit automatically under certain conditions.
Instead of enabling autoCommit, consider using "commitWithin"
when adding documents.
http://wiki.apache.org/solr/UpdateXmlMessages
@ -293,7 +293,7 @@
maxTime - Maximum amount of time in ms that is allowed to pass
since a document was added before automatically
triggering a new commit.
openSearcher - if false, the commit causes recent index changes
to be flushed to stable storage, but does not cause a new
searcher to be opened to make those changes visible.
@ -301,9 +301,9 @@
If the updateLog is enabled, then it's highly recommended to
have some sort of hard autoCommit to limit the log size.
-->
<autoCommit>
<maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
<openSearcher>false</openSearcher>
</autoCommit>
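Given the advice above to prefer "commitWithin" over aggressive commits for visibility, a request-level example may help; this sketch assumes a running node at localhost:8983 and a hypothetical collection name:

curl 'http://localhost:8983/solr/mycollection/update?commitWithin=10000' \
  -H 'Content-Type: application/json' \
  -d '[{"id":"doc1"}]'

The document becomes searchable within roughly ten seconds, without forcing an immediate hard commit on every request.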
<!-- softAutoCommit is like autoCommit except it causes a
@ -312,12 +312,12 @@
faster and more near-realtime friendly than a hard commit.
-->
<autoSoftCommit>
<maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
</autoSoftCommit>
<!-- Update Related Event Listeners
Various IndexWriter related events can trigger Listeners to
take actions.
@ -326,10 +326,10 @@
-->
<!-- The RunExecutableListener executes an external command from a
hook such as postCommit or postOptimize.
exe - the name of the executable to run
dir - dir to use as the current working directory. (default=".")
wait - the calling thread waits until the executable returns.
(default="true")
args - the arguments to pass to the program. (default is none)
env - environment variables to set. (default is none)
@ -349,7 +349,7 @@
-->
</updateHandler>
<!-- IndexReaderFactory
Use the following format to specify a custom IndexReaderFactory,
@ -388,12 +388,12 @@
is thrown if exceeded.
** WARNING **
This option actually modifies a global Lucene property that
will affect all SolrCores. If multiple solrconfig.xml files
disagree on this property, the value at any given moment will
be based on the last SolrCore to be initialized.
-->
<maxBooleanClauses>1024</maxBooleanClauses>
@ -402,7 +402,7 @@
There are two implementations of cache available for Solr,
LRUCache, based on a synchronized LinkedHashMap, and
FastLRUCache, based on a ConcurrentHashMap.
FastLRUCache has faster gets and slower puts in single
threaded operation and thus is generally faster than LRUCache
@ -427,7 +427,7 @@
initialSize - the initial capacity (number of entries) of
the cache. (see java.util.HashMap)
autowarmCount - the number of entries to prepopulate from
an old cache.
-->
<filterCache class="solr.FastLRUCache"
size="512"
@ -435,27 +435,27 @@
autowarmCount="0"/>
<!-- Query Result Cache
Caches results of searches - ordered lists of document ids
(DocList) based on a query, a sort, and the range of documents requested.
-->
<queryResultCache class="solr.LRUCache"
size="512"
initialSize="512"
autowarmCount="0"/>
<!-- Document Cache
Caches Lucene Document objects (the stored fields for each
document). Since Lucene internal document ids are transient,
this cache will not be autowarmed.
-->
<documentCache class="solr.LRUCache"
size="512"
initialSize="512"
autowarmCount="0"/>
<!-- custom cache currently used by block join -->
<cache name="perSegFilter"
class="solr.search.LRUCache"
size="10"
@ -464,7 +464,7 @@
regenerator="solr.NoOpRegenerator" />
<!-- Field Value Cache
Cache used to hold field values that are quickly accessible
by document id. The fieldValueCache is created by default
even if not configured here.
@ -482,8 +482,8 @@
name through SolrIndexSearcher.getCache(),cacheLookup(), and
cacheInsert(). The purpose is to enable easy caching of
user/application level data. The regenerator argument should
be specified as an implementation of solr.CacheRegenerator
if autowarming is desired.
-->
<!--
<cache name="myUserCache"
@ -530,12 +530,12 @@
are collected. For example, if a search for a particular query
requests matching documents 10 through 19, and queryWindowSize is 50,
then documents 0 through 49 will be collected and cached. Any further
requests in that range can be satisfied via the cache.
-->
<queryResultWindowSize>20</queryResultWindowSize>
<!-- Maximum number of documents to cache for any entry in the
queryResultCache.
-->
<queryResultMaxDocsCached>200</queryResultMaxDocsCached>
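To see the window in action: with queryResultWindowSize at 20 as above, the first page request collects a superset of ids, so the neighbouring page can come straight from the queryResultCache. A sketch (collection name hypothetical):

# fills the cache with the first window of document ids
curl 'http://localhost:8983/solr/mycollection/select?q=*:*&start=0&rows=10'
# this follow-up page falls inside the cached window
curl 'http://localhost:8983/solr/mycollection/select?q=*:*&start=10&rows=10'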
@ -553,10 +553,10 @@
prepared but there is no current registered searcher to handle
requests or to gain autowarming data from.
-->
<!-- QuerySenderListener takes an array of NamedList and executes a
local query request for each NamedList in sequence.
-->
<listener event="newSearcher" class="solr.QuerySenderListener">
<arr name="queries">
@ -603,19 +603,19 @@
multipartUploadLimitInKB - specifies the max size (in KiB) of
Multipart File Uploads that Solr will allow in a Request.
formdataUploadLimitInKB - specifies the max size (in KiB) of
form data (application/x-www-form-urlencoded) sent via
POST. You can use POST to pass request parameters not
fitting into the URL.
addHttpRequestToContext - if set to true, it will instruct
the requestParsers to include the original HttpServletRequest
object in the context map of the SolrQueryRequest under the
key "httpRequest". It will not be used by any of the existing
Solr components, but may be useful when developing custom
plugins.
*** WARNING ***
Before enabling remote streaming, you should make sure your
system has authentication enabled.
@ -637,21 +637,21 @@
<!-- If you include a <cacheControl> directive, it will be used to
generate a Cache-Control header (as well as an Expires header
if the value contains "max-age=")
By default, no Cache-Control header is generated.
You can use the <cacheControl> option even if you have set
never304="true"
-->
<!--
<httpCaching never304="true" >
<cacheControl>max-age=30, public</cacheControl>
</httpCaching>
-->
<!-- To enable Solr to respond with automatically generated HTTP
Caching headers, and to respond to Cache Validation requests
correctly, set the value of never304="false"
This will cause Solr to generate Last-Modified and ETag
headers based on the properties of the Index.
@ -676,12 +676,12 @@
<!--
<httpCaching lastModifiedFrom="openTime"
etagSeed="Solr">
<cacheControl>max-age=30, public</cacheControl>
</httpCaching>
-->
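Whichever httpCaching variant is enabled, the effect is easiest to verify by dumping the response headers; a sketch (collection name hypothetical):

curl -s -D - -o /dev/null 'http://localhost:8983/solr/mycollection/select?q=*:*' \
  | grep -iE 'cache-control|etag|last-modified'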
</requestDispatcher>
<!-- Request Handlers
http://wiki.apache.org/solr/SolrRequestHandler
@ -716,6 +716,9 @@
<str name="echoParams">explicit</str>
<int name="rows">10</int>
<str name="df">text</str>
<!-- Change from JSON to XML format (the default prior to Solr 7.0)
<str name="wt">xml</str>
-->
</lst>
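Since JSON is now the default response format, the commented-out setting above is the per-handler way back to XML; the same override also works per request. A sketch (collection name hypothetical):

# JSON, the 7.0 default
curl 'http://localhost:8983/solr/mycollection/select?q=*:*'
# XML, as in releases before 7.0
curl 'http://localhost:8983/solr/mycollection/select?q=*:*&wt=xml'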
<!-- In addition to defaults, "appends" params can be specified
to identify values which should be appended to the list of
@ -820,10 +823,10 @@
<!-- Solr Cell Update Request Handler
http://wiki.apache.org/solr/ExtractingRequestHandler
-->
<requestHandler name="/update/extract"
<requestHandler name="/update/extract"
startup="lazy"
class="solr.extraction.ExtractingRequestHandler" >
<lst name="defaults">
@ -839,18 +842,18 @@
<!-- Search Components
Search components are registered to SolrCore and used by
instances of SearchHandler (which can access them by name)
By default, the following components are available:
<searchComponent name="query" class="solr.QueryComponent" />
<searchComponent name="facet" class="solr.FacetComponent" />
<searchComponent name="mlt" class="solr.MoreLikeThisComponent" />
<searchComponent name="highlight" class="solr.HighlightComponent" />
<searchComponent name="stats" class="solr.StatsComponent" />
<searchComponent name="debug" class="solr.DebugComponent" />
Default configuration in a requestHandler would look like:
<arr name="components">
@ -862,28 +865,28 @@
<str>debug</str>
</arr>
If you register a searchComponent to one of the standard names,
that will be used instead of the default.
To insert components before or after the 'standard' components, use:
<arr name="first-components">
<str>myFirstComponentName</str>
</arr>
<arr name="last-components">
<str>myLastComponentName</str>
</arr>
NOTE: The component registered with the name "debug" will
always be executed after the "last-components"
always be executed after the "last-components"
-->
<!-- Spell Check
The spell check component can return a list of alternative spelling
suggestions.
http://wiki.apache.org/solr/SpellCheckComponent
-->
@ -918,11 +921,11 @@
<float name="thresholdTokenFrequency">.01</float>
-->
</lst>
<!-- a spellchecker that can break or combine words. See "/spell" handler below for usage -->
<lst name="spellchecker">
<str name="name">wordbreak</str>
<str name="classname">solr.WordBreakSolrSpellChecker</str>
<str name="classname">solr.WordBreakSolrSpellChecker</str>
<str name="field">name</str>
<str name="combineWords">true</str>
<str name="breakWords">true</str>
@ -941,7 +944,7 @@
</lst>
-->
<!-- a spellchecker that uses an alternate comparator
comparatorClass can be one of:
1. score (default)
@ -967,8 +970,8 @@
</lst>
-->
</searchComponent>
<!-- A request handler for demonstrating the spellcheck component.
NOTE: This is purely as an example. The whole purpose of the
SpellCheckComponent is to hook it into the request handler that
@ -977,7 +980,7 @@
IN OTHER WORDS, THERE IS A REALLY GOOD CHANCE THE SETUP BELOW IS
NOT WHAT YOU WANT FOR YOUR PRODUCTION SYSTEM!
See http://wiki.apache.org/solr/SpellCheckComponent for details
on the request parameters.
-->
@ -991,14 +994,14 @@
<str name="spellcheck.dictionary">default</str>
<str name="spellcheck.dictionary">wordbreak</str>
<str name="spellcheck">on</str>
<str name="spellcheck.extendedResults">true</str>
<str name="spellcheck.extendedResults">true</str>
<str name="spellcheck.count">10</str>
<str name="spellcheck.alternativeTermCount">5</str>
<str name="spellcheck.maxResultsForSuggest">5</str>
<str name="spellcheck.maxResultsForSuggest">5</str>
<str name="spellcheck.collate">true</str>
<str name="spellcheck.collateExtendedResults">true</str>
<str name="spellcheck.collateExtendedResults">true</str>
<str name="spellcheck.maxCollationTries">10</str>
<str name="spellcheck.maxCollations">5</str>
<str name="spellcheck.maxCollations">5</str>
</lst>
<arr name="last-components">
<str>spellcheck</str>
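For reference, the handler configured here can be exercised directly once the collection has some indexed terms; a sketch (collection name hypothetical, /spell as registered above):

curl 'http://localhost:8983/solr/mycollection/spell?q=delll+ultrashar&spellcheck=true'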
@ -1009,7 +1012,7 @@
<lst name="suggester">
<str name="name">mySuggester</str>
<str name="lookupImpl">FuzzyLookupFactory</str> <!-- org.apache.solr.spelling.suggest.fst -->
<str name="dictionaryImpl">DocumentDictionaryFactory</str> <!-- org.apache.solr.spelling.suggest.HighFrequencyDictionaryFactory -->
<str name="dictionaryImpl">DocumentDictionaryFactory</str> <!-- org.apache.solr.spelling.suggest.HighFrequencyDictionaryFactory -->
<str name="field">cat</str>
<str name="weightField">price</str>
<str name="suggestAnalyzerFieldType">string</str>
@ -1035,8 +1038,8 @@
This is purely as an example.
In reality you will likely want to add the component to your
already specified request handlers.
-->
<requestHandler name="/tvrh" class="solr.SearchHandler" startup="lazy">
<lst name="defaults">
@ -1062,7 +1065,7 @@
<lst name="defaults">
<bool name="terms">true</bool>
<bool name="distrib">false</bool>
</lst>
<arr name="components">
<str>terms</str>
</arr>
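With distrib=false as configured above, /terms answers from the local index only; a sketch (collection name hypothetical, terms.fl naming an indexed field):

curl 'http://localhost:8983/solr/mycollection/terms?terms.fl=name&terms.prefix=ap'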
@ -1102,7 +1105,7 @@
<highlighting>
<!-- Configure the standard fragmenter -->
<!-- This could most likely be commented out in the "default" case -->
<fragmenter name="gap"
<fragmenter name="gap"
default="true"
class="solr.highlight.GapFragmenter">
<lst name="defaults">
@ -1110,10 +1113,10 @@
</lst>
</fragmenter>
<!-- A regular-expression-based fragmenter
(for sentence extraction)
-->
<fragmenter name="regex"
class="solr.highlight.RegexFragmenter">
<lst name="defaults">
<!-- slightly smaller fragsizes work better because of slop -->
@ -1126,7 +1129,7 @@
</fragmenter>
<!-- Configure the standard formatter -->
<formatter name="html"
<formatter name="html"
default="true"
class="solr.highlight.HtmlFormatter">
<lst name="defaults">
@ -1136,27 +1139,27 @@
</formatter>
<!-- Configure the standard encoder -->
<encoder name="html"
<encoder name="html"
class="solr.highlight.HtmlEncoder" />
<!-- Configure the standard fragListBuilder -->
<fragListBuilder name="simple"
<fragListBuilder name="simple"
class="solr.highlight.SimpleFragListBuilder"/>
<!-- Configure the single fragListBuilder -->
<fragListBuilder name="single"
<fragListBuilder name="single"
class="solr.highlight.SingleFragListBuilder"/>
<!-- Configure the weighted fragListBuilder -->
<fragListBuilder name="weighted"
<fragListBuilder name="weighted"
default="true"
class="solr.highlight.WeightedFragListBuilder"/>
<!-- default tag FragmentsBuilder -->
<fragmentsBuilder name="default"
<fragmentsBuilder name="default"
default="true"
class="solr.highlight.ScoreOrderFragmentsBuilder">
<!--
<!--
<lst name="defaults">
<str name="hl.multiValuedSeparatorChar">/</str>
</lst>
@ -1164,7 +1167,7 @@
</fragmentsBuilder>
<!-- multi-colored tag FragmentsBuilder -->
<fragmentsBuilder name="colored"
<fragmentsBuilder name="colored"
class="solr.highlight.ScoreOrderFragmentsBuilder">
<lst name="defaults">
<str name="hl.tag.pre"><![CDATA[
@ -1176,8 +1179,8 @@
<str name="hl.tag.post"><![CDATA[</b>]]></str>
</lst>
</fragmentsBuilder>
<boundaryScanner name="default"
<boundaryScanner name="default"
default="true"
class="solr.highlight.SimpleBoundaryScanner">
<lst name="defaults">
@ -1185,8 +1188,8 @@
<str name="hl.bs.chars">.,!? &#9;&#10;&#13;</str>
</lst>
</boundaryScanner>
<boundaryScanner name="breakIterator"
<boundaryScanner name="breakIterator"
class="solr.highlight.BreakIteratorBoundaryScanner">
<lst name="defaults">
<!-- type should be one of CHARACTER, WORD(default), LINE and SENTENCE -->
@ -1208,15 +1211,15 @@
http://wiki.apache.org/solr/UpdateRequestProcessor
-->
<!-- Deduplication
An example dedup update processor that creates the "id" field
on the fly based on the hash code of some other fields. This
example has overwriteDupes set to false since we are using the
id field as the signatureField and Solr will maintain
uniqueness based on that anyway.
-->
<!--
<updateRequestProcessorChain name="dedupe">
@ -1231,7 +1234,7 @@
<processor class="solr.RunUpdateProcessorFactory" />
</updateRequestProcessorChain>
-->
<!-- Language identification
This example update chain identifies the language of the incoming
@ -1271,7 +1274,7 @@
<processor class="solr.RunUpdateProcessorFactory" />
</updateRequestProcessorChain>
-->
<!-- Response Writers
http://wiki.apache.org/solr/QueryResponseWriter
@ -1287,7 +1290,7 @@
overridden...
-->
<!--
<queryResponseWriter name="xml"
<queryResponseWriter name="xml"
default="true"
class="solr.XMLResponseWriter" />
<queryResponseWriter name="json" class="solr.JSONResponseWriter"/>
@ -1306,7 +1309,7 @@
-->
<str name="content-type">text/plain; charset=UTF-8</str>
</queryResponseWriter>
<!--
Custom response writers can be declared as needed...
-->
@ -1316,7 +1319,7 @@
<!-- XSLT response writer transforms the XML output by any xslt file found
in Solr's conf/xslt directory. Changes to xslt files are checked for
every xsltCacheLifetimeSeconds.
-->
<queryResponseWriter name="xslt" class="solr.XSLTResponseWriter">
<int name="xsltCacheLifetimeSeconds">5</int>
@ -1324,7 +1327,7 @@
<!-- Query Parsers
https://cwiki.apache.org/confluence/display/solr/Query+Syntax+and+Parsing
https://lucene.apache.org/solr/guide/query-syntax-and-parsing.html
Multiple QParserPlugins can be registered by name, and then
used in either the "defType" param for the QueryComponent (used
@ -1344,11 +1347,11 @@
-->
<!-- example of registering a custom function parser -->
<!--
<valueSourceParser name="myfunc"
<valueSourceParser name="myfunc"
class="com.mycompany.MyValueSourceParser" />
-->
<!-- Document Transformers
http://wiki.apache.org/solr/DocTransformers
-->
@ -1357,12 +1360,12 @@
<transformer name="db" class="com.mycompany.LoadFromDatabaseTransformer" >
<int name="connection">jdbc://....</int>
</transformer>
To add a constant value to all docs, use:
<transformer name="mytrans2" class="org.apache.solr.response.transform.ValueAugmenterFactory" >
<int name="value">5</int>
</transformer>
If you want the user to still be able to change it with _value:something_ use this:
<transformer name="mytrans3" class="org.apache.solr.response.transform.ValueAugmenterFactory" >
<double name="defaultValue">5</double>

View File

@ -16,9 +16,9 @@
limitations under the License.
-->
<!--
For more details about configuration options that may appear in
this file, see http://wiki.apache.org/solr/SolrConfigXml.
-->
<config>
<!-- In all configuration below, a prefix of "solr." for class names
@ -46,19 +46,19 @@
instanceDir.
Please note that <lib/> directives are processed in the order
that they appear in your solrconfig.xml file, and are "stacked"
on top of each other when building a ClassLoader - so if you have
plugin jars with dependencies on other jars, the "lower level"
dependency jars should be loaded first.
If a "./lib" directory exists in your instanceDir, all files
found in it are included as if you had used the following
syntax...
<lib dir="./lib" />
-->
<!-- A 'dir' option by itself adds any files found in the directory
to the classpath, this is useful for including all jars in a
directory.
@ -69,7 +69,7 @@
If a 'dir' option (with or without a regex) is used and nothing
is found that matches, a warning will be logged.
The examples below can be used to load some solr-contribs along
with their external dependencies.
-->
<lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-dataimporthandler-.*\.jar" />
@ -83,14 +83,14 @@
<lib dir="${solr.install.dir:../../../..}/contrib/velocity/lib" regex=".*\.jar" />
<lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-velocity-\d.*\.jar" />
<!-- an exact 'path' can be used instead of a 'dir' to specify a
specific jar file. This will cause a serious error to be logged
if it can't be loaded.
-->
<!--
<lib path="../a-jar-that-does-not-exist.jar" />
<lib path="../a-jar-that-does-not-exist.jar" />
-->
<!-- Data Directory
Used to specify an alternate directory to hold all index data
@ -102,7 +102,7 @@
<!-- The DirectoryFactory to use for indexes.
solr.StandardDirectoryFactory is filesystem
based and tries to pick the best implementation for the current
JVM and platform. solr.NRTCachingDirectoryFactory, the default,
@ -114,7 +114,7 @@
solr.RAMDirectoryFactory is memory based and not persistent.
-->
<directoryFactory name="DirectoryFactory"
class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>
<!-- The CodecFactory for defining the format of the inverted index.
@ -132,19 +132,19 @@
Index Config - These settings control low-level behavior of indexing
Most example settings here show the default value, but are commented
out, to more easily see where customizations have been made.
Note: This replaces <indexDefaults> and <mainIndex> from older versions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
<indexConfig>
<!-- maxFieldLength was removed in 4.0. To get similar behavior, include a
LimitTokenCountFilterFactory in your fieldType definition. E.g.
<filter class="solr.LimitTokenCountFilterFactory" maxTokenCount="10000"/>
-->
<!-- Maximum time to wait for a write lock (ms) for an IndexWriter. Default: 1000 -->
<!-- <writeLockTimeout>1000</writeLockTimeout> -->
<!-- Expert: Enabling compound file will use fewer files for the index,
using fewer file descriptors at the expense of a performance decrease.
Default in Lucene is "true". Default in Solr is "false" (since 3.6) -->
<!-- <useCompoundFile>false</useCompoundFile> -->
@ -159,7 +159,7 @@
<!-- <ramBufferSizeMB>100</ramBufferSizeMB> -->
<!-- <maxBufferedDocs>1000</maxBufferedDocs> -->
<!-- Expert: Merge Policy
The Merge Policy in Lucene controls how merging of segments is done.
The default since Solr/Lucene 3.3 is TieredMergePolicy.
The default since Lucene 2.3 was the LogByteSizeMergePolicy,
@ -178,15 +178,15 @@
can perform merges in the background using separate threads.
The SerialMergeScheduler (Lucene 2.2 default) does not.
-->
<!--
<mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler"/>
-->
<!-- LockFactory
This option specifies which Lucene LockFactory implementation
to use.
single = SingleInstanceLockFactory - suggested for a
read-only index or when there is no possibility of
another process trying to modify the index.
@ -210,11 +210,11 @@
The default Solr IndexDeletionPolicy implementation supports
deleting index commit points on number of commits, age of
commit point and optimized status.
The latest commit point should always be preserved regardless
of the criteria.
-->
<!--
<deletionPolicy class="solr.SolrDeletionPolicy">
-->
<!-- The number of commit points to be kept -->
@ -229,12 +229,12 @@
<str name="maxCommitAge">30MINUTES</str>
<str name="maxCommitAge">1DAY</str>
-->
<!--
</deletionPolicy>
-->
<!-- Lucene Infostream
To aid in advanced debugging, Lucene provides an "InfoStream"
of detailed information when indexing.
@ -247,7 +247,7 @@
<!-- JMX
This example enables JMX if and only if an existing MBeanServer
is found, use this if you want to configure JMX through JVM
parameters. Remove this to disable exposing Solr configuration
@ -257,7 +257,7 @@
-->
<jmx />
<!-- If you want to connect to a particular server, specify the
agentId
-->
<!-- <jmx agentId="myAgent" /> -->
<!-- If you want to start a new MBeanServer, specify the serviceUrl -->
@ -272,16 +272,16 @@
uncommitted changes to the index, so use of a hard autoCommit
is recommended (see below).
"dir" - the target directory for transaction logs, defaults to the
solr data directory. -->
<updateLog>
<str name="dir">${solr.ulog.dir:}</str>
</updateLog>
<!-- AutoCommit
Perform a hard commit automatically under certain conditions.
Instead of enabling autoCommit, consider using "commitWithin"
when adding documents.
http://wiki.apache.org/solr/UpdateXmlMessages
@ -290,7 +290,7 @@
maxTime - Maximum amount of time in ms that is allowed to pass
since a document was added before automatically
triggering a new commit.
openSearcher - if false, the commit causes recent index changes
to be flushed to stable storage, but does not cause a new
searcher to be opened to make those changes visible.
@ -298,9 +298,9 @@
If the updateLog is enabled, then it's highly recommended to
have some sort of hard autoCommit to limit the log size.
-->
<autoCommit>
<maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
<openSearcher>false</openSearcher>
</autoCommit>
<!-- softAutoCommit is like autoCommit except it causes a
@ -309,12 +309,12 @@
faster and more near-realtime friendly than a hard commit.
-->
<autoSoftCommit>
<maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
</autoSoftCommit>
<!-- Update Related Event Listeners
Various IndexWriter related events can trigger Listeners to
take actions.
@ -323,10 +323,10 @@
-->
<!-- The RunExecutableListener executes an external command from a
hook such as postCommit or postOptimize.
exe - the name of the executable to run
dir - dir to use as the current working directory. (default=".")
wait - the calling thread waits until the executable returns.
(default="true")
args - the arguments to pass to the program. (default is none)
env - environment variables to set. (default is none)
@ -346,7 +346,7 @@
-->
</updateHandler>
<!-- IndexReaderFactory
Use the following format to specify a custom IndexReaderFactory,
@ -385,12 +385,12 @@
is thrown if exceeded.
** WARNING **
This option actually modifies a global Lucene property that
will affect all SolrCores. If multiple solrconfig.xml files
disagree on this property, the value at any given moment will
be based on the last SolrCore to be initialized.
-->
<maxBooleanClauses>1024</maxBooleanClauses>
@ -399,7 +399,7 @@
There are two implementations of cache available for Solr,
LRUCache, based on a synchronized LinkedHashMap, and
FastLRUCache, based on a ConcurrentHashMap.
FastLRUCache has faster gets and slower puts in single
threaded operation and thus is generally faster than LRUCache
@ -424,7 +424,7 @@
initialSize - the initial capacity (number of entries) of
the cache. (see java.util.HashMap)
autowarmCount - the number of entries to prepopulate from
an old cache.
-->
<filterCache class="solr.FastLRUCache"
size="512"
@ -432,27 +432,27 @@
autowarmCount="0"/>
<!-- Query Result Cache
Caches results of searches - ordered lists of document ids
(DocList) based on a query, a sort, and the range of documents requested.
-->
<queryResultCache class="solr.LRUCache"
size="512"
initialSize="512"
autowarmCount="0"/>
<!-- Document Cache
Caches Lucene Document objects (the stored fields for each
document). Since Lucene internal document ids are transient,
this cache will not be autowarmed.
-->
<documentCache class="solr.LRUCache"
size="512"
initialSize="512"
autowarmCount="0"/>
<!-- custom cache currently used by block join -->
<cache name="perSegFilter"
class="solr.search.LRUCache"
size="10"
@ -461,7 +461,7 @@
regenerator="solr.NoOpRegenerator" />
<!-- Field Value Cache
Cache used to hold field values that are quickly accessible
by document id. The fieldValueCache is created by default
even if not configured here.
@ -479,8 +479,8 @@
name through SolrIndexSearcher.getCache(),cacheLookup(), and
cacheInsert(). The purpose is to enable easy caching of
user/application level data. The regenerator argument should
be specified as an implementation of solr.CacheRegenerator
if autowarming is desired.
-->
<!--
<cache name="myUserCache"
@ -527,12 +527,12 @@
are collected. For example, if a search for a particular query
requests matching documents 10 through 19, and queryWindowSize is 50,
then documents 0 through 49 will be collected and cached. Any further
requests in that range can be satisfied via the cache.
-->
<queryResultWindowSize>20</queryResultWindowSize>
<!-- Maximum number of documents to cache for any entry in the
queryResultCache.
-->
<queryResultMaxDocsCached>200</queryResultMaxDocsCached>
@ -550,10 +550,10 @@
prepared but there is no current registered searcher to handle
requests or to gain autowarming data from.
-->
<!-- QuerySenderListener takes an array of NamedList and executes a
local query request for each NamedList in sequence.
-->
<listener event="newSearcher" class="solr.QuerySenderListener">
<arr name="queries">
@ -600,19 +600,19 @@
multipartUploadLimitInKB - specifies the max size (in KiB) of
Multipart File Uploads that Solr will allow in a Request.
formdataUploadLimitInKB - specifies the max size (in KiB) of
form data (application/x-www-form-urlencoded) sent via
POST. You can use POST to pass request parameters not
fitting into the URL.
addHttpRequestToContext - if set to true, it will instruct
the requestParsers to include the original HttpServletRequest
object in the context map of the SolrQueryRequest under the
key "httpRequest". It will not be used by any of the existing
Solr components, but may be useful when developing custom
plugins.
*** WARNING ***
Before enabling remote streaming, you should make sure your
system has authentication enabled.
@ -634,21 +634,21 @@
<!-- If you include a <cacheControl> directive, it will be used to
generate a Cache-Control header (as well as an Expires header
if the value contains "max-age=")
By default, no Cache-Control header is generated.
You can use the <cacheControl> option even if you have set
never304="true"
-->
<!--
<httpCaching never304="true" >
<cacheControl>max-age=30, public</cacheControl>
</httpCaching>
-->
<!-- To enable Solr to respond with automatically generated HTTP
Caching headers, and to respond to Cache Validation requests
correctly, set the value of never304="false"
This will cause Solr to generate Last-Modified and ETag
headers based on the properties of the Index.
@ -673,12 +673,12 @@
<!--
<httpCaching lastModifiedFrom="openTime"
etagSeed="Solr">
<cacheControl>max-age=30, public</cacheControl>
</httpCaching>
-->
</requestDispatcher>
<!-- Request Handlers
http://wiki.apache.org/solr/SolrRequestHandler
@ -713,6 +713,9 @@
<str name="echoParams">explicit</str>
<int name="rows">10</int>
<str name="df">text</str>
<!-- Change from JSON to XML format (the default prior to Solr 7.0)
<str name="wt">xml</str>
-->
</lst>
<!-- In addition to defaults, "appends" params can be specified
to identify values which should be appended to the list of
@ -816,10 +819,10 @@
<!-- Solr Cell Update Request Handler
http://wiki.apache.org/solr/ExtractingRequestHandler
-->
<requestHandler name="/update/extract"
<requestHandler name="/update/extract"
startup="lazy"
class="solr.extraction.ExtractingRequestHandler" >
<lst name="defaults">
@ -834,18 +837,18 @@
</requestHandler>
<!-- Search Components
Search components are registered to SolrCore and used by
instances of SearchHandler (which can access them by name)
By default, the following components are available:
<searchComponent name="query" class="solr.QueryComponent" />
<searchComponent name="facet" class="solr.FacetComponent" />
<searchComponent name="mlt" class="solr.MoreLikeThisComponent" />
<searchComponent name="highlight" class="solr.HighlightComponent" />
<searchComponent name="stats" class="solr.StatsComponent" />
<searchComponent name="debug" class="solr.DebugComponent" />
Default configuration in a requestHandler would look like:
<arr name="components">
@ -857,28 +860,28 @@
<str>debug</str>
</arr>
If you register a searchComponent to one of the standard names,
that will be used instead of the default.
To insert components before or after the 'standard' components, use:
<arr name="first-components">
<str>myFirstComponentName</str>
</arr>
<arr name="last-components">
<str>myLastComponentName</str>
</arr>
NOTE: The component registered with the name "debug" will
always be executed after the "last-components"
-->
<!-- Spell Check
The spell check component can return a list of alternative spelling
suggestions.
http://wiki.apache.org/solr/SpellCheckComponent
-->
@ -913,11 +916,11 @@
<float name="thresholdTokenFrequency">.01</float>
-->
</lst>
<!-- a spellchecker that can break or combine words. See "/spell" handler below for usage -->
<lst name="spellchecker">
<str name="name">wordbreak</str>
<str name="classname">solr.WordBreakSolrSpellChecker</str>
<str name="classname">solr.WordBreakSolrSpellChecker</str>
<str name="field">name</str>
<str name="combineWords">true</str>
<str name="breakWords">true</str>
@ -936,7 +939,7 @@
</lst>
-->
<!-- a spellchecker that uses an alternate comparator
comparatorClass can be one of:
1. score (default)
@ -962,8 +965,8 @@
</lst>
-->
</searchComponent>
<!-- A request handler for demonstrating the spellcheck component.
NOTE: This is purely as an example. The whole purpose of the
SpellCheckComponent is to hook it into the request handler that
@ -972,7 +975,7 @@
IN OTHER WORDS, THERE IS A REALLY GOOD CHANCE THE SETUP BELOW IS
NOT WHAT YOU WANT FOR YOUR PRODUCTION SYSTEM!
See http://wiki.apache.org/solr/SpellCheckComponent for details
on the request parameters.
-->
@ -986,14 +989,14 @@
<str name="spellcheck.dictionary">default</str>
<str name="spellcheck.dictionary">wordbreak</str>
<str name="spellcheck">on</str>
<str name="spellcheck.extendedResults">true</str>
<str name="spellcheck.extendedResults">true</str>
<str name="spellcheck.count">10</str>
<str name="spellcheck.alternativeTermCount">5</str>
<str name="spellcheck.maxResultsForSuggest">5</str>
<str name="spellcheck.maxResultsForSuggest">5</str>
<str name="spellcheck.collate">true</str>
<str name="spellcheck.collateExtendedResults">true</str>
<str name="spellcheck.collateExtendedResults">true</str>
<str name="spellcheck.maxCollationTries">10</str>
<str name="spellcheck.maxCollations">5</str>
<str name="spellcheck.maxCollations">5</str>
</lst>
<arr name="last-components">
<str>spellcheck</str>
@ -1004,7 +1007,7 @@
<lst name="suggester">
<str name="name">mySuggester</str>
<str name="lookupImpl">FuzzyLookupFactory</str> <!-- org.apache.solr.spelling.suggest.fst -->
<str name="dictionaryImpl">DocumentDictionaryFactory</str> <!-- org.apache.solr.spelling.suggest.HighFrequencyDictionaryFactory -->
<str name="dictionaryImpl">DocumentDictionaryFactory</str> <!-- org.apache.solr.spelling.suggest.HighFrequencyDictionaryFactory -->
<str name="field">cat</str>
<str name="weightField">price</str>
<str name="suggestAnalyzerFieldType">string</str>
@ -1030,8 +1033,8 @@
This is purely as an example.
In reality you will likely want to add the component to your
already specified request handlers.
-->
<requestHandler name="/tvrh" class="solr.SearchHandler" startup="lazy">
<lst name="defaults">
@ -1057,7 +1060,7 @@
<lst name="defaults">
<bool name="terms">true</bool>
<bool name="distrib">false</bool>
</lst>
<arr name="components">
<str>terms</str>
</arr>
@ -1097,7 +1100,7 @@
<highlighting>
<!-- Configure the standard fragmenter -->
<!-- This could most likely be commented out in the "default" case -->
<fragmenter name="gap"
<fragmenter name="gap"
default="true"
class="solr.highlight.GapFragmenter">
<lst name="defaults">
@ -1105,10 +1108,10 @@
</lst>
</fragmenter>
<!-- A regular-expression-based fragmenter
(for sentence extraction)
-->
<fragmenter name="regex"
class="solr.highlight.RegexFragmenter">
<lst name="defaults">
<!-- slightly smaller fragsizes work better because of slop -->
@ -1121,7 +1124,7 @@
</fragmenter>
<!-- Configure the standard formatter -->
<formatter name="html"
<formatter name="html"
default="true"
class="solr.highlight.HtmlFormatter">
<lst name="defaults">
@ -1131,27 +1134,27 @@
</formatter>
<!-- Configure the standard encoder -->
<encoder name="html"
<encoder name="html"
class="solr.highlight.HtmlEncoder" />
<!-- Configure the standard fragListBuilder -->
<fragListBuilder name="simple"
<fragListBuilder name="simple"
class="solr.highlight.SimpleFragListBuilder"/>
<!-- Configure the single fragListBuilder -->
<fragListBuilder name="single"
<fragListBuilder name="single"
class="solr.highlight.SingleFragListBuilder"/>
<!-- Configure the weighted fragListBuilder -->
<fragListBuilder name="weighted"
<fragListBuilder name="weighted"
default="true"
class="solr.highlight.WeightedFragListBuilder"/>
<!-- default tag FragmentsBuilder -->
<fragmentsBuilder name="default"
<fragmentsBuilder name="default"
default="true"
class="solr.highlight.ScoreOrderFragmentsBuilder">
<!--
<!--
<lst name="defaults">
<str name="hl.multiValuedSeparatorChar">/</str>
</lst>
@ -1159,7 +1162,7 @@
</fragmentsBuilder>
<!-- multi-colored tag FragmentsBuilder -->
<fragmentsBuilder name="colored"
<fragmentsBuilder name="colored"
class="solr.highlight.ScoreOrderFragmentsBuilder">
<lst name="defaults">
<str name="hl.tag.pre"><![CDATA[
@ -1171,8 +1174,8 @@
<str name="hl.tag.post"><![CDATA[</b>]]></str>
</lst>
</fragmentsBuilder>
<boundaryScanner name="default"
<boundaryScanner name="default"
default="true"
class="solr.highlight.SimpleBoundaryScanner">
<lst name="defaults">
@ -1180,8 +1183,8 @@
<str name="hl.bs.chars">.,!? &#9;&#10;&#13;</str>
</lst>
</boundaryScanner>
<boundaryScanner name="breakIterator"
<boundaryScanner name="breakIterator"
class="solr.highlight.BreakIteratorBoundaryScanner">
<lst name="defaults">
<!-- type should be one of CHARACTER, WORD(default), LINE and SENTENCE -->
@ -1203,15 +1206,15 @@
http://wiki.apache.org/solr/UpdateRequestProcessor
-->
<!-- Deduplication
An example dedup update processor that creates the "id" field
on the fly based on the hash code of some other fields. This
example has overwriteDupes set to false since we are using the
id field as the signatureField and Solr will maintain
uniqueness based on that anyway.
-->
<!--
<updateRequestProcessorChain name="dedupe">
@ -1226,7 +1229,7 @@
<processor class="solr.RunUpdateProcessorFactory" />
</updateRequestProcessorChain>
-->
<!-- Language identification
This example update chain identifies the language of the incoming
@ -1266,7 +1269,7 @@
<processor class="solr.RunUpdateProcessorFactory" />
</updateRequestProcessorChain>
-->
<!-- Response Writers
http://wiki.apache.org/solr/QueryResponseWriter
@ -1282,7 +1285,7 @@
overridden...
-->
<!--
<queryResponseWriter name="xml"
<queryResponseWriter name="xml"
default="true"
class="solr.XMLResponseWriter" />
<queryResponseWriter name="json" class="solr.JSONResponseWriter"/>
@ -1301,7 +1304,7 @@
-->
<str name="content-type">text/plain; charset=UTF-8</str>
</queryResponseWriter>
<!--
Custom response writers can be declared as needed...
-->
@ -1311,7 +1314,7 @@
<!-- XSLT response writer transforms the XML output by any xslt file found
in Solr's conf/xslt directory. Changes to xslt files are checked for
every xsltCacheLifetimeSeconds.
-->
<queryResponseWriter name="xslt" class="solr.XSLTResponseWriter">
<int name="xsltCacheLifetimeSeconds">5</int>
@ -1319,7 +1322,7 @@
<!-- Query Parsers
https://cwiki.apache.org/confluence/display/solr/Query+Syntax+and+Parsing
https://lucene.apache.org/solr/guide/query-syntax-and-parsing.html
Multiple QParserPlugins can be registered by name, and then
used in either the "defType" param for the QueryComponent (used
@ -1339,11 +1342,11 @@
-->
<!-- example of registering a custom function parser -->
<!--
<valueSourceParser name="myfunc"
<valueSourceParser name="myfunc"
class="com.mycompany.MyValueSourceParser" />
-->
<!-- Document Transformers
http://wiki.apache.org/solr/DocTransformers
-->
@ -1352,12 +1355,12 @@
<transformer name="db" class="com.mycompany.LoadFromDatabaseTransformer" >
<int name="connection">jdbc://....</int>
</transformer>
To add a constant value to all docs, use:
<transformer name="mytrans2" class="org.apache.solr.response.transform.ValueAugmenterFactory" >
<int name="value">5</int>
</transformer>
If you want the user to still be able to change it with _value:something_ use this:
<transformer name="mytrans3" class="org.apache.solr.response.transform.ValueAugmenterFactory" >
<double name="defaultValue">5</double>

View File

@ -16,7 +16,7 @@
limitations under the License.
-->
<!--
This is a DEMO configuration highlighting elements
specifically needed to get this example running
such as libraries and request handler specifics.
@ -26,7 +26,7 @@
See Solr Reference Guide and other examples for
more details on a well configured solrconfig.xml
https://cwiki.apache.org/confluence/display/solr/The+Well-Configured+Solr+Instance
https://lucene.apache.org/solr/guide/the-well-configured-solr-instance.html
-->
<config>
@ -46,6 +46,9 @@
<lst name="defaults">
<str name="echoParams">explicit</str>
<str name="df">text</str>
<!-- Change from JSON to XML format (the default prior to Solr 7.0)
<str name="wt">xml</str>
-->
</lst>
</requestHandler>
View File
@ -82,8 +82,8 @@ else
echo "ERROR: HTTP POST + URL params is not accepting UTF-8 beyond the basic multilingual plane"
fi
#curl "$SOLR_URL/select?q=$UTF8_Q&echoParams=explicit&wt=json" 2> /dev/null | od -tx1 -w1000 | sed 's/ //g' | grep 'f4808198' > /dev/null 2>&1
curl "$SOLR_URL/select?q=$UTF8_Q&echoParams=explicit&wt=json" 2> /dev/null | grep "$CHAR" > /dev/null 2>&1
#curl "$SOLR_URL/select?q=$UTF8_Q&echoParams=explicit" 2> /dev/null | od -tx1 -w1000 | sed 's/ //g' | grep 'f4808198' > /dev/null 2>&1
curl "$SOLR_URL/select?q=$UTF8_Q&echoParams=explicit" 2> /dev/null | grep "$CHAR" > /dev/null 2>&1
if [ $? = 0 ]; then
echo "Response correctly returns UTF-8 beyond the basic multilingual plane"
else
View File
@ -16,9 +16,9 @@
limitations under the License.
-->
<!--
For more details about configuration options that may appear in
this file, see http://wiki.apache.org/solr/SolrConfigXml.
-->
<config>
<!-- In all configuration below, a prefix of "solr." for class names
@ -46,19 +46,19 @@
instanceDir.
Please note that <lib/> directives are processed in the order
that they appear in your solrconfig.xml file, and are "stacked"
on top of each other when building a ClassLoader - so if you have
plugin jars with dependencies on other jars, the "lower level"
dependency jars should be loaded first.
If a "./lib" directory exists in your instanceDir, all files
found in it are included as if you had used the following
syntax...
<lib dir="./lib" />
-->
<!-- A 'dir' option by itself adds any files found in the directory
to the classpath, this is useful for including all jars in a
directory.
@ -69,7 +69,7 @@
If a 'dir' option (with or without a regex) is used and nothing
is found that matches, a warning will be logged.
The examples below can be used to load some solr-contribs along
with their external dependencies.
-->
<lib dir="${solr.install.dir:../../../..}/contrib/extraction/lib" regex=".*\.jar" />
@ -85,12 +85,12 @@
<!-- browse-resources must come before solr-velocity JAR in order to override localized resources -->
<lib path="${solr.install.dir:../../../..}/example/files/browse-resources"/>
<lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-velocity-\d.*\.jar" />
<!-- an exact 'path' can be used instead of a 'dir' to specify a
specific jar file. This will cause a serious error to be logged
if it can't be loaded.
-->
<!--
<lib path="../a-jar-that-does-not-exist.jar" />
<lib path="../a-jar-that-does-not-exist.jar" />
-->
<!-- Data Directory
@ -104,7 +104,7 @@
<!-- The DirectoryFactory to use for indexes.
solr.StandardDirectoryFactory is filesystem
based and tries to pick the best implementation for the current
JVM and platform. solr.NRTCachingDirectoryFactory, the default,
@ -134,19 +134,19 @@
Index Config - These settings control low-level behavior of indexing
Most example settings here show the default value, but are commented
out, to more easily see where customizations have been made.
Note: This replaces <indexDefaults> and <mainIndex> from older versions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
<indexConfig>
<!-- maxFieldLength was removed in 4.0. To get similar behavior, include a
LimitTokenCountFilterFactory in your fieldType definition. E.g.
<filter class="solr.LimitTokenCountFilterFactory" maxTokenCount="10000"/>
-->
<!-- Maximum time to wait for a write lock (ms) for an IndexWriter. Default: 1000 -->
<!-- <writeLockTimeout>1000</writeLockTimeout> -->
<!-- Expert: Enabling compound file will use fewer files for the index,
using fewer file descriptors at the expense of a performance decrease.
Default in Lucene is "true". Default in Solr is "false" (since 3.6) -->
<!-- <useCompoundFile>false</useCompoundFile> -->
@ -160,7 +160,7 @@
<!-- <ramBufferSizeMB>100</ramBufferSizeMB> -->
<!-- <maxBufferedDocs>1000</maxBufferedDocs> -->
<!-- Expert: Merge Policy
The Merge Policy in Lucene controls how merging of segments is done.
The default since Solr/Lucene 3.3 is TieredMergePolicy.
The default since Lucene 2.3 was the LogByteSizeMergePolicy,
@ -179,15 +179,15 @@
can perform merges in the background using separate threads.
The SerialMergeScheduler (Lucene 2.2 default) does not.
-->
<!--
<mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler"/>
-->
<!-- LockFactory
This option specifies which Lucene LockFactory implementation
to use.
single = SingleInstanceLockFactory - suggested for a
read-only index or when there is no possibility of
another process trying to modify the index.
@ -211,11 +211,11 @@
The default Solr IndexDeletionPolicy implementation supports
deleting index commit points on number of commits, age of
commit point and optimized status.
The latest commit point should always be preserved regardless
of the criteria.
-->
<!--
<deletionPolicy class="solr.SolrDeletionPolicy">
-->
<!-- The number of commit points to be kept -->
@ -230,12 +230,12 @@
<str name="maxCommitAge">30MINUTES</str>
<str name="maxCommitAge">1DAY</str>
-->
<!--
</deletionPolicy>
-->
<!-- Lucene Infostream
To aid in advanced debugging, Lucene provides an "InfoStream"
of detailed information when indexing.
@ -247,7 +247,7 @@
<!-- JMX
This example enables JMX if and only if an existing MBeanServer
is found, use this if you want to configure JMX through JVM
parameters. Remove this to disable exposing Solr configuration
@ -257,7 +257,7 @@
-->
<jmx />
<!-- If you want to connect to a particular server, specify the
agentId
-->
<!-- <jmx agentId="myAgent" /> -->
<!-- If you want to start a new MBeanServer, specify the serviceUrl -->
@ -281,7 +281,7 @@
Perform a hard commit automatically under certain conditions.
Instead of enabling autoCommit, consider using "commitWithin"
when adding documents.
http://wiki.apache.org/solr/UpdateXmlMessages
@ -290,7 +290,7 @@
maxTime - Maximum amount of time in ms that is allowed to pass
since a document was added before automatically
triggering a new commit.
openSearcher - if false, the commit causes recent index changes
to be flushed to stable storage, but does not cause a new
searcher to be opened to make those changes visible.
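<!-- Putting those options together, the typical hard-commit stanza used by
the stock configsets (it appears verbatim further down in this commit) is:
<autoCommit>
<maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
<openSearcher>false</openSearcher>
</autoCommit>
-->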
@ -309,13 +309,13 @@
faster and more near-realtime friendly than a hard commit.
-->
<!--
<autoSoftCommit>
<maxTime>1000</maxTime>
</autoSoftCommit>
-->
<!-- Update Related Event Listeners
Various IndexWriter related events can trigger Listeners to
take actions.
@ -324,10 +324,10 @@
-->
<!-- The RunExecutableListener executes an external command from a
hook such as postCommit or postOptimize.
exe - the name of the executable to run
dir - dir to use as the current working directory. (default=".")
wait - the calling thread waits until the executable returns.
(default="true")
args - the arguments to pass to the program. (default is none)
env - environment variables to set. (default is none)
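<!-- A sketch using the options above (the executable, args and env values
are placeholders):
<listener event="postCommit" class="solr.RunExecutableListener">
<str name="exe">solr/bin/snapshooter</str>
<str name="dir">.</str>
<bool name="wait">true</bool>
<arr name="args"> <str>arg1</str> <str>arg2</str> </arr>
<arr name="env"> <str>MYVAR=val1</str> </arr>
</listener>
-->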
@ -386,12 +386,12 @@
is thrown if exceeded.
** WARNING **
This option actually modifies a global Lucene property that
will affect all SolrCores. If multiple solrconfig.xml files
disagree on this property, the value at any given moment will
be based on the last SolrCore to be initialized.
-->
<maxBooleanClauses>1024</maxBooleanClauses>
@ -400,7 +400,7 @@
There are two implementations of cache available for Solr,
LRUCache, based on a synchronized LinkedHashMap, and
FastLRUCache, based on a ConcurrentHashMap.
FastLRUCache has faster gets and slower puts in single
threaded operation and thus is generally faster than LRUCache
@ -433,7 +433,7 @@
autowarmCount="0"/>
<!-- Query Result Cache
Caches results of searches - ordered lists of document ids
(DocList) based on a query, a sort, and the range of documents requested.
Additional supported parameter by LRUCache:
@ -457,7 +457,7 @@
autowarmCount="0"/>
<!-- Field Value Cache
Cache used to hold field values that are quickly accessible
by document id. The fieldValueCache is created by default
even if not configured here.
@ -475,8 +475,8 @@
name through SolrIndexSearcher.getCache(),cacheLookup(), and
cacheInsert(). The purpose is to enable easy caching of
user/application level data. The regenerator argument should
be specified as an implementation of solr.CacheRegenerator
if autowarming is desired.
-->
<!--
<cache name="myUserCache"
@ -500,14 +500,14 @@
<enableLazyFieldLoading>true</enableLazyFieldLoading>
<!-- Use Filter For Sorted Query
A possible optimization that attempts to use a filter to
satisfy a search. If the requested sort does not include
score, then the filterCache will be checked for a filter
matching the query. If found, the filter will be used as the
source of document ids, and then the sort will be applied to
that.
For most situations, this will not be useful unless you
frequently get the same search repeatedly with different sort
options, and none of them ever use "score"
@ -517,39 +517,39 @@
-->
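<!-- The option itself is a single flag; it is normally left commented out
because the default of "false" is usually what you want:
<useFilterForSortedQuery>true</useFilterForSortedQuery>
-->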
<!-- Result Window Size
An optimization for use with the queryResultCache. When a search
is requested, a superset of the requested number of document ids
are collected. For example, if a search for a particular query
requests matching documents 10 through 19, and queryWindowSize is 50,
then documents 0 through 49 will be collected and cached. Any further
requests in that range can be satisfied via the cache.
-->
<queryResultWindowSize>20</queryResultWindowSize>
<!-- Maximum number of documents to cache for any entry in the
queryResultCache.
-->
<queryResultMaxDocsCached>200</queryResultMaxDocsCached>
<!-- Query Related Event Listeners
Various IndexSearcher related events can trigger Listeners to
take actions.
newSearcher - fired whenever a new searcher is being prepared
and there is a current searcher handling requests (aka
registered). It can be used to prime certain caches to
prevent long request times for certain requests.
firstSearcher - fired whenever a new searcher is being
prepared but there is no current registered searcher to handle
requests or to gain autowarming data from.
-->
<!-- QuerySenderListener takes an array of NamedList and executes a
local query request for each NamedList in sequence.
-->
<listener event="newSearcher" class="solr.QuerySenderListener">
<arr name="queries">
@ -598,19 +598,19 @@
multipartUploadLimitInKB - specifies the max size (in KiB) of
Multipart File Uploads that Solr will allow in a Request.
formdataUploadLimitInKB - specifies the max size (in KiB) of
form data (application/x-www-form-urlencoded) sent via
POST. You can use POST to pass request parameters not
fitting into the URL.
addHttpRequestToContext - if set to true, it will instruct
the requestParsers to include the original HttpServletRequest
object in the context map of the SolrQueryRequest under the
key "httpRequest". It will not be used by any of the existing
Solr components, but may be useful when developing custom
plugins.
*** WARNING ***
Before enabling remote streaming, you should make sure your
system has authentication enabled.
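<!-- Put together, a conservative declaration using the options described
above might look like this (the limits shown are the stock example
values, not recommendations):
<requestParsers enableRemoteStreaming="false"
multipartUploadLimitInKB="2048000"
formdataUploadLimitInKB="2048"
addHttpRequestToContext="false"/>
-->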
@ -632,21 +632,21 @@
<!-- If you include a <cacheControl> directive, it will be used to
generate a Cache-Control header (as well as an Expires header
if the value contains "max-age=")
By default, no Cache-Control header is generated.
You can use the <cacheControl> option even if you have set
never304="true"
-->
<!--
<httpCaching never304="true" >
<cacheControl>max-age=30, public</cacheControl>
</httpCaching>
-->
<!-- To enable Solr to respond with automatically generated HTTP
Caching headers, and to respond to Cache Validation requests
correctly, set the value of never304="false"
This will cause Solr to generate Last-Modified and ETag
headers based on the properties of the Index.
@ -671,12 +671,12 @@
<!--
<httpCaching lastModifiedFrom="openTime"
etagSeed="Solr">
<cacheControl>max-age=30, public</cacheControl>
</httpCaching>
-->
</requestDispatcher>
<!-- Request Handlers
http://wiki.apache.org/solr/SolrRequestHandler
@ -703,7 +703,12 @@
<lst name="defaults">
<str name="echoParams">explicit</str>
<int name="rows">10</int>
<!-- <str name="df">text</str> -->
<!-- Default search field
<str name="df">text</str>
-->
<!-- Change from JSON to XML format (the default prior to Solr 7.0)
<str name="wt">xml</str>
-->
</lst>
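<!-- The same switch can also be made per request rather than in defaults;
e.g. a query like /select?q=video&wt=xml&indent=off returns the
pre-7.0 response format for that single request. -->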
<!-- In addition to defaults, "appends" params can be specified
to identify values which should be appended to the list of
@ -787,7 +792,7 @@
<!-- Solr Cell Update Request Handler
http://wiki.apache.org/solr/ExtractingRequestHandler
-->
<requestHandler name="/update/extract"
@ -803,18 +808,18 @@
</requestHandler>
<!-- Search Components
Search components are registered to SolrCore and used by
instances of SearchHandler (which can access them by name)
By default, the following components are available:
<searchComponent name="query" class="solr.QueryComponent" />
<searchComponent name="facet" class="solr.FacetComponent" />
<searchComponent name="mlt" class="solr.MoreLikeThisComponent" />
<searchComponent name="highlight" class="solr.HighlightComponent" />
<searchComponent name="stats" class="solr.StatsComponent" />
<searchComponent name="debug" class="solr.DebugComponent" />
Default configuration in a requestHandler would look like:
<arr name="components">
@ -826,28 +831,28 @@
<str>debug</str>
</arr>
If you register a searchComponent to one of the standard names,
that will be used instead of the default.
To insert components before or after the 'standard' components, use:
<arr name="first-components">
<str>myFirstComponentName</str>
</arr>
<arr name="last-components">
<str>myLastComponentName</str>
</arr>
NOTE: The component registered with the name "debug" will
always be executed after the "last-components"
-->
<!-- Spell Check
The spell check component can return a list of alternative spelling
suggestions.
http://wiki.apache.org/solr/SpellCheckComponent
-->
@ -905,7 +910,7 @@
</lst>
-->
<!-- a spellchecker that uses an alternate comparator
comparatorClass may be one of:
1. score (default)
@ -932,7 +937,7 @@
-->
</searchComponent>
<!-- A request handler for demonstrating the spellcheck component.
NOTE: This is purely an example. The whole purpose of the
SpellCheckComponent is to hook it into the request handler that
@ -941,7 +946,7 @@
IN OTHER WORDS, THERE IS A REALLY GOOD CHANCE THE SETUP BELOW IS
NOT WHAT YOU WANT FOR YOUR PRODUCTION SYSTEM!
See http://wiki.apache.org/solr/SpellCheckComponent for details
on the request parameters.
-->
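<!-- As a usage sketch against this handler (the query terms are
illustrative):
/spell?q=delll+ultra+sharp&spellcheck=true&spellcheck.collate=true
-->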
@ -978,8 +983,8 @@
This is purely an example.
In reality you will likely want to add the component to your
already specified request handlers.
-->
<requestHandler name="/tvrh" class="solr.SearchHandler" startup="lazy">
<lst name="defaults">
@ -1053,8 +1058,8 @@
</lst>
</fragmenter>
<!-- A regular-expression-based fragmenter
(for sentence extraction)
-->
<fragmenter name="regex"
class="solr.highlight.RegexFragmenter">
@ -1099,7 +1104,7 @@
<fragmentsBuilder name="default"
default="true"
class="solr.highlight.ScoreOrderFragmentsBuilder">
<!--
<lst name="defaults">
<str name="hl.multiValuedSeparatorChar">/</str>
</lst>
@ -1152,18 +1157,18 @@
http://wiki.apache.org/solr/UpdateRequestProcessor
-->
<!-- Add unknown fields to the schema
An example field type guessing update processor that will
attempt to parse string-typed field values as Booleans, Longs,
Doubles, or Dates, and then add schema fields with the guessed
field types.
This requires that the schema is both managed and mutable, by
declaring schemaFactory as ManagedIndexSchemaFactory, with
mutable specified as true.
See http://wiki.apache.org/solr/GuessingFieldTypes
-->
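<!-- The schemaFactory declaration that satisfies that requirement is the
stock one (the resource name shown is the default):
<schemaFactory class="ManagedIndexSchemaFactory">
<bool name="mutable">true</bool>
<str name="managedSchemaResourceName">managed-schema</str>
</schemaFactory>
-->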
<updateRequestProcessorChain name="files-update-processor">
@ -1245,8 +1250,8 @@
on the fly based on the hash code of some other fields. This
example has overwriteDupes set to false since we are using the
id field as the signatureField and Solr will maintain
uniqueness based on that anyway.
-->
<!--
<updateRequestProcessorChain name="dedupe">
@ -1317,7 +1322,7 @@
overridden...
-->
<!--
<queryResponseWriter name="xml"
<queryResponseWriter name="xml"
default="true"
class="solr.XMLResponseWriter" />
<queryResponseWriter name="json" class="solr.JSONResponseWriter"/>
@ -1346,7 +1351,7 @@
<!-- XSLT response writer transforms the XML output by any xslt file found
in Solr's conf/xslt directory. Changes to xslt files are checked for
every xsltCacheLifetimeSeconds.
-->
<queryResponseWriter name="xslt" class="solr.XSLTResponseWriter">
<int name="xsltCacheLifetimeSeconds">5</int>
@ -1354,7 +1359,7 @@
<!-- Query Parsers
https://cwiki.apache.org/confluence/display/solr/Query+Syntax+and+Parsing
https://lucene.apache.org/solr/guide/query-syntax-and-parsing.html
Multiple QParserPlugins can be registered by name, and then
used in either the "defType" param for the QueryComponent (used
@ -1374,7 +1379,7 @@
-->
<!-- example of registering a custom function parser -->
<!--
<valueSourceParser name="myfunc"
<valueSourceParser name="myfunc"
class="com.mycompany.MyValueSourceParser" />
-->
@ -1387,12 +1392,12 @@
<transformer name="db" class="com.mycompany.LoadFromDatabaseTransformer" >
<int name="connection">jdbc://....</int>
</transformer>
To add a constant value to all docs, use:
<transformer name="mytrans2" class="org.apache.solr.response.transform.ValueAugmenterFactory" >
<int name="value">5</int>
</transformer>
If you want the user to still be able to change it with _value:something_ use this:
<transformer name="mytrans3" class="org.apache.solr.response.transform.ValueAugmenterFactory" >
<double name="defaultValue">5</double>
@ -17,7 +17,7 @@
Default Solr Home Directory
=============================
This directory is the default Solr home directory which holds
configuration files and Solr indexes (called cores).
@ -38,17 +38,17 @@ it is recommended to just use automatic core discovery instead of
listing cores in solr.xml.
If no solr.xml file is found, then Solr assumes that there should be
a single SolrCore named "collection1" and that the "Instance Directory"
for collection1 should be the same as the Solr Home Directory.
For more information about solr.xml, please see:
https://cwiki.apache.org/confluence/display/solr/Solr+Cores+and+solr.xml
https://lucene.apache.org/solr/guide/solr-cores-and-solr-xml.html
* Individual SolrCore Instance Directories *
Although solr.xml can be configured to look for SolrCore Instance Directories
in any path, simple sub-directories of the Solr Home Dir using relative paths
are common for many installations.
* Core Discovery *
@ -60,18 +60,18 @@ defined in core.properties. For an example of core.properties, please see:
example/solr/collection1/core.properties
For more information about core discovery, please see:
https://cwiki.apache.org/confluence/display/solr/Moving+to+the+New+solr.xml+Format
https://lucene.apache.org/solr/guide/defining-core-properties.html
* A Shared 'lib' Directory *
Although solr.xml can be configured with an optional "sharedLib" attribute
that can point to any path, it is common to use a "./lib" sub-directory of the
Solr Home Directory.
* ZooKeeper Files *
When using SolrCloud with the embedded ZooKeeper option for Solr, it is
common to have a "zoo.cfg" file and "zoo_data" directories in the Solr Home
Directory. Please see the SolrCloud wiki page for more details...
https://wiki.apache.org/solr/SolrCloud
@ -16,9 +16,9 @@
limitations under the License.
-->
<!--
For more details about configuration options that may appear in
this file, see http://wiki.apache.org/solr/SolrConfigXml.
-->
<config>
<!-- In all configuration below, a prefix of "solr." for class names
@ -46,19 +46,19 @@
instanceDir.
Please note that <lib/> directives are processed in the order
that they appear in your solrconfig.xml file, and are "stacked"
on top of each other when building a ClassLoader - so if you have
plugin jars with dependencies on other jars, the "lower level"
dependency jars should be loaded first.
If a "./lib" directory exists in your instanceDir, all files
found in it are included as if you had used the following
syntax...
<lib dir="./lib" />
-->
<!-- A 'dir' option by itself adds any files found in the directory
to the classpath, this is useful for including all jars in a
directory.
@ -69,7 +69,7 @@
If a 'dir' option (with or without a regex) is used and nothing
is found that matches, a warning will be logged.
The examples below can be used to load some solr-contribs along
with their external dependencies.
-->
<lib dir="${solr.install.dir:../../../..}/contrib/extraction/lib" regex=".*\.jar" />
@ -83,12 +83,12 @@
<lib dir="${solr.install.dir:../../../..}/contrib/velocity/lib" regex=".*\.jar" />
<lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-velocity-\d.*\.jar" />
<!-- an exact 'path' can be used instead of a 'dir' to specify a
specific jar file. This will cause a serious error to be logged
if it can't be loaded.
-->
<!--
<lib path="../a-jar-that-does-not-exist.jar" />
<lib path="../a-jar-that-does-not-exist.jar" />
-->
<!-- Data Directory
@ -102,7 +102,7 @@
<!-- The DirectoryFactory to use for indexes.
solr.StandardDirectoryFactory is filesystem
based and tries to pick the best implementation for the current
JVM and platform. solr.NRTCachingDirectoryFactory, the default,
@ -125,7 +125,7 @@
are experimental, so if you choose to customize the index format, it's a good
idea to convert back to the official format e.g. via IndexWriter.addIndexes(IndexReader)
before upgrading to a newer version to avoid unnecessary reindexing.
A "compressionMode" string element can be added to <codecFactory> to choose
A "compressionMode" string element can be added to <codecFactory> to choose
between the existing compression modes in the default codec: "BEST_SPEED" (default)
or "BEST_COMPRESSION".
-->
@ -135,19 +135,19 @@
Index Config - These settings control low-level behavior of indexing
Most example settings here show the default value, but are commented
out, to more easily see where customizations have been made.
Note: This replaces <indexDefaults> and <mainIndex> from older versions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
<indexConfig>
<!-- maxFieldLength was removed in 4.0. To get similar behavior, include a
LimitTokenCountFilterFactory in your fieldType definition. E.g.
<filter class="solr.LimitTokenCountFilterFactory" maxTokenCount="10000"/>
-->
<!-- Maximum time to wait for a write lock (ms) for an IndexWriter. Default: 1000 -->
<!-- <writeLockTimeout>1000</writeLockTimeout> -->
<!-- Expert: Enabling compound file will use fewer files for the index,
using fewer file descriptors at the expense of decreased performance.
Default in Lucene is "true". Default in Solr is "false" (since 3.6) -->
<!-- <useCompoundFile>false</useCompoundFile> -->
@ -161,7 +161,7 @@
<!-- <ramBufferSizeMB>100</ramBufferSizeMB> -->
<!-- <maxBufferedDocs>1000</maxBufferedDocs> -->
<!-- Expert: Merge Policy
The Merge Policy in Lucene controls how merging of segments is done.
The default since Solr/Lucene 3.3 is TieredMergePolicy.
The default since Lucene 2.3 was the LogByteSizeMergePolicy,
@ -181,15 +181,15 @@
can perform merges in the background using separate threads.
The SerialMergeScheduler (Lucene 2.2 default) does not.
-->
<!--
<mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler"/>
-->
<!-- LockFactory
This option specifies which Lucene LockFactory implementation
to use.
single = SingleInstanceLockFactory - suggested for a
read-only index or when there is no possibility of
another process trying to modify the index.
@ -213,11 +213,11 @@
The default Solr IndexDeletionPolicy implementation supports
deleting index commit points on number of commits, age of
commit point and optimized status.
The latest commit point should always be preserved regardless
of the criteria.
-->
<!--
<deletionPolicy class="solr.SolrDeletionPolicy">
-->
<!-- The number of commit points to be kept -->
@ -232,12 +232,12 @@
<str name="maxCommitAge">30MINUTES</str>
<str name="maxCommitAge">1DAY</str>
-->
<!--
</deletionPolicy>
-->
<!-- Lucene Infostream
To aid in advanced debugging, Lucene provides an "InfoStream"
of detailed information when indexing.
@ -249,7 +249,7 @@
<!-- JMX
This example enables JMX if and only if an existing MBeanServer
is found, use this if you want to configure JMX through JVM
parameters. Remove this to disable exposing Solr configuration
@ -259,7 +259,7 @@
-->
<jmx />
<!-- If you want to connect to a particular server, specify the
agentId
-->
<!-- <jmx agentId="myAgent" /> -->
<!-- If you want to start a new MBeanServer, specify the serviceUrl -->
@ -291,7 +291,7 @@
Perform a hard commit automatically under certain conditions.
Instead of enabling autoCommit, consider using "commitWithin"
when adding documents.
http://wiki.apache.org/solr/UpdateXmlMessages
@ -300,7 +300,7 @@
maxTime - Maximum amount of time in ms that is allowed to pass
since a document was added before automatically
triggering a new commit.
openSearcher - if false, the commit causes recent index changes
to be flushed to stable storage, but does not cause a new
searcher to be opened to make those changes visible.
@ -324,7 +324,7 @@
</autoSoftCommit>
<!-- Update Related Event Listeners
Various IndexWriter related events can trigger Listeners to
take actions.
@ -333,10 +333,10 @@
-->
<!-- The RunExecutableListener executes an external command from a
hook such as postCommit or postOptimize.
exe - the name of the executable to run
dir - dir to use as the current working directory. (default=".")
wait - the calling thread waits until the executable returns.
(default="true")
args - the arguments to pass to the program. (default is none)
env - environment variables to set. (default is none)
@ -401,7 +401,7 @@
There are two implementations of cache available for Solr,
LRUCache, based on a synchronized LinkedHashMap, and
FastLRUCache, based on a ConcurrentHashMap.
FastLRUCache has faster gets and slower puts in single
threaded operation and thus is generally faster than LRUCache
@ -437,7 +437,7 @@
autowarmCount="0"/>
<!-- Query Result Cache
Caches results of searches - ordered lists of document ids
(DocList) based on a query, a sort, and the range of documents requested.
Additional supported parameter by LRUCache:
@ -469,7 +469,7 @@
regenerator="solr.NoOpRegenerator" />
<!-- Field Value Cache
Cache used to hold field values that are quickly accessible
by document id. The fieldValueCache is created by default
even if not configured here.
@ -487,8 +487,8 @@
name through SolrIndexSearcher.getCache(),cacheLookup(), and
cacheInsert(). The purpose is to enable easy caching of
user/application level data. The regenerator argument should
be specified as an implementation of solr.CacheRegenerator
if autowarming is desired.
-->
<!--
<cache name="myUserCache"
@ -512,14 +512,14 @@
<enableLazyFieldLoading>true</enableLazyFieldLoading>
<!-- Use Filter For Sorted Query
A possible optimization that attempts to use a filter to
satisfy a search. If the requested sort does not include
score, then the filterCache will be checked for a filter
matching the query. If found, the filter will be used as the
source of document ids, and then the sort will be applied to
that.
For most situations, this will not be useful unless you
frequently get the same search repeatedly with different sort
options, and none of them ever use "score"
@ -529,39 +529,39 @@
-->
<!-- Result Window Size
An optimization for use with the queryResultCache. When a search
is requested, a superset of the requested number of document ids
are collected. For example, if a search for a particular query
requests matching documents 10 through 19, and queryWindowSize is 50,
then documents 0 through 49 will be collected and cached. Any further
requests in that range can be satisfied via the cache.
-->
<queryResultWindowSize>20</queryResultWindowSize>
<!-- Maximum number of documents to cache for any entry in the
queryResultCache.
-->
<queryResultMaxDocsCached>200</queryResultMaxDocsCached>
<!-- Query Related Event Listeners
Various IndexSearcher related events can trigger Listeners to
take actions.
newSearcher - fired whenever a new searcher is being prepared
and there is a current searcher handling requests (aka
registered). It can be used to prime certain caches to
prevent long request times for certain requests.
firstSearcher - fired whenever a new searcher is being
prepared but there is no current registered searcher to handle
requests or to gain autowarming data from.
-->
<!-- QuerySenderListener takes an array of NamedList and executes a
local query request for each NamedList in sequence.
-->
<listener event="newSearcher" class="solr.QuerySenderListener">
<arr name="queries">
@ -611,19 +611,19 @@
multipartUploadLimitInKB - specifies the max size (in KiB) of
Multipart File Uploads that Solr will allow in a Request.
formdataUploadLimitInKB - specifies the max size (in KiB) of
form data (application/x-www-form-urlencoded) sent via
POST. You can use POST to pass request parameters not
fitting into the URL.
addHttpRequestToContext - if set to true, it will instruct
the requestParsers to include the original HttpServletRequest
object in the context map of the SolrQueryRequest under the
key "httpRequest". It will not be used by any of the existing
Solr components, but may be useful when developing custom
plugins.
*** WARNING ***
Before enabling remote streaming, you should make sure your
system has authentication enabled.
@ -645,21 +645,21 @@
<!-- If you include a <cacheControl> directive, it will be used to
generate a Cache-Control header (as well as an Expires header
if the value contains "max-age=")
By default, no Cache-Control header is generated.
You can use the <cacheControl> option even if you have set
never304="true"
-->
<!--
<httpCaching never304="true" >
<cacheControl>max-age=30, public</cacheControl>
</httpCaching>
-->
<!-- To enable Solr to respond with automatically generated HTTP
Caching headers, and to respond to Cache Validation requests
correctly, set the value of never304="false"
This will cause Solr to generate Last-Modified and ETag
headers based on the properties of the Index.
@ -684,12 +684,12 @@
<!--
<httpCaching lastModifiedFrom="openTime"
etagSeed="Solr">
<cacheControl>max-age=30, public</cacheControl>
</httpCaching>
-->
</requestDispatcher>
<!-- Request Handlers
http://wiki.apache.org/solr/SolrRequestHandler
@ -716,7 +716,12 @@
<lst name="defaults">
<str name="echoParams">explicit</str>
<int name="rows">10</int>
<!-- <str name="df">text</str> -->
<!-- Default search field
<str name="df">text</str>
-->
<!-- Change from JSON to XML format (the default prior to Solr 7.0)
<str name="wt">xml</str>
-->
</lst>
<!-- In addition to defaults, "appends" params can be specified
to identify values which should be appended to the list of
@ -783,7 +788,7 @@
<!-- A Robust Example
This example SearchHandler declaration shows off usage of the
SearchHandler with many defaults declared
@ -805,7 +810,7 @@
<!-- Solr Cell Update Request Handler
http://wiki.apache.org/solr/ExtractingRequestHandler
-->
<requestHandler name="/update/extract"
@ -820,18 +825,18 @@
<!-- Search Components
Search components are registered to SolrCore and used by
instances of SearchHandler (which can access them by name)
By default, the following components are available:
<searchComponent name="query" class="solr.QueryComponent" />
<searchComponent name="facet" class="solr.FacetComponent" />
<searchComponent name="mlt" class="solr.MoreLikeThisComponent" />
<searchComponent name="highlight" class="solr.HighlightComponent" />
<searchComponent name="stats" class="solr.StatsComponent" />
<searchComponent name="debug" class="solr.DebugComponent" />
Default configuration in a requestHandler would look like:
<arr name="components">
@ -843,28 +848,28 @@
<str>debug</str>
</arr>
If you register a searchComponent to one of the standard names,
that will be used instead of the default.
To insert components before or after the 'standard' components, use:
<arr name="first-components">
<str>myFirstComponentName</str>
</arr>
<arr name="last-components">
<str>myLastComponentName</str>
</arr>
NOTE: The component registered with the name "debug" will
always be executed after the "last-components"
-->
<!-- Spell Check
The spell check component can return a list of alternative spelling
suggestions.
http://wiki.apache.org/solr/SpellCheckComponent
-->
@ -913,7 +918,7 @@
-->
</searchComponent>
<!-- A request handler for demonstrating the spellcheck component.
NOTE: This is purely an example. The whole purpose of the
SpellCheckComponent is to hook it into the request handler that
@ -922,7 +927,7 @@
IN OTHER WORDS, THERE IS A REALLY GOOD CHANCE THE SETUP BELOW IS
NOT WHAT YOU WANT FOR YOUR PRODUCTION SYSTEM!
See http://wiki.apache.org/solr/SpellCheckComponent for details
on the request parameters.
-->
@ -958,8 +963,8 @@
This is purely an example.
In reality you will likely want to add the component to your
already specified request handlers.
-->
<requestHandler name="/tvrh" class="solr.SearchHandler" startup="lazy">
<lst name="defaults">
@ -1032,8 +1037,8 @@
</lst>
</fragmenter>
<!-- A regular-expression-based fragmenter
(for sentence extraction)
-->
<fragmenter name="regex"
class="solr.highlight.RegexFragmenter">
@ -1078,7 +1083,7 @@
<fragmentsBuilder name="default"
default="true"
class="solr.highlight.ScoreOrderFragmentsBuilder">
<!--
<lst name="defaults">
<str name="hl.multiValuedSeparatorChar">/</str>
</lst>
@ -1131,19 +1136,19 @@
http://wiki.apache.org/solr/UpdateRequestProcessor
-->
<!-- Add unknown fields to the schema
Field type guessing update processors that will
attempt to parse string-typed field values as Booleans, Longs,
Doubles, or Dates, and then add schema fields with the guessed
field types. Text content will be indexed as "text_general" as
well as a copy to a plain string version in *_str.
These require that the schema is both managed and mutable, by
declaring schemaFactory as ManagedIndexSchemaFactory, with
mutable specified as true.
See http://wiki.apache.org/solr/GuessingFieldTypes
-->
<updateProcessor class="solr.UUIDUpdateProcessorFactory" name="uuid"/>
@ -1220,8 +1225,8 @@
on the fly based on the hash code of some other fields. This
example has overwriteDupes set to false since we are using the
id field as the signatureField and Solr will maintain
uniqueness based on that anyway.
-->
<!--
<updateRequestProcessorChain name="dedupe">
@ -1292,7 +1297,7 @@
overridden...
-->
<!--
<queryResponseWriter name="xml"
<queryResponseWriter name="xml"
default="true"
class="solr.XMLResponseWriter" />
<queryResponseWriter name="json" class="solr.JSONResponseWriter"/>
@ -1323,7 +1328,7 @@
<!-- XSLT response writer transforms the XML output by any xslt file found
in Solr's conf/xslt directory. Changes to xslt files are checked for
every xsltCacheLifetimeSeconds.
-->
<queryResponseWriter name="xslt" class="solr.XSLTResponseWriter">
<int name="xsltCacheLifetimeSeconds">5</int>
@ -1331,7 +1336,7 @@
<!-- Query Parsers
https://cwiki.apache.org/confluence/display/solr/Query+Syntax+and+Parsing
https://lucene.apache.org/solr/guide/query-syntax-and-parsing.html
Multiple QParserPlugins can be registered by name, and then
used in either the "defType" param for the QueryComponent (used
@ -1351,7 +1356,7 @@
-->
<!-- example of registering a custom function parser -->
<!--
<valueSourceParser name="myfunc"
<valueSourceParser name="myfunc"
class="com.mycompany.MyValueSourceParser" />
-->
@ -1364,12 +1369,12 @@
<transformer name="db" class="com.mycompany.LoadFromDatabaseTransformer" >
<int name="connection">jdbc://....</int>
</transformer>
To add a constant value to all docs, use:
<transformer name="mytrans2" class="org.apache.solr.response.transform.ValueAugmenterFactory" >
<int name="value">5</int>
</transformer>
If you want the user to still be able to change it with _value:something_ use this:
<transformer name="mytrans3" class="org.apache.solr.response.transform.ValueAugmenterFactory" >
<double name="defaultValue">5</double>
@ -16,9 +16,9 @@
limitations under the License.
-->
<!--
For more details about configuration options that may appear in
this file, see http://wiki.apache.org/solr/SolrConfigXml.
-->
<config>
<!-- In all configuration below, a prefix of "solr." for class names
@ -46,19 +46,19 @@
instanceDir.
Please note that <lib/> directives are processed in the order
that they appear in your solrconfig.xml file, and are "stacked"
on top of each other when building a ClassLoader - so if you have
plugin jars with dependencies on other jars, the "lower level"
dependency jars should be loaded first.
If a "./lib" directory exists in your instanceDir, all files
found in it are included as if you had used the following
syntax...
<lib dir="./lib" />
-->
<!-- A 'dir' option by itself adds any files found in the directory
to the classpath, this is useful for including all jars in a
directory.
@ -69,7 +69,7 @@
If a 'dir' option (with or without a regex) is used and nothing
is found that matches, a warning will be logged.
The examples below can be used to load some solr-contribs along
with their external dependencies.
-->
<lib dir="${solr.install.dir:../../../..}/contrib/extraction/lib" regex=".*\.jar" />
@ -87,14 +87,14 @@
<lib dir="${solr.install.dir:../../../..}/contrib/velocity/lib" regex=".*\.jar" />
<lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-velocity-\d.*\.jar" />
<!-- an exact 'path' can be used instead of a 'dir' to specify a
specific jar file. This will cause a serious error to be logged
if it can't be loaded.
-->
<!--
<lib path="../a-jar-that-does-not-exist.jar" />
<lib path="../a-jar-that-does-not-exist.jar" />
-->
<!-- Data Directory
Used to specify an alternate directory to hold all index data
@ -106,7 +106,7 @@
<!-- The DirectoryFactory to use for indexes.
solr.StandardDirectoryFactory is filesystem
based and tries to pick the best implementation for the current
JVM and platform. solr.NRTCachingDirectoryFactory, the default,
@ -118,7 +118,7 @@
solr.RAMDirectoryFactory is memory based and not persistent.
-->
<directoryFactory name="DirectoryFactory"
<directoryFactory name="DirectoryFactory"
class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>
<!-- The CodecFactory for defining the format of the inverted index.
@ -129,7 +129,7 @@
are experimental, so if you choose to customize the index format, it's a good
idea to convert back to the official format e.g. via IndexWriter.addIndexes(IndexReader)
before upgrading to a newer version to avoid unnecessary reindexing.
A "compressionMode" string element can be added to <codecFactory> to choose
A "compressionMode" string element can be added to <codecFactory> to choose
between the existing compression modes in the default codec: "BEST_SPEED" (default)
or "BEST_COMPRESSION".
-->
@ -139,19 +139,19 @@
Index Config - These settings control low-level behavior of indexing
Most example settings here show the default value, but are commented
out, to more easily see where customizations have been made.
Note: This replaces <indexDefaults> and <mainIndex> from older versions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
<indexConfig>
<!-- maxFieldLength was removed in 4.0. To get similar behavior, include a
LimitTokenCountFilterFactory in your fieldType definition. E.g.
<filter class="solr.LimitTokenCountFilterFactory" maxTokenCount="10000"/>
-->
<!-- Maximum time to wait for a write lock (ms) for an IndexWriter. Default: 1000 -->
<!-- <writeLockTimeout>1000</writeLockTimeout> -->
<!-- Expert: Enabling compound file will use fewer files for the index,
using fewer file descriptors at the expense of decreased performance.
Default in Lucene is "true". Default in Solr is "false" (since 3.6) -->
<!-- <useCompoundFile>false</useCompoundFile> -->
@ -166,7 +166,7 @@
<!-- <ramBufferSizeMB>100</ramBufferSizeMB> -->
<!-- <maxBufferedDocs>1000</maxBufferedDocs> -->
<!-- Expert: Merge Policy
The Merge Policy in Lucene controls how merging of segments is done.
The default since Solr/Lucene 3.3 is TieredMergePolicy.
The default since Lucene 2.3 was the LogByteSizeMergePolicy,
@ -186,15 +186,15 @@
can perform merges in the background using separate threads.
The SerialMergeScheduler (Lucene 2.2 default) does not.
-->
<!--
<mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler"/>
-->
<!-- LockFactory
This option specifies which Lucene LockFactory implementation
to use.
single = SingleInstanceLockFactory - suggested for a
read-only index or when there is no possibility of
another process trying to modify the index.
@ -218,11 +218,11 @@
The default Solr IndexDeletionPolicy implementation supports
deleting index commit points on number of commits, age of
commit point and optimized status.
The latest commit point should always be preserved regardless
of the criteria.
-->
<!--
<deletionPolicy class="solr.SolrDeletionPolicy">
-->
<!-- The number of commit points to be kept -->
@ -237,12 +237,12 @@
<str name="maxCommitAge">30MINUTES</str>
<str name="maxCommitAge">1DAY</str>
-->
<!--
</deletionPolicy>
-->
<!-- Lucene Infostream
To aid in advanced debugging, Lucene provides an "InfoStream"
of detailed information when indexing.
@ -255,7 +255,7 @@
<!-- JMX
This example enables JMX if and only if an existing MBeanServer
is found, use this if you want to configure JMX through JVM
parameters. Remove this to disable exposing Solr configuration
@ -265,7 +265,7 @@
-->
<jmx />
<!-- If you want to connect to a particular server, specify the
agentId
-->
<!-- <jmx agentId="myAgent" /> -->
<!-- If you want to start a new MBeanServer, specify the serviceUrl -->
@ -292,12 +292,12 @@
<str name="dir">${solr.ulog.dir:}</str>
<int name="numVersionBuckets">${solr.ulog.numVersionBuckets:65536}</int>
</updateLog>
<!-- AutoCommit
Perform a hard commit automatically under certain conditions.
Instead of enabling autoCommit, consider using "commitWithin"
when adding documents.
http://wiki.apache.org/solr/UpdateXmlMessages
@ -306,7 +306,7 @@
maxTime - Maximum amount of time in ms that is allowed to pass
since a document was added before automatically
triggering a new commit.
openSearcher - if false, the commit causes recent index changes
to be flushed to stable storage, but does not cause a new
searcher to be opened to make those changes visible.
@ -314,9 +314,9 @@
If the updateLog is enabled, then it's highly recommended to
have some sort of hard autoCommit to limit the log size.
-->
<autoCommit>
<maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
<openSearcher>false</openSearcher>
</autoCommit>
<!-- softAutoCommit is like autoCommit except it causes a
@ -325,12 +325,12 @@
faster and more near-realtime friendly than a hard commit.
-->
<autoSoftCommit>
<maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
</autoSoftCommit>
<!-- Update Related Event Listeners
Various IndexWriter related events can trigger Listeners to
take actions.
@ -339,10 +339,10 @@
-->
<!-- The RunExecutableListener executes an external command from a
hook such as postCommit or postOptimize.
exe - the name of the executable to run
dir - dir to use as the current working directory. (default=".")
wait - the calling thread waits until the executable returns.
(default="true")
args - the arguments to pass to the program. (default is none)
env - environment variables to set. (default is none)
@ -362,7 +362,7 @@
-->
</updateHandler>
<!-- IndexReaderFactory
Use the following format to specify a custom IndexReaderFactory,
@ -403,15 +403,15 @@
-->
<maxBooleanClauses>1024</maxBooleanClauses>
<!-- Slow Query Threshold (in millis)
At high request rates, logging all requests can become a bottleneck
and therefore INFO logging is often turned off. However, it is still
useful to be able to set a latency threshold above which a request
is considered "slow" and log that request at WARN level so we can
easily identify slow queries.
-->
<slowQueryThresholdMillis>-1</slowQueryThresholdMillis>
@ -419,7 +419,7 @@
There are two implementations of cache available for Solr,
LRUCache, based on a synchronized LinkedHashMap, and
FastLRUCache, based on a ConcurrentHashMap.
FastLRUCache has faster gets and slower puts in single
threaded operation and thus is generally faster than LRUCache
@ -466,19 +466,19 @@
size="512"
initialSize="512"
autowarmCount="0"/>
<!-- Document Cache
Caches Lucene Document objects (the stored fields for each
document). Since Lucene internal document ids are transient,
this cache will not be autowarmed.
-->
<documentCache class="solr.LRUCache"
size="512"
initialSize="512"
autowarmCount="0"/>
<!-- custom cache currently used by block join -->
<cache name="perSegFilter"
class="solr.search.LRUCache"
size="10"
@ -487,7 +487,7 @@
regenerator="solr.NoOpRegenerator" />
<!-- Field Value Cache
Cache used to hold field values that are quickly accessible
by document id. The fieldValueCache is created by default
even if not configured here.
@ -507,7 +507,7 @@
when running solr to run with ltr enabled:
-Dsolr.ltr.enabled=true
https://cwiki.apache.org/confluence/display/solr/Learning+To+Rank
https://lucene.apache.org/solr/guide/learning-to-rank.html
-->
<cache enable="${solr.ltr.enabled:false}" name="QUERY_DOC_FV"
class="solr.search.LRUCache"
@ -522,8 +522,8 @@
name through SolrIndexSearcher.getCache(),cacheLookup(), and
cacheInsert(). The purpose is to enable easy caching of
user/application level data. The regenerator argument should
be specified as an implementation of solr.CacheRegenerator
if autowarming is desired.
-->
<!--
<cache name="myUserCache"
@ -570,12 +570,12 @@
are collected. For example, if a search for a particular query
requests matching documents 10 through 19, and queryWindowSize is 50,
then documents 0 through 49 will be collected and cached. Any further
requests in that range can be satisfied via the cache.
-->
<queryResultWindowSize>20</queryResultWindowSize>
<!-- Maximum number of documents to cache for any entry in the
queryResultCache.
-->
<queryResultMaxDocsCached>200</queryResultMaxDocsCached>
@ -593,10 +593,10 @@
prepared but there is no current registered searcher to handle
requests or to gain autowarming data from.
-->
<!-- QuerySenderListener takes an array of NamedList and executes a
local query request for each NamedList in sequence.
-->
<listener event="newSearcher" class="solr.QuerySenderListener">
<arr name="queries">
@ -644,19 +644,19 @@
multipartUploadLimitInKB - specifies the max size (in KiB) of
Multipart File Uploads that Solr will allow in a Request.
formdataUploadLimitInKB - specifies the max size (in KiB) of
form data (application/x-www-form-urlencoded) sent via
POST. You can use POST to pass request parameters not
fitting into the URL.
addHttpRequestToContext - if set to true, it will instruct
the requestParsers to include the original HttpServletRequest
object in the context map of the SolrQueryRequest under the
key "httpRequest". It will not be used by any of the existing
Solr components, but may be useful when developing custom
plugins.
*** WARNING ***
Before enabling remote streaming, you should make sure your
system has authentication enabled.
@ -678,21 +678,21 @@
<!-- If you include a <cacheControl> directive, it will be used to
generate a Cache-Control header (as well as an Expires header
if the value contains "max-age=")
By default, no Cache-Control header is generated.
You can use the <cacheControl> option even if you have set
never304="true"
-->
<!--
<httpCaching never304="true" >
<cacheControl>max-age=30, public</cacheControl>
</httpCaching>
-->
<!-- To enable Solr to respond with automatically generated HTTP
Caching headers, and to respond to Cache Validation requests
correctly, set the value of never304="false"
This will cause Solr to generate Last-Modified and ETag
headers based on the properties of the Index.
@ -717,12 +717,12 @@
<!--
<httpCaching lastModifiedFrom="openTime"
etagSeed="Solr">
<cacheControl>max-age=30, public</cacheControl>
</httpCaching>
-->
</requestDispatcher>
<!-- Request Handlers
http://wiki.apache.org/solr/SolrRequestHandler
@ -749,6 +749,12 @@
<lst name="defaults">
<str name="echoParams">explicit</str>
<int name="rows">10</int>
<!-- Default search field
<str name="df">text</str>
-->
<!-- Change from JSON to XML format (the default prior to Solr 7.0)
<str name="wt">xml</str>
-->
<!-- Controls the distribution of a query to shards other than itself.
Consider making 'preferLocalShards' true when:
1) maxShardsPerNode > 1
@ -830,7 +836,7 @@
</requestHandler>
<!-- A Robust Example
This example SearchHandler declaration shows off usage of the
SearchHandler with many defaults declared
@ -911,14 +917,14 @@
<!-- Spell checking defaults -->
<str name="spellcheck">on</str>
<str name="spellcheck.extendedResults">false</str>
<str name="spellcheck.extendedResults">false</str>
<str name="spellcheck.count">5</str>
<str name="spellcheck.alternativeTermCount">2</str>
<str name="spellcheck.maxResultsForSuggest">5</str>
<str name="spellcheck.maxResultsForSuggest">5</str>
<str name="spellcheck.collate">true</str>
<str name="spellcheck.collateExtendedResults">true</str>
<str name="spellcheck.collateExtendedResults">true</str>
<str name="spellcheck.maxCollationTries">5</str>
<str name="spellcheck.maxCollations">3</str>
<str name="spellcheck.maxCollations">3</str>
</lst>
<!-- append spellchecking to our list of components -->
@ -949,10 +955,10 @@
<!-- Solr Cell Update Request Handler
http://wiki.apache.org/solr/ExtractingRequestHandler
-->
<requestHandler name="/update/extract"
<requestHandler name="/update/extract"
startup="lazy"
class="solr.extraction.ExtractingRequestHandler" >
<lst name="defaults">
@ -968,18 +974,18 @@
<!-- Search Components
Search components are registered to SolrCore and used by
instances of SearchHandler (which can access them by name)
By default, the following components are available:
<searchComponent name="query" class="solr.QueryComponent" />
<searchComponent name="facet" class="solr.FacetComponent" />
<searchComponent name="mlt" class="solr.MoreLikeThisComponent" />
<searchComponent name="highlight" class="solr.HighlightComponent" />
<searchComponent name="stats" class="solr.StatsComponent" />
<searchComponent name="debug" class="solr.DebugComponent" />
Default configuration in a requestHandler would look like:
<arr name="components">
@ -991,28 +997,28 @@
<str>debug</str>
</arr>
If you register a searchComponent to one of the standard names,
that will be used instead of the default.
To insert components before or after the 'standard' components, use:
<arr name="first-components">
<str>myFirstComponentName</str>
</arr>
<arr name="last-components">
<str>myLastComponentName</str>
</arr>
NOTE: The component registered with the name "debug" will
always be executed after the "last-components"
-->
<!-- Spell Check
The spell check component can return a list of alternative spelling
suggestions.
http://wiki.apache.org/solr/SpellCheckComponent
-->
@ -1047,11 +1053,11 @@
<float name="thresholdTokenFrequency">.01</float>
-->
</lst>
<!-- a spellchecker that can break or combine words. See "/spell" handler below for usage -->
<lst name="spellchecker">
<str name="name">wordbreak</str>
<str name="classname">solr.WordBreakSolrSpellChecker</str>
<str name="classname">solr.WordBreakSolrSpellChecker</str>
<str name="field">name</str>
<str name="combineWords">true</str>
<str name="breakWords">true</str>
@ -1070,7 +1076,7 @@
</lst>
-->
<!-- a spellchecker that uses an alternate comparator
comparatorClass may be one of:
1. score (default)
@ -1096,8 +1102,8 @@
</lst>
-->
</searchComponent>
<!-- A request handler for demonstrating the spellcheck component.
NOTE: This is purely an example. The whole purpose of the
SpellCheckComponent is to hook it into the request handler that
@ -1106,7 +1112,7 @@
IN OTHER WORDS, THERE IS A REALLY GOOD CHANCE THE SETUP BELOW IS
NOT WHAT YOU WANT FOR YOUR PRODUCTION SYSTEM!
See http://wiki.apache.org/solr/SpellCheckComponent for details
on the request parameters.
-->
@ -1119,33 +1125,33 @@
<str name="spellcheck.dictionary">default</str>
<str name="spellcheck.dictionary">wordbreak</str>
<str name="spellcheck">on</str>
<str name="spellcheck.extendedResults">true</str>
<str name="spellcheck.extendedResults">true</str>
<str name="spellcheck.count">10</str>
<str name="spellcheck.alternativeTermCount">5</str>
<str name="spellcheck.maxResultsForSuggest">5</str>
<str name="spellcheck.maxResultsForSuggest">5</str>
<str name="spellcheck.collate">true</str>
<str name="spellcheck.collateExtendedResults">true</str>
<str name="spellcheck.collateExtendedResults">true</str>
<str name="spellcheck.maxCollationTries">10</str>
<str name="spellcheck.maxCollations">5</str>
<str name="spellcheck.maxCollations">5</str>
</lst>
<arr name="last-components">
<str>spellcheck</str>
</arr>
</requestHandler>
<!-- The SuggestComponent in Solr provides users with automatic suggestions for query terms.
You can use this to implement a powerful auto-suggest feature in your search application.
As with the rest of this solrconfig.xml file, the configuration of this component is purely
an example that applies specifically to this configset and example documents.
More information about this component and other configuration options are described in the
"Suggester" section of the reference guide available at
"Suggester" section of the reference guide available at
http://archive.apache.org/dist/lucene/solr/ref-guide
-->
<searchComponent name="suggest" class="solr.SuggestComponent">
<lst name="suggester">
<str name="name">mySuggester</str>
<str name="lookupImpl">FuzzyLookupFactory</str>
<str name="lookupImpl">FuzzyLookupFactory</str>
<str name="dictionaryImpl">DocumentDictionaryFactory</str>
<str name="field">cat</str>
<str name="weightField">price</str>
@ -1154,7 +1160,7 @@
</lst>
</searchComponent>
<requestHandler name="/suggest" class="solr.SearchHandler"
<requestHandler name="/suggest" class="solr.SearchHandler"
startup="lazy" >
<lst name="defaults">
<str name="suggest">true</str>
@ -1176,8 +1182,8 @@
This is purely an example.
In reality you will likely want to add the component to your
already specified request handlers.
-->
<requestHandler name="/tvrh" class="solr.SearchHandler" startup="lazy">
<lst name="defaults">
@ -1194,7 +1200,7 @@
when running solr to run with clustering enabled:
-Dsolr.clustering.enabled=true
https://cwiki.apache.org/confluence/display/solr/Result+Clustering
https://lucene.apache.org/solr/guide/result-clustering.html
-->
<searchComponent name="clustering"
enable="${solr.clustering.enabled:false}"
@ -1240,8 +1246,8 @@
<!-- A request handler for demonstrating the clustering component.
This is meant as an example.
In reality you will likely want to add the component to your
already specified request handlers.
-->
<requestHandler name="/clustering"
startup="lazy"
@ -1291,7 +1297,7 @@
<lst name="defaults">
<bool name="terms">true</bool>
<bool name="distrib">false</bool>
</lst>
<arr name="components">
<str>terms</str>
</arr>
@ -1330,7 +1336,7 @@
<highlighting>
<!-- Configure the standard fragmenter -->
<!-- This could most likely be commented out in the "default" case -->
<fragmenter name="gap"
<fragmenter name="gap"
default="true"
class="solr.highlight.GapFragmenter">
<lst name="defaults">
@ -1338,10 +1344,10 @@
</lst>
</fragmenter>
<!-- A regular-expression-based fragmenter
(for sentence extraction)
-->
<fragmenter name="regex"
<fragmenter name="regex"
class="solr.highlight.RegexFragmenter">
<lst name="defaults">
<!-- slightly smaller fragsizes work better because of slop -->
@ -1354,7 +1360,7 @@
</fragmenter>
<!-- Configure the standard formatter -->
<formatter name="html"
<formatter name="html"
default="true"
class="solr.highlight.HtmlFormatter">
<lst name="defaults">
@ -1364,27 +1370,27 @@
</formatter>
<!-- Configure the standard encoder -->
<encoder name="html"
<encoder name="html"
class="solr.highlight.HtmlEncoder" />
<!-- Configure the standard fragListBuilder -->
<fragListBuilder name="simple"
<fragListBuilder name="simple"
class="solr.highlight.SimpleFragListBuilder"/>
<!-- Configure the single fragListBuilder -->
<fragListBuilder name="single"
<fragListBuilder name="single"
class="solr.highlight.SingleFragListBuilder"/>
<!-- Configure the weighted fragListBuilder -->
<fragListBuilder name="weighted"
<fragListBuilder name="weighted"
default="true"
class="solr.highlight.WeightedFragListBuilder"/>
<!-- default tag FragmentsBuilder -->
<fragmentsBuilder name="default"
<fragmentsBuilder name="default"
default="true"
class="solr.highlight.ScoreOrderFragmentsBuilder">
<!--
<!--
<lst name="defaults">
<str name="hl.multiValuedSeparatorChar">/</str>
</lst>
@@ -1392,7 +1398,7 @@
</fragmentsBuilder>
<!-- multi-colored tag FragmentsBuilder -->
<fragmentsBuilder name="colored"
<fragmentsBuilder name="colored"
class="solr.highlight.ScoreOrderFragmentsBuilder">
<lst name="defaults">
<str name="hl.tag.pre"><![CDATA[
@@ -1404,8 +1410,8 @@
<str name="hl.tag.post"><![CDATA[</b>]]></str>
</lst>
</fragmentsBuilder>
<boundaryScanner name="default"
<boundaryScanner name="default"
default="true"
class="solr.highlight.SimpleBoundaryScanner">
<lst name="defaults">
@@ -1413,8 +1419,8 @@
<str name="hl.bs.chars">.,!? &#9;&#10;&#13;</str>
</lst>
</boundaryScanner>
<boundaryScanner name="breakIterator"
<boundaryScanner name="breakIterator"
class="solr.highlight.BreakIteratorBoundaryScanner">
<lst name="defaults">
<!-- type should be one of CHARACTER, WORD(default), LINE and SENTENCE -->
@@ -1436,15 +1442,15 @@
http://wiki.apache.org/solr/UpdateRequestProcessor
-->
-->
<!-- Deduplication
An example dedup update processor that creates the "id" field
on the fly based on the hash code of some other fields. This
example has overwriteDupes set to false since we are using the
id field as the signatureField and Solr will maintain
uniqueness based on that anyway.
uniqueness based on that anyway.
-->
<!--
<updateRequestProcessorChain name="dedupe">
@@ -1459,7 +1465,7 @@
<processor class="solr.RunUpdateProcessorFactory" />
</updateRequestProcessorChain>
-->
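<!-- If the chain above is uncommented, it can be selected per request via
     the update.chain parameter; a sketch, assuming the techproducts collection:
     curl "http://localhost:8983/solr/techproducts/update?update.chain=dedupe&commit=true" \
       -H 'Content-type:application/json' \
       -d '[{"name":"a doc whose id is derived from its signature"}]'
-->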
<!-- Language identification
This example update chain identifies the language of the incoming
@@ -1499,7 +1505,7 @@
<processor class="solr.RunUpdateProcessorFactory" />
</updateRequestProcessorChain>
-->
<!-- Response Writers
http://wiki.apache.org/solr/QueryResponseWriter
@@ -1515,7 +1521,7 @@
overridden...
-->
<!--
<queryResponseWriter name="xml"
<queryResponseWriter name="xml"
default="true"
class="solr.XMLResponseWriter" />
<queryResponseWriter name="json" class="solr.JSONResponseWriter"/>
@@ -1534,18 +1540,18 @@
-->
<str name="content-type">text/plain; charset=UTF-8</str>
</queryResponseWriter>
<!--
Custom response writers can be declared as needed...
-->
<queryResponseWriter name="velocity" class="solr.VelocityResponseWriter" startup="lazy">
<str name="template.base.dir">${velocity.template.base.dir:}</str>
</queryResponseWriter>
<!-- XSLT response writer transforms the XML output by any xslt file found
in Solr's conf/xslt directory. Changes to xslt files are checked for
every xsltCacheLifetimeSeconds.
every xsltCacheLifetimeSeconds.
-->
<queryResponseWriter name="xslt" class="solr.XSLTResponseWriter">
<int name="xsltCacheLifetimeSeconds">5</int>
@@ -1553,7 +1559,7 @@
<!-- Query Parsers
https://cwiki.apache.org/confluence/display/solr/Query+Syntax+and+Parsing
https://lucene.apache.org/solr/guide/query-syntax-and-parsing.html
Multiple QParserPlugins can be registered by name, and then
used in either the "defType" param for the QueryComponent (used
@@ -1573,7 +1579,7 @@
-->
<!-- example of registering a custom function parser -->
<!--
<valueSourceParser name="myfunc"
<valueSourceParser name="myfunc"
class="com.mycompany.MyValueSourceParser" />
-->
@@ -1583,7 +1589,7 @@
when running solr to run with ltr enabled:
-Dsolr.ltr.enabled=true
https://cwiki.apache.org/confluence/display/solr/Learning+To+Rank
https://lucene.apache.org/solr/guide/learning-to-rank.html
Query parser is used to rerank top docs with a provided model
-->
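<!-- For example, once a feature store and model have been uploaded, top
     docs can be reranked with the {!ltr} parser; a sketch, where myModel
     stands in for a deployed model name:
     curl "http://localhost:8983/solr/techproducts/query?q=test&rq={!ltr model=myModel reRankDocs=100}&fl=id,score"
-->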
@@ -1597,12 +1603,12 @@
<transformer name="db" class="com.mycompany.LoadFromDatabaseTransformer" >
<int name="connection">jdbc://....</int>
</transformer>
To add a constant value to all docs, use:
<transformer name="mytrans2" class="org.apache.solr.response.transform.ValueAugmenterFactory" >
<int name="value">5</int>
</transformer>
If you want the user to still be able to change it with _value:something_ use this:
<transformer name="mytrans3" class="org.apache.solr.response.transform.ValueAugmenterFactory" >
<double name="defaultValue">5</double>
@@ -1624,7 +1630,7 @@
when running solr to run with ltr enabled:
-Dsolr.ltr.enabled=true
https://cwiki.apache.org/confluence/display/solr/Learning+To+Rank
https://lucene.apache.org/solr/guide/learning-to-rank.html
-->
<transformer enable="${solr.ltr.enabled:false}" name="features" class="org.apache.solr.ltr.response.transform.LTRFeatureLoggerTransformerFactory">
<str name="fvCacheName">QUERY_DOC_FV</str>
View File
@@ -22,13 +22,15 @@ Note that the base directory name may vary with the version of Solr downloaded.
Cygwin, or MacOS:
/:$ ls solr*
solr-6.2.0.zip
/:$ unzip -q solr-6.2.0.zip
/:$ cd solr-6.2.0/
solr-X.Y.Z.zip
/:$ unzip -q solr-X.Y.Z.zip
/:$ cd solr-X.Y.Z/
Note that "X.Y.Z" will be replaced by an official Solr version (e.g. 6.4.3, 7.0.0).
To launch Solr, run: `bin/solr start -e cloud -noprompt`
/solr-6.2.0:$ bin/solr start -e cloud -noprompt
/solr-X.Y.Z:$ bin/solr start -e cloud -noprompt
Welcome to the SolrCloud example!
@@ -43,7 +45,7 @@ To launch Solr, run: `bin/solr start -e cloud -noprompt`
SolrCloud example running, please visit http://localhost:8983/solr
/solr-6.2.0:$ _
/solr-X.Y.Z:$ _
You can see that Solr is running by loading the Solr Admin UI in your web browser: <http://localhost:8983/solr/>.
This is the main starting point for administering Solr.
@@ -79,8 +81,8 @@ subdirectory, so that makes a convenient set of (mostly) HTML files built-in to
Here's what it'll look like:
/solr-6.2.0:$ bin/post -c gettingstarted docs/
java -classpath /solr-6.2.0/dist/solr-core-6.2.0.jar -Dauto=yes -Dc=gettingstarted -Ddata=files -Drecursive=yes org.apache.solr.util.SimplePostTool docs/
/solr-X.Y.Z:$ bin/post -c gettingstarted docs/
java -classpath /solr-X.Y.Z/dist/solr-core-X.Y.Z.jar -Dauto=yes -Dc=gettingstarted -Ddata=files -Drecursive=yes org.apache.solr.util.SimplePostTool docs/
SimplePostTool version 5.0.0
Posting files to [base] url http://localhost:8983/solr/gettingstarted/update...
Entering auto mode. File endings considered are xml,json,jsonl,csv,pdf,doc,docx,ppt,pptx,xls,xlsx,odt,odp,ods,ott,otp,ots,rtf,htm,html,txt,log
@@ -133,8 +135,8 @@ Using `bin/post`, index the example Solr XML files in `example/exampledocs/`:
Here's what you'll see:
/solr-6.2.0:$ bin/post -c gettingstarted example/exampledocs/*.xml
java -classpath /solr-6.2.0/dist/solr-core-6.2.0.jar -Dauto=yes -Dc=gettingstarted -Ddata=files org.apache.solr.util.SimplePostTool example/exampledocs/gb18030-example.xml ...
/solr-X.Y.Z:$ bin/post -c gettingstarted example/exampledocs/*.xml
java -classpath /solr-X.Y.Z/dist/solr-core-X.Y.Z.jar -Dauto=yes -Dc=gettingstarted -Ddata=files org.apache.solr.util.SimplePostTool example/exampledocs/gb18030-example.xml ...
SimplePostTool version 5.0.0
Posting files to [base] url http://localhost:8983/solr/gettingstarted/update...
Entering auto mode. File endings considered are xml,json,jsonl,csv,pdf,doc,docx,ppt,pptx,xls,xlsx,odt,odp,ods,ott,otp,ots,rtf,htm,html,txt,log
@@ -178,8 +180,8 @@ sample JSON file:
You'll see:
/solr-6.2.0:$ bin/post -c gettingstarted example/exampledocs/books.json
java -classpath /solr-6.2.0/dist/solr-core-6.2.0.jar -Dauto=yes -Dc=gettingstarted -Ddata=files org.apache.solr.util.SimplePostTool example/exampledocs/books.json
/solr-X.Y.Z:$ bin/post -c gettingstarted example/exampledocs/books.json
java -classpath /solr-X.Y.Z/dist/solr-core-X.Y.Z.jar -Dauto=yes -Dc=gettingstarted -Ddata=files org.apache.solr.util.SimplePostTool example/exampledocs/books.json
SimplePostTool version 5.0.0
Posting files to [base] url http://localhost:8983/solr/gettingstarted/update...
Entering auto mode. File endings considered are xml,json,jsonl,csv,pdf,doc,docx,ppt,pptx,xls,xlsx,odt,odp,ods,ott,otp,ots,rtf,htm,html,txt,log
@@ -207,8 +209,8 @@ Using `bin/post` index the included example CSV file:
In your terminal you'll see:
/solr-6.2.0:$ bin/post -c gettingstarted example/exampledocs/books.csv
java -classpath /solr-6.2.0/dist/solr-core-6.2.0.jar -Dauto=yes -Dc=gettingstarted -Ddata=files org.apache.solr.util.SimplePostTool example/exampledocs/books.csv
/solr-X.Y.Z:$ bin/post -c gettingstarted example/exampledocs/books.csv
java -classpath /solr-X.Y.Z/dist/solr-core-X.Y.Z.jar -Dauto=yes -Dc=gettingstarted -Ddata=files org.apache.solr.util.SimplePostTool example/exampledocs/books.csv
SimplePostTool version 5.0.0
Posting files to [base] url http://localhost:8983/solr/gettingstarted/update...
Entering auto mode. File endings considered are xml,json,jsonl,csv,pdf,doc,docx,ppt,pptx,xls,xlsx,odt,odp,ods,ott,otp,ots,rtf,htm,html,txt,log
@@ -277,7 +279,7 @@ in the form, you'll get 10 documents in JSON format (`*:*` in the `q` param matc
The URL sent by the Admin UI to Solr is shown in light grey near the top right of the above screenshot - if you click on
it, your browser will show you the raw response. To use cURL, give the same URL in quotes on the `curl` command line:
curl "http://localhost:8983/solr/gettingstarted/select?indent=on&q=*:*&wt=json"
curl "http://localhost:8983/solr/gettingstarted/select?q=*:*"
### Basics
@@ -287,11 +289,11 @@ it, your browser will show you the raw response. To use cURL, give the same URL
To search for a term, give it as the `q` param value in the core-specific Solr Admin UI Query section, replacing `*:*`
with the term you want to find. To search for "foundation":
curl "http://localhost:8983/solr/gettingstarted/select?wt=json&indent=true&q=foundation"
curl "http://localhost:8983/solr/gettingstarted/select?q=foundation"
You'll see:
/solr-6.2.0$ curl "http://localhost:8983/solr/gettingstarted/select?wt=json&indent=true&q=foundation"
$ curl "http://localhost:8983/solr/gettingstarted/select?q=foundation"
{
"responseHeader":{
"zkConnected":true,
@@ -315,13 +317,13 @@ default `start=0` and `rows=10`. You can specify these params to page through r
To restrict fields returned in the response, use the `fl` param, which takes a comma-separated list of field names.
E.g. to only return the `id` field:
curl "http://localhost:8983/solr/gettingstarted/select?wt=json&indent=true&q=foundation&fl=id"
curl "http://localhost:8983/solr/gettingstarted/select?q=foundation&fl=id"
`q=foundation` matches nearly all of the docs we've indexed, since most of the files under `docs/` contain
"The Apache Software Foundation". To restrict search to a particular field, use the syntax "`q=field:value`",
e.g. to search for `Foundation` only in the `name` field:
curl "http://localhost:8983/solr/gettingstarted/select?wt=json&indent=true&q=name:Foundation"
curl "http://localhost:8983/solr/gettingstarted/select?q=name:Foundation"
The above request returns only one document (`"numFound":1`) - from the response:
@@ -339,7 +341,7 @@ To search for a multi-term phrase, enclose it in double quotes: `q="multiple ter
"CAS latency" - note that the space between terms must be converted to "`+`" in a URL (the Admin UI will handle URL
encoding for you automatically):
curl "http://localhost:8983/solr/gettingstarted/select?wt=json&indent=true&q=\"CAS+latency\""
curl "http://localhost:8983/solr/gettingstarted/select?indent=true&q=\"CAS+latency\""
You'll get back:
@@ -374,12 +376,12 @@ To find documents that contain both terms "`one`" and "`three`", enter `+one +th
Admin UI Query tab. Because the "`+`" character has a reserved purpose in URLs (encoding the space character),
you must URL encode it for `curl` as "`%2B`":
curl "http://localhost:8983/solr/gettingstarted/select?wt=json&indent=true&q=%2Bone+%2Bthree"
curl "http://localhost:8983/solr/gettingstarted/select?q=%2Bone+%2Bthree"
To search for documents that contain the term "`two`" but **don't** contain the term "`one`", enter `+two -one` in the
`q` param in the Admin UI. Again, URL encode "`+`" as "`%2B`":
curl "http://localhost:8983/solr/gettingstarted/select?wt=json&indent=true&q=%2Btwo+-one"
curl "http://localhost:8983/solr/gettingstarted/select?q=%2Btwo+-one"
#### In depth
@@ -407,7 +409,7 @@ To see facet counts from all documents (`q=*:*`): turn on faceting (`facet=true`)
the `facet.field` param. If you only want facets, and no document contents, specify `rows=0`. The `curl` command below
will return facet counts for the `manu_id_s` field:
curl 'http://localhost:8983/solr/gettingstarted/select?wt=json&indent=true&q=*:*&rows=0'\
curl 'http://localhost:8983/solr/gettingstarted/select?q=*:*&rows=0'\
'&facet=true&facet.field=manu_id_s'
In your terminal, you'll see:
@@ -458,7 +460,7 @@ like this:
The data for these price range facets can be seen in JSON format with this command:
curl 'http://localhost:8983/solr/gettingstarted/select?q=*:*&wt=json&indent=on&rows=0'\
curl 'http://localhost:8983/solr/gettingstarted/select?q=*:*&rows=0'\
'&facet=true'\
'&facet.range=price'\
'&f.price.facet.range.start=0'\
@@ -518,8 +520,7 @@ the various possible combinations. Using the example technical product data, pi
of the products in the "book" category (the `cat` field) are in stock or not in stock. Here's how to get at the raw
data for this scenario:
curl 'http://localhost:8983/solr/gettingstarted/select?q=*:*&rows=0&wt=json&indent=on'\
'&facet=on&facet.pivot=cat,inStock'
curl 'http://localhost:8983/solr/gettingstarted/select?q=*:*&rows=0&facet=on&facet.pivot=cat,inStock'
This results in the following response (trimmed to just the book category output), which says out of 14 items in the
"book" category, 12 are in stock and 2 are not in stock:
View File
@@ -1,5 +1,5 @@
# placed here for translation purposes
search_placeholder_text: search...
search_placeholder_text: Page title lookup...
search_no_results_text: No results found.
View File
@@ -1082,7 +1082,7 @@ Returns the current status of the overseer, performance statistics of various ov
[source,text]
----
http://localhost:8983/solr/admin/collections?action=OVERSEERSTATUS&wt=json
http://localhost:8983/solr/admin/collections?action=OVERSEERSTATUS
----
[source,json]
@@ -1171,7 +1171,7 @@ The response will include the status of the request and the status of the cluste
[source,text]
----
http://localhost:8983/solr/admin/collections?action=clusterstatus&wt=json
http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS
----
*Output*
@@ -1393,7 +1393,7 @@ Fetch the names of the collections in the cluster.
[source,text]
----
http://localhost:8983/solr/admin/collections?action=LIST&wt=json
http://localhost:8983/solr/admin/collections?action=LIST
----
*Output*
View File
@@ -231,6 +231,8 @@ If set to `true`, this parameter excludes the header from the returned results.
The `wt` parameter selects the Response Writer that Solr should use to format the query's response. For detailed descriptions of Response Writers, see <<response-writers.adoc#response-writers,Response Writers>>.
If you do not define the `wt` parameter in your queries, the response will be returned in JSON format.
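To request a different format, pass `wt` explicitly; for example, to get XML back (assuming the techproducts example collection):
[source,text]
----
http://localhost:8983/solr/techproducts/select?q=*:*&wt=xml
----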
== cache Parameter
Solr caches the results of all queries and filter queries by default. To disable result caching, set the `cache=false` parameter.
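For example, caching can be disabled for an individual filter query using the local-params syntax (a sketch; the field comes from the techproducts example):
[source,text]
----
http://localhost:8983/solr/techproducts/select?q=*:*&fq={!cache=false}inStock:true
----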
View File
@@ -214,8 +214,7 @@ Here is what a request handler looks like in `solrconfig.xml`:
<requestHandler name="/query" class="solr.SearchHandler">
<lst name="defaults">
<str name="echoParams">explicit</str>
<str name="wt">json</str>
<str name="indent">true</str>
<int name="rows">10</str>
</lst>
</requestHandler>
----
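These defaults apply only when a parameter is absent from the request; a parameter supplied at request time always wins. For example (assuming the techproducts example collection), this request overrides the `rows` default above:
[source,text]
----
http://localhost:8983/solr/techproducts/query?q=*:*&rows=20
----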
@@ -230,8 +229,7 @@ The same request handler defined with the Config API would look like this:
"class":"solr.SearchHandler",
"defaults":{
"echoParams":"explicit",
"wt":"json",
"indent":true
"rows": 10
}
}
}
@@ -400,7 +398,7 @@ curl http://localhost:8983/solr/techproducts/config -H 'Content-type:application
"add-requesthandler" : {
"name": "/mypath",
"class":"solr.DumpRequestHandler",
"defaults":{ "x":"y" ,"a":"b", "wt":"json", "indent":true },
"defaults":{ "x":"y" ,"a":"b", "rows":10 },
"useParams":"x"
}
}'
@@ -422,7 +420,7 @@ And you should see the following as output:
"indent":"true",
"a":"b",
"x":"y",
"wt":"json"},
"rows":"10"},
"context":{
"webapp":"/solr",
"path":"/mypath",
@@ -437,7 +435,7 @@ curl http://localhost:8983/solr/techproducts/config -H 'Content-type:application
"update-requesthandler": {
"name": "/mypath",
"class":"solr.DumpRequestHandler",
"defaults": {"x":"new value for X", "wt":"json", "indent":true},
"defaults": {"x":"new value for X", "rows":"20"},
"useParams":"x"
}
}'
View File
@@ -133,7 +133,7 @@ Fetch the names of the ConfigSets in the cluster.
[source,text]
----
http://localhost:8983/solr/admin/configs?action=LIST&wt=json
http://localhost:8983/solr/admin/configs?action=LIST
----
*Output*
View File
@@ -65,7 +65,7 @@ There is also a way of sending REST commands to the logging endpoint to do the s
[source,bash]
----
# Set the root logger to level WARN
curl -s http://localhost:8983/solr/admin/info/logging --data-binary "set=root:WARN&wt=json"
curl -s http://localhost:8983/solr/admin/info/logging --data-binary "set=root:WARN"
----
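The same endpoint accepts any logger name in the `set` parameter, so a single package can be made more verbose; for example:
[source,bash]
----
# Turn on DEBUG logging for the update-handling code only
curl -s http://localhost:8983/solr/admin/info/logging --data-binary "set=org.apache.solr.update:DEBUG"
----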
== Choosing Log Level at Startup
Some files were not shown because too many files have changed in this diff.