Mirror of https://github.com/honeymoose/OpenSearch.git, synced 2025-02-23 21:38:15 +00:00
Standardized use of "*_length" for parameter names rather than "*_len".

Java Builder APIs drop the old "len" methods in favour of the new "length" methods. REST APIs support both the old "len" and the new "length" forms, using the new ParseField class to a) provide compiler-checked consistency between Builder and Parser classes and b) provide a common means of handling deprecated syntax in the DSL. Documentation and REST specs only document the new "*_length" forms.

Closes #4083
This commit is contained in:
parent
ed254b56e0
commit
2795f4e55d
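The ParseField idea the commit message describes (one shared constant that supplies both the serialized name for the builder side and the accepted names for the parser side) can be sketched as follows. This is an illustrative, simplified re-implementation under stated assumptions, not the actual Elasticsearch source; the class name `ParseFieldSketch` is hypothetical.

```java
import java.util.Arrays;

// Sketch of the ParseField pattern: builders serialize via getPreferredName(),
// parsers accept names via match(), so the two sides cannot drift apart.
public class ParseFieldSketch {
    private final String underscoreName;
    private final String[] deprecatedNames;

    public ParseFieldSketch(String underscoreName, String... deprecatedNames) {
        this.underscoreName = underscoreName;
        this.deprecatedNames = deprecatedNames;
    }

    // Builder/serializer side: always emit the preferred (new) name.
    public String getPreferredName() {
        return underscoreName;
    }

    // Parser side: accept the preferred name and any deprecated aliases.
    public boolean match(String fieldName) {
        return underscoreName.equals(fieldName)
                || Arrays.asList(deprecatedNames).contains(fieldName);
    }

    public static void main(String[] args) {
        ParseFieldSketch minWordLength = new ParseFieldSketch("min_word_length", "min_word_len");
        System.out.println(minWordLength.getPreferredName());    // min_word_length
        System.out.println(minWordLength.match("min_word_len")); // true: old form still parses
    }
}
```

Because both the DSL serializer and the DSL parser reference the same constant, renaming a field is a single compiler-checked change rather than a scatter of string literals.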
@@ -51,11 +51,11 @@ not occur in at least this many docs. Defaults to `5`.
 Words that appear in more than this many docs will be ignored. Defaults
 to unbounded.
 
-|`min_word_len` |The minimum word length below which words will be
-ignored. Defaults to `0`.
+|`min_word_length` |The minimum word length below which words will be
+ignored. Defaults to `0`. (Old name "min_word_len" is deprecated)
 
-|`max_word_len` |The maximum word length above which words will be
-ignored. Defaults to unbounded (`0`).
+|`max_word_length` |The maximum word length above which words will be
+ignored. Defaults to unbounded (`0`). (Old name "max_word_len" is deprecated)
 
 |`boost_terms` |Sets the boost factor to use when boosting terms.
 Defaults to `1`.
@@ -50,11 +50,11 @@ not occur in at least this many docs. Defaults to `5`.
 Words that appear in more than this many docs will be ignored. Defaults
 to unbounded.
 
-|`min_word_len` |The minimum word length below which words will be
-ignored. Defaults to `0`.
+|`min_word_length` |The minimum word length below which words will be
+ignored. Defaults to `0`. (Old name "min_word_len" is deprecated)
 
-|`max_word_len` |The maximum word length above which words will be
-ignored. Defaults to unbounded (`0`).
+|`max_word_length` |The maximum word length above which words will be
+ignored. Defaults to unbounded (`0`). (Old name "max_word_len" is deprecated)
 
 |`boost_terms` |Sets the boost factor to use when boosting terms.
 Defaults to `1`.
@@ -79,13 +79,13 @@ Mapping supports the following parameters:
 `The Beatles`, no need to change a simple analyzer, if you are able to
 enrich your data.
 
-`max_input_len`::
+`max_input_length`::
 Limits the length of a single input, defaults to `50` UTF-16 code points.
 This limit is only used at index time to reduce the total number of
 characters per input string in order to prevent massive inputs from
 bloating the underlying data structure. Most use cases won't be influenced
 by the default value since prefix completions hardly grow beyond prefixes longer
-than a handful of characters.
+than a handful of characters. (Old name "max_input_len" is deprecated)
 
 [[indexing]]
 ==== Indexing
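A note on the `50` UTF-16 default in the completion-suggester hunk above: in Java, `String.length()` counts UTF-16 code units, so supplementary-plane characters (many emoji, for instance) count twice toward such a cap. The following is only an illustrative sketch of how a cap like `max_input_length` could be enforced; the class name and `truncate` helper are hypothetical, not the actual mapper code, and a real implementation would also avoid splitting a surrogate pair at the cut point.

```java
// Illustrative only: enforcing an input-length cap measured in UTF-16 units.
public class InputLengthCap {
    static final int MAX_INPUT_LENGTH = 50; // the documented default

    public static String truncate(String input) {
        // String.length() counts UTF-16 code units, the unit the cap is defined in.
        return input.length() <= MAX_INPUT_LENGTH
                ? input
                : input.substring(0, MAX_INPUT_LENGTH);
    }

    public static void main(String[] args) {
        System.out.println("The Beatles".length());            // 11: well under the cap
        System.out.println("\uD83C\uDFB8".length());           // 2: one emoji, two UTF-16 units
        System.out.println(truncate("a".repeat(80)).length()); // 50
    }
}
```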
@@ -36,7 +36,7 @@ curl -XPOST 'localhost:9200/_search' -d {
     "direct_generator" : [ {
       "field" : "body",
       "suggest_mode" : "always",
-      "min_word_len" : 1
+      "min_word_length" : 1
     } ],
     "highlight": {
       "pre_tag": "<em>",
@@ -229,15 +229,15 @@ The direct generators support the following parameters:
 and 2. Any other value results in a bad request error being thrown.
 Defaults to 2.
 
-`prefix_len`::
+`prefix_length`::
 The number of minimal prefix characters that must
 match in order to be a candidate suggestion. Defaults to 1. Increasing
 this number improves spellcheck performance. Usually misspellings don't
-occur in the beginning of terms.
+occur in the beginning of terms. (Old name "prefix_len" is deprecated)
 
-`min_word_len`::
+`min_word_length`::
 The minimum length a suggest text term must have in
-order to be included. Defaults to 4.
+order to be included. Defaults to 4. (Old name "min_word_len" is deprecated)
 
 `max_inspections`::
 A factor that is used to multiply with the
@@ -298,11 +298,11 @@ curl -s -XPOST 'localhost:9200/_search' -d {
     "direct_generator" : [ {
      "field" : "body",
      "suggest_mode" : "always",
-      "min_word_len" : 1
+      "min_word_length" : 1
    }, {
      "field" : "reverse",
      "suggest_mode" : "always",
-      "min_word_len" : 1,
+      "min_word_length" : 1,
      "pre_filter" : "reverse",
      "post_filter" : "reverse"
    } ]
@@ -62,15 +62,15 @@ doesn't take the query into account that is part of request.
 between 1 and 2. Any other value results in a bad request error being
 thrown. Defaults to 2.
 
-`prefix_len`::
+`prefix_length`::
 The number of minimal prefix characters that must
 match in order to be a candidate suggestion. Defaults to 1. Increasing
 this number improves spellcheck performance. Usually misspellings don't
-occur in the beginning of terms.
+occur in the beginning of terms. (Old name "prefix_len" is deprecated)
 
-`min_word_len`::
+`min_word_length`::
 The minimum length a suggest text term must have in
-order to be included. Defaults to 4.
+order to be included. Defaults to 4. (Old name "min_word_len" is deprecated)
 
 `shard_size`::
 Sets the maximum number of suggestions to be retrieved
@@ -35,7 +35,7 @@
       "type" : "number",
       "description" : "The maximum query terms to be included in the generated query"
     },
-    "max_word_len": {
+    "max_word_length": {
       "type" : "number",
       "description" : "The maximum length of the word: longer words will be ignored"
     },
@@ -47,7 +47,7 @@
       "type" : "number",
       "description" : "The term frequency as percent: terms with lower occurrence in the source document will be ignored"
     },
-    "min_word_len": {
+    "min_word_length": {
       "type" : "number",
       "description" : "The minimum length of the word: shorter words will be ignored"
     },
@@ -73,8 +73,8 @@ public class MoreLikeThisRequest extends ActionRequest<MoreLikeThisRequest> {
     private String[] stopWords = null;
     private int minDocFreq = -1;
     private int maxDocFreq = -1;
-    private int minWordLen = -1;
-    private int maxWordLen = -1;
+    private int minWordLength = -1;
+    private int maxWordLength = -1;
     private float boostTerms = -1;
 
     private SearchType searchType = SearchType.DEFAULT;
@@ -275,31 +275,31 @@ public class MoreLikeThisRequest extends ActionRequest<MoreLikeThisRequest> {
     /**
      * The minimum word length below which words will be ignored. Defaults to <tt>0</tt>.
      */
-    public MoreLikeThisRequest minWordLen(int minWordLen) {
-        this.minWordLen = minWordLen;
+    public MoreLikeThisRequest minWordLength(int minWordLength) {
+        this.minWordLength = minWordLength;
         return this;
     }
 
     /**
      * The minimum word length below which words will be ignored. Defaults to <tt>0</tt>.
      */
-    public int minWordLen() {
-        return this.minWordLen;
+    public int minWordLength() {
+        return this.minWordLength;
     }
 
     /**
      * The maximum word length above which words will be ignored. Defaults to unbounded.
      */
-    public MoreLikeThisRequest maxWordLen(int maxWordLen) {
-        this.maxWordLen = maxWordLen;
+    public MoreLikeThisRequest maxWordLength(int maxWordLength) {
+        this.maxWordLength = maxWordLength;
         return this;
     }
 
     /**
      * The maximum word length above which words will be ignored. Defaults to unbounded.
      */
-    public int maxWordLen() {
-        return this.maxWordLen;
+    public int maxWordLength() {
+        return this.maxWordLength;
     }
 
     /**
@@ -554,8 +554,8 @@ public class MoreLikeThisRequest extends ActionRequest<MoreLikeThisRequest> {
         }
         minDocFreq = in.readVInt();
         maxDocFreq = in.readVInt();
-        minWordLen = in.readVInt();
-        maxWordLen = in.readVInt();
+        minWordLength = in.readVInt();
+        maxWordLength = in.readVInt();
         boostTerms = in.readFloat();
         searchType = SearchType.fromId(in.readByte());
         if (in.readBoolean()) {
@@ -625,8 +625,8 @@ public class MoreLikeThisRequest extends ActionRequest<MoreLikeThisRequest> {
         }
         out.writeVInt(minDocFreq);
         out.writeVInt(maxDocFreq);
-        out.writeVInt(minWordLen);
-        out.writeVInt(maxWordLen);
+        out.writeVInt(minWordLength);
+        out.writeVInt(maxWordLength);
         out.writeFloat(boostTerms);
 
         out.writeByte(searchType.id());
@@ -120,7 +120,7 @@ public class MoreLikeThisRequestBuilder extends ActionRequestBuilder<MoreLikeThi
      * The minimum word length below which words will be ignored. Defaults to <tt>0</tt>.
      */
     public MoreLikeThisRequestBuilder setMinWordLen(int minWordLen) {
-        request.minWordLen(minWordLen);
+        request.minWordLength(minWordLen);
         return this;
     }
 
@@ -128,7 +128,7 @@ public class MoreLikeThisRequestBuilder extends ActionRequestBuilder<MoreLikeThi
      * The maximum word length above which words will be ignored. Defaults to unbounded.
      */
     public MoreLikeThisRequestBuilder setMaxWordLen(int maxWordLen) {
-        request().maxWordLen(maxWordLen);
+        request().maxWordLength(maxWordLen);
         return this;
     }
 
@@ -314,8 +314,8 @@ public class TransportMoreLikeThisAction extends TransportAction<MoreLikeThisReq
                 .boostTerms(request.boostTerms())
                 .minDocFreq(request.minDocFreq())
                 .maxDocFreq(request.maxDocFreq())
-                .minWordLen(request.minWordLen())
-                .maxWordLen(request.maxWordLen())
+                .minWordLength(request.minWordLength())
+                .maxWordLen(request.maxWordLength())
                 .minTermFreq(request.minTermFreq())
                 .maxQueryTerms(request.maxQueryTerms())
                 .stopWords(request.stopWords())
@@ -51,10 +51,18 @@ public class ParseField {
         }
     }
 
+    public String getPreferredName(){
+        return underscoreName;
+    }
+
+    public ParseField withDeprecation(String... deprecatedNames) {
+        return new ParseField(this.underscoreName, deprecatedNames);
+    }
+
     public boolean match(String currentFieldName) {
         return match(currentFieldName, EMPTY_FLAGS);
     }
 
     public boolean match(String currentFieldName, EnumSet<Flag> flags) {
         if (currentFieldName.equals(camelCaseName) || currentFieldName.equals(underscoreName)) {
             return true;
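The `match(...)` logic in the ParseField hunk above can be approximated as follows. This is a hedged sketch: the boolean `strict` flag stands in for the real `EnumSet<Flag>` parameter, and the deprecated-name branch is assumed rather than taken from the source (the hunk is cut off before that part).

```java
import java.util.Arrays;

// Sketch: a field name matches its underscore or camelCase form unconditionally;
// deprecated aliases are accepted only when not in strict mode.
public class MatchSketch {
    private final String underscoreName;
    private final String camelCaseName;
    private final String[] deprecatedNames;

    public MatchSketch(String underscoreName, String camelCaseName, String... deprecatedNames) {
        this.underscoreName = underscoreName;
        this.camelCaseName = camelCaseName;
        this.deprecatedNames = deprecatedNames;
    }

    public boolean match(String fieldName, boolean strict) {
        if (fieldName.equals(underscoreName) || fieldName.equals(camelCaseName)) {
            return true;
        }
        // In strict mode deprecated aliases are rejected; a real implementation
        // might instead emit a deprecation warning here before accepting them.
        return !strict && Arrays.asList(deprecatedNames).contains(fieldName);
    }

    public static void main(String[] args) {
        MatchSketch f = new MatchSketch("max_word_length", "maxWordLength", "max_word_len");
        System.out.println(f.match("max_word_length", true)); // true
        System.out.println(f.match("max_word_len", false));   // true: deprecated form accepted
        System.out.println(f.match("max_word_len", true));    // false: rejected in strict mode
    }
}
```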
@@ -27,6 +27,7 @@ import org.apache.lucene.document.FieldType;
 import org.apache.lucene.search.suggest.analyzing.XAnalyzingSuggester;
 import org.apache.lucene.util.BytesRef;
 import org.elasticsearch.ElasticsearchIllegalArgumentException;
+import org.elasticsearch.common.ParseField;
 import org.elasticsearch.common.xcontent.XContentBuilder;
 import org.elasticsearch.common.xcontent.XContentFactory;
 import org.elasticsearch.common.xcontent.XContentParser;
@@ -73,13 +74,13 @@ public class CompletionFieldMapper extends AbstractFieldMapper<String> {
     public static class Fields {
         // Mapping field names
         public static final String ANALYZER = "analyzer";
-        public static final String INDEX_ANALYZER = "index_analyzer";
-        public static final String SEARCH_ANALYZER = "search_analyzer";
-        public static final String PRESERVE_SEPARATORS = "preserve_separators";
-        public static final String PRESERVE_POSITION_INCREMENTS = "preserve_position_increments";
+        public static final ParseField INDEX_ANALYZER = new ParseField("index_analyzer");
+        public static final ParseField SEARCH_ANALYZER = new ParseField("search_analyzer");
+        public static final ParseField PRESERVE_SEPARATORS = new ParseField("preserve_separators");
+        public static final ParseField PRESERVE_POSITION_INCREMENTS = new ParseField("preserve_position_increments");
         public static final String PAYLOADS = "payloads";
         public static final String TYPE = "type";
-        public static final String MAX_INPUT_LENGTH = "max_input_len";
+        public static final ParseField MAX_INPUT_LENGTH = new ParseField("max_input_length", "max_input_len");
         // Content field names
         public static final String CONTENT_FIELD_NAME_INPUT = "input";
         public static final String CONTENT_FIELD_NAME_OUTPUT = "output";
@@ -119,7 +120,7 @@ public class CompletionFieldMapper extends AbstractFieldMapper<String> {
 
         public Builder maxInputLength(int maxInputLength) {
             if (maxInputLength <= 0) {
-                throw new ElasticsearchIllegalArgumentException(Fields.MAX_INPUT_LENGTH + " must be > 0 but was [" + maxInputLength + "]");
+                throw new ElasticsearchIllegalArgumentException(Fields.MAX_INPUT_LENGTH.getPreferredName() + " must be > 0 but was [" + maxInputLength + "]");
             }
             this.maxInputLength = maxInputLength;
             return this;
@@ -147,17 +148,17 @@ public class CompletionFieldMapper extends AbstractFieldMapper<String> {
                 NamedAnalyzer analyzer = getNamedAnalyzer(parserContext, fieldNode.toString());
                 builder.indexAnalyzer(analyzer);
                 builder.searchAnalyzer(analyzer);
-            } else if (fieldName.equals(Fields.INDEX_ANALYZER) || fieldName.equals("indexAnalyzer")) {
+            } else if (Fields.INDEX_ANALYZER.match(fieldName)) {
                 builder.indexAnalyzer(getNamedAnalyzer(parserContext, fieldNode.toString()));
-            } else if (fieldName.equals(Fields.SEARCH_ANALYZER) || fieldName.equals("searchAnalyzer")) {
+            } else if (Fields.SEARCH_ANALYZER.match(fieldName)) {
                 builder.searchAnalyzer(getNamedAnalyzer(parserContext, fieldNode.toString()));
             } else if (fieldName.equals(Fields.PAYLOADS)) {
                 builder.payloads(Boolean.parseBoolean(fieldNode.toString()));
-            } else if (fieldName.equals(Fields.PRESERVE_SEPARATORS) || fieldName.equals("preserveSeparators")) {
+            } else if (Fields.PRESERVE_SEPARATORS.match(fieldName)) {
                 builder.preserveSeparators(Boolean.parseBoolean(fieldNode.toString()));
-            } else if (fieldName.equals(Fields.PRESERVE_POSITION_INCREMENTS) || fieldName.equals("preservePositionIncrements")) {
+            } else if (Fields.PRESERVE_POSITION_INCREMENTS.match(fieldName)) {
                 builder.preservePositionIncrements(Boolean.parseBoolean(fieldNode.toString()));
-            } else if (fieldName.equals(Fields.MAX_INPUT_LENGTH) || fieldName.equals("maxInputLen")) {
+            } else if (Fields.MAX_INPUT_LENGTH.match(fieldName)) {
                 builder.maxInputLength(Integer.parseInt(fieldNode.toString()));
             } else {
                 throw new MapperParsingException("Unknown field [" + fieldName + "]");
@@ -347,13 +348,13 @@ public class CompletionFieldMapper extends AbstractFieldMapper<String> {
         if (indexAnalyzer.name().equals(searchAnalyzer.name())) {
             builder.field(Fields.ANALYZER, indexAnalyzer.name());
         } else {
-            builder.field(Fields.INDEX_ANALYZER, indexAnalyzer.name())
-                    .field(Fields.SEARCH_ANALYZER, searchAnalyzer.name());
+            builder.field(Fields.INDEX_ANALYZER.getPreferredName(), indexAnalyzer.name())
+                    .field(Fields.SEARCH_ANALYZER.getPreferredName(), searchAnalyzer.name());
         }
         builder.field(Fields.PAYLOADS, this.payloads);
-        builder.field(Fields.PRESERVE_SEPARATORS, this.preserveSeparators);
-        builder.field(Fields.PRESERVE_POSITION_INCREMENTS, this.preservePositionIncrements);
-        builder.field(Fields.MAX_INPUT_LENGTH, this.maxInputLength);
+        builder.field(Fields.PRESERVE_SEPARATORS.getPreferredName(), this.preserveSeparators);
+        builder.field(Fields.PRESERVE_POSITION_INCREMENTS.getPreferredName(), this.preservePositionIncrements);
+        builder.field(Fields.MAX_INPUT_LENGTH.getPreferredName(), this.maxInputLength);
         return builder.endObject();
     }
 
@@ -38,8 +38,8 @@ public class MoreLikeThisFieldQueryBuilder extends BaseQueryBuilder implements B
     private String[] stopWords = null;
     private int minDocFreq = -1;
     private int maxDocFreq = -1;
-    private int minWordLen = -1;
-    private int maxWordLen = -1;
+    private int minWordLength = -1;
+    private int maxWordLength = -1;
     private float boostTerms = -1;
     private float boost = -1;
     private String analyzer;
@@ -123,8 +123,8 @@ public class MoreLikeThisFieldQueryBuilder extends BaseQueryBuilder implements B
      * Sets the minimum word length below which words will be ignored. Defaults
      * to <tt>0</tt>.
      */
-    public MoreLikeThisFieldQueryBuilder minWordLen(int minWordLen) {
-        this.minWordLen = minWordLen;
+    public MoreLikeThisFieldQueryBuilder minWordLength(int minWordLength) {
+        this.minWordLength = minWordLength;
         return this;
     }
 
@@ -133,7 +133,7 @@ public class MoreLikeThisFieldQueryBuilder extends BaseQueryBuilder implements B
      * unbounded (<tt>0</tt>).
      */
     public MoreLikeThisFieldQueryBuilder maxWordLen(int maxWordLen) {
-        this.maxWordLen = maxWordLen;
+        this.maxWordLength = maxWordLen;
         return this;
     }
 
@@ -179,39 +179,40 @@ public class MoreLikeThisFieldQueryBuilder extends BaseQueryBuilder implements B
         builder.startObject(MoreLikeThisFieldQueryParser.NAME);
         builder.startObject(name);
         if (likeText == null) {
-            throw new ElasticsearchIllegalArgumentException("moreLikeThisField requires 'like_text' to be provided");
+            throw new ElasticsearchIllegalArgumentException("moreLikeThisField requires '"+
+                    MoreLikeThisQueryParser.Fields.LIKE_TEXT.getPreferredName() +"' to be provided");
         }
-        builder.field("like_text", likeText);
+        builder.field(MoreLikeThisQueryParser.Fields.LIKE_TEXT.getPreferredName(), likeText);
         if (percentTermsToMatch != -1) {
-            builder.field("percent_terms_to_match", percentTermsToMatch);
+            builder.field(MoreLikeThisQueryParser.Fields.PERCENT_TERMS_TO_MATCH.getPreferredName(), percentTermsToMatch);
         }
         if (minTermFreq != -1) {
-            builder.field("min_term_freq", minTermFreq);
+            builder.field(MoreLikeThisQueryParser.Fields.MIN_TERM_FREQ.getPreferredName(), minTermFreq);
         }
         if (maxQueryTerms != -1) {
-            builder.field("max_query_terms", maxQueryTerms);
+            builder.field(MoreLikeThisQueryParser.Fields.MAX_QUERY_TERMS.getPreferredName(), maxQueryTerms);
         }
         if (stopWords != null && stopWords.length > 0) {
-            builder.startArray("stop_words");
+            builder.startArray(MoreLikeThisQueryParser.Fields.STOP_WORDS.getPreferredName());
             for (String stopWord : stopWords) {
                 builder.value(stopWord);
             }
             builder.endArray();
         }
         if (minDocFreq != -1) {
-            builder.field("min_doc_freq", minDocFreq);
+            builder.field(MoreLikeThisQueryParser.Fields.MIN_DOC_FREQ.getPreferredName(), minDocFreq);
         }
         if (maxDocFreq != -1) {
-            builder.field("max_doc_freq", maxDocFreq);
+            builder.field(MoreLikeThisQueryParser.Fields.MAX_DOC_FREQ.getPreferredName(), maxDocFreq);
         }
-        if (minWordLen != -1) {
-            builder.field("min_word_len", minWordLen);
+        if (minWordLength != -1) {
+            builder.field(MoreLikeThisQueryParser.Fields.MIN_WORD_LENGTH.getPreferredName(), minWordLength);
         }
-        if (maxWordLen != -1) {
-            builder.field("max_word_len", maxWordLen);
+        if (maxWordLength != -1) {
+            builder.field(MoreLikeThisQueryParser.Fields.MAX_WORD_LENGTH.getPreferredName(), maxWordLength);
         }
         if (boostTerms != -1) {
-            builder.field("boost_terms", boostTerms);
+            builder.field(MoreLikeThisQueryParser.Fields.BOOST_TERMS.getPreferredName(), boostTerms);
         }
         if (boost != -1) {
             builder.field("boost", boost);
@@ -220,7 +221,7 @@ public class MoreLikeThisFieldQueryBuilder extends BaseQueryBuilder implements B
             builder.field("analyzer", analyzer);
         }
         if (failOnUnsupportedField != null) {
-            builder.field("fail_on_unsupported_field", failOnUnsupportedField);
+            builder.field(MoreLikeThisQueryParser.Fields.FAIL_ON_UNSUPPORTED_FIELD.getPreferredName(), failOnUnsupportedField);
         }
         if (queryName != null) {
             builder.field("_name", queryName);
@@ -51,6 +51,7 @@ public class MoreLikeThisFieldQueryParser implements QueryParser {
         return new String[]{NAME, "more_like_this_field", Strings.toCamelCase(NAME), "moreLikeThisField"};
     }
 
+
     @Override
     public Query parse(QueryParseContext parseContext) throws IOException, QueryParsingException {
         XContentParser parser = parseContext.parser();
@@ -75,30 +76,30 @@ public class MoreLikeThisFieldQueryParser implements QueryParser {
             if (token == XContentParser.Token.FIELD_NAME) {
                 currentFieldName = parser.currentName();
             } else if (token.isValue()) {
-                if ("like_text".equals(currentFieldName)) {
+                if (MoreLikeThisQueryParser.Fields.LIKE_TEXT.match(currentFieldName, parseContext.parseFlags())) {
                     mltQuery.setLikeText(parser.text());
-                } else if ("min_term_freq".equals(currentFieldName) || "minTermFreq".equals(currentFieldName)) {
+                } else if (MoreLikeThisQueryParser.Fields.MIN_TERM_FREQ.match(currentFieldName, parseContext.parseFlags())) {
                     mltQuery.setMinTermFrequency(parser.intValue());
-                } else if ("max_query_terms".equals(currentFieldName) || "maxQueryTerms".equals(currentFieldName)) {
+                } else if (MoreLikeThisQueryParser.Fields.MAX_QUERY_TERMS.match(currentFieldName, parseContext.parseFlags())) {
                     mltQuery.setMaxQueryTerms(parser.intValue());
-                } else if ("min_doc_freq".equals(currentFieldName) || "minDocFreq".equals(currentFieldName)) {
+                } else if (MoreLikeThisQueryParser.Fields.MIN_DOC_FREQ.match(currentFieldName, parseContext.parseFlags())) {
                     mltQuery.setMinDocFreq(parser.intValue());
-                } else if ("max_doc_freq".equals(currentFieldName) || "maxDocFreq".equals(currentFieldName)) {
+                } else if (MoreLikeThisQueryParser.Fields.MAX_DOC_FREQ.match(currentFieldName, parseContext.parseFlags())) {
                     mltQuery.setMaxDocFreq(parser.intValue());
-                } else if ("min_word_len".equals(currentFieldName) || "minWordLen".equals(currentFieldName)) {
+                } else if (MoreLikeThisQueryParser.Fields.MIN_WORD_LENGTH.match(currentFieldName, parseContext.parseFlags())) {
                     mltQuery.setMinWordLen(parser.intValue());
-                } else if ("max_word_len".equals(currentFieldName) || "maxWordLen".equals(currentFieldName)) {
+                } else if (MoreLikeThisQueryParser.Fields.MAX_WORD_LENGTH.match(currentFieldName, parseContext.parseFlags())) {
                     mltQuery.setMaxWordLen(parser.intValue());
-                } else if ("boost_terms".equals(currentFieldName) || "boostTerms".equals(currentFieldName)) {
+                } else if (MoreLikeThisQueryParser.Fields.BOOST_TERMS.match(currentFieldName, parseContext.parseFlags())) {
                     mltQuery.setBoostTerms(true);
                     mltQuery.setBoostTermsFactor(parser.floatValue());
-                } else if ("percent_terms_to_match".equals(currentFieldName) || "percentTermsToMatch".equals(currentFieldName)) {
+                } else if (MoreLikeThisQueryParser.Fields.PERCENT_TERMS_TO_MATCH.match(currentFieldName, parseContext.parseFlags())) {
                     mltQuery.setPercentTermsToMatch(parser.floatValue());
                 } else if ("analyzer".equals(currentFieldName)) {
                     analyzer = parseContext.analysisService().analyzer(parser.text());
                 } else if ("boost".equals(currentFieldName)) {
                     mltQuery.setBoost(parser.floatValue());
-                } else if ("fail_on_unsupported_field".equals(currentFieldName) || "failOnUnsupportedField".equals(currentFieldName)) {
+                } else if (MoreLikeThisQueryParser.Fields.FAIL_ON_UNSUPPORTED_FIELD.match(currentFieldName, parseContext.parseFlags())) {
                     failOnUnsupportedField = parser.booleanValue();
                 } else if ("_name".equals(currentFieldName)) {
                     queryName = parser.text();
@@ -106,7 +107,8 @@ public class MoreLikeThisFieldQueryParser implements QueryParser {
                     throw new QueryParsingException(parseContext.index(), "[mlt_field] query does not support [" + currentFieldName + "]");
                 }
             } else if (token == XContentParser.Token.START_ARRAY) {
-                if ("stop_words".equals(currentFieldName) || "stopWords".equals(currentFieldName)) {
+                if (MoreLikeThisQueryParser.Fields.STOP_WORDS.match(currentFieldName, parseContext.parseFlags())) {
+
                     Set<String> stopWords = Sets.newHashSet();
                     while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {
                         stopWords.add(parser.text());
@@ -39,8 +39,8 @@ public class MoreLikeThisQueryBuilder extends BaseQueryBuilder implements Boosta
     private String[] stopWords = null;
     private int minDocFreq = -1;
     private int maxDocFreq = -1;
-    private int minWordLen = -1;
-    private int maxWordLen = -1;
+    private int minWordLength = -1;
+    private int maxWordLength = -1;
     private float boostTerms = -1;
     private float boost = -1;
     private String analyzer;
@@ -131,8 +131,8 @@ public class MoreLikeThisQueryBuilder extends BaseQueryBuilder implements Boosta
      * Sets the minimum word length below which words will be ignored. Defaults
      * to <tt>0</tt>.
      */
-    public MoreLikeThisQueryBuilder minWordLen(int minWordLen) {
-        this.minWordLen = minWordLen;
+    public MoreLikeThisQueryBuilder minWordLength(int minWordLength) {
+        this.minWordLength = minWordLength;
         return this;
     }
 
@@ -140,8 +140,8 @@ public class MoreLikeThisQueryBuilder extends BaseQueryBuilder implements Boosta
      * Sets the maximum word length above which words will be ignored. Defaults to
      * unbounded (<tt>0</tt>).
      */
-    public MoreLikeThisQueryBuilder maxWordLen(int maxWordLen) {
-        this.maxWordLen = maxWordLen;
+    public MoreLikeThisQueryBuilder maxWordLength(int maxWordLength) {
+        this.maxWordLength = maxWordLength;
         return this;
     }
 
@@ -193,39 +193,40 @@ public class MoreLikeThisQueryBuilder extends BaseQueryBuilder implements Boosta
             builder.endArray();
         }
         if (likeText == null) {
-            throw new ElasticsearchIllegalArgumentException("moreLikeThis requires 'likeText' to be provided");
+            throw new ElasticsearchIllegalArgumentException("moreLikeThis requires '"+
+                    MoreLikeThisQueryParser.Fields.LIKE_TEXT.getPreferredName() +"' to be provided");
         }
-        builder.field("like_text", likeText);
+        builder.field(MoreLikeThisQueryParser.Fields.LIKE_TEXT.getPreferredName(), likeText);
         if (percentTermsToMatch != -1) {
-            builder.field("percent_terms_to_match", percentTermsToMatch);
+            builder.field(MoreLikeThisQueryParser.Fields.PERCENT_TERMS_TO_MATCH.getPreferredName(), percentTermsToMatch);
         }
         if (minTermFreq != -1) {
-            builder.field("min_term_freq", minTermFreq);
+            builder.field(MoreLikeThisQueryParser.Fields.MIN_TERM_FREQ.getPreferredName(), minTermFreq);
         }
         if (maxQueryTerms != -1) {
-            builder.field("max_query_terms", maxQueryTerms);
+            builder.field(MoreLikeThisQueryParser.Fields.MAX_QUERY_TERMS.getPreferredName(), maxQueryTerms);
         }
         if (stopWords != null && stopWords.length > 0) {
-            builder.startArray("stop_words");
+            builder.startArray(MoreLikeThisQueryParser.Fields.STOP_WORDS.getPreferredName());
             for (String stopWord : stopWords) {
                 builder.value(stopWord);
             }
             builder.endArray();
         }
         if (minDocFreq != -1) {
-            builder.field("min_doc_freq", minDocFreq);
+            builder.field(MoreLikeThisQueryParser.Fields.MIN_DOC_FREQ.getPreferredName(), minDocFreq);
         }
         if (maxDocFreq != -1) {
-            builder.field("max_doc_freq", maxDocFreq);
+            builder.field(MoreLikeThisQueryParser.Fields.MAX_DOC_FREQ.getPreferredName(), maxDocFreq);
         }
-        if (minWordLen != -1) {
-            builder.field("min_word_len", minWordLen);
+        if (minWordLength != -1) {
+            builder.field(MoreLikeThisQueryParser.Fields.MIN_WORD_LENGTH.getPreferredName(), minWordLength);
         }
-        if (maxWordLen != -1) {
-            builder.field("max_word_len", maxWordLen);
+        if (maxWordLength != -1) {
+            builder.field(MoreLikeThisQueryParser.Fields.MAX_WORD_LENGTH.getPreferredName(), maxWordLength);
         }
         if (boostTerms != -1) {
-            builder.field("boost_terms", boostTerms);
+            builder.field(MoreLikeThisQueryParser.Fields.BOOST_TERMS.getPreferredName(), boostTerms);
         }
         if (boost != -1) {
             builder.field("boost", boost);
@@ -234,7 +235,7 @@ public class MoreLikeThisQueryBuilder extends BaseQueryBuilder implements Boosta
             builder.field("analyzer", analyzer);
         }
         if (failOnUnsupportedField != null) {
-            builder.field("fail_on_unsupported_field", failOnUnsupportedField);
+            builder.field(MoreLikeThisQueryParser.Fields.FAIL_ON_UNSUPPORTED_FIELD.getPreferredName(), failOnUnsupportedField);
         }
         if (queryName != null) {
             builder.field("_name", queryName);
@@ -24,6 +24,7 @@ import com.google.common.collect.Sets;
 import org.apache.lucene.analysis.Analyzer;
 import org.apache.lucene.search.Query;
 import org.elasticsearch.ElasticsearchIllegalArgumentException;
+import org.elasticsearch.common.ParseField;
 import org.elasticsearch.common.Strings;
 import org.elasticsearch.common.inject.Inject;
 import org.elasticsearch.common.lucene.search.MoreLikeThisQuery;
@@ -42,6 +43,21 @@ public class MoreLikeThisQueryParser implements QueryParser {
 
     public static final String NAME = "mlt";
 
+
+    public static class Fields {
+        public static final ParseField LIKE_TEXT = new ParseField("like_text");
+        public static final ParseField MIN_TERM_FREQ = new ParseField("min_term_freq");
+        public static final ParseField MAX_QUERY_TERMS = new ParseField("max_query_terms");
+        public static final ParseField MIN_WORD_LENGTH = new ParseField("min_word_length", "min_word_len");
+        public static final ParseField MAX_WORD_LENGTH = new ParseField("max_word_length", "max_word_len");
+        public static final ParseField MIN_DOC_FREQ = new ParseField("min_doc_freq");
+        public static final ParseField MAX_DOC_FREQ = new ParseField("max_doc_freq");
+        public static final ParseField BOOST_TERMS = new ParseField("boost_terms");
+        public static final ParseField PERCENT_TERMS_TO_MATCH = new ParseField("percent_terms_to_match");
+        public static final ParseField FAIL_ON_UNSUPPORTED_FIELD = new ParseField("fail_on_unsupported_field");
+        public static final ParseField STOP_WORDS = new ParseField("stop_words");
+    }
+
     @Inject
     public MoreLikeThisQueryParser() {
     }
@@ -68,38 +84,38 @@ public class MoreLikeThisQueryParser implements QueryParser {
             if (token == XContentParser.Token.FIELD_NAME) {
                 currentFieldName = parser.currentName();
             } else if (token.isValue()) {
-                if ("like_text".equals(currentFieldName) || "likeText".equals(currentFieldName)) {
+                if (Fields.LIKE_TEXT.match(currentFieldName, parseContext.parseFlags())) {
                     mltQuery.setLikeText(parser.text());
-                } else if ("min_term_freq".equals(currentFieldName) || "minTermFreq".equals(currentFieldName)) {
+                } else if (Fields.MIN_TERM_FREQ.match(currentFieldName, parseContext.parseFlags())) {
                     mltQuery.setMinTermFrequency(parser.intValue());
-                } else if ("max_query_terms".equals(currentFieldName) || "maxQueryTerms".equals(currentFieldName)) {
+                } else if (Fields.MAX_QUERY_TERMS.match(currentFieldName, parseContext.parseFlags())) {
                     mltQuery.setMaxQueryTerms(parser.intValue());
-                } else if ("min_doc_freq".equals(currentFieldName) || "minDocFreq".equals(currentFieldName)) {
+                } else if (Fields.MIN_DOC_FREQ.match(currentFieldName, parseContext.parseFlags())) {
                     mltQuery.setMinDocFreq(parser.intValue());
-                } else if ("max_doc_freq".equals(currentFieldName) || "maxDocFreq".equals(currentFieldName)) {
+                } else if (Fields.MAX_DOC_FREQ.match(currentFieldName, parseContext.parseFlags())) {
                     mltQuery.setMaxDocFreq(parser.intValue());
-                } else if ("min_word_len".equals(currentFieldName) || "minWordLen".equals(currentFieldName)) {
+                } else if (Fields.MIN_WORD_LENGTH.match(currentFieldName, parseContext.parseFlags())) {
                     mltQuery.setMinWordLen(parser.intValue());
-                } else if ("max_word_len".equals(currentFieldName) || "maxWordLen".equals(currentFieldName)) {
+                } else if (Fields.MAX_WORD_LENGTH.match(currentFieldName, parseContext.parseFlags())) {
                     mltQuery.setMaxWordLen(parser.intValue());
-                } else if ("boost_terms".equals(currentFieldName) || "boostTerms".equals(currentFieldName)) {
+                } else if (Fields.BOOST_TERMS.match(currentFieldName, parseContext.parseFlags())) {
                     mltQuery.setBoostTerms(true);
                     mltQuery.setBoostTermsFactor(parser.floatValue());
-                } else if ("percent_terms_to_match".equals(currentFieldName) || "percentTermsToMatch".equals(currentFieldName)) {
+                } else if (Fields.PERCENT_TERMS_TO_MATCH.match(currentFieldName, parseContext.parseFlags())) {
                     mltQuery.setPercentTermsToMatch(parser.floatValue());
                 } else if ("analyzer".equals(currentFieldName)) {
|
||||
analyzer = parseContext.analysisService().analyzer(parser.text());
|
||||
} else if ("boost".equals(currentFieldName)) {
|
||||
mltQuery.setBoost(parser.floatValue());
|
||||
} else if ("fail_on_unsupported_field".equals(currentFieldName) || "failOnUnsupportedField".equals(currentFieldName)) {
|
||||
} else if (Fields.FAIL_ON_UNSUPPORTED_FIELD.match(currentFieldName, parseContext.parseFlags())) {
|
||||
failOnUnsupportedField = parser.booleanValue();
|
||||
} else if ("_name".equals(currentFieldName)) {
|
||||
queryName = parser.text();
|
||||
} else {
|
||||
throw new QueryParsingException(parseContext.index(), "[mlt] query does not support [" + currentFieldName + "]");
|
||||
}
|
||||
} else if (token == XContentParser.Token.START_ARRAY) {
|
||||
if ("stop_words".equals(currentFieldName) || "stopWords".equals(currentFieldName)) {
|
||||
} else if (token == XContentParser.Token.START_ARRAY) {
|
||||
if (Fields.STOP_WORDS.match(currentFieldName, parseContext.parseFlags())) {
|
||||
Set<String> stopWords = Sets.newHashSet();
|
||||
while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {
|
||||
stopWords.add(parser.text());
|
||||
|
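The deprecated-alias matching that the `ParseField` constants above rely on can be sketched as a tiny stand-alone class. This is a simplified stand-in, not the actual Elasticsearch `ParseField` (the real class also takes parse flags such as `parseContext.parseFlags()` to decide whether deprecated names are still accepted, and additionally matches camelCase variants like `minWordLen`):

```java
// Minimal sketch of the ParseField idea: one preferred field name plus any
// number of deprecated aliases, matched in a single place so the parser and
// the builder stay consistent about which names are valid.
public class ParseFieldSketch {
    private final String preferred;
    private final String[] deprecated;

    public ParseFieldSketch(String preferred, String... deprecated) {
        this.preferred = preferred;
        this.deprecated = deprecated;
    }

    /** True if fieldName is the preferred name or any deprecated alias. */
    public boolean match(String fieldName) {
        if (preferred.equals(fieldName)) {
            return true;
        }
        for (String old : deprecated) {
            if (old.equals(fieldName)) {
                return true;
            }
        }
        return false;
    }

    /** The non-deprecated name, e.g. for serializing back out. */
    public String getPreferredName() {
        return preferred;
    }

    public static void main(String[] args) {
        ParseFieldSketch minWordLength = new ParseFieldSketch("min_word_length", "min_word_len");
        System.out.println(minWordLength.match("min_word_length")); // true
        System.out.println(minWordLength.match("min_word_len"));    // true (deprecated alias)
        System.out.println(minWordLength.match("min_word"));        // false
    }
}
```

Centralizing the aliases in one constant is what gives the commit its compiler-checked consistency: both the query parser and the builder reference the same `ParseField`, so a rename cannot silently diverge between them.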
@@ -59,6 +59,9 @@ public class RestMoreLikeThisAction extends BaseRestHandler {

         mltRequest.listenerThreaded(false);
         try {
+            //TODO the ParseField class that encapsulates the supported names used for an attribute
+            //needs some work if it is to be used in a REST context like this too
+            // See the MoreLikeThisQueryParser constants that hold the valid syntax
             mltRequest.fields(request.paramAsStringArray("mlt_fields", null));
             mltRequest.percentTermsToMatch(request.paramAsFloat("percent_terms_to_match", -1));
             mltRequest.minTermFreq(request.paramAsInt("min_term_freq", -1));
@@ -66,8 +69,8 @@ public class RestMoreLikeThisAction extends BaseRestHandler {
             mltRequest.stopWords(request.paramAsStringArray("stop_words", null));
             mltRequest.minDocFreq(request.paramAsInt("min_doc_freq", -1));
             mltRequest.maxDocFreq(request.paramAsInt("max_doc_freq", -1));
-            mltRequest.minWordLen(request.paramAsInt("min_word_len", -1));
-            mltRequest.maxWordLen(request.paramAsInt("max_word_len", -1));
+            mltRequest.minWordLength(request.paramAsInt("min_word_len", request.paramAsInt("min_word_length", -1)));
+            mltRequest.maxWordLength(request.paramAsInt("max_word_len", request.paramAsInt("max_word_length", -1)));
             mltRequest.boostTerms(request.paramAsFloat("boost_terms", -1));

             mltRequest.searchType(SearchType.fromString(request.param("search_type")));
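The REST hunk above accepts both parameter spellings by nesting the lookups: the deprecated name is consulted first, and its default value is itself a lookup of the new name. A self-contained sketch of that fallback, using a plain `Map` as a stand-in for the actual `RestRequest` API:

```java
import java.util.Map;

// Sketch of the dual-name REST parameter fallback from the diff above.
// The Map stands in for RestRequest; paramAsInt mirrors its semantics:
// return the parsed value if the key is present, else the default.
public class ParamFallbackSketch {
    static int paramAsInt(Map<String, String> params, String key, int defaultValue) {
        String v = params.get(key);
        return v == null ? defaultValue : Integer.parseInt(v);
    }

    /** Deprecated "min_word_len" wins if present; else "min_word_length"; else -1. */
    static int minWordLength(Map<String, String> params) {
        return paramAsInt(params, "min_word_len", paramAsInt(params, "min_word_length", -1));
    }

    public static void main(String[] args) {
        System.out.println(minWordLength(Map.of("min_word_length", "3"))); // 3
        System.out.println(minWordLength(Map.of("min_word_len", "2")));    // 2
        System.out.println(minWordLength(Map.of()));                       // -1 (unset)
    }
}
```

Note the precedence this encodes: if a client sends both forms, the deprecated `min_word_len` is the outer lookup and therefore takes priority.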
@@ -18,30 +18,18 @@
  */
 package org.elasticsearch.search.suggest;

-import java.io.IOException;
-import java.util.Comparator;
-import java.util.Locale;
-
 import org.apache.lucene.analysis.Analyzer;
 import org.apache.lucene.analysis.TokenStream;
 import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
 import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;
 import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;
-import org.apache.lucene.search.spell.DirectSpellChecker;
-import org.apache.lucene.search.spell.JaroWinklerDistance;
-import org.apache.lucene.search.spell.LevensteinDistance;
-import org.apache.lucene.search.spell.LuceneLevenshteinDistance;
-import org.apache.lucene.search.spell.NGramDistance;
-import org.apache.lucene.search.spell.StringDistance;
-import org.apache.lucene.search.spell.SuggestMode;
-import org.apache.lucene.search.spell.SuggestWord;
-import org.apache.lucene.search.spell.SuggestWordFrequencyComparator;
-import org.apache.lucene.search.spell.SuggestWordQueue;
+import org.apache.lucene.search.spell.*;
 import org.apache.lucene.util.BytesRef;
 import org.apache.lucene.util.CharsRef;
 import org.apache.lucene.util.UnicodeUtil;
 import org.apache.lucene.util.automaton.LevenshteinAutomata;
 import org.elasticsearch.ElasticsearchIllegalArgumentException;
+import org.elasticsearch.common.ParseField;
 import org.elasticsearch.common.io.FastCharArrayReader;
 import org.elasticsearch.common.xcontent.XContentParser;
 import org.elasticsearch.index.analysis.CustomAnalyzer;
@@ -51,6 +39,10 @@ import org.elasticsearch.index.analysis.TokenFilterFactory;
 import org.elasticsearch.index.mapper.MapperService;
 import org.elasticsearch.search.suggest.SuggestionSearchContext.SuggestionContext;

+import java.io.IOException;
+import java.util.Comparator;
+import java.util.Locale;
+
 public final class SuggestUtils {
     public static Comparator<SuggestWord> LUCENE_FREQUENCY = new SuggestWordFrequencyComparator();
     public static Comparator<SuggestWord> SCORE_COMPARATOR = SuggestWordQueue.DEFAULT_COMPARATOR;
@@ -193,6 +185,7 @@ public final class SuggestUtils {
             return new LuceneLevenshteinDistance();
         } else if ("levenstein".equals(distanceVal)) {
             return new LevensteinDistance();
+            //TODO Jaro and Winkler are 2 people - so apply same naming logic as damerau_levenshtein
         } else if ("jarowinkler".equals(distanceVal)) {
             return new JaroWinklerDistance();
         } else if ("ngram".equals(distanceVal)) {
@@ -202,30 +195,45 @@ public final class SuggestUtils {
         }
     }

+    public static class Fields {
+        public static final ParseField STRING_DISTANCE = new ParseField("string_distance");
+        public static final ParseField SUGGEST_MODE = new ParseField("suggest_mode");
+        public static final ParseField MAX_EDITS = new ParseField("max_edits");
+        public static final ParseField MAX_INSPECTIONS = new ParseField("max_inspections");
+        // TODO some of these constants are the same as MLT constants and
+        // could be moved to a shared class for maintaining consistency across
+        // the platform
+        public static final ParseField MAX_TERM_FREQ = new ParseField("max_term_freq");
+        public static final ParseField PREFIX_LENGTH = new ParseField("prefix_length", "prefix_len");
+        public static final ParseField MIN_WORD_LENGTH = new ParseField("min_word_length", "min_word_len");
+        public static final ParseField MIN_DOC_FREQ = new ParseField("min_doc_freq");
+        public static final ParseField SHARD_SIZE = new ParseField("shard_size");
+    }
+
     public static boolean parseDirectSpellcheckerSettings(XContentParser parser, String fieldName,
             DirectSpellcheckerSettings suggestion) throws IOException {
         if ("accuracy".equals(fieldName)) {
             suggestion.accuracy(parser.floatValue());
-        } else if ("suggest_mode".equals(fieldName) || "suggestMode".equals(fieldName)) {
+        } else if (Fields.SUGGEST_MODE.match(fieldName)) {
             suggestion.suggestMode(SuggestUtils.resolveSuggestMode(parser.text()));
         } else if ("sort".equals(fieldName)) {
             suggestion.sort(SuggestUtils.resolveSort(parser.text()));
-        } else if ("string_distance".equals(fieldName) || "stringDistance".equals(fieldName)) {
+        } else if (Fields.STRING_DISTANCE.match(fieldName)) {
             suggestion.stringDistance(SuggestUtils.resolveDistance(parser.text()));
-        } else if ("max_edits".equals(fieldName) || "maxEdits".equals(fieldName)) {
+        } else if (Fields.MAX_EDITS.match(fieldName)) {
             suggestion.maxEdits(parser.intValue());
             if (suggestion.maxEdits() < 1 || suggestion.maxEdits() > LevenshteinAutomata.MAXIMUM_SUPPORTED_DISTANCE) {
                 throw new ElasticsearchIllegalArgumentException("Illegal max_edits value " + suggestion.maxEdits());
             }
-        } else if ("max_inspections".equals(fieldName) || "maxInspections".equals(fieldName)) {
+        } else if (Fields.MAX_INSPECTIONS.match(fieldName)) {
             suggestion.maxInspections(parser.intValue());
-        } else if ("max_term_freq".equals(fieldName) || "maxTermFreq".equals(fieldName)) {
+        } else if (Fields.MAX_TERM_FREQ.match(fieldName)) {
             suggestion.maxTermFreq(parser.floatValue());
-        } else if ("prefix_len".equals(fieldName) || "prefixLen".equals(fieldName)) {
+        } else if (Fields.PREFIX_LENGTH.match(fieldName)) {
             suggestion.prefixLength(parser.intValue());
-        } else if ("min_word_len".equals(fieldName) || "minWordLen".equals(fieldName)) {
+        } else if (Fields.MIN_WORD_LENGTH.match(fieldName)) {
             suggestion.minQueryLength(parser.intValue());
-        } else if ("min_doc_freq".equals(fieldName) || "minDocFreq".equals(fieldName)) {
+        } else if (Fields.MIN_DOC_FREQ.match(fieldName)) {
             suggestion.minDocFreq(parser.floatValue());
         } else {
             return false;
@@ -247,7 +255,7 @@ public final class SuggestUtils {
             suggestion.setField(parser.text());
         } else if ("size".equals(fieldName)) {
             suggestion.setSize(parser.intValue());
-        } else if ("shard_size".equals(fieldName) || "shardSize".equals(fieldName)) {
+        } else if (Fields.SHARD_SIZE.match(fieldName)) {
             suggestion.setShardSize(parser.intValue());
         } else {
             return false;
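One detail worth pulling out of the `max_edits` branch above is the bounds check. A stand-alone sketch, where the constant `2` mirrors Lucene's `LevenshteinAutomata.MAXIMUM_SUPPORTED_DISTANCE` and `IllegalArgumentException` stands in for `ElasticsearchIllegalArgumentException`:

```java
// Sketch of the max_edits validation: Lucene's Levenshtein automata only
// support edit distances 1 and 2, so anything outside that range is rejected
// at parse time rather than failing deep inside the suggester.
public class MaxEditsCheck {
    // Stand-in for LevenshteinAutomata.MAXIMUM_SUPPORTED_DISTANCE.
    static final int MAXIMUM_SUPPORTED_DISTANCE = 2;

    static int validateMaxEdits(int maxEdits) {
        if (maxEdits < 1 || maxEdits > MAXIMUM_SUPPORTED_DISTANCE) {
            throw new IllegalArgumentException("Illegal max_edits value " + maxEdits);
        }
        return maxEdits;
    }

    public static void main(String[] args) {
        System.out.println(validateMaxEdits(2)); // 2
        try {
            validateMaxEdits(3);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```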
@@ -590,10 +590,10 @@ public final class PhraseSuggestionBuilder extends SuggestionBuilder<PhraseSugge
             builder.field("max_term_freq", maxTermFreq);
         }
         if (prefixLength != null) {
-            builder.field("prefix_len", prefixLength);
+            builder.field("prefix_length", prefixLength);
         }
         if (minWordLength != null) {
-            builder.field("min_word_len", minWordLength);
+            builder.field("min_word_length", minWordLength);
         }
         if (minDocFreq != null) {
             builder.field("min_doc_freq", minDocFreq);
@@ -17,11 +17,11 @@
  * under the License.
  */
 package org.elasticsearch.search.suggest.term;

-import java.io.IOException;
-
 import org.elasticsearch.common.xcontent.XContentBuilder;
 import org.elasticsearch.search.suggest.SuggestBuilder.SuggestionBuilder;

+import java.io.IOException;
+
 /**
  * Defines the actual suggest command. Each command uses the global options
  * unless defined in the suggestion itself. All options are the same as the
@@ -211,10 +211,10 @@ public class TermSuggestionBuilder extends SuggestionBuilder<TermSuggestionBuild
             builder.field("max_term_freq", maxTermFreq);
         }
         if (prefixLength != null) {
-            builder.field("prefix_len", prefixLength);
+            builder.field("prefix_length", prefixLength);
         }
         if (minWordLength != null) {
-            builder.field("min_word_len", minWordLength);
+            builder.field("min_word_length", minWordLength);
         }
         if (minDocFreq != null) {
             builder.field("min_doc_freq", minDocFreq);
@@ -31,7 +31,6 @@ import java.io.IOException;
 import java.util.Map;

 import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;
 import static org.hamcrest.MatcherAssert.assertThat;
 import static org.hamcrest.Matchers.instanceOf;
 import static org.hamcrest.Matchers.is;

@@ -64,7 +63,7 @@ public class CompletionFieldMapperTests extends ElasticsearchTestCase {
                 .field("payloads", true)
                 .field("preserve_separators", false)
                 .field("preserve_position_increments", true)
-                .field("max_input_len", 14)
+                .field("max_input_length", 14)

                 .endObject().endObject()
                 .endObject().endObject().string();
@@ -85,7 +84,7 @@ public class CompletionFieldMapperTests extends ElasticsearchTestCase {
         assertThat(Boolean.valueOf(configMap.get("payloads").toString()), is(true));
         assertThat(Boolean.valueOf(configMap.get("preserve_separators").toString()), is(false));
         assertThat(Boolean.valueOf(configMap.get("preserve_position_increments").toString()), is(true));
-        assertThat(Integer.valueOf(configMap.get("max_input_len").toString()), is(14));
+        assertThat(Integer.valueOf(configMap.get("max_input_length").toString()), is(14));
     }

     @Test
@@ -999,7 +999,7 @@ public class CompletionSuggestSearchTests extends ElasticsearchIntegrationTest {
                 .startObject(TYPE).startObject("properties")
                 .startObject(FIELD)
                 .field("type", "completion")
-                .field("max_input_len", maxInputLen)
+                .field("max_input_length", maxInputLen)
                 // upgrade mapping each time
                 .field("analyzer", "keyword")
                 .endObject()
@@ -1038,7 +1038,7 @@ public class CompletionSuggestSearchTests extends ElasticsearchIntegrationTest {
                 .endObject().endObject()
                 .endObject()));
         ensureYellow();
-        // can cause stack overflow without the default max_input_len
+        // can cause stack overflow without the default max_input_length
         String longString = replaceReservedChars(randomRealisticUnicodeOfLength(atLeast(5000)), (char) 0x01);
         client().prepareIndex(INDEX, TYPE, "1").setSource(jsonBuilder()
                 .startObject().startObject(FIELD)
@@ -1061,7 +1061,7 @@ public class CompletionSuggestSearchTests extends ElasticsearchIntegrationTest {
                 .endObject().endObject()
                 .endObject()));
         ensureYellow();
-        // can cause stack overflow without the default max_input_len
+        // can cause stack overflow without the default max_input_length
         String string = "foo" + (char) 0x00 + "bar";
         client().prepareIndex(INDEX, TYPE, "1").setSource(jsonBuilder()
                 .startObject().startObject(FIELD)