Commit Graph

4634 Commits

Author SHA1 Message Date
Shay Banon a39469a252 gather the field data that are changed
(we will make use of that later)
2013-01-24 15:55:23 +01:00
Martijn van Groningen 98a674fc6e Added suggest api.
# Suggest feature
The suggest feature suggests similar looking terms based on a provided text by using a suggester. At the moment the only supported suggester is `fuzzy`. The suggest feature is available as of version `0.21.0`.

# Fuzzy suggester
The `fuzzy` suggester suggests terms based on edit distance. The provided suggest text is analyzed before terms are suggested. The suggested terms are provided per analyzed suggest text token. The `fuzzy` suggester doesn't take into account the query that is part of the request.

# Suggest API
The suggest request part is defined alongside the query part as a top-level field in the JSON request.

```
curl -s -XPOST 'localhost:9200/_search' -d '{
    "query" : {
        ...
    },
    "suggest" : {
        ...
    }
}'
```

Several suggestions can be specified per request. Each suggestion is identified with an arbitrary name. In the example below two suggestions are requested: the `my-suggest-1` suggestion uses the `body` field and `my-suggest-2` uses the `title` field. The `type` field is required and defines which suggester to use for a suggestion.

```
"suggest" : {
    "suggestions" : {
        "my-suggest-1" : {
            "type" : "fuzzy",
            "field" : "body",
            "text" : "the amsterdma meetpu"
        },
        "my-suggest-2" : {
            "type" : "fuzzy",
            "field" : "title",
            "text" : "the rottredam meetpu"
        }
    }
}
```

The suggest response example below includes the suggestion parts for `my-suggest-1` and `my-suggest-2`. Each suggestion part contains a terms array with an entry for every term produced by analyzing the suggest text. Each term object includes the term itself, its original start and end offset in the suggest text and, if found, an arbitrary number of suggestions.

```
{
    ...
    "suggest": {
        "my-suggest-1": {
            "terms" : [
              {
                "term" : "amsterdma",
                "start_offset": 5,
                "end_offset": 14,
                "suggestions": [
                   ...
                ]
              }
              ...
            ]
        },
        "my-suggest-2" : {
          "terms" : [
            ...
          ]
        }
    }
```

Each suggestions array contains a suggestion object that includes the suggested term, its document frequency and its score compared to the suggest text term. The meaning of the score depends on the suggester used. The fuzzy suggester's score is based on the edit distance.

```
"suggestions": [
    {
        "term": "amsterdam",
        "frequency": 77,
        "score": 0.8888889
    },
    ...
]
```

# Global suggest text

To avoid repetition of the suggest text, it is possible to define a global text. In the example below the suggest text is defined globally and applies to both the `my-suggest-1` and `my-suggest-2` suggestions.

```
"suggest" : {
    "suggestions" : {
        "text" : "the amsterdma meetpu",
        "my-suggest-1" : {
            "type" : "fuzzy",
            "field" : "title"
        },
        "my-suggest-2" : {
            "type" : "fuzzy",
            "field" : "body"
        }
    }
}
```

The suggest text can be specified as a global option or as a suggestion specific option. The suggest text specified on the suggestion level overrides the suggest text on the global level, as shown in the sketch below.
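
As a minimal sketch of this override behaviour, the fragment below reuses the fields and suggest texts from the examples above: the global `text` applies to `my-suggest-1`, while `my-suggest-2` supplies its own suggest text, which takes precedence over the global one.

```
"suggest" : {
    "suggestions" : {
        "text" : "the amsterdma meetpu",
        "my-suggest-1" : {
            "type" : "fuzzy",
            "field" : "title"
        },
        "my-suggest-2" : {
            "type" : "fuzzy",
            "field" : "body",
            "text" : "the rottredam meetpu"
        }
    }
}
```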

# Another suggest example

In the example below we request suggestions for the suggest text `devloping distibutd saerch engies` on the `title` field, with a maximum of 3 suggestions per term in the suggest text. Note that in this example we use the `count` search type. This isn't required, but it is a nice optimization: the suggestions are gathered in the `query` phase, so if we only care about suggestions (and not about hits) we don't need to execute the `fetch` phase.

```
curl -s -XPOST 'localhost:9200/_search?search_type=count' -d '{
  "suggest" : {
      "suggestions" : {
        "my-title-suggestions" : {
          "suggester" : "fuzzy",
          "field" : "title",
          "text" : "devloping distibutd saerch engies",
          "size" : 3
        }
      }
  }
}'
```

The above request could yield the response shown in the code example below. As you can see, if we take the first suggested term for each suggest text term we get `developing distributed search engines` as a result.

```
{
  ...
  "suggest": {
    "my-title-suggestions": {
      "terms": [
        {
          "term": "devloping",
          "start_offset": 0,
          "end_offset": 9,
          "suggestions": [
            {
              "term": "developing",
              "frequency": 77,
              "score": 0.8888889
            },
            {
              "term": "deloping",
              "frequency": 1,
              "score": 0.875
            },
            {
              "term": "deploying",
              "frequency": 2,
              "score": 0.7777778
            }
          ]
        },
        {
          "term": "distibutd",
          "start_offset": 10,
          "end_offset": 19,
          "suggestions": [
            {
              "term": "distributed",
              "frequency": 217,
              "score": 0.7777778
            },
            {
              "term": "disributed",
              "frequency": 1,
              "score": 0.7777778
            },
            {
              "term": "distribute",
              "frequency": 1,
              "score": 0.7777778
            }
          ]
        },
        {
          "term": "saerch",
          "start_offset": 20,
          "end_offset": 26,
          "suggestions": [
            {
              "term": "search",
              "frequency": 1038,
              "score": 0.8333333
            },
            {
              "term": "smerch",
              "frequency": 3,
              "score": 0.8333333
            },
            {
              "term": "serch",
              "frequency": 2,
              "score": 0.8
            }
          ]
        },
        {
          "term": "engies",
          "start_offset": 27,
          "end_offset": 33,
          "suggestions": [
            {
              "term": "engines",
              "frequency": 568,
              "score": 0.8333333
            },
            {
              "term": "engles",
              "frequency": 3,
              "score": 0.8333333
            },
            {
              "term": "eggies",
              "frequency": 1,
              "score": 0.8333333
            }
          ]
        }
      ]
    }
  }
  ...
}
```

# Common suggest options
* `suggester` - The suggester implementation type. The only supported value is `fuzzy`. This is a required option.
* `text` - The suggest text. The suggest text is a required option that needs to be set either globally or per suggestion.

# Common fuzzy suggest options
* `field` - The field to fetch the candidate suggestions from. This is a required option that needs to be set either globally or per suggestion.
* `analyzer` - The analyzer to analyze the suggest text with. Defaults to the search analyzer of the suggest field.
* `size` - The maximum number of corrections to be returned per suggest text token.
* `sort` - Defines how suggestions should be sorted per suggest text term (see the request sketch after this list). Two possible values:
** `score` - Sort by score first, then by document frequency and then by the term itself.
** `frequency` - Sort by document frequency first, then by similarity score and then by the term itself.
* `suggest_mode` - The suggest mode controls which suggestions are included, i.e. for which suggest text terms suggestions should be returned. Three possible values can be specified:
** `missing` - Only provide suggestions for suggest text terms that aren't in the index. This is the default.
** `popular` - Only suggest terms that occur in more docs than the original suggest text term.
** `always` - Suggest any matching suggestions based on terms in the suggest text.
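
As a rough sketch of how these options combine in a request (reusing the `body` field and suggest text from the earlier examples, with the option names from the list above), a suggestion could look like this:

```
"suggest" : {
    "suggestions" : {
        "my-suggest-1" : {
            "suggester" : "fuzzy",
            "field" : "body",
            "text" : "the amsterdma meetpu",
            "size" : 5,
            "sort" : "frequency",
            "suggest_mode" : "popular"
        }
    }
}
```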

# Other fuzzy suggest options
* `lowercase_terms` - Lowercases the suggest text terms after they have been analyzed.
* `max_edits` - The maximum edit distance candidate suggestions can have in order to be considered as a suggestion. Can only be a value between 1 and 2; any other value results in a bad request error. Defaults to 2.
* `min_prefix` - The minimal number of prefix characters that must match in order for a term to be a candidate suggestion. Defaults to 1. Increasing this number improves spellcheck performance; usually misspellings don't occur at the beginning of terms.
* `min_query_length` - The minimum length a suggest text term must have in order to be included. Defaults to 4.
* `shard_size` - Sets the maximum number of suggestions to be retrieved from each individual shard. During the reduce phase only the top N suggestions are returned, based on the `size` option. Defaults to the `size` option. Setting this to a value higher than `size` can be useful in order to get more accurate document frequencies for spelling corrections, at the cost of performance. Because terms are partitioned amongst shards, the shard level document frequencies of spelling corrections may not be precise; increasing this value makes them more precise (see the request sketch after this list).
* `max_inspections` - A factor that is multiplied with `shard_size` in order to inspect more candidate spell corrections on the shard level. Can improve accuracy at the cost of performance. Defaults to 5.
* `threshold_frequency` - The minimal number of documents a suggestion should appear in. This can be specified as an absolute number or as a relative percentage of the number of documents. This can improve quality by only suggesting high frequency terms. Defaults to 0f, i.e. not enabled. If a value higher than 1 is specified then the number cannot be fractional. The shard level document frequencies are used for this option.
* `max_query_frequency` - The maximum number of documents a suggest text token can exist in and still be included. Can be a relative percentage number (e.g. 0.4) or an absolute number representing document frequencies. If a value higher than 1 is specified then the number cannot be fractional. Defaults to 0.01f. This can be used to exclude high frequency terms from being spellchecked; high frequency terms are usually spelled correctly, and excluding them also improves the spellcheck performance. The shard level document frequencies are used for this option.
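
As a rough sketch combining several of these options (the values shown are the documented defaults, except `shard_size`, which is set higher than `size` to get more accurate document frequencies), a full request could look like this:

```
curl -s -XPOST 'localhost:9200/_search?search_type=count' -d '{
  "suggest" : {
      "suggestions" : {
        "my-title-suggestions" : {
          "suggester" : "fuzzy",
          "field" : "title",
          "text" : "devloping distibutd saerch engies",
          "size" : 3,
          "max_edits" : 2,
          "min_prefix" : 1,
          "min_query_length" : 4,
          "shard_size" : 10,
          "max_inspections" : 5
        }
      }
  }
}'
```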

 Closes #2585
2013-01-24 15:41:06 +01:00
Shay Banon 9673a1c366 expose field data settings in mapping, they can be updated using merge mapping 2013-01-24 15:33:24 +01:00
Simon Willnauer 4eefcb9c82 Expose CommonTermsQuery
Closes #2583
2013-01-24 14:18:01 +01:00
Simon Willnauer c4eab90b2e Cleanup MatchQuery 2013-01-24 14:11:56 +01:00
Shay Banon c2f35621f6 allow to get settings as delimited string 2013-01-24 12:03:16 +01:00
Shay Banon b143822bac allow to load settings from delimited string 2013-01-24 12:00:14 +01:00
Simon Willnauer 88f68264c7 Reuse MemoryIndex instances across Percolator requests.
* added configurable MemoryIndexPool that pools MemoryIndex instances across threads
* Pool can be configured based on the number of pooled instances as well as the maximum number of bytes that is reused across the pooled instances

Closes #2581
2013-01-24 11:53:21 +01:00
Shay Banon e8c1180ede add field data stats 2013-01-24 11:38:18 +01:00
Shay Banon 613b746299 move field data type to simply be type and settings 2013-01-24 09:33:16 +01:00
Martijn van Groningen 50ac477d92 Fixed small bug. Index name should be used to lookup entry. 2013-01-23 23:53:20 +01:00
Shay Banon 4967a97faf don't use private since it's accessed from inner class, remove $$ need 2013-01-23 22:17:27 +01:00
Martijn van Groningen 346422b747 Added sparse multi ordinals implementation for field data. 2013-01-23 22:11:31 +01:00
Daniel Muller 9e79f54cb1 Check for java-6-openjdk-amd64 2013-01-23 18:34:37 +01:00
synhershko e0f711a94a Updating Lucene version 2013-01-23 16:18:18 +02:00
Shay Banon a74e7f8099 refactor geo to extract common classes 2013-01-23 14:14:21 +01:00
Simon Willnauer 9c729fad2c remove flush check; IW#commit now always adds a commit point, even if nothing has changed, i.e. no docs were added, updated or deleted. 2013-01-23 14:06:01 +01:00
Shay Banon 22f0e79a84 use merge trigger to control when to do merges
now with merge trigger, we can simply decide when to do merges based on it
2013-01-23 13:24:20 +01:00
Shay Banon d969e61999 Remove settings option for index store compression, compression is always enabled
closes #2577
2013-01-23 13:11:48 +01:00
Simon Willnauer 2880cd0172 Upgrade to Lucene 4.1
* Removed CustomMemoryIndex in favor of MemoryIndex which as of 4.1 supports adding the same field twice
* Replaced duplicated logic in X[*]FSDirectory for rate limiting with a RateLimitedFSDirectory wrapper
* Remove hacks to find out merge context in rate limiting in favor of IOContext
* replaced Scorer#freq() return type (from float to int)
* Upgraded FVHighlighter to new 'centered' highlighting
* Fixed RobinEngine to use separate setCommitData
2013-01-23 11:54:11 +01:00
Shay Banon 20f43bf54c add hasSingleArrayBackingStorage
allow for optimization only when there really is a single array, and not when there is a multi dimensional one
2013-01-23 10:24:43 +01:00
Igor Motov bbfd3957eb Improve stability of the testNodesInfos test 2013-01-22 12:29:38 -05:00
Igor Motov 9becdb814a Improve stability of the shardsCleanup test 2013-01-22 10:20:18 -05:00
Shay Banon 1185d4eb19 Merge branch 'fielddata' 2013-01-22 16:17:15 +01:00
Shay Banon c295211a85 final move to new field data 2013-01-22 16:16:33 +01:00
Shay Banon 27bfb341ff better logging on missing format, and allow to configure format on a type on the index level 2013-01-22 16:16:33 +01:00
uboness 09cc70b8c9 added predefined empty implementation for all atomic field datas 2013-01-22 16:16:33 +01:00
Shay Banon 6b92b592b4 allow to clear by reader the new field data cache 2013-01-22 16:16:32 +01:00
Shay Banon c67386f644 properly invalidate on core closed reader 2013-01-22 16:16:32 +01:00
Shay Banon af757fd821 more usage of field data
note, removed field data from cache stats, it will have its own stats later on (cache part is really misleading)
2013-01-22 16:16:32 +01:00
Shay Banon de013babf8 move geo filters and numeric range to use new field data 2013-01-22 16:16:32 +01:00
Shay Banon be1e5becbb move scripts to use new field data 2013-01-22 16:16:32 +01:00
Shay Banon 772ee9db54 move terms to use new field data 2013-01-22 16:16:32 +01:00
Shay Banon e5b651321f remove some safe methods because of the new makeSafe method usage 2013-01-22 16:16:32 +01:00
Shay Banon f189a832c5 grr pages -> paged 2013-01-22 16:16:32 +01:00
Shay Banon 5b7173fc35 move sorting to work with new field data 2013-01-22 16:16:32 +01:00
uboness b739bf97d4 added missing dedicated value comparators for the different indices field data 2013-01-22 16:16:32 +01:00
Shay Banon 45f27fe96a add packed bytes variant for strings/bytes 2013-01-22 16:16:32 +01:00
uboness 855b64a8a7 byte field data implementation 2013-01-22 16:16:31 +01:00
uboness f1f3c241fd short field data implementation 2013-01-22 16:16:31 +01:00
uboness 3840439365 float field data implementation 2013-01-22 16:16:31 +01:00
Shay Banon 9137fcc6fc move geo distance sorting to use new field data 2013-01-22 16:16:31 +01:00
Shay Banon d5e70a27df integer type to support int field data type 2013-01-22 16:16:31 +01:00
uboness fc09ce7ac9 Implemented int field data 2013-01-22 16:16:31 +01:00
Shay Banon d82859c82b geo point new field mapper with geo distance facet based impl 2013-01-22 16:16:31 +01:00
Shay Banon 2e86081f7b use smartNameMapper on context 2013-01-22 16:16:31 +01:00
Shay Banon d88e3f73ac add specific makeSafe method to make an unsafe (shared) bytes based value to a "safe" one 2013-01-22 16:16:31 +01:00
Shay Banon 1765b0b813 date histogram to use new field data 2013-01-22 16:16:31 +01:00
Shay Banon 37acba1b57 terms stats to use new field data 2013-01-22 16:16:31 +01:00
Shay Banon f1f86efed5 move statistical facet to use new field data 2013-01-22 16:16:30 +01:00