This feature adds the option to configure a `PostingsFormat` and assign it to a field in the mapping. This is a very expert feature and in almost all cases Elasticsearch's defaults will suit your needs.
## Configuring a postings format per field
Several postings formats are configured out of the box and can be used in your mapping:
* `direct` - A codec that wraps the default postings format at write time, but at read time loads the terms and postings lists directly into memory as raw arrays. This postings format is exceptionally memory intensive, but can give a substantial increase in search performance.
* `memory` - A codec that loads and stores terms and postings lists in memory using an FST. Acts like a cached postings list.
* `bloom_default` - Maintains a bloom filter for the indexed terms, which is stored to disk, on top of the `default` postings format. This postings format is useful for low document frequency terms and offers a fast fail for seeks to terms that don't exist.
* `bloom_pulsing` - Similar to the `bloom_default` postings format, but builds on top of the `pulsing` postings format.
* `default` - The default postings format, used if no postings format is specified.
On all fields it is possible to configure a `postings_format` attribute. Example mapping:
```
{
    "person" : {
        "properties" : {
            "second_person_id" : {"type" : "string", "postings_format" : "pulsing"}
        }
    }
}
```
## Configuring a custom postings format
It is possible to instantiate custom postings formats. This can be specified via the index settings:
```
{
    "codec" : {
        "postings_format" : {
            "my_format" : {
                "type" : "pulsing40",
                "freq_cut_off" : "5"
            }
        }
    }
}
```
In the above example `freq_cut_off` is set to 5 (defaults to 1). This tells the pulsing postings format to inline, in the term dictionary, the postings list of terms with a document frequency lower than or equal to 5.
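A custom format defined this way can then be assigned to a field by its name. A minimal mapping sketch (not part of the original change description), assuming the `my_format` definition from the settings above:
```
{
    "person" : {
        "properties" : {
            "second_person_id" : {"type" : "string", "postings_format" : "my_format"}
        }
    }
}
```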
Closes #2411
No need to test for boost; we already have specific boost tests. In general, we should get rid of this test and use more specialized tests if we are missing some.
I am not completely sure about this one, but it reduces the number of failing tests from 98 to 31, so I am going to check it in. Please review and fix it if there is a better solution.
Because of changes in Lucene 4.0, ContextIndexSearcher was bypassed and Elasticsearch filters and collectors were ignored.
In Lucene 3.6 the stack of Searcher search calls looked like this:
search(Query query, int n)
search(Query query, Filter filter, int n)
search(Weight weight, Filter filter, int nDocs)
search(Weight weight, Filter filter, ScoreDoc after, int nDocs)
search(Weight weight, Filter filter, Collector collector) <-- this is where ContextIndexSearcher was injecting the combined filter and collector
search(Weight weight, Filter filter, Collector collector)
In Lucene 4.0 the stack looks like this:
search(Query query, int n)
search(Query query, Filter filter, int n) <-- here Lucene wraps Query and Filter into Weight
search(Weight weight, ScoreDoc after, int nDocs)
search(List<AtomicReaderContext> leaves, Weight weight, ScoreDoc after, int nDocs)
search(List<AtomicReaderContext> leaves, Weight weight, Collector collector)
...
In other words, when we have a Filter, we don't have a Collector yet, but when we have a Collector, the Filter is already wrapped inside the Weight. The only fix for the problem that I could think of is introducing two injection points: one for Filters and another one for Collectors:
search(Query query, int n)
search(Query query, Filter filter, int n) <-- here combined Filters are injected
search(Weight weight, ScoreDoc after, int nDocs)
search(List<AtomicReaderContext> leaves, Weight weight, ScoreDoc after, int nDocs)
search(List<AtomicReaderContext> leaves, Weight weight, Collector collector) <-- here Collectors are injected
A similar problem existed for count(), so I had to override search(Query query, Collector results) as well.
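As a rough illustration only (not the actual Elasticsearch implementation), the two injection points against the Lucene 4.0 `IndexSearcher` API could look like the sketch below. The `combineFilters(...)` and `wrapCollectors(...)` helpers are hypothetical placeholders for the real search-context wiring:
```java
import java.io.IOException;
import java.util.List;

import org.apache.lucene.index.AtomicReaderContext;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.Collector;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.FilteredQuery;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.search.Weight;

// Sketch of a searcher with the two injection points described above.
public class SketchContextIndexSearcher extends IndexSearcher {

    public SketchContextIndexSearcher(IndexReader reader) {
        super(reader);
    }

    @Override
    public TopDocs search(Query query, Filter filter, int n) throws IOException {
        // First injection point: the filter is still visible here, so it can be
        // combined with the context's filters before Lucene wraps everything into a Weight.
        return super.search(query, combineFilters(filter), n);
    }

    @Override
    protected void search(List<AtomicReaderContext> leaves, Weight weight, Collector collector) throws IOException {
        // Second injection point: the filter is already baked into the weight,
        // so only the collectors can be wrapped at this stage.
        super.search(leaves, weight, wrapCollectors(collector));
    }

    @Override
    public void search(Query query, Collector results) throws IOException {
        // The count() path enters here; apply the combined filter to the query.
        // Collectors are wrapped by the override above, which Lucene's
        // implementation of this method eventually calls.
        Filter combined = combineFilters(null);
        super.search(combined == null ? query : new FilteredQuery(query, combined), results);
    }

    // Hypothetical helpers: in the real ContextIndexSearcher these would pull
    // the extra filters and collectors from the current search context.
    private Filter combineFilters(Filter requestFilter) {
        return requestFilter; // placeholder: no extra filter in this sketch
    }

    private Collector wrapCollectors(Collector original) {
        return original; // placeholder: no extra collectors in this sketch
    }
}
```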