[[index-modules-fielddata]]
== Field data

The field data cache is used mainly when sorting on or faceting on a
field. It loads all the field values into memory in order to provide fast
document-based access to those values. The field data cache can be
expensive to build for a field, so it's recommended to have enough memory
to allocate it, and to keep it loaded.

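For example, a search that sorts on a hypothetical `tag` field (the field
name here is only illustrative) needs the field data for `tag` to be loaded
into memory before the sort can run:

[source,js]
--------------------------------------------------
{
    query: {
        match_all: {}
    },
    sort: [
        { tag: "asc" }
    ]
}
--------------------------------------------------
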
The amount of memory used for the field data cache can be controlled using
`indices.fielddata.cache.size`. Note: reloading field data which does not
fit into your cache will be expensive and perform poorly.

[cols="<,<",options="header",]
|=======================================================================
|Setting |Description
|`indices.fielddata.cache.size` |The max size of the field data cache,
e.g. `30%` of node heap space, or an absolute value, e.g. `12GB`. Defaults
to unbounded.

|`indices.fielddata.cache.expire` |A time-based setting that expires
field data after a certain time of inactivity. Defaults to `-1`. For
example, it can be set to `5m` for a 5 minute expiry.
|=======================================================================

=== Field data formats

Depending on the field type, there might be several field data types
available. In particular, string and numeric types support the `doc_values`
format, which allows for computing the field data structures at indexing
time and storing them on disk. Although it will make the index larger and may
be slightly slower, this implementation will be more near-realtime-friendly
and will require much less memory from the JVM than other implementations.

[source,js]
--------------------------------------------------
{
    tag: {
        type: "string",
        fielddata: {
            format: "fst"
        }
    }
}
--------------------------------------------------

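For example, a mapping along these lines (the `tag` field name is again only
illustrative) stores the field data for a `not_analyzed` string field on disk
using the `doc_values` format described below:

[source,js]
--------------------------------------------------
{
    tag: {
        type: "string",
        index: "not_analyzed",
        fielddata: {
            format: "doc_values"
        }
    }
}
--------------------------------------------------
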
[float]
==== String field data types

`paged_bytes` (default)::
Stores unique terms sequentially in a large buffer and maps documents to
the indices of the terms they contain in this large buffer.

`fst`::
Stores terms in an FST. Slower to build than `paged_bytes` but can help lower
memory usage if many terms share common prefixes and/or suffixes.

`doc_values`::
Computes and stores field data structures on disk at indexing time.
Lowers memory usage but only works on non-analyzed strings (`index`: `no` or
`not_analyzed`) and doesn't support filtering.

[float]
==== Numeric field data types

`array` (default)::
Stores field values in memory using arrays.

`doc_values`::
Computes and stores field data structures on disk at indexing time.
Doesn't support filtering.

[float]
==== Geo point field data types

`array` (default)::
Stores latitudes and longitudes in arrays.

[float]
=== Fielddata loading

By default, field data is loaded lazily, the first time a query that requires
field data is run. However, this can make the first requests that follow a
merge operation quite slow, since fielddata loading is a heavy operation.

It is possible to force field data to be loaded and cached eagerly through the
`loading` setting of fielddata:

[source,js]
--------------------------------------------------
{
    category: {
        type: "string",
        fielddata: {
            loading: "eager"
        }
    }
}
--------------------------------------------------

[float]
[[field-data-filtering]]
=== Filtering fielddata

It is possible to control which field values are loaded into memory,
which is particularly useful for string fields. When specifying the
<<mapping-core-types,mapping>> for a field, you
can also specify a fielddata filter.

Fielddata filters can be changed using the
<<indices-put-mapping,PUT mapping>>
API. After changing the filters, use the
<<indices-clearcache,Clear Cache>> API
to reload the fielddata using the new filters.

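As a sketch (the index, type, and field names are placeholders), the body sent
to the <<indices-put-mapping,PUT mapping>> API to tighten the filter on a `tag`
field in a `my_type` mapping could look like the following, using the
`frequency` filter described below:

[source,js]
--------------------------------------------------
{
    my_type: {
        properties: {
            tag: {
                type: "string",
                fielddata: {
                    filter: {
                        frequency: {
                            min: 0.01
                        }
                    }
                }
            }
        }
    }
}
--------------------------------------------------
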
[float]
==== Filtering by frequency

The frequency filter allows you to only load terms whose frequency falls
between a `min` and `max` value, which can be expressed as an absolute
number or as a percentage (e.g. `0.01` is `1%`). Frequency is calculated
*per segment*. Percentages are based on the number of docs which have a
value for the field, as opposed to all docs in the segment.

Small segments can be excluded completely by specifying the minimum
number of docs that the segment should contain with `min_segment_size`:

[source,js]
--------------------------------------------------
{
    tag: {
        type: "string",
        fielddata: {
            filter: {
                frequency: {
                    min: 0.001,
                    max: 0.1,
                    min_segment_size: 500
                }
            }
        }
    }
}
--------------------------------------------------

[float]
==== Filtering by regex

Terms can also be filtered by regular expression - only values which
match the regular expression are loaded. Note: the regular expression is
applied to each term in the field, not to the whole field value. For
instance, to only load hashtags from a tweet, we can use a regular
expression which matches terms beginning with `#`:

[source,js]
--------------------------------------------------
{
    tweet: {
        type: "string",
        analyzer: "whitespace",
        fielddata: {
            filter: {
                regex: {
                    pattern: "^#.*"
                }
            }
        }
    }
}
--------------------------------------------------

[float]
==== Combining filters

The `frequency` and `regex` filters can be combined:

[source,js]
--------------------------------------------------
{
    tweet: {
        type: "string",
        analyzer: "whitespace",
        fielddata: {
            filter: {
                regex: {
                    pattern: "^#.*"
                },
                frequency: {
                    min: 0.001,
                    max: 0.1,
                    min_segment_size: 500
                }
            }
        }
    }
}
--------------------------------------------------

[float]
[[field-data-monitoring]]
=== Monitoring field data

You can monitor memory usage for field data using the
<<cluster-nodes-stats,Nodes Stats API>>.

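In the nodes stats response, field data usage is reported per node under the
`indices` section, roughly in the following shape (a sketch only; the values
are placeholders):

[source,js]
--------------------------------------------------
{
    indices: {
        fielddata: {
            memory_size_in_bytes: 0,
            evictions: 0
        }
    }
}
--------------------------------------------------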