[[tune-for-disk-usage]]
== Tune for disk usage
[discrete]
=== Disable the features you do not need
By default Elasticsearch indexes and adds doc values to most fields so that they
can be searched and aggregated out of the box. For instance if you have a numeric
field called `foo` that you need to run histograms on but that you never need to
filter on, you can safely disable indexing on this field in your
<<mappings,mappings>>:

[source,console]
--------------------------------------------------
PUT index
{
  "mappings": {
    "properties": {
      "foo": {
        "type": "integer",
        "index": false
      }
    }
  }
}
--------------------------------------------------
<<text,`text`>> fields store normalization factors in the index in order to be
able to score documents. If you only need matching capabilities on a `text`
field but do not care about the produced scores, you can configure Elasticsearch
2016-06-21 10:31:44 -04:00
to not write norms to the index:

[source,console]
--------------------------------------------------
PUT index
{
  "mappings": {
    "properties": {
      "foo": {
        "type": "text",
        "norms": false
      }
    }
  }
}
--------------------------------------------------
<<text,`text`>> fields also store frequencies and positions in the index by
default. Frequencies are used to compute scores and positions are used to run
phrase queries. If you do not need to run phrase queries, you can tell
Elasticsearch to not index positions:

[source,console]
--------------------------------------------------
PUT index
{
  "mappings": {
    "properties": {
      "foo": {
        "type": "text",
        "index_options": "freqs"
      }
    }
  }
}
--------------------------------------------------
Furthermore if you do not care about scoring either, you can configure
Elasticsearch to just index matching documents for every term. You will
2016-06-21 10:31:44 -04:00
still be able to search on this field, but phrase queries will raise errors
and scoring will assume that terms appear only once in every document.

[source,console]
--------------------------------------------------
PUT index
{
  "mappings": {
    "properties": {
      "foo": {
        "type": "text",
        "norms": false,
        "index_options": "freqs"
      }
    }
  }
}
--------------------------------------------------

[discrete]
[[default-dynamic-string-mapping]]
=== Don't use default dynamic string mappings
The default <<dynamic-mapping,dynamic string mappings>> will index string fields
both as <<text,`text`>> and <<keyword,`keyword`>>. This is wasteful if you only
need one of them. Typically an `id` field will only need to be indexed as a
`keyword` while a `body` field will only need to be indexed as a `text` field.
This can be disabled by either configuring explicit mappings on string fields
or setting up dynamic templates that will map string fields as either `text`
or `keyword`.

For instance, here is a template that can be used in order to only map string
fields as `keyword`:

[source,console]
--------------------------------------------------
PUT index
{
  "mappings": {
    "dynamic_templates": [
      {
        "strings": {
          "match_mapping_type": "string",
          "mapping": {
            "type": "keyword"
          }
        }
      }
    ]
  }
}
--------------------------------------------------
[discrete]
=== Watch your shard size
Larger shards are more efficient at storing data. To increase the size of your shards, you can <<indices-create-index,create indices>> with fewer primary shards, create fewer indices (e.g. by leveraging the <<indices-rollover-index,Rollover API>>), or modify an existing index using the <<indices-shrink-index,Shrink API>>.

Keep in mind that large shard sizes come with drawbacks, such as long full recovery times.
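
For instance, an index could be created with a single primary shard along the
following lines (the shard count shown is purely illustrative; pick a value that
matches your data volume):

[source,console]
--------------------------------------------------
// A sketch only: "1" is an example value, not a recommendation
PUT index
{
  "settings": {
    "index.number_of_shards": 1
  }
}
--------------------------------------------------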
[discrete]
[[disable-source]]
=== Disable `_source`
The <<mapping-source-field,`_source`>> field stores the original JSON body of the document. If you don’t need access to it you can disable it. However, APIs that need access to `_source`, such as update and reindex, won’t work.
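
For instance, a mapping along these lines disables `_source` for the index
(sketch only; weigh the trade-offs above before applying it):

[source,console]
--------------------------------------------------
// Disabling _source removes the stored JSON; update and reindex will not work
PUT index
{
  "mappings": {
    "_source": {
      "enabled": false
    }
  }
}
--------------------------------------------------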
[discrete]
[[best-compression]]
=== Use `best_compression`
The `_source` and stored fields can easily take a non-negligible amount of disk
space. They can be compressed more aggressively by using the `best_compression`
<<index-codec,codec>>.
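
For example, the codec can be set when the index is created (it is a static
setting, so it cannot be changed on an open index):

[source,console]
--------------------------------------------------
// best_compression trades slightly slower stored-field access for less disk usage
PUT index
{
  "settings": {
    "index.codec": "best_compression"
  }
}
--------------------------------------------------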
[discrete]
=== Force merge
Indices in Elasticsearch are stored in one or more shards. Each shard is a Lucene index made up of one or more segments, which are the actual files on disk. Larger segments are more efficient for storing data.

The <<indices-forcemerge,`_forcemerge` API>> can be used to reduce the number of segments per shard. In many cases, the number of segments can be reduced to one per shard by setting `max_num_segments=1`.
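
For example, to merge each shard of an index down to a single segment (best run
only on indices that no longer receive writes):

[source,console]
--------------------------------------------------
// "index" is a placeholder name; force merge is expensive, so run it off-peak
POST index/_forcemerge?max_num_segments=1
--------------------------------------------------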
[discrete]
=== Shrink index
The <<indices-shrink-index,Shrink API>> allows you to reduce the number of shards in an index. Together with the Force Merge API above, this can significantly reduce the number of shards and segments of an index.
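
A shrink request looks roughly like the following, assuming the source index has
already been made read-only and its shards relocated to a single node (see the
<<indices-shrink-index,Shrink API>> documentation for the full procedure):

[source,console]
--------------------------------------------------
// "index" and "shrunk-index" are placeholder names
POST index/_shrink/shrunk-index
{
  "settings": {
    "index.number_of_shards": 1
  }
}
--------------------------------------------------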
[discrete]
=== Use the smallest numeric type that is sufficient
The type that you pick for <<number,numeric data>> can have a significant impact
on disk usage. In particular, integers should be stored using an integer type
(`byte`, `short`, `integer` or `long`) and floating point numbers should either be
stored in a `scaled_float` if appropriate or in the smallest type that fits the
use case: using `float` over `double`, or `half_float` over `float`, will help
save storage.
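
For instance, a price that always has two decimal places could be stored as a
`scaled_float` (the field name and scaling factor below are just examples):

[source,console]
--------------------------------------------------
// Values are stored internally as longs: the value times the scaling factor, rounded
PUT index
{
  "mappings": {
    "properties": {
      "price": {
        "type": "scaled_float",
        "scaling_factor": 100
      }
    }
  }
}
--------------------------------------------------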
[discrete]
=== Use index sorting to colocate similar documents
When Elasticsearch stores `_source`, it compresses multiple documents at once
in order to improve the overall compression ratio. For instance it is very
common that documents share the same field names, and quite common that they
share some field values, especially on fields that have a low cardinality or
a {wikipedia}/Zipf%27s_law[zipfian] distribution.

By default documents are compressed together in the order that they are added
to the index. If you enable <<index-modules-index-sorting,index sorting>>
then they are compressed in sorted order instead. Sorting documents with similar
structure, fields, and values together should improve the compression ratio.
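
Index sorting has to be configured at index creation time, for example (the sort
field and order here are illustrative):

[source,console]
--------------------------------------------------
// Documents will be stored (and compressed) in order of the "date" field
PUT index
{
  "settings": {
    "index.sort.field": "date",
    "index.sort.order": "desc"
  },
  "mappings": {
    "properties": {
      "date": {
        "type": "date"
      }
    }
  }
}
--------------------------------------------------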
[discrete]
=== Put fields in the same order in documents
Because multiple documents are compressed together into blocks, longer duplicate
strings are more likely to be found in those `_source` documents if fields always
occur in the same order.
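
For example, always serializing documents with the same field order, as the two
made-up documents below do, gives the compressor more repetition to exploit:

[source,console]
--------------------------------------------------
// Both documents list message, status, then @timestamp in the same order
PUT index/_doc/1
{
  "message": "login ok",
  "status": 200,
  "@timestamp": "2099-11-15T13:12:00"
}

PUT index/_doc/2
{
  "message": "login failed",
  "status": 403,
  "@timestamp": "2099-11-15T13:12:01"
}
--------------------------------------------------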
[discrete]
[[roll-up-historical-data]]
=== Roll up historical data
Keeping older data can be useful for later analysis but is often avoided due to
storage costs. You can use data rollups to summarize and store historical data
at a fraction of the raw data's storage cost. See <<xpack-rollup>>.
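
A rollup job that keeps hourly summaries of a metric might look roughly like the
sketch below (the index pattern, field names, and intervals are all hypothetical;
see <<xpack-rollup>> for the full reference):

[source,console]
--------------------------------------------------
// Summarizes raw "sensor-*" data into hourly min/max/avg values
PUT _rollup/job/sensor
{
  "index_pattern": "sensor-*",
  "rollup_index": "sensor_rollup",
  "cron": "*/30 * * * * ?",
  "page_size": 1000,
  "groups": {
    "date_histogram": {
      "field": "timestamp",
      "fixed_interval": "60m"
    }
  },
  "metrics": [
    {
      "field": "temperature",
      "metrics": ["min", "max", "avg"]
    }
  ]
}
--------------------------------------------------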