[[tune-for-search-speed]]
== Tune for search speed

[float]
=== Give memory to the filesystem cache

Elasticsearch heavily relies on the filesystem cache in order to make search
fast. In general, you should make sure that at least half the available memory
goes to the filesystem cache so that Elasticsearch can keep hot regions of the
index in physical memory.

[float]
=== Use faster hardware

If your search is I/O bound, you should investigate giving more memory to the
filesystem cache (see above) or buying faster drives. In particular SSD drives
are known to perform better than spinning disks. Always use local storage;
remote filesystems such as `NFS` or `SMB` should be avoided. Also beware of
virtualized storage such as Amazon's `Elastic Block Storage`. Virtualized
storage works very well with Elasticsearch, and it is appealing since it is so
fast and simple to set up, but it is also unfortunately inherently slower on an
ongoing basis when compared to dedicated local storage. If you put an index on
`EBS`, be sure to use provisioned IOPS; otherwise operations could be quickly
throttled.

If your search is CPU-bound, you should investigate buying faster CPUs.

[float]
=== Document modeling

Documents should be modeled so that search-time operations are as cheap as possible.

In particular, joins should be avoided. <<nested,`nested`>> can make queries
several times slower and <<mapping-parent-field,parent-child>> relations can make
queries hundreds of times slower. So if the same questions can be answered without
joins by denormalizing documents, significant speedups can be expected.

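For instance, instead of modeling employees as children of company documents and
joining at search time, the relevant company fields could be copied into each
employee document. The sketch below assumes a hypothetical `employees` index and
hypothetical field names:

[source,js]
--------------------------------------------------
PUT employees/employee/1
{
  "employee_name": "Alice",
  "company_name":  "ACME",
  "company_city":  "Berlin"
}
--------------------------------------------------

A question such as finding all employees of companies based in Berlin then
becomes a plain `term` or `match` query on `company_city` instead of a join.
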
[float]
=== Pre-index data

You should leverage patterns in your queries to optimize the way data is indexed.
For instance, if all your documents have a `price` field and most queries run
<<search-aggregations-bucket-range-aggregation,`range`>> aggregations on a fixed
list of ranges, you could make this aggregation faster by pre-indexing the ranges
into the index and using a <<search-aggregations-bucket-terms-aggregation,`terms`>>
aggregation.

For instance, if documents look like:

[source,js]
--------------------------------------------------
PUT index/type/1
{
  "designation": "spoon",
  "price": 13
}
--------------------------------------------------
// CONSOLE

and search requests look like:

[source,js]
--------------------------------------------------
GET index/_search
{
  "aggs": {
    "price_ranges": {
      "range": {
        "field": "price",
        "ranges": [
          { "to": 10 },
          { "from": 10, "to": 100 },
          { "from": 100 }
        ]
      }
    }
  }
}
--------------------------------------------------
// CONSOLE
// TEST[continued]

Then documents could be enriched by a `price_range` field at index time, which
should be mapped as a <<keyword,`keyword`>>:

[source,js]
--------------------------------------------------
PUT index
{
  "mappings": {
    "type": {
      "properties": {
        "price_range": {
          "type": "keyword"
        }
      }
    }
  }
}

PUT index/type/1
{
  "designation": "spoon",
  "price": 13,
  "price_range": "10-100"
}
--------------------------------------------------
// CONSOLE

And then search requests could aggregate this new field rather than running a
`range` aggregation on the `price` field.

[source,js]
--------------------------------------------------
GET index/_search
{
  "aggs": {
    "price_ranges": {
      "terms": {
        "field": "price_range"
      }
    }
  }
}
--------------------------------------------------
// CONSOLE
// TEST[continued]

[float]
=== Mappings

The fact that some data is numeric does not mean it should always be mapped as a
<<number,numeric field>>. Typically, fields storing identifiers, such as an `ISBN`
or any number identifying a record from another database, might benefit from
being mapped as <<keyword,`keyword`>> rather than `integer` or `long`.

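For instance, an `ISBN` could be mapped as a `keyword` rather than a numeric
field. A minimal sketch, with a hypothetical `books` index:

[source,js]
--------------------------------------------------
PUT books
{
  "mappings": {
    "book": {
      "properties": {
        "isbn": {
          "type": "keyword"
        }
      }
    }
  }
}
--------------------------------------------------
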
[float]
=== Avoid scripts

In general, scripts should be avoided. If they are absolutely needed, you
should prefer the `painless` and `expressions` engines.

[float]
=== Search rounded dates

Queries on date fields that use `now` are typically not cacheable since the
range that is being matched changes all the time. However, switching to a
rounded date is often acceptable in terms of user experience, and has the
benefit of making better use of the query cache.

For instance, the query below:

[source,js]
--------------------------------------------------
PUT index/type/1
{
  "my_date": "2016-05-11T16:30:55.328Z"
}

GET index/_search
{
  "query": {
    "constant_score": {
      "filter": {
        "range": {
          "my_date": {
            "gte": "now-1h",
            "lte": "now"
          }
        }
      }
    }
  }
}
--------------------------------------------------
// CONSOLE

could be replaced with the following query:

[source,js]
--------------------------------------------------
GET index/_search
{
  "query": {
    "constant_score": {
      "filter": {
        "range": {
          "my_date": {
            "gte": "now-1h/m",
            "lte": "now/m"
          }
        }
      }
    }
  }
}
--------------------------------------------------
// CONSOLE
// TEST[continued]

In that case we rounded to the minute, so if the current time is `16:31:29`,
the range query will match everything whose value of the `my_date` field is
between `15:31:00` and `16:31:59`. And if several users run a query that
contains this range in the same minute, the query cache could help speed things
up a bit. The longer the interval that is used for rounding, the more the query
cache can help, but beware that overly aggressive rounding might also hurt user
experience.

NOTE: It might be tempting to split ranges into a large cacheable part and
smaller, non-cacheable parts in order to be able to leverage the query cache,
as shown below:

[source,js]
--------------------------------------------------
GET index/_search
{
  "query": {
    "constant_score": {
      "filter": {
        "bool": {
          "should": [
            {
              "range": {
                "my_date": {
                  "gte": "now-1h",
                  "lte": "now-1h/m"
                }
              }
            },
            {
              "range": {
                "my_date": {
                  "gt": "now-1h/m",
                  "lt": "now/m"
                }
              }
            },
            {
              "range": {
                "my_date": {
                  "gte": "now/m",
                  "lte": "now"
                }
              }
            }
          ]
        }
      }
    }
  }
}
--------------------------------------------------
// CONSOLE
// TEST[continued]

However, such a practice might make the query run slower in some cases since the
overhead introduced by the `bool` query may defeat the savings from better
leveraging the query cache.

[float]
=== Force-merge read-only indices

Indices that are read-only would benefit from being
<<indices-forcemerge,merged down to a single segment>>. This is typically the
case with time-based indices: only the index for the current time frame is
getting new documents while older indices are read-only.

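For instance, a daily index that no longer receives writes could be merged down
to a single segment with the <<indices-forcemerge,force merge API>>. A sketch,
with a hypothetical index name:

[source,js]
--------------------------------------------------
POST logs-2016.05.10/_forcemerge?max_num_segments=1
--------------------------------------------------
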
IMPORTANT: Don't force-merge indices that are still being written to -- leave
merging to the background merge process.

[float]
=== Warm up global ordinals

Global ordinals are a data structure that is used in order to run
<<search-aggregations-bucket-terms-aggregation,`terms`>> aggregations on
<<keyword,`keyword`>> fields. They are loaded lazily in memory because
Elasticsearch does not know which fields will be used in `terms` aggregations
and which fields won't. You can tell Elasticsearch to load global ordinals
eagerly at refresh-time by configuring mappings as described below:

[source,js]
--------------------------------------------------
PUT index
{
  "mappings": {
    "type": {
      "properties": {
        "foo": {
          "type": "keyword",
          "eager_global_ordinals": true
        }
      }
    }
  }
}
--------------------------------------------------
// CONSOLE

[float]
=== Warm up the filesystem cache

If the machine running Elasticsearch is restarted, the filesystem cache will be
empty, so it will take some time before the operating system loads hot regions
of the index into memory so that search operations are fast. You can explicitly
tell the operating system which files should be loaded into memory eagerly
depending on the file extension using the <<file-system,`index.store.preload`>>
setting.

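For instance, the sketch below asks the operating system to eagerly load the
norms (`nvd`) and doc values (`dvd`) files of a new index into the filesystem
cache; the right list of extensions to preload depends on your workload:

[source,js]
--------------------------------------------------
PUT index
{
  "settings": {
    "index.store.preload": ["nvd", "dvd"]
  }
}
--------------------------------------------------
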
WARNING: Loading data into the filesystem cache eagerly on too many indices or
too many files will make search _slower_ if the filesystem cache is not large
enough to hold all the data. Use with caution.

[float]
=== Map identifiers as `keyword`

When you have numeric identifiers in your documents, it is tempting to map them
as numbers, which is consistent with their JSON type. However, the way that
Elasticsearch indexes numbers is optimized for `range` queries, while `keyword`
fields are better at `term` queries. Since identifiers are never used in `range`
queries, they should be mapped as a `keyword`.

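A minimal sketch, with hypothetical index, type, and field names: the identifier
is mapped as a `keyword` and then looked up with a `term` query rather than a
numeric query:

[source,js]
--------------------------------------------------
PUT products
{
  "mappings": {
    "product": {
      "properties": {
        "product_id": {
          "type": "keyword"
        }
      }
    }
  }
}

GET products/_search
{
  "query": {
    "term": {
      "product_id": "123456"
    }
  }
}
--------------------------------------------------
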
[float]
=== Use index sorting to speed up conjunctions

<<index-modules-index-sorting,Index sorting>> can be useful in order to make
conjunctions faster at the cost of slightly slower indexing. Read more about it
in the <<index-modules-index-sorting-conjunctions,index sorting documentation>>.

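As a minimal sketch (index, type, and field names are hypothetical), an index
sort is configured at index creation time:

[source,js]
--------------------------------------------------
PUT events
{
  "settings": {
    "index": {
      "sort.field": "timestamp",
      "sort.order": "desc"
    }
  },
  "mappings": {
    "event": {
      "properties": {
        "timestamp": {
          "type": "date"
        }
      }
    }
  }
}
--------------------------------------------------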