[[search-shards]]
== Search Shards API

The search shards API returns the indices and shards that a search request would
be executed against. This can give useful feedback for working out issues or
planning optimizations with routing and shard preferences. When filtered aliases
are used, the filter is returned as part of the `indices` section (added in 5.1.0).

The `index` may be a single value, or comma-separated.
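For example, a request can target several indices at once by listing them with
commas. The second index name below, `other_index`, is purely illustrative and
assumes such an index exists:

[source,js]
--------------------------------------------------
GET /twitter,other_index/_search_shards
--------------------------------------------------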

[float]
=== Usage

Full example:

[source,js]
--------------------------------------------------
GET /twitter/_search_shards
--------------------------------------------------
// CONSOLE
// TEST[s/^/PUT twitter\n/]

This will yield the following result:

[source,js]
--------------------------------------------------
{
  "nodes": ...,
  "indices" : {
    "twitter": { }
  },
  "shards": [
    [
      {
        "index": "twitter",
        "node": "JklnKbD7Tyqi9TP3_Q_tBg",
        "primary": true,
        "shard": 0,
        "state": "STARTED",
        "allocation_id": {"id":"0TvkCyF7TAmM1wHP4a42-A"},
        "relocating_node": null
      }
    ],
    [
      {
        "index": "twitter",
        "node": "JklnKbD7Tyqi9TP3_Q_tBg",
        "primary": true,
        "shard": 1,
        "state": "STARTED",
        "allocation_id": {"id":"fMju3hd1QHWmWrIgFnI4Ww"},
        "relocating_node": null
      }
    ],
    [
      {
        "index": "twitter",
        "node": "JklnKbD7Tyqi9TP3_Q_tBg",
        "primary": true,
        "shard": 2,
        "state": "STARTED",
        "allocation_id": {"id":"Nwl0wbMBTHCWjEEbGYGapg"},
        "relocating_node": null
      }
    ],
    [
      {
        "index": "twitter",
        "node": "JklnKbD7Tyqi9TP3_Q_tBg",
        "primary": true,
        "shard": 3,
        "state": "STARTED",
        "allocation_id": {"id":"bU_KLGJISbW0RejwnwDPKw"},
        "relocating_node": null
      }
    ],
    [
      {
        "index": "twitter",
        "node": "JklnKbD7Tyqi9TP3_Q_tBg",
        "primary": true,
        "shard": 4,
        "state": "STARTED",
        "allocation_id": {"id":"DMs7_giNSwmdqVukF7UydA"},
        "relocating_node": null
      }
    ]
  ]
}
--------------------------------------------------
// TESTRESPONSE[s/"nodes": ...,/"nodes": $body.nodes,/]
// TESTRESPONSE[s/JklnKbD7Tyqi9TP3_Q_tBg/$body.shards.0.0.node/]
// TESTRESPONSE[s/0TvkCyF7TAmM1wHP4a42-A/$body.shards.0.0.allocation_id.id/]
// TESTRESPONSE[s/fMju3hd1QHWmWrIgFnI4Ww/$body.shards.1.0.allocation_id.id/]
// TESTRESPONSE[s/Nwl0wbMBTHCWjEEbGYGapg/$body.shards.2.0.allocation_id.id/]
// TESTRESPONSE[s/bU_KLGJISbW0RejwnwDPKw/$body.shards.3.0.allocation_id.id/]
// TESTRESPONSE[s/DMs7_giNSwmdqVukF7UydA/$body.shards.4.0.allocation_id.id/]

And specifying the same request, this time with a routing value:

[source,js]
--------------------------------------------------
GET /twitter/_search_shards?routing=foo,bar
--------------------------------------------------
// CONSOLE
// TEST[s/^/PUT twitter\n/]

This will yield the following result:

[source,js]
--------------------------------------------------
{
  "nodes": ...,
  "indices" : {
    "twitter": { }
  },
  "shards": [
    [
      {
        "index": "twitter",
        "node": "JklnKbD7Tyqi9TP3_Q_tBg",
        "primary": true,
        "shard": 2,
        "state": "STARTED",
        "allocation_id": {"id":"fMju3hd1QHWmWrIgFnI4Ww"},
        "relocating_node": null
      }
    ],
    [
      {
        "index": "twitter",
        "node": "JklnKbD7Tyqi9TP3_Q_tBg",
        "primary": true,
        "shard": 3,
        "state": "STARTED",
        "allocation_id": {"id":"0TvkCyF7TAmM1wHP4a42-A"},
        "relocating_node": null
      }
    ]
  ]
}
--------------------------------------------------
// TESTRESPONSE[s/"nodes": ...,/"nodes": $body.nodes,/]
// TESTRESPONSE[s/JklnKbD7Tyqi9TP3_Q_tBg/$body.shards.1.0.node/]
// TESTRESPONSE[s/0TvkCyF7TAmM1wHP4a42-A/$body.shards.1.0.allocation_id.id/]
// TESTRESPONSE[s/fMju3hd1QHWmWrIgFnI4Ww/$body.shards.0.0.allocation_id.id/]

This time the search will only be executed against two of the shards, because
routing values have been specified.
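
With the default index settings, a single routing value resolves to exactly one
shard, so the following illustrative request would return just one shard group:

[source,js]
--------------------------------------------------
GET /twitter/_search_shards?routing=foo
--------------------------------------------------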

[float]
=== All parameters:

[horizontal]
`routing`::
    A comma-separated list of routing values to take into account when
    determining which shards a request would be executed against.

`preference`::
    Controls a `preference` of which shard replicas to execute the search
    request on. By default, the operation is randomized between the shard
    replicas. See the link:search-request-preference.html[preference]
    documentation for a list of all acceptable values.

`local`::
    A boolean value that specifies whether to read the cluster state locally in
    order to determine where shards are allocated, instead of using the master
    node's cluster state. An illustrative request combining `preference` and
    `local` is shown below.
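
For example, a request that prefers shard copies on the coordinating node and
resolves them from that node's local cluster state could combine both
parameters as follows. This is only a sketch; `_local` is one of the values
described in the preference documentation linked above:

[source,js]
--------------------------------------------------
GET /twitter/_search_shards?preference=_local&local=true
--------------------------------------------------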