Fix asciidoc structure for sliced reindex
Asciidoc likes headings just so and will complain and fail the docs build without it. Related to #20767
parent a13a050271 · commit 7ff9ba1604
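
The shape of the fix, sketched from the first delete-by-query hunk below (the reindex and update-by-query files get the same treatment with their own anchor ids): the parent `=== Slicing` heading is dropped and an explicit anchor is attached to the heading that `[float]` modifies, so each slicing section carries its own id.

Before:

    [float]
    === Slicing

    === Manually slicing

After:

    [float]
    [[docs-delete-by-query-manual-slice]]
    === Manually slicing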
--- a/docs/reference/docs/delete-by-query.asciidoc
+++ b/docs/reference/docs/delete-by-query.asciidoc
@@ -341,8 +341,7 @@ take effect on after completing the current batch. This prevents scroll
 timeouts.
 
 [float]
-=== Slicing
-
+[[docs-delete-by-query-manual-slice]]
 === Manually slicing
 
 Delete-by-query supports <<sliced-scroll>> allowing you to manually parallelize
@@ -413,7 +412,9 @@ Which results in a sensible `total` like this one:
 ----------------------------------------------------------------
 // TESTRESPONSE
 
-==== Automatic slicing
+[float]
+[[docs-delete-by-query-automatic-slice]]
+=== Automatic slicing
 
 You can also let delete-by-query automatically parallelize using
 <<sliced-scroll>> to slice on `_uid`:
@@ -487,7 +488,9 @@ above about distribution being uneven and you should conclude that the using
 * Each sub-requests gets a slightly different snapshot of the source index
 though these are all taken at approximately the same time.
 
-==== Picking the number of slices
+[float]
+[[docs-delete-by-query-picking-slices]]
+=== Picking the number of slices
 
 At this point we have a few recommendations around the number of `slices` to
 use (the `max` parameter in the slice API if manually parallelizing):
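
For context on the sections being retitled above, a minimal sketch of the two slicing modes the delete-by-query page describes, written as the docs' own `[source,js]` snippets. The index name `twitter` and the match-all queries are illustrative assumptions, not taken from this diff.

Manual slicing, one request per slice `id`, with `id` running from 0 to `max` - 1:

    [source,js]
    ----
    POST twitter/_delete_by_query
    {
      "slice": { "id": 0, "max": 2 },
      "query": { "match_all": {} }
    }
    ----

Automatic slicing, where the `slices` URL parameter splits the request into sub-requests internally:

    [source,js]
    ----
    POST twitter/_delete_by_query?slices=5
    {
      "query": { "match_all": {} }
    }
    ----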
--- a/docs/reference/docs/reindex.asciidoc
+++ b/docs/reference/docs/reindex.asciidoc
@@ -737,8 +737,7 @@ and it'll look like:
 Or you can search by `tag` or whatever you want.
 
 [float]
-=== Slicing
-
+[[docs-reindex-manual-slice]]
 ==== Manual slicing
 Reindex supports <<sliced-scroll>>, allowing you to manually parallelize the
 process relatively easily:
@@ -797,7 +796,9 @@ Which results in a sensible `total` like this one:
 ----------------------------------------------------------------
 // TESTRESPONSE
 
-==== Automatic slicing
+[float]
+[[docs-reindex-automatic-slice]]
+=== Automatic slicing
 
 You can also let reindex automatically parallelize using <<sliced-scroll>> to
 slice on `_uid`:
@@ -860,7 +861,9 @@ above about distribution being uneven and you should conclude that the using
 * Each sub-requests gets a slightly different snapshot of the source index
 though these are all taken at approximately the same time.
 
-==== Picking the number of slices
+[float]
+[[docs-reindex-picking-slices]]
+=== Picking the number of slices
 
 At this point we have a few recommendations around the number of `slices` to
 use (the `max` parameter in the slice API if manually parallelizing):
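
Reindex takes the same two forms, except that the manual `slice` clause sits inside `source` rather than at the top level of the body. Again an illustrative sketch; the index names are assumptions:

    [source,js]
    ----
    POST _reindex
    {
      "source": {
        "index": "twitter",
        "slice": { "id": 0, "max": 2 }
      },
      "dest": { "index": "new_twitter" }
    }
    ----

    [source,js]
    ----
    POST _reindex?slices=5
    {
      "source": { "index": "twitter" },
      "dest": { "index": "new_twitter" }
    }
    ----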
--- a/docs/reference/docs/update-by-query.asciidoc
+++ b/docs/reference/docs/update-by-query.asciidoc
@@ -406,8 +406,7 @@ take effect on after completing the current batch. This prevents scroll
 timeouts.
 
 [float]
-=== Slicing
-
+[[docs-update-by-query-manual-slice]]
 ==== Manual slicing
 Update-by-query supports <<sliced-scroll>> allowing you to manually parallelize
 the process relatively easily:
@@ -460,7 +459,9 @@ Which results in a sensible `total` like this one:
 ----------------------------------------------------------------
 // TESTRESPONSE
 
-==== Automatic slicing
+[float]
+[[docs-update-by-query-automatic-slice]]
+=== Automatic slicing
 
 You can also let update-by-query automatically parallelize using
 <<sliced-scroll>> to slice on `_uid`:
@@ -521,7 +522,9 @@ above about distribution being uneven and you should conclude that the using
 * Each sub-requests gets a slightly different snapshot of the source index
 though these are all taken at approximately the same time.
 
-==== Picking the number of slices
+[float]
+[[docs-update-by-query-picking-slices]]
+=== Picking the number of slices
 
 At this point we have a few recommendations around the number of `slices` to
 use (the `max` parameter in the slice API if manually parallelizing):
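
Update-by-query follows the delete-by-query shape: `slice` at the top level of the body for manual slicing, or the `slices` URL parameter for automatic slicing. Illustrative sketch, index name assumed:

    [source,js]
    ----
    POST twitter/_update_by_query
    {
      "slice": { "id": 0, "max": 2 },
      "query": { "match_all": {} }
    }
    ----

    POST twitter/_update_by_query?slices=5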