CONSOLEify some of the docs documentation

delete, index, and update.

Relates to #18160
Nik Everett 2017-04-24 17:06:54 -04:00
parent e429d66956
commit db93735321
4 changed files with 57 additions and 33 deletions

docs/build.gradle

@@ -64,11 +64,8 @@ buildRestTests.expectedUnconvertedCandidates = [
'reference/cluster/stats.asciidoc',
'reference/cluster/tasks.asciidoc',
'reference/docs/delete-by-query.asciidoc',
-'reference/docs/delete.asciidoc',
-'reference/docs/index_.asciidoc',
'reference/docs/reindex.asciidoc',
'reference/docs/update-by-query.asciidoc',
-'reference/docs/update.asciidoc',
'reference/index-modules/similarity.asciidoc',
'reference/index-modules/store.asciidoc',
'reference/index-modules/translog.asciidoc',

docs/reference/docs/delete.asciidoc

@@ -8,8 +8,10 @@ from an index called twitter, under a type called tweet, with id valued
[source,js]
--------------------------------------------------
-$ curl -XDELETE 'http://localhost:9200/twitter/tweet/1'
+DELETE /twitter/tweet/1
--------------------------------------------------
+// CONSOLE
+// TEST[setup:twitter]
The result of the above delete operation is:
@@ -17,18 +19,21 @@ The result of the above delete operation is:
--------------------------------------------------
{
    "_shards" : {
-       "total" : 10,
+       "total" : 2,
        "failed" : 0,
-       "successful" : 10
+       "successful" : 2
    },
    "found" : true,
    "_index" : "twitter",
    "_type" : "tweet",
    "_id" : "1",
    "_version" : 2,
+   "_primary_term": 1,
+   "_seq_no": 5,
    "result": "deleted"
}
--------------------------------------------------
+// TESTRESPONSE[s/"successful" : 2/"successful" : 1/]
[float]
[[delete-versioning]]
@@ -48,10 +53,26 @@ When indexing using the ability to control the routing, in order to
delete a document, the routing value should also be provided. For
example:
+////
+Example to delete with routing
+[source,js]
+--------------------------------------------------
-$ curl -XDELETE 'http://localhost:9200/twitter/tweet/1?routing=kimchy'
+PUT /twitter/tweet/1?routing=kimchy
+{
+   "test": "test"
+}
+--------------------------------------------------
+// CONSOLE
+////
[source,js]
--------------------------------------------------
+DELETE /twitter/tweet/1?routing=kimchy
--------------------------------------------------
+// CONSOLE
+// TEST[continued]
The above will delete a tweet with id 1, but will be routed based on the
user. Note that issuing a delete without the correct routing will cause the
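The routing rule described above can be sketched in a few lines. This is a hedged Python analogy, not Elasticsearch's implementation: the real engine hashes the routing value with Murmur3 and reduces it modulo the number of primary shards, whereas `shard_for` and the MD5 digest here are illustrative stand-ins.

```python
# Sketch: how a routing value picks a shard (simplified analogy;
# Elasticsearch actually uses a Murmur3 hash, not MD5).
import hashlib

def shard_for(routing: str, number_of_primary_shards: int) -> int:
    # Stable hash of the routing value, reduced modulo the shard count.
    digest = hashlib.md5(routing.encode("utf-8")).hexdigest()
    return int(digest, 16) % number_of_primary_shards

# The same routing value always lands on the same shard, which is why a
# delete issued with the wrong routing may never reach the document.
assert shard_for("kimchy", 5) == shard_for("kimchy", 5)
```

Because the mapping is deterministic, a delete sent with a different routing value is resolved to a (likely) different shard, where the document does not exist.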
@@ -130,5 +151,7 @@ to 5 minutes:
[source,js]
--------------------------------------------------
-$ curl -XDELETE 'http://localhost:9200/twitter/tweet/1?timeout=5m'
+DELETE /twitter/tweet/1?timeout=5m
--------------------------------------------------
+// CONSOLE
+// TEST[setup:twitter]

docs/reference/docs/index_.asciidoc

@@ -46,9 +46,9 @@ The `_shards` header provides information about the replication process of the i
The index operation is successful when `successful` is at least 1.
NOTE: Replica shards may not all be started when an indexing operation successfully returns (by default, only the
primary is required, but this behavior can be <<index-wait-for-active-shards,changed>>). In that case,
`total` will be equal to the total shards based on the `number_of_replicas` setting and `successful` will be
equal to the number of shards started (primary plus replicas). If there were no failures, the `failed` will be 0.
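The success rule stated above (`successful` at least 1, i.e. the primary acknowledged the write) amounts to a one-line check on the `_shards` header. A minimal sketch; `index_op_succeeded` is a hypothetical helper, not part of any client library:

```python
def index_op_succeeded(shards_header: dict) -> bool:
    # Per the docs: the operation succeeded if at least one shard copy
    # (the primary) acknowledged the write, regardless of replicas.
    return shards_header.get("successful", 0) >= 1

# Example: primary acknowledged, replica not yet started.
response = {"_shards": {"total": 2, "successful": 1, "failed": 0}}
assert index_op_succeeded(response["_shards"])
```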
[float]
@@ -101,6 +101,7 @@ PUT twitter/tweet/1?version=2
}
--------------------------------------------------
// CONSOLE
+// TEST[continued]
// TEST[catch: conflict]
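The `?version=2` request above (with its expected conflict) is optimistic concurrency control: the write applies only if the stored version matches the one the client supplies. A rough Python sketch of that compare-and-set rule, with a plain dict standing in for the index; `put_versioned` and `VersionConflictError` are illustrative names, not an Elasticsearch API:

```python
class VersionConflictError(Exception):
    """Raised when the supplied version does not match the stored one."""

def put_versioned(store: dict, doc_id: str, body: dict, version: int) -> int:
    # Compare-and-set: reject the write unless the stored version is
    # exactly the version the client thinks it is updating.
    _, current = store.get(doc_id, (None, 0))
    if current != version:
        raise VersionConflictError(f"current [{current}] != provided [{version}]")
    store[doc_id] = (body, current + 1)
    return current + 1

docs = {"1": ({"message": "original"}, 2)}
assert put_versioned(docs, "1", {"message": "updated"}, 2) == 3
try:
    put_versioned(docs, "1", {"message": "stale write"}, 2)  # doc is now at 3
except VersionConflictError:
    pass  # this is the conflict the TEST[catch: conflict] directive expects
```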
*NOTE:* versioning is completely real time, and is not affected by the
@@ -312,46 +313,46 @@ if needed, the update is distributed to applicable replicas.
[[index-wait-for-active-shards]]
=== Wait For Active Shards
To improve the resiliency of writes to the system, indexing operations
can be configured to wait for a certain number of active shard copies
before proceeding with the operation. If the requisite number of active
shard copies are not available, then the write operation must wait and
retry, until either the requisite shard copies have started or a timeout
occurs. By default, write operations only wait for the primary shards
to be active before proceeding (i.e. `wait_for_active_shards=1`).
This default can be overridden in the index settings dynamically
by setting `index.write.wait_for_active_shards`. To alter this behavior
per operation, the `wait_for_active_shards` request parameter can be used.
Valid values are `all` or any positive integer up to the total number
of configured copies per shard in the index (which is `number_of_replicas+1`).
Specifying a negative value or a number greater than the number of
shard copies will throw an error.
For example, suppose we have a cluster of three nodes, `A`, `B`, and `C` and
we create an index `index` with the number of replicas set to 3 (resulting in
4 shard copies, one more copy than there are nodes). If we
attempt an indexing operation, by default the operation will only ensure
the primary copy of each shard is available before proceeding. This means
that even if `B` and `C` went down, and `A` hosted the primary shard copies,
the indexing operation would still proceed with only one copy of the data.
If `wait_for_active_shards` is set on the request to `3` (and all 3 nodes
are up), then the indexing operation will require 3 active shard copies
before proceeding, a requirement which should be met because there are 3
active nodes in the cluster, each one holding a copy of the shard. However,
if we set `wait_for_active_shards` to `all` (or to `4`, which is the same),
the indexing operation will not proceed as we do not have all 4 copies of
each shard active in the index. The operation will timeout
unless a new node is brought up in the cluster to host the fourth copy of
the shard.
It is important to note that this setting greatly reduces the chances of
the write operation not writing to the requisite number of shard copies,
but it does not completely eliminate the possibility, because this check
occurs before the write operation commences. Once the write operation
is underway, it is still possible for replication to fail on any number of
shard copies but still succeed on the primary. The `_shards` section of the
write operation's response reveals the number of shard copies on which
replication succeeded/failed.
[source,js]
@@ -364,6 +365,7 @@ replication succeeded/failed.
}
}
--------------------------------------------------
+// NOTCONSOLE
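The shard-copy arithmetic in the three-node example above (4 copies configured, at most 3 active) can be sketched as a small pre-flight check. `can_proceed` is a hypothetical helper for illustration, not an Elasticsearch API:

```python
def can_proceed(wait_for_active_shards, active_copies: int, total_copies: int) -> bool:
    # total_copies = number_of_replicas + 1; "all" requires every copy.
    if wait_for_active_shards == "all":
        required = total_copies
    else:
        required = int(wait_for_active_shards)
    # Negative values, zero, or more than the configured copies are errors.
    if required < 1 or required > total_copies:
        raise ValueError("wait_for_active_shards must be 'all' or 1..total_copies")
    return active_copies >= required

# Scenario from the text: number_of_replicas=3 -> 4 copies, but only
# 3 nodes, so at most 3 copies can ever be active at once.
assert can_proceed(1, 3, 4)          # default: only the primary is required
assert can_proceed(3, 3, 4)          # met: 3 active copies on 3 nodes
assert not can_proceed("all", 3, 4)  # the 4th copy has no node -> would time out
```

Note that, as the text says, this is only a pre-write check: passing it does not guarantee replication succeeds on every copy afterwards.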
[float]
[[index-refresh]]

docs/reference/docs/update.asciidoc

@@ -75,7 +75,7 @@ We can also add a new field to the document:
--------------------------------------------------
POST test/type1/1/_update
{
-    "script" : "ctx._source.new_field = \"value_of_new_field\""
+    "script" : "ctx._source.new_field = 'value_of_new_field'"
}
--------------------------------------------------
// CONSOLE
@@ -87,7 +87,7 @@ Or remove a field from the document:
--------------------------------------------------
POST test/type1/1/_update
{
-    "script" : "ctx._source.remove(\"new_field\")"
+    "script" : "ctx._source.remove('new_field')"
}
--------------------------------------------------
// CONSOLE
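The two scripts above only mutate the document source. The same effect on a plain Python dict, as an analogy for what the Painless scripts do to `ctx._source` on the server (Painless itself runs inside Elasticsearch; this is not client code):

```python
# Emulate what the two update scripts above do to the document source.
# A plain dict stands in for ctx._source.
source = {"counter": 1}

# "ctx._source.new_field = 'value_of_new_field'"
source["new_field"] = "value_of_new_field"
assert source == {"counter": 1, "new_field": "value_of_new_field"}

# "ctx._source.remove('new_field')"
source.pop("new_field", None)
assert source == {"counter": 1}
```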
@@ -102,7 +102,7 @@ the doc if the `tags` field contain `green`, otherwise it does nothing
POST test/type1/1/_update
{
"script" : {
-    "inline": "if (ctx._source.tags.contains(params.tag)) { ctx.op = \"delete\" } else { ctx.op = \"none\" }",
+    "inline": "if (ctx._source.tags.contains(params.tag)) { ctx.op = 'delete' } else { ctx.op = 'none' }",
"lang": "painless",
"params" : {
"tag" : "green"
@@ -242,6 +242,9 @@ POST sessions/session/dh3sgudg8gsrgl/_update
"upsert" : {}
}
--------------------------------------------------
+// CONSOLE
+// TEST[s/"id": "my_web_session_summariser"/"inline": "ctx._source.page_view_event = params.pageViewEvent"/]
+// TEST[continued]
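The upsert behavior exercised above — index the `upsert` body when the document is missing, otherwise run the script (and with `scripted_upsert`, run the script on the upsert body too) — can be sketched as follows. `scripted_update` is an illustrative stand-in under those assumptions, not a client API:

```python
def scripted_update(store: dict, doc_id: str, script, upsert: dict,
                    scripted_upsert: bool = False) -> dict:
    # Missing document: start from the upsert body; with scripted_upsert
    # the script also runs against that fresh body.
    if doc_id not in store:
        doc = dict(upsert)
        if scripted_upsert:
            script(doc)
        store[doc_id] = doc
    else:
        # Existing document: the script runs against the stored source.
        script(store[doc_id])
    return store[doc_id]

# A script analogous to summarising events: count page views.
store = {}
bump = lambda doc: doc.__setitem__("views", doc.get("views", 0) + 1)
scripted_update(store, "dh3sgudg8gsrgl", bump, upsert={}, scripted_upsert=True)
scripted_update(store, "dh3sgudg8gsrgl", bump, upsert={}, scripted_upsert=True)
assert store["dh3sgudg8gsrgl"] == {"views": 2}
```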
[float]
==== `doc_as_upsert`
@@ -263,7 +266,6 @@ POST test/type1/1/_update
// CONSOLE
// TEST[continued]
[float]
=== Parameters