Docs: Use "js" instead of "json" and "sh" instead of "shell" for source highlighting

Clinton Gormley 2015-07-14 18:14:09 +02:00
parent 01601e9a3d
commit 2b512f1f29
37 changed files with 114 additions and 114 deletions
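In each changed pair below, the removed line is shown first, followed by its replacement. The sweep is mechanical: `[source,json]` becomes `[source,js]`, and `[source,shell]` (or `[source,bash]`) becomes `[source,sh]`. A bulk edit of this kind could be scripted roughly as follows — a hypothetical sketch using GNU sed; the `docs` directory path and the method actually used for this commit are assumptions:

```shell
# Hypothetical reconstruction of the sweep: rewrite asciidoc
# source-block language hints in place across the docs tree.
# json -> js, shell -> sh, bash -> sh
find docs -name '*.asciidoc' -print0 | xargs -0 sed -i \
    -e 's/^\[source,json\]$/[source,js]/' \
    -e 's/^\[source,shell\]$/[source,sh]/' \
    -e 's/^\[source,bash\]$/[source,sh]/'
```

Note that the anchored patterns (`^…$`) only touch lines that are exactly a source-block attribute, leaving occurrences of "json" or "shell" in prose and examples untouched.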


@ -31,7 +31,7 @@ type:
The `stopwords` parameter accepts either an array of stopwords:
-[source,json]
+[source,js]
------------------------------------
PUT /my_index
{
@ -50,7 +50,7 @@ PUT /my_index
or a predefined language-specific list:
-[source,json]
+[source,js]
------------------------------------
PUT /my_index
{


@ -27,7 +27,7 @@ the available commands.
Each of the commands accepts a query string parameter `v` to turn on
verbose output.
-[source,shell]
+[source,sh]
--------------------------------------------------
% curl 'localhost:9200/_cat/master?v'
id ip node
@ -41,7 +41,7 @@ EGtKWZlWQYWDmX29fUnp3Q 127.0.0.1 Grey, Sara
Each of the commands accepts a query string parameter `help` which will
output its available columns.
-[source,shell]
+[source,sh]
--------------------------------------------------
% curl 'localhost:9200/_cat/master?help'
id | node id
@ -56,7 +56,7 @@ node | node name
Each of the commands accepts a query string parameter `h` which forces
only those columns to appear.
-[source,shell]
+[source,sh]
--------------------------------------------------
% curl 'n1:9200/_cat/nodes?h=ip,port,heapPercent,name'
192.168.56.40 9300 40.3 Captain Universe
@ -87,7 +87,7 @@ off human mode. We'll use a byte-level resolution. Then we'll pipe
our output into `sort` using the appropriate column, which in this
case is the eighth one.
-[source,shell]
+[source,sh]
--------------------------------------------------
% curl '192.168.56.10:9200/_cat/indices?bytes=b' | sort -rnk8
green wiki2 3 0 10000 0 105274918 105274918


@ -4,7 +4,7 @@
`aliases` shows information about currently configured aliases to indices
including filter and routing info.
-[source,shell]
+[source,sh]
--------------------------------------------------
% curl '192.168.56.10:9200/_cat/aliases?v'
alias index filter indexRouting searchRouting


@ -4,7 +4,7 @@
`allocation` provides a snapshot of how many shards are allocated to each data node
and how much disk space they are using.
-[source,shell]
+[source,sh]
--------------------------------------------------
% curl '192.168.56.10:9200/_cat/allocation?v'
shards diskUsed diskAvail diskRatio ip node


@ -4,7 +4,7 @@
`count` provides quick access to the document count of the entire
cluster, or individual indices.
-[source,shell]
+[source,sh]
--------------------------------------------------
% curl 192.168.56.10:9200/_cat/indices
green wiki1 3 0 10000 331 168.5mb 168.5mb


@ -4,7 +4,7 @@
`fielddata` shows how much heap memory is currently being used by fielddata
on every data node in the cluster.
-[source,shell]
+[source,sh]
--------------------------------------------------
% curl '192.168.56.10:9200/_cat/fielddata?v'
id host ip node total body text
@ -15,7 +15,7 @@ yaDkp-G3R0q1AJ-HUEvkSQ myhost3 10.20.100.202 Microchip 284.6kb 109.2kb 175.3
Fields can be specified either as a query parameter, or in the URL path:
-[source,shell]
+[source,sh]
--------------------------------------------------
% curl '192.168.56.10:9200/_cat/fielddata?v&fields=body'
id host ip node total body


@ -5,7 +5,7 @@
from `/_cluster/health`. It has one option `ts` to disable the
timestamping.
-[source,shell]
+[source,sh]
--------------------------------------------------
% curl 192.168.56.10:9200/_cat/health
1384308967 18:16:07 foo green 3 3 3 3 0 0 0
@ -17,7 +17,7 @@ foo green 3 3 3 3 0 0 0 0
A common use of this command is to verify the health is consistent
across nodes:
-[source,shell]
+[source,sh]
--------------------------------------------------
% pssh -i -h list.of.cluster.hosts curl -s localhost:9200/_cat/health
[1] 20:20:52 [SUCCESS] es3.vm
@ -33,7 +33,7 @@ time. With enough shards, starting a cluster, or even recovering after
losing a node, can take time (depending on your network & disk). A way
to track its progress is by using this command in a delayed loop:
-[source,shell]
+[source,sh]
--------------------------------------------------
% while true; do curl 192.168.56.10:9200/_cat/health; sleep 120; done
1384309446 18:24:06 foo red 3 3 20 20 0 0 1812 0


@ -4,7 +4,7 @@
The `indices` command provides a cross-section of each index. This
information *spans nodes*.
-[source,shell]
+[source,sh]
--------------------------------------------------
% curl 'localhost:9200/_cat/indices/twi*?v'
health status index pri rep docs.count docs.deleted store.size pri.store.size
@ -30,7 +30,7 @@ the view of relevant stats in the context of only the primaries.
Which indices are yellow?
-[source,shell]
+[source,sh]
--------------------------------------------------
% curl localhost:9200/_cat/indices | grep ^yell
yellow open wiki 2 1 6401 1115 151.4mb 151.4mb
@ -39,7 +39,7 @@ yellow open twitter 5 1 11434 0 32mb 32mb
What's my largest index by disk usage not including replicas?
-[source,shell]
+[source,sh]
--------------------------------------------------
% curl 'localhost:9200/_cat/indices?bytes=b' | sort -rnk8
green open wiki 2 0 6401 1115 158843725 158843725
@ -49,7 +49,7 @@ green open twitter2 2 0 2030 0 6125085 6125085
How many merge operations have the shards for the `wiki` completed?
-[source,shell]
+[source,sh]
--------------------------------------------------
% curl 'localhost:9200/_cat/indices/wiki?pri&v&h=health,index,prirep,docs.count,mt'
health index docs.count mt pri.mt
@ -58,7 +58,7 @@ green wiki 9646 16 16
How much memory is used per index?
-[source,shell]
+[source,sh]
--------------------------------------------------
% curl 'localhost:9200/_cat/indices?v&h=i,tm'
i tm


@ -4,7 +4,7 @@
`master` doesn't have any extra options. It simply displays the
master's node ID, bound IP address, and node name.
-[source,shell]
+[source,sh]
--------------------------------------------------
% curl 'localhost:9200/_cat/master?v'
id ip node
@ -15,7 +15,7 @@ This information is also available via the `nodes` command, but this
is slightly shorter when all you want to do, for example, is verify
all nodes agree on the master:
-[source,shell]
+[source,sh]
--------------------------------------------------
% pssh -i -h list.of.cluster.hosts curl -s localhost:9200/_cat/master
[1] 19:16:37 [SUCCESS] es3.vm


@ -25,7 +25,7 @@ ActN 3806 192.168.56.20 9300 {version} {jdk}
The next few give a picture of your heap, memory, and load.
-[source,shell]
+[source,sh]
--------------------------------------------------
diskAvail heapPercent heapMax ramPercent ramMax load
72.1gb 31.3 93.9mb 81 239.1mb 0.24
@ -39,7 +39,7 @@ ones. How many master-eligible nodes do I have? How many client
nodes? It looks like someone restarted a node recently; which one was
it?
-[source,shell]
+[source,sh]
--------------------------------------------------
uptime data/client master name
3.5h d m Boneyard


@ -5,7 +5,7 @@
<<cluster-pending,`/_cluster/pending_tasks`>> API in a
convenient tabular format.
-[source,shell]
+[source,sh]
--------------------------------------------------
% curl 'localhost:9200/_cat/pending_tasks?v'
insertOrder timeInQueue priority source


@ -3,7 +3,7 @@
The `plugins` command provides a view per node of running plugins. This information *spans nodes*.
-[source,shell]
+[source,sh]
------------------------------------------------------------------------------
% curl 'localhost:9200/_cat/plugins?v'
name component version type isolation url


@ -12,7 +12,7 @@ way for shards to be loaded from disk when a node starts up.
As an example, here is what the recovery state of a cluster may look like when there
are no shards in transit from one node to another:
-[source,shell]
+[source,sh]
----------------------------------------------------------------------------
> curl -XGET 'localhost:9200/_cat/recovery?v'
index shard time type stage source target files percent bytes percent
@ -28,7 +28,7 @@ Now let's see what a live recovery looks like. By increasing the replica count
of our index and bringing another node online to host the replicas, we can see
what a live shard recovery looks like.
-[source,shell]
+[source,sh]
----------------------------------------------------------------------------
> curl -XPUT 'localhost:9200/wiki/_settings' -d'{"number_of_replicas":1}'
{"acknowledged":true}
@ -51,7 +51,7 @@ Finally, let's see what a snapshot recovery looks like. Assuming I have previous
made a backup of my index, I can restore it using the <<modules-snapshots,snapshot and restore>>
API.
-[source,shell]
+[source,sh]
--------------------------------------------------------------------------------
> curl -XPOST 'localhost:9200/_snapshot/imdb/snapshot_2/_restore'
{"acknowledged":true}


@ -5,7 +5,7 @@ The `segments` command provides low level information about the segments
in the shards of an index. It provides information similar to the
link:indices-segments.html[_segments] endpoint.
-[source,shell]
+[source,sh]
--------------------------------------------------
% curl 'http://localhost:9200/_cat/segments?v'
index shard prirep ip segment generation docs.count [...]
@ -14,7 +14,7 @@ test1 2 p 192.168.2.105 _0 0 1
test1 3 p 192.168.2.105 _2 2 1
--------------------------------------------------
-[source,shell]
+[source,sh]
--------------------------------------------------
[...] docs.deleted size size.memory committed searchable version compound
0 2.9kb 7818 false true 4.10.2 true


@ -7,7 +7,7 @@ docs, the bytes it takes on disk, and the node where it's located.
Here we see a single index, with three primary shards and no replicas:
-[source,shell]
+[source,sh]
--------------------------------------------------
% curl 192.168.56.20:9200/_cat/shards
wiki1 0 p STARTED 3014 31.1mb 192.168.56.10 Stiletto
@ -22,7 +22,7 @@ If you have many shards, you may wish to limit which indices show up
in the output. You can always do this with `grep`, but you can save
some bandwidth by supplying an index pattern to the end.
-[source,shell]
+[source,sh]
--------------------------------------------------
% curl 192.168.56.20:9200/_cat/shards/wiki2
wiki2 0 p STARTED 197 3.2mb 192.168.56.10 Stiletto
@ -37,7 +37,7 @@ wiki2 2 p STARTED 275 7.8mb 192.168.56.20 Commander Kraken
Let's say you've checked your health and you see two relocating
shards. Where are they from and where are they going?
-[source,shell]
+[source,sh]
--------------------------------------------------
% curl 192.168.56.10:9200/_cat/health
1384315316 20:01:56 foo green 3 3 12 6 2 0 0
@ -52,7 +52,7 @@ wiki1 1 r RELOCATING 3013 29.6mb 192.168.56.10 Stiletto -> 192.168.56.30 Frankie
Before a shard can be used, it goes through an `INITIALIZING` state.
`shards` can show you which ones.
-[source,shell]
+[source,sh]
--------------------------------------------------
% curl -XPUT 192.168.56.20:9200/_settings -d'{"number_of_replicas":1}'
{"acknowledged":true}
@ -69,7 +69,7 @@ If a shard cannot be assigned, for example you've overallocated the
number of replicas for the number of nodes in the cluster, they will
remain `UNASSIGNED`.
-[source,shell]
+[source,sh]
--------------------------------------------------
% curl -XPUT 192.168.56.20:9200/_settings -d'{"number_of_replicas":3}'
% curl 192.168.56.20:9200/_cat/health


@ -4,7 +4,7 @@
The `thread_pool` command shows cluster wide thread pool statistics per node. By default the active, queue and rejected
statistics are returned for the bulk, index and search thread pools.
-[source,shell]
+[source,sh]
--------------------------------------------------
% curl 192.168.56.10:9200/_cat/thread_pool
host1 192.168.1.35 0 0 0 0 0 0 0 0 0
@ -13,7 +13,7 @@ host2 192.168.1.36 0 0 0 0 0 0 0 0 0
The first two columns contain the host and ip of a node.
-[source,shell]
+[source,sh]
--------------------------------------------------
host ip
host1 192.168.1.35
@ -22,7 +22,7 @@ host2 192.168.1.36
The next three columns show the active queue and rejected statistics for the bulk thread pool.
-[source,shell]
+[source,sh]
--------------------------------------------------
bulk.active bulk.queue bulk.rejected
0 0 0
@ -32,7 +32,7 @@ The remaining columns show the active queue and rejected statistics of the index
Statistics for other thread pools can also be retrieved by using the `h` (header) parameter.
-[source,shell]
+[source,sh]
--------------------------------------------------
% curl 'localhost:9200/_cat/thread_pool?v&h=id,host,suggest.active,suggest.rejected,suggest.completed'
host suggest.active suggest.rejected suggest.completed


@ -23,7 +23,7 @@ These metadata attributes can be used with the
group of nodes. For instance, we can move the index `test` to either `big` or
`medium` nodes as follows:
-[source,json]
+[source,js]
------------------------
PUT test/_settings
{
@ -35,7 +35,7 @@ PUT test/_settings
Alternatively, we can move the index `test` away from the `small` nodes with
an `exclude` rule:
-[source,json]
+[source,js]
------------------------
PUT test/_settings
{
@ -48,7 +48,7 @@ Multiple rules can be specified, in which case all conditions must be
satisfied. For instance, we could move the index `test` to `big` nodes in
`rack1` with the following:
-[source,json]
+[source,js]
------------------------
PUT test/_settings
{
@ -87,7 +87,7 @@ These special attributes are also supported:
All attribute values can be specified with wildcards, eg:
-[source,json]
+[source,js]
------------------------
PUT test/_settings
{


@ -23,7 +23,7 @@ index.store.type: niofs
It is a _static_ setting that can be set on a per-index basis at index
creation time:
-[source,json]
+[source,js]
---------------------------------
PUT /my_index
{


@ -70,7 +70,7 @@ recovery without the synced flush marker would take a long time.
To check whether a shard has a marker or not, look for the `commit` section of shard stats returned by
the <<indices-stats,indices stats>> API:
-[source,bash]
+[source,sh]
--------------------------------------------------
GET /twitter/_stats/commit?level=shards
--------------------------------------------------
@ -134,7 +134,7 @@ NOTE: It is harmless to request a synced flush while there is ongoing indexing.
that are not will fail. Any shards that succeeded will have faster recovery times.
-[source,bash]
+[source,sh]
--------------------------------------------------
POST /twitter/_flush/synced
--------------------------------------------------


@ -47,7 +47,7 @@ source filtering. It can be highlighted if it is marked as stored.
The get endpoint will retransform the source if the `_source_transform`
parameter is set. Example:
-[source,bash]
+[source,sh]
--------------------------------------------------
curl -XGET "http://localhost:9200/test/example/3?pretty&_source_transform"
--------------------------------------------------


@ -189,7 +189,7 @@ for filtering or aggregations.
In case you would like to disable norms after the fact, it is possible to do so
by using the <<indices-put-mapping,PUT mapping API>>, like this:
-[source,json]
+[source,js]
------------
PUT my_index/_mapping/my_type
{


@ -126,7 +126,7 @@ rules:
* The response format always has the index name, then the section, then the
element name, for instance:
+
-[source,json]
+[source,js]
---------------
{
"my_index": {
@ -151,7 +151,7 @@ mapping`>>, <<indices-get-field-mapping,`get-field-mapping`>>,
Previously a document could be indexed as itself, or wrapped in an outer
object which specified the `type` name:
-[source,json]
+[source,js]
---------------
PUT /my_index/my_type/1
{
@ -173,7 +173,7 @@ While the `search` API takes a top-level `query` parameter, the
<<search-validate,`validate-query`>> requests expected the whole body to be a
query. These now _require_ a top-level `query` parameter:
-[source,json]
+[source,js]
---------------
GET /_count
{
@ -194,7 +194,7 @@ results AFTER aggregations have been calculated.
This example counts the top colors in all matching docs, but only returns docs
with color `red`:
-[source,json]
+[source,js]
---------------
GET /_search
{
@ -221,7 +221,7 @@ Multi-fields are dead! Long live multi-fields! Well, the field type
(excluding `object` and `nested`) now accept a `fields` parameter. It's the
same thing, but nicer. Instead of:
-[source,json]
+[source,js]
---------------
"title": {
"type": "multi_field",
@ -234,7 +234,7 @@ same thing, but nicer. Instead of:
you can now write:
-[source,json]
+[source,js]
---------------
"title": {
"type": "string",
@ -322,7 +322,7 @@ parameters instead.
* Settings, like `index.analysis.analyzer.default` are now returned as proper
nested JSON objects, which makes them easier to work with programmatically:
+
-[source,json]
+[source,js]
---------------
{
"index": {


@ -90,7 +90,7 @@ Script fields in 1.x were only returned as a single value. So even if the return
value of a script used to be list, it would be returned as an array containing
a single value that is a list too, such as:
-[source,json]
+[source,js]
---------------
"fields": {
"my_field": [
@ -106,7 +106,7 @@ In elasticsearch 2.x, scripts that return a list of values are considered as
multivalued fields. So the same example would return the following response,
with values in a single array.
-[source,json]
+[source,js]
---------------
"fields": {
"my_field": [
@ -200,7 +200,7 @@ Types can no longer be specified on fields within queries. Instead, specify typ
The following is an example query in 1.x over types `t1` and `t2`:
-[source,json]
+[source,js]
---------------
curl -XGET 'localhost:9200/index/_search'
{
@ -217,7 +217,7 @@ curl -XGET 'localhost:9200/index/_search'
In 2.0, the query should look like the following:
-[source,json]
+[source,js]
---------------
curl -XGET 'localhost:9200/index/t1,t2/_search'
{
@ -240,7 +240,7 @@ The following example illustrates the difference between 1.x and 2.0.
Given these mappings:
-[source,json]
+[source,js]
---------------
curl -XPUT 'localhost:9200/index'
{
@ -262,7 +262,7 @@ curl -XPUT 'localhost:9200/index'
The following query was possible in 1.x:
-[source,json]
+[source,js]
---------------
curl -XGET 'localhost:9200/index/type/_search'
{
@ -274,7 +274,7 @@ curl -XGET 'localhost:9200/index/type/_search'
In 2.0, the same query should now be:
-[source,json]
+[source,js]
---------------
curl -XGET 'localhost:9200/index/type/_search'
{
@ -347,7 +347,7 @@ In addition, terms aggregations use a custom formatter for boolean (like for
dates and ip addresses, which are also backed by numbers) in order to return
the user-friendly representation of boolean fields: `false`/`true`:
-[source,json]
+[source,js]
---------------
"buckets": [
{
@ -486,7 +486,7 @@ The `filtered` query is deprecated. Instead you should use a `bool` query with
a `must` clause for the query and a `filter` clause for the filter. For instance
the below query:
-[source,json]
+[source,js]
---------------
{
"filtered": {
@ -500,7 +500,7 @@ the below query:
}
---------------
can be replaced with
[source,json]
[source,js]
---------------
{
"bool": {
@ -707,7 +707,7 @@ put under `fields` like regular stored fields.
curl -XGET 'localhost:9200/test/_search?fields=_timestamp,foo'
---------------
-[source,json]
+[source,js]
---------------
{
[...]


@ -12,7 +12,7 @@ node to other nodes in the cluster before shutting it down.
For instance, we could decommission a node using its IP address as follows:
-[source,json]
+[source,js]
--------------------------------------------------
PUT /_cluster/settings
{
@ -57,7 +57,7 @@ These special attributes are also supported:
All attribute values can be specified with wildcards, eg:
-[source,json]
+[source,js]
------------------------
PUT _cluster/settings
{


@ -24,7 +24,7 @@ The settings which control logging can be updated dynamically with the
`logger.` prefix. For instance, to increase the logging level of the
`indices.recovery` module to `DEBUG`, issue this request:
-[source,json]
+[source,js]
-------------------------------
PUT /_cluster/settings
{


@ -40,7 +40,7 @@ evicted.
The cache can be expired manually with the <<indices-clearcache,`clear-cache` API>>:
-[source,json]
+[source,js]
------------------------
curl -XPOST 'localhost:9200/kimchy,elasticsearch/_cache/clear?request_cache=true'
------------------------
@ -51,7 +51,7 @@ curl -XPOST 'localhost:9200/kimchy,elasticsearch/_cache/clear?request_cache=true
The cache is not enabled by default, but can be enabled when creating a new
index as follows:
-[source,json]
+[source,js]
-----------------------------
curl -XPUT localhost:9200/my_index -d'
{
@ -65,7 +65,7 @@ curl -XPUT localhost:9200/my_index -d'
It can also be enabled or disabled dynamically on an existing index with the
<<indices-update-settings,`update-settings`>> API:
-[source,json]
+[source,js]
-----------------------------
curl -XPUT localhost:9200/my_index/_settings -d'
{ "index.requests.cache.enable": true }
@ -78,7 +78,7 @@ curl -XPUT localhost:9200/my_index/_settings -d'
The `query_cache` query-string parameter can be used to enable or disable
caching on a *per-request* basis. If set, it overrides the index-level setting:
-[source,json]
+[source,js]
-----------------------------
curl 'localhost:9200/my_index/_search?request_cache=true' -d'
{
@ -131,14 +131,14 @@ setting is provided for completeness' sake only.
The size of the cache (in bytes) and the number of evictions can be viewed
by index, with the <<indices-stats,`indices-stats`>> API:
-[source,json]
+[source,js]
------------------------
curl 'localhost:9200/_stats/request_cache?pretty&human'
------------------------
or by node with the <<cluster-nodes-stats,`nodes-stats`>> API:
-[source,json]
+[source,js]
------------------------
curl 'localhost:9200/_nodes/stats/indices/request_cache?pretty&human'
------------------------


@ -18,7 +18,7 @@ Installing plugins can either be done manually by placing them under the
Installing plugins typically takes the following form:
-[source,shell]
+[source,sh]
-----------------------------------
bin/plugin --install plugin_name
-----------------------------------
@ -28,7 +28,7 @@ same version as your elasticsearch version.
For older versions of elasticsearch (prior to 2.0.0) or community plugins, you would use the following form:
-[source,shell]
+[source,sh]
-----------------------------------
bin/plugin --install <org>/<user/component>/<version>
-----------------------------------
@ -43,7 +43,7 @@ the `artifactId`.
A plugin can also be installed directly by specifying the URL for it,
for example:
-[source,shell]
+[source,sh]
-----------------------------------
bin/plugin --url file:///path/to/plugin --install plugin-name
-----------------------------------
@ -106,7 +106,7 @@ Removing plugins can either be done manually by removing them under the
Removing plugins typically takes the following form:
-[source,shell]
+[source,sh]
-----------------------------------
plugin --remove <pluginname>
-----------------------------------
@ -124,7 +124,7 @@ Note that exit codes could be:
* `74`: IO error
* `70`: other errors
-[source,shell]
+[source,sh]
-----------------------------------
bin/plugin --install mobz/elasticsearch-head --verbose
plugin --remove head --silent
@ -137,7 +137,7 @@ By default, the `plugin` script will wait indefinitely when downloading before f
The timeout parameter can be used to explicitly specify how long it waits. Here are some examples of setting it to
different values:
-[source,shell]
+[source,sh]
-----------------------------------
# Wait for 30 seconds before failing
bin/plugin --install mobz/elasticsearch-head --timeout 30s
@ -156,14 +156,14 @@ To install a plugin via a proxy, you can pass the proxy details using the enviro
On Linux and Mac, here is an example of setting it:
-[source,shell]
+[source,sh]
-----------------------------------
bin/plugin -DproxyHost=host_name -DproxyPort=port_number --install mobz/elasticsearch-head
-----------------------------------
On Windows, here is an example of setting it:
-[source,shell]
+[source,sh]
-----------------------------------
set JAVA_OPTS="-DproxyHost=host_name -DproxyPort=port_number"
bin/plugin --install mobz/elasticsearch-head


@ -23,7 +23,7 @@ in the `config/scripts/` directory on every node.
To convert an inline script to a file, take this simple script
as an example:
-[source,json]
+[source,js]
-----------------------------------
GET /_search
{
@ -48,7 +48,7 @@ on every data node in the cluster:
Now you can access the script by file name (without the extension):
-[source,json]
+[source,js]
-----------------------------------
GET /_search
{


@ -224,7 +224,7 @@ filtering settings and rebalancing algorithm) once the snapshot is finished.
Once a snapshot is created information about this snapshot can be obtained using the following command:
-[source,shell]
+[source,sh]
-----------------------------------
GET /_snapshot/my_backup/snapshot_1
-----------------------------------
@ -232,7 +232,7 @@ GET /_snapshot/my_backup/snapshot_1
All snapshots currently stored in the repository can be listed using the following command:
-[source,shell]
+[source,sh]
-----------------------------------
GET /_snapshot/my_backup/_all
-----------------------------------
@ -240,14 +240,14 @@ GET /_snapshot/my_backup/_all
coming[2.0] A currently running snapshot can be retrieved using the following command:
-[source,shell]
+[source,sh]
-----------------------------------
$ curl -XGET "localhost:9200/_snapshot/my_backup/_current"
-----------------------------------
A snapshot can be deleted from the repository using the following command:
-[source,shell]
+[source,sh]
-----------------------------------
DELETE /_snapshot/my_backup/snapshot_1
-----------------------------------
@ -261,7 +261,7 @@ started by mistake.
A repository can be deleted using the following command:
-[source,shell]
+[source,sh]
-----------------------------------
DELETE /_snapshot/my_backup
-----------------------------------
@ -275,7 +275,7 @@ the snapshots. The snapshots themselves are left untouched and in place.
A snapshot can be restored using the following command:
-[source,shell]
+[source,sh]
-----------------------------------
POST /_snapshot/my_backup/snapshot_1/_restore
-----------------------------------
@ -368,7 +368,7 @@ index will not be successfully restored unless these index allocation settings a
A list of currently running snapshots with their detailed status information can be obtained using the following command:
-[source,shell]
+[source,sh]
-----------------------------------
GET /_snapshot/_status
-----------------------------------
@ -377,7 +377,7 @@ GET /_snapshot/_status
In this format, the command will return information about all currently running snapshots. By specifying a repository name, it's possible
to limit the results to a particular repository:
-[source,shell]
+[source,sh]
-----------------------------------
GET /_snapshot/my_backup/_status
-----------------------------------
@ -386,7 +386,7 @@ GET /_snapshot/my_backup/_status
If both repository name and snapshot id are specified, this command will return detailed status information for the given snapshot even
if it's not currently running:
-[source,shell]
+[source,sh]
-----------------------------------
GET /_snapshot/my_backup/snapshot_1/_status
-----------------------------------
@ -394,7 +394,7 @@ GET /_snapshot/my_backup/snapshot_1/_status
Multiple ids are also supported:
-[source,shell]
+[source,sh]
-----------------------------------
GET /_snapshot/my_backup/snapshot_1,snapshot_2/_status
-----------------------------------
@ -409,7 +409,7 @@ the simplest method that can be used to get notified about operation completion.
The snapshot operation can be also monitored by periodic calls to the snapshot info:
-[source,shell]
+[source,sh]
-----------------------------------
GET /_snapshot/my_backup/snapshot_1
-----------------------------------
@ -421,7 +421,7 @@ for available resources before returning the result. On very large shards the wa
To get more immediate and complete information about snapshots the snapshot status command can be used instead:
-[source,shell]
+[source,sh]
-----------------------------------
GET /_snapshot/my_backup/snapshot_1/_status
-----------------------------------


@ -45,7 +45,7 @@ conditions are met:
* The `status` field contains the exact word `published`.
* The `publish_date` field contains a date from 1 Jan 2015 onwards.
-[source,json]
+[source,js]
------------------------------------
GET _search
{


@ -100,7 +100,7 @@ curl -XGET "http://localhost:9200/_field_stats?fields=rating,answer_count,creati
Response:
-[source,json]
+[source,js]
--------------------------------------------------
{
"_shards": {


@ -10,7 +10,7 @@ Imagine that you are selling shirts, and the user has specified two filters:
Gucci in the search results. Normally you would do this with a
<<query-dsl-filtered-query,`filtered` query>>:
-[source,json]
+[source,js]
--------------------------------------------------
curl -XGET localhost:9200/shirts/_search -d '
{
@ -38,7 +38,7 @@ that would allow the user to limit their search results to red Gucci
This can be done with a
<<search-aggregations-bucket-terms-aggregation,`terms` aggregation>>:
-[source,json]
+[source,js]
--------------------------------------------------
curl -XGET localhost:9200/shirts/_search -d '
{
@ -73,7 +73,7 @@ Instead, you want to include shirts of all colors during aggregation, then
apply the `colors` filter only to the search results. This is the purpose of
the `post_filter`:
-[source,json]
+[source,js]
--------------------------------------------------
curl -XGET localhost:9200/shirts/_search -d '
{


@ -210,7 +210,7 @@ _section_ markers like `{{#line_no}}`. For this reason, the template should
either be stored in a file (see <<pre-registered-templates>>) or, when used
via the REST API, should be written as a string:
-[source,json]
+[source,js]
--------------------
"inline": "{\"query\":{\"filtered\":{\"query\":{\"match\":{\"line\":\"{{text}}\"}},\"filter\":{{{#line_no}}\"range\":{\"line_no\":{{{#start}}\"gte\":\"{{start}}\"{{#end}},{{/end}}{{/start}}{{#end}}\"lte\":\"{{end}}\"{{/end}}}}{{/line_no}}}}}}"
--------------------


@ -29,7 +29,7 @@ To back up a running 0.90.x system:
This will prevent indices from being flushed to disk while the backup is in
process:
-[source,json]
+[source,js]
-----------------------------------
PUT /_all/_settings
{
@ -45,7 +45,7 @@ PUT /_all/_settings
This will prevent the cluster from moving data files from one node to another
while the backup is in process:
-[source,json]
+[source,js]
-----------------------------------
PUT /_cluster/settings
{
@ -67,7 +67,7 @@ array snapshots, backup software).
When the backup is complete and data no longer needs to be read from the
Elasticsearch data path, allocation and index flushing must be re-enabled:
-[source,json]
+[source,js]
-----------------------------------
PUT /_all/_settings
{


@ -14,7 +14,7 @@ replicate the shards that were on that node to other nodes in the cluster,
causing a lot of wasted I/O. This can be avoided by disabling allocation
before shutting down a node:
-[source,json]
+[source,js]
--------------------------------------------------
PUT /_cluster/settings
{
@ -27,7 +27,7 @@ PUT /_cluster/settings
If upgrading from 0.90.x to 1.x, then use these settings instead:
-[source,json]
+[source,js]
--------------------------------------------------
PUT /_cluster/settings
{
@ -103,7 +103,7 @@ allows the master to allocate replicas to nodes which already have local shard
copies. At this point, with all the nodes in the cluster, it is safe to
reenable shard allocation:
-[source,json]
+[source,js]
------------------------------------------------------
PUT /_cluster/settings
{
@ -116,7 +116,7 @@ PUT /_cluster/settings
If upgrading from 0.90.x to 1.x, then use these settings instead:
-[source,json]
+[source,js]
--------------------------------------------------
PUT /_cluster/settings
{


@ -56,7 +56,7 @@ operating system limits on mmap counts is likely to be too low, which may
result in out of memory exceptions. On Linux, you can increase the limits by
running the following command as `root`:
-[source,bash]
+[source,sh]
-------------------------------------
sysctl -w vm.max_map_count=262144
-------------------------------------


@ -19,7 +19,7 @@ replicate the shards that were on that node to other nodes in the cluster,
causing a lot of wasted I/O. This can be avoided by disabling allocation
before shutting down a node:
-[source,json]
+[source,js]
--------------------------------------------------
PUT /_cluster/settings
{
@ -38,7 +38,7 @@ You may happily continue indexing during the upgrade. However, shard recovery
will be much faster if you temporarily stop non-essential indexing and issue a
<<indices-synced-flush, synced-flush>> request:
-[source,json]
+[source,js]
--------------------------------------------------
POST /_flush/synced
--------------------------------------------------
@ -105,7 +105,7 @@ GET _cat/nodes
Once the node has joined the cluster, reenable shard allocation to start using
the node:
-[source,json]
+[source,js]
--------------------------------------------------
PUT /_cluster/settings
{