[Docs] Remove repeating words (#33087)

lipsill 2018-08-28 13:16:43 +02:00 committed by Christoph Büscher
parent 525cda0331
commit b7c0d2830a
18 changed files with 20 additions and 20 deletions

@@ -10,7 +10,7 @@ The license can be added or updated using the `putLicense()` method:
 --------------------------------------------------
 include-tagged::{doc-tests}/LicensingDocumentationIT.java[put-license-execute]
 --------------------------------------------------
-<1> Set the categories of information to retrieve. The the default is to
+<1> Set the categories of information to retrieve. The default is to
 return no information which is useful for checking if {xpack} is installed
 but not much else.
 <2> A JSON document containing the license information.

@@ -270,7 +270,7 @@ include-tagged::{doc-tests}/MigrationDocumentationIT.java[migration-cluster-heal
 helper requires the content type of the response to be passed as an argument and returns
 a `Map` of objects. Values in the map can be of any type, including inner `Map` that are
 used to represent the JSON object hierarchy.
-<5> Retrieve the value of the `status` field in the response map, casts it as a a `String`
+<5> Retrieve the value of the `status` field in the response map, casts it as a `String`
 object and use the `ClusterHealthStatus.fromString()` method to convert it as a `ClusterHealthStatus`
 object. This method throws an exception if the value does not corresponds to a valid cluster
 health status.

@@ -13,7 +13,7 @@ include-tagged::{doc-tests}/MiscellaneousDocumentationIT.java[x-pack-info-execut
 --------------------------------------------------
 <1> Enable verbose mode. The default is `false` but `true` will return
 more information.
-<2> Set the categories of information to retrieve. The the default is to
+<2> Set the categories of information to retrieve. The default is to
 return no information which is useful for checking if {xpack} is installed
 but not much else.

@@ -5,7 +5,7 @@
 Painless doesn't have a
 https://en.wikipedia.org/wiki/Read%E2%80%93eval%E2%80%93print_loop[REPL]
-and while it'd be nice for it to have one one day, it wouldn't tell you the
+and while it'd be nice for it to have one day, it wouldn't tell you the
 whole story around debugging painless scripts embedded in Elasticsearch because
 the data that the scripts have access to or "context" is so important. For now
 the best way to debug embedded scripts is by throwing exceptions at choice

@@ -254,7 +254,7 @@ and `]` tokens.
 *Errors*
 * If a value other than an `int` type value or a value that is castable to an
-`int` type value is specified for for a dimension's size.
+`int` type value is specified for a dimension's size.
 *Grammar*

@@ -433,8 +433,8 @@ Scripts can be inline (as in above example), indexed or stored on disk. For deta
 Available parameters in the script are
 [horizontal]
-`_subset_freq`:: Number of documents the term appears in in the subset.
-`_superset_freq`:: Number of documents the term appears in in the superset.
+`_subset_freq`:: Number of documents the term appears in the subset.
+`_superset_freq`:: Number of documents the term appears in the superset.
 `_subset_size`:: Number of documents in the subset.
 `_superset_size`:: Number of documents in the superset.

@@ -307,7 +307,7 @@ POST /_search
 ===== stdDev Function
-This function accepts a collection of doubles and and average, then returns the standard deviation of the values in that window.
+This function accepts a collection of doubles and average, then returns the standard deviation of the values in that window.
 `null` and `NaN` values are ignored; the sum is only calculated over the real values. If the window is empty, or all values are
 `null`/`NaN`, `0.0` is returned as the result.

@@ -10,7 +10,7 @@ GET /_remote/info
 ----------------------------------
 // CONSOLE
-This command returns returns connection and endpoint information keyed by
+This command returns connection and endpoint information keyed by
 the configured remote cluster alias.
 [float]

@@ -31,7 +31,7 @@ POST /_cluster/reroute
 // CONSOLE
 // TEST[skip:doc tests run with only a single node]
-It is important to note that that after processing any reroute commands
+It is important to note that after processing any reroute commands
 Elasticsearch will perform rebalancing as normal (respecting the values of
 settings such as `cluster.routing.rebalance.enable`) in order to remain in a
 balanced state. For example, if the requested allocation includes moving a

@@ -127,7 +127,7 @@ might look like:
 The new `description` field contains human readable text that identifies the
 particular request that the task is performing such as identifying the search
 request being performed by a search task like the example above. Other kinds of
-task have have different descriptions, like <<docs-reindex,`_reindex`>> which
+task have different descriptions, like <<docs-reindex,`_reindex`>> which
 has the search and the destination, or <<docs-bulk,`_bulk`>> which just has the
 number of requests and the destination indices. Many requests will only have an
 empty description because more detailed information about the request is not

@@ -51,7 +51,7 @@ NOTE: These settings only take effect on a full cluster restart.
 === Dangling indices
-When a node joins the cluster, any shards stored in its local data directory
+When a node joins the cluster, any shards stored in its local data
 directory which do not already exist in the cluster will be imported into the
 cluster. This functionality is intended as a best effort to help users who
 lose all master nodes. If a new master node is started which is unaware of

@@ -96,7 +96,7 @@ see <<http-exporter-settings>>.
 [[http-exporter-dns]]
 ==== Using DNS Hosts in HTTP Exporters
-{monitoring} runs inside of the the JVM security manager. When the JVM has the
+{monitoring} runs inside of the JVM security manager. When the JVM has the
 security manager enabled, the JVM changes the duration so that it caches DNS
 lookups indefinitely (for example, the mapping of a DNS hostname to an IP
 address). For this reason, if you are in an environment where the DNS response

@@ -41,5 +41,5 @@ WARNING: `span_multi` queries will hit too many clauses failure if the number of
 boolean query limit (defaults to 1024).To avoid an unbounded expansion you can set the <<query-dsl-multi-term-rewrite,
 rewrite method>> of the multi term query to `top_terms_*` rewrite. Or, if you use `span_multi` on `prefix` query only,
 you can activate the <<index-prefix-config,`index_prefixes`>> field option of the `text` field instead. This will
-rewrite any prefix query on the field to a a single term query that matches the indexed prefix.
+rewrite any prefix query on the field to a single term query that matches the indexed prefix.

@@ -217,4 +217,4 @@ Response:
 --------------------------------------------------
 // NOTCONSOLE
-NOTE: Second level of of collapsing doesn't allow `inner_hits`.
+NOTE: Second level of collapsing doesn't allow `inner_hits`.

@@ -334,7 +334,7 @@ the filter. If not set, the user DN is passed into the filter. Defaults to Empt
 `unmapped_groups_as_roles`::
 If set to `true`, the names of any unmapped LDAP groups are used as role names
 and assigned to the user. A group is considered to be _unmapped_ if it is not
-not referenced in a
+referenced in a
 {xpack-ref}/mapping-roles.html#mapping-roles-file[role-mapping file]. API-based
 role mappings are not considered. Defaults to `false`.
@@ -479,7 +479,7 @@ this setting controls the amount of time to cache DNS lookups. Defaults
 to `1h`.
 `domain_name`::
-The domain name of Active Directory. If the the `url` and `user_search_dn`
+The domain name of Active Directory. If the `url` and the `user_search_dn`
 settings are not specified, the cluster can derive those values from this
 setting. Required.

@@ -25,7 +25,7 @@ So let's start from the bottom; these roughly are:
 |`column`
 |`field`
-|In both cases, at the lowest level, data is stored in in _named_ entries, of a variety of <<sql-data-types, data types>>, containing _one_ value. SQL calls such an entry a _column_ while {es} a _field_.
+|In both cases, at the lowest level, data is stored in _named_ entries, of a variety of <<sql-data-types, data types>>, containing _one_ value. SQL calls such an entry a _column_ while {es} a _field_.
 Notice that in {es} a field can contain _multiple_ values of the same type (esentially a list) while in SQL, a _column_ can contain _exactly_ one value of said type.
 {es-sql} will do its best to preserve the SQL semantic and, depending on the query, reject those that return fields with more than one value.

@@ -230,7 +230,7 @@ As many Elasticsearch tests are checking for a similar output, like the amount o
 `assertMatchCount()`:: Asserts a matching count from a percolation response
 `assertFirstHit()`:: Asserts the first hit hits the specified matcher
 `assertSecondHit()`:: Asserts the second hit hits the specified matcher
-`assertThirdHit()`:: Asserts the third hits hits the specified matcher
+`assertThirdHit()`:: Asserts the third hit hits the specified matcher
 `assertSearchHit()`:: Assert a certain element in a search response hits the specified matcher
 `assertNoFailures()`:: Asserts that no shard failures have occurred in the response
 `assertFailures()`:: Asserts that shard failures have happened during a search request

@@ -459,7 +459,7 @@ Upgrading indices create with Lucene 3.x (Elasticsearch v0.20 and before) to Luc
 [float]
 === Improve error handling when deleting files (STATUS: DONE, v1.4.0.Beta1)
-Lucene uses reference counting to prevent files that are still in use from being deleted. Lucene testing discovered a bug ({JIRA}5919[LUCENE-5919]) when decrementing the ref count on a batch of files. If deleting some of the files resulted in an exception (e.g. due to interference from a virus scanner), the files that had had their ref counts decremented successfully could later have their ref counts deleted again, incorrectly, resulting in files being physically deleted before their time. This is fixed in Lucene 4.10.
+Lucene uses reference counting to prevent files that are still in use from being deleted. Lucene testing discovered a bug ({JIRA}5919[LUCENE-5919]) when decrementing the ref count on a batch of files. If deleting some of the files resulted in an exception (e.g. due to interference from a virus scanner), the files that had their ref counts decremented successfully could later have their ref counts deleted again, incorrectly, resulting in files being physically deleted before their time. This is fixed in Lucene 4.10.
 [float]
 === Using Lucene Checksums to verify shards during snapshot/restore (STATUS:DONE, v1.3.3)