Fix typos in docs.

Dongjoon Hyun 2016-02-09 02:07:32 -08:00
parent 77a1649905
commit 21ea552070
32 changed files with 40 additions and 40 deletions


@@ -10,7 +10,7 @@ The queries in this group are:
 <<java-query-dsl-geo-shape-query,`geo_shape`>> query::
 Find document with geo-shapes which either intersect, are contained by, or
-do not interesect with the specified geo-shape.
+do not intersect with the specified geo-shape.
 <<java-query-dsl-geo-bounding-box-query,`geo_bounding_box`>> query::


@@ -32,7 +32,7 @@ to your classpath in order to use this type:
 [source,java]
 --------------------------------------------------
-// Import ShapeRelationn and ShapeBuilder
+// Import ShapeRelation and ShapeBuilder
 import org.elasticsearch.common.geo.ShapeRelation;
 import org.elasticsearch.common.geo.builders.ShapeBuilder;
 --------------------------------------------------


@@ -43,7 +43,7 @@ releases 2.0 and later do not support rivers.
 * https://www.elastic.co/guide/en/logstash/current/plugins-inputs-elasticsearch.html[Elasticsearch input to Logstash]
 The Logstash `elasticsearch` input plugin.
 * https://www.elastic.co/guide/en/logstash/current/plugins-filters-elasticsearch.html[Elasticsearch event filtering in Logstash]
-The Logstash `elasticearch` filter plugin.
+The Logstash `elasticsearch` filter plugin.
 * https://www.elastic.co/guide/en/logstash/current/plugins-codecs-es_bulk.html[Elasticsearch bulk codec]
 The Logstash `es_bulk` plugin decodes the Elasticsearch bulk format into individual events.


@@ -2,7 +2,7 @@
 === Range Aggregation
 A multi-bucket value source based aggregation that enables the user to define a set of ranges - each representing a bucket. During the aggregation process, the values extracted from each document will be checked against each bucket range and "bucket" the relevant/matching document.
-Note that this aggregration includes the `from` value and excludes the `to` value for each range.
+Note that this aggregation includes the `from` value and excludes the `to` value for each range.
 Example:
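The example itself falls outside the hunk; a sketch of such a request, showing the inclusive `from` / exclusive `to` behaviour (the `price` field is illustrative):

[source,js]
--------------------------------------------------
{
    "aggs" : {
        "price_ranges" : {
            "range" : {
                "field" : "price",
                "ranges" : [
                    { "to" : 50 },
                    { "from" : 50, "to" : 100 },
                    { "from" : 100 }
                ]
            }
        }
    }
}
--------------------------------------------------

A document with `price: 50` lands in the second bucket, not the first, since `to` is exclusive and `from` is inclusive.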


@@ -77,7 +77,7 @@ tags of the issues the user has commented on:
 }
 --------------------------------------------------
-As you can see above, the the `reverse_nested` aggregation is put in to a `nested` aggregation as this is the only place
+As you can see above, the `reverse_nested` aggregation is put in to a `nested` aggregation as this is the only place
 in the dsl where the `reversed_nested` aggregation can be used. Its sole purpose is to join back to a parent doc higher
 up in the nested structure.
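As a sketch of that shape, mirroring the issue/comment example the hunk quotes (field names illustrative), the `reverse_nested` aggregation sits inside a `nested` aggregation and joins back to the root document:

[source,js]
--------------------------------------------------
{
    "aggs" : {
        "comments" : {
            "nested" : { "path" : "comments" },
            "aggs" : {
                "top_usernames" : {
                    "terms" : { "field" : "comments.username" },
                    "aggs" : {
                        "comment_to_issue" : {
                            "reverse_nested" : {},
                            "aggs" : {
                                "top_tags_per_comment" : {
                                    "terms" : { "field" : "tags" }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
--------------------------------------------------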


@@ -34,7 +34,7 @@ Credits for the hyphenation code go to the Apache FOP project .
 [float]
 === Dictionary decompounder
-The `dictionary_decompounder` uses a brute force approach in conjuction with
+The `dictionary_decompounder` uses a brute force approach in conjunction with
 only the word dictionary to find subwords in a compound word. It is much
 slower than the hyphenation decompounder but can be used as a first start to
 check the quality of your dictionary.
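A sketch of wiring the filter into an analyzer (analyzer name and word list illustrative); the filter emits any dictionary words it finds inside a compound token, alongside the original token:

[source,js]
--------------------------------------------------
{
    "settings" : {
        "analysis" : {
            "filter" : {
                "my_decompounder" : {
                    "type" : "dictionary_decompounder",
                    "word_list" : ["Donau", "dampf", "schiff", "fahrt"]
                }
            },
            "analyzer" : {
                "my_analyzer" : {
                    "tokenizer" : "standard",
                    "filter" : ["lowercase", "my_decompounder"]
                }
            }
        }
    }
}
--------------------------------------------------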


@@ -16,7 +16,7 @@ attribute as follows:
 ------------------------
 bin/elasticsearch --node.rack rack1 --node.size big <1>
 ------------------------
-<1> These attribute settings can also be specfied in the `elasticsearch.yml` config file.
+<1> These attribute settings can also be specified in the `elasticsearch.yml` config file.
 These metadata attributes can be used with the
 `index.routing.allocation.*` settings to allocate an index to a particular


@@ -186,7 +186,7 @@ Here is what it looks like when one shard group failed due to pending operations
 }
 --------------------------------------------------
-NOTE: The above error is shown when the synced flush failes due to concurrent indexing operations. The HTTP
+NOTE: The above error is shown when the synced flush fails due to concurrent indexing operations. The HTTP
 status code in that case will be `409 CONFLICT`.
 Sometimes the failures are specific to a shard copy. The copies that failed will not be eligible for


@@ -3,7 +3,7 @@
 Provides store information for shard copies of indices.
 Store information reports on which nodes shard copies exist, the shard
-copy allocation ID, a unique identifer for each shard copy, and any exceptions
+copy allocation ID, a unique identifier for each shard copy, and any exceptions
 encountered while opening the shard index or from earlier engine failure.
 By default, only lists store information for shards that have at least one


@@ -61,7 +61,7 @@ All processors are defined in the following way within a pipeline definition:
 Each processor defines its own configuration parameters, but all processors have
 the ability to declare `tag` and `on_failure` fields. These fields are optional.
-A `tag` is simply a string identifier of the specific instatiation of a certain
+A `tag` is simply a string identifier of the specific instantiation of a certain
 processor in a pipeline. The `tag` field does not affect any processor's behavior,
 but is very useful for bookkeeping and tracing errors to specific processors.
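A sketch of a single processor declaring both optional fields (processor choice and field names illustrative):

[source,js]
--------------------------------------------------
{
    "set" : {
        "tag" : "set-status",
        "field" : "status",
        "value" : "active",
        "on_failure" : [
            {
                "set" : {
                    "field" : "error",
                    "value" : "could not set status"
                }
            }
        ]
    }
}
--------------------------------------------------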
@@ -1079,7 +1079,7 @@ response:
 It is often useful to see how each processor affects the ingest document
 as it is passed through the pipeline. To see the intermediate results of
-each processor in the simulat request, a `verbose` parameter may be added
+each processor in the simulate request, a `verbose` parameter may be added
 to the request
 Here is an example verbose request and its response:
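The request/response pair itself falls outside the hunk; a minimal verbose simulate request would look roughly like this (pipeline and document bodies illustrative):

[source,js]
--------------------------------------------------
POST _ingest/pipeline/_simulate?verbose
{
    "pipeline" : {
        "processors" : [
            { "set" : { "field" : "field2", "value" : "_value" } }
        ]
    },
    "docs" : [
        { "_source" : { "foo" : "bar" } }
    ]
}
--------------------------------------------------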


@@ -24,7 +24,7 @@ GET my_index/my_type/1?routing=user1 <2>
 // AUTOSENSE
 <1> This document uses `user1` as its routing value, instead of its ID.
-<2> The the same `routing` value needs to be provided when
+<2> The same `routing` value needs to be provided when
 <<docs-get,getting>>, <<docs-delete,deleting>>, or <<docs-update,updating>>
 the document.
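The indexing request the callouts refer to sits above the hunk; as a sketch, the round trip looks like this (document body illustrative):

[source,js]
--------------------------------------------------
PUT my_index/my_type/1?routing=user1
{
    "title" : "This is a document"
}

GET my_index/my_type/1?routing=user1
--------------------------------------------------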


@@ -93,7 +93,7 @@ used for future documents.
 ==== Note on documents expiration
-Expired documents will be automatically deleted periodoically. The following
+Expired documents will be automatically deleted periodically. The following
 settings control the expiry process:
 `indices.ttl.interval`::


@@ -22,7 +22,7 @@ are searchable. It accepts three values:
 This option applies only to `string` fields, for which it is the default.
 The string field value is first <<analysis,analyzed>> to convert the
 string into terms (e.g. a list of individual words), which are then
-indexed. At search time, the the query string is passed through
+indexed. At search time, the query string is passed through
 (<<search-analyzer,usually>>) the same analyzer to generate terms
 in the same format as those in the index. It is this process that enables
 <<full-text-queries,full text search>>.
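A sketch of a mapping that sets the option explicitly, contrasting an analyzed field with a `not_analyzed` one (type and field names illustrative):

[source,js]
--------------------------------------------------
{
    "mappings" : {
        "my_type" : {
            "properties" : {
                "title" : {
                    "type" : "string",
                    "index" : "analyzed"
                },
                "status_code" : {
                    "type" : "string",
                    "index" : "not_analyzed"
                }
            }
        }
    }
}
--------------------------------------------------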


@@ -7,7 +7,7 @@ contain sub-fields, called `properties`. These properties may be of any
 be added:
 * explicitly by defining them when <<indices-create-index,creating an index>>.
-* explicitily by defining them when adding or updating a mapping type with the <<indices-put-mapping,PUT mapping>> API.
+* explicitly by defining them when adding or updating a mapping type with the <<indices-put-mapping,PUT mapping>> API.
 * <<dynamic-mapping,dynamically>> just by indexing documents containing new fields.
 Below is an example of adding `properties` to a mapping type, an `object`


@@ -22,7 +22,7 @@ configuration are:
 `BM25`::
 The Okapi BM25 algorithm.
-See {defguide}/pluggable-similarites.html[Plugggable Similarity Algorithms]
+See {defguide}/pluggable-similarites.html[Pluggable Similarity Algorithms]
 for more information.
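As a sketch, a similarity is selected per field in the mapping (field name illustrative):

[source,js]
--------------------------------------------------
{
    "properties" : {
        "body" : {
            "type" : "string",
            "similarity" : "BM25"
        }
    }
}
--------------------------------------------------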


@@ -21,7 +21,7 @@ document:
 <<nested>>:: `nested` for arrays of JSON objects
 [float]
-=== Geo dataypes
+=== Geo datatypes
 <<geo-point>>:: `geo_point` for lat/lon points
 <<geo-shape>>:: `geo_shape` for complex shapes like polygons


@@ -9,7 +9,7 @@ Fields of type `geo_point` accept latitude-longitude pairs, which can be used:
 <<query-dsl-geohash-cell-query,geohash>> cell.
 * to aggregate documents by <<search-aggregations-bucket-geohashgrid-aggregation,geographically>>
 or by <<search-aggregations-bucket-geodistance-aggregation,distance>> from a central point.
-* to integerate distance into a document's <<query-dsl-function-score-query,relevance score>>.
+* to integrate distance into a document's <<query-dsl-function-score-query,relevance score>>.
 * to <<geo-sorting,sort>> documents by distance.
 There are four ways that a geo-point may be specified, as demonstrated below:
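The demonstration itself falls outside the hunk; roughly, a geo-point can be written as an object, a string, a geohash, or an array (index and type names illustrative):

[source,js]
--------------------------------------------------
PUT my_index/my_type/1
{ "location": { "lat": 41.12, "lon": -71.34 } } <1>

PUT my_index/my_type/2
{ "location": "41.12,-71.34" } <2>

PUT my_index/my_type/3
{ "location": "drm3btev3e86" } <3>

PUT my_index/my_type/4
{ "location": [ -71.34, 41.12 ] } <4>
--------------------------------------------------
<1> Geo-point as an object, with explicit `lat` and `lon` keys.
<2> Geo-point as a string, formatted `"lat,lon"`.
<3> Geo-point as a geohash.
<4> Geo-point as an array, in `[lon, lat]` order (the reverse of the string form).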


@@ -44,7 +44,7 @@ The <<cluster-state,`cluster_state`>>, <<cluster-nodes-info,`nodes_info`>>,
 <<cluster-nodes-stats,`nodes_stats`>> and <<indices-stats,`indices_stats`>>
 APIs have all been changed to make their format more RESTful and less clumsy.
-For instance, if you just want the `nodes` section of the the `cluster_state`,
+For instance, if you just want the `nodes` section of the `cluster_state`,
 instead of:
 [source,sh]
@@ -320,7 +320,7 @@ longer be used to return whole objects and it no longer accepts the
 parameters instead.
 * Settings, like `index.analysis.analyzer.default` are now returned as proper
-nested JSON objects, which makes them easier to work with programatically:
+nested JSON objects, which makes them easier to work with programmatically:
 +
 [source,js]
 ---------------


@@ -25,7 +25,7 @@ Index templates can no longer be configured on disk. Use the
 ==== Analyze API changes
-The Analyze API now returns the the `position` of the first token as `0`
+The Analyze API now returns the `position` of the first token as `0`
 instead of `1`.
 The `prefer_local` parameter has been removed. The `_analyze` API is a light


@@ -153,7 +153,7 @@ PUT my_index
 }
 }
 ----------------------------
-<1> These two fields cannot be distinguised as both are referred to as `foo.bar`.
+<1> These two fields cannot be distinguished as both are referred to as `foo.bar`.
 You can no longer create fields with dots in the name.
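The mapping the callout refers to sits mostly outside the hunk; the ambiguity it describes arises when a field literally named `foo.bar` coexists with a `bar` field under an object `foo`, roughly:

[source,js]
--------------------------------------------------
{
    "properties" : {
        "foo.bar" : { "type" : "string" }, <1>
        "foo" : {
            "properties" : {
                "bar" : { "type" : "string" } <2>
            }
        }
    }
}
--------------------------------------------------
<1> A field literally named `foo.bar`.
<2> A nested field that is also addressed as `foo.bar`.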


@@ -550,7 +550,7 @@ Removing individual setters for lon() and lat() values, both values should be se
 Removing setters for to(Object ...) and from(Object ...) in favour of the only two allowed input
 arguments (String, Number). Removing setter for center point (point(), geohash()) because parameter
 is mandatory and should already be set in constructor.
-Also removing setters for lt(), lte(), gt(), gte() since they can all be replaced by equivallent
+Also removing setters for lt(), lte(), gt(), gte() since they can all be replaced by equivalent
 calls to to/from() and inludeLower()/includeUpper().
 ==== GeoPolygonQueryBuilder


@@ -17,7 +17,7 @@ There are a number of settings available to control the shard allocation process
 be distributed across different racks or availability zones.
 * <<allocation-filtering>> allows certain nodes or groups of nodes excluded
-from allocation so that they can be decommisioned.
+from allocation so that they can be decommissioned.
 Besides these, there are a few other <<misc-cluster,miscellaneous cluster-level settings>>.


@@ -7,10 +7,10 @@ you to allow or disallow the allocation of shards from *any* index to
 particular nodes.
 The typical use case for cluster-wide shard allocation filtering is when you
-want to decommision a node, and you would like to move the shards from that
+want to decommission a node, and you would like to move the shards from that
 node to other nodes in the cluster before shutting it down.
-For instance, we could decomission a node using its IP address as follows:
+For instance, we could decommission a node using its IP address as follows:
 [source,js]
 --------------------------------------------------
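The request body falls outside the hunk; presumably it is the usual exclude-by-IP settings update, roughly (IP address illustrative):

[source,js]
--------------------------------------------------
PUT _cluster/settings
{
    "transient" : {
        "cluster.routing.allocation.exclude._ip" : "10.0.0.1"
    }
}
--------------------------------------------------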


@@ -28,7 +28,7 @@ Defaults to `_local_`.
 `discovery.zen.ping.unicast.hosts`::
 In order to join a cluster, a node needs to know the hostname or IP address of
-at least some of the other nodes in the cluster. This settting provides the
+at least some of the other nodes in the cluster. This setting provides the
 initial list of other nodes that this node will try to contact. Accepts IP
 addresses or hostnames.
 +
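The attached example block also falls outside the hunk; in `elasticsearch.yml` the setting looks roughly like this (hosts illustrative; a host typically falls back to the transport port, `9300`, when the port is omitted):

[source,yaml]
--------------------------------------------------
discovery.zen.ping.unicast.hosts:
   - 192.168.1.10:9300
   - 192.168.1.11
   - seeds.mydomain.com
--------------------------------------------------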


@@ -11,7 +11,7 @@ The queries in this group are:
 <<query-dsl-geo-shape-query,`geo_shape`>> query::
 Find document with geo-shapes which either intersect, are contained by, or
-do not interesect with the specified geo-shape.
+do not intersect with the specified geo-shape.
 <<query-dsl-geo-bounding-box-query,`geo_bounding_box`>> query::


@@ -77,7 +77,7 @@ GET /_search
 }
 }
 ------------------------------------------
-<1> Name of the the query template in `config/scripts/`, i.e., `my_template.mustache`.
+<1> Name of the query template in `config/scripts/`, i.e., `my_template.mustache`.
 Alternatively, you can register a query template in the special `.scripts` index with:
@@ -106,7 +106,7 @@ GET /_search
 }
 }
 ------------------------------------------
-<1> Name of the the query template in `config/scripts/`, i.e., `storedTemplate.mustache`.
+<1> Name of the query template in `config/scripts/`, i.e., `storedTemplate.mustache`.
 There is also a dedicated `template` endpoint, allows you to template an entire search request.


@@ -261,7 +261,7 @@ The meaning of the stats are as follows:
 This parameter shows how long it takes to build a Scorer for the query. A Scorer is the mechanism that
 iterates over matching documents generates a score per-document (e.g. how well does "foo" match the document?).
-Note, this records the time required to generate the Scorer object, not actuall score the documents. Some
+Note, this records the time required to generate the Scorer object, not actually score the documents. Some
 queries have faster or slower initialization of the Scorer, depending on optimizations, complexity, etc.
 {empty} +
 {empty} +
@@ -353,7 +353,7 @@ For reference, the various collector reason's are:
 `search_min_score`::
 A collector that only returns matching documents that have a score greater than `n`. This is seen when
-the top-level paramenter `min_score` has been specified.
+the top-level parameter `min_score` has been specified.
 `search_multi`::


@@ -148,14 +148,14 @@ nested level these can also be returned via the `fields` feature.
 An important default is that the `_source` returned in hits inside `inner_hits` is relative to the `_nested` metadata.
 So in the above example only the comment part is returned per nested hit and not the entire source of the top level
-document that contained the the comment.
+document that contained the comment.
 [[hierarchical-nested-inner-hits]]
 ==== Hierarchical levels of nested object fields and inner hits.
 If a mapping has multiple levels of hierarchical nested object fields each level can be accessed via dot notated path.
 For example if there is a `comments` nested field that contains a `votes` nested field and votes should directly be returned
-with the the root hits then the following path can be defined:
+with the root hits then the following path can be defined:
 [source,js]
 --------------------------------------------------


@@ -236,7 +236,7 @@ GET /_search/template
 }
 ------------------------------------------
-<1> Name of the the query template in `config/scripts/`, i.e., `storedTemplate.mustache`.
+<1> Name of the query template in `config/scripts/`, i.e., `storedTemplate.mustache`.
 You can also register search templates by storing it in the elasticsearch cluster in a special index named `.scripts`.
 There are REST APIs to manage these indexed templates.
@@ -297,7 +297,7 @@ GET /_search/template
 }
 }
 ------------------------------------------
-<1> Name of the the query template stored in the `.scripts` index.
+<1> Name of the query template stored in the `.scripts` index.
 [float]
 ==== Validating templates


@@ -111,9 +111,9 @@ doesn't take the query into account that is part of request.
 `string_distance`::
 Which string distance implementation to use for comparing how similar
-suggested terms are. Five possible values can be specfied:
+suggested terms are. Five possible values can be specified:
 `internal` - The default based on damerau_levenshtein but highly optimized
-for comparing string distancee for terms inside the index.
+for comparing string distance for terms inside the index.
 `damerau_levenshtein` - String distance algorithm based on
 Damerau-Levenshtein algorithm.
 `levenstein` - String distance algorithm based on Levenstein edit distance
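The remaining options fall outside the hunk. As a sketch, the implementation is chosen per term suggester (index and field names illustrative):

[source,js]
--------------------------------------------------
POST my_index/_suggest
{
    "my-suggestion" : {
        "text" : "levenstien",
        "term" : {
            "field" : "body",
            "string_distance" : "damerau_levenshtein"
        }
    }
}
--------------------------------------------------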


@@ -75,7 +75,7 @@ sudo service elasticsearch start
 [float]
 ==== Using systemd
-Distributions like Debian Jessie, Ubuntu 14, and many of the SUSE derivitives do not use the `chkconfig` tool to register services, but rather `systemd` and its command `/bin/systemctl` to start and stop services (at least in newer versions, otherwise use the `chkconfig` commands above). The configuration file is also placed at `/etc/sysconfig/elasticsearch` if the system is rpm based and `/etc/default/elasticsearch` if it is deb. After installing the RPM, you have to change the systemd configuration and then start up elasticsearch
+Distributions like Debian Jessie, Ubuntu 14, and many of the SUSE derivatives do not use the `chkconfig` tool to register services, but rather `systemd` and its command `/bin/systemctl` to start and stop services (at least in newer versions, otherwise use the `chkconfig` commands above). The configuration file is also placed at `/etc/sysconfig/elasticsearch` if the system is rpm based and `/etc/default/elasticsearch` if it is deb. After installing the RPM, you have to change the systemd configuration and then start up elasticsearch
 [source,sh]
 --------------------------------------------------
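The command list falls outside the hunk; presumably it is the standard systemd sequence, roughly:

[source,sh]
--------------------------------------------------
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable elasticsearch.service
sudo /bin/systemctl start elasticsearch.service
--------------------------------------------------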


@@ -133,7 +133,7 @@ request:
 curl http://localhost:9200/_nodes/process?pretty
 --------------
-If you see that `mlockall` is `false`, then it means that the the `mlockall`
+If you see that `mlockall` is `false`, then it means that the `mlockall`
 request has failed. The most probable reason, on Linux/Unix systems, is that
 the user running Elasticsearch doesn't have permission to lock memory. This can
 be granted by running `ulimit -l unlimited` as `root` before starting Elasticsearch.