[DOCS] Various spelling corrections (#37046)

Josh Soref 2019-01-07 08:44:12 -05:00 committed by Luca Cavanna
parent ac2e09b25a
commit edb48321ba
66 changed files with 80 additions and 80 deletions

View File

@ -287,8 +287,8 @@ buildRestTests.setups['stackoverflow'] = '''
body: |'''
// Make Kibana strongly connected to elasticsearch and logstash
-// Make Kibana rarer (and therefore higher-ranking) than Javascript
-// Make Javascript strongly connected to jquery and angular
+// Make Kibana rarer (and therefore higher-ranking) than JavaScript
+// Make JavaScript strongly connected to jquery and angular
// Make Cabana strongly connected to elasticsearch but only as a result of a single author
for (int i = 0; i < 150; i++) {

View File

@ -72,7 +72,7 @@ operation that executes:
`noop`::
Set `ctx.op = "noop"` if your script doesn't make any
-changes. The `updateByQuery` operaton then omits that document from the updates.
+changes. The `updateByQuery` operation then omits that document from the updates.
This behavior increments the `noop` counter in the response body.
`delete`::

View File

@ -128,7 +128,7 @@ include-tagged::{doc-tests-file}[{api}-conflict]
--------------------------------------------------
<1> `getResponse` is null.
<2> `getFailure` isn't and contains an `Exception`.
-<3> That `Exception` is actuall and `ElasticsearchException`
+<3> That `Exception` is actually an `ElasticsearchException`
<4> and it has a status of `CONFLICT`. It'd have been an HTTP 409 if this
wasn't a multi get.
<5> `getMessage` explains the actual cause, `

View File

@ -125,7 +125,7 @@ include::../execution.asciidoc[]
[id="{upid}-{api}-response"]
==== Update By Query Response
-The returned +{resposne}+ contains information about the executed operations and
+The returned +{response}+ contains information about the executed operations and
allows to iterate over each result as follows:
["source","java",subs="attributes,callouts,macros"]

View File

@ -144,7 +144,7 @@ include-tagged::{doc-tests}/MiscellaneousDocumentationIT.java[rest-high-level-cl
In the rest of this documentation about the Java High Level Client, the `RestHighLevelClient` instance
will be referenced as `client`.
-[[java-rest-hight-getting-started-request-options]]
+[[java-rest-high-getting-started-request-options]]
=== RequestOptions
All APIs in the `RestHighLevelClient` accept a `RequestOptions` which you can

View File

@ -15,7 +15,7 @@ An +{request}+ requires an `index` argument:
--------------------------------------------------
include-tagged::{doc-tests-file}[{api}-request]
--------------------------------------------------
-<1> The index to unreeze
+<1> The index to unfreeze
==== Optional arguments
The following arguments can optionally be provided:

View File

@ -1,7 +1,7 @@
[[java-rest-high-migration-get-assistance]]
=== Migration Get Assistance
-[[java-rest-high-migraton-get-assistance-request]]
+[[java-rest-high-migration-get-assistance-request]]
==== Index Upgrade Info Request
An `IndexUpgradeInfoRequest` does not require any argument:

View File

@ -8,7 +8,7 @@
[[java-rest-high-migration-upgrade]]
=== Migration Upgrade
-[[java-rest-high-migraton-upgrade-request]]
+[[java-rest-high-migration-upgrade-request]]
==== Index Upgrade Request
An +{request}+ requires an index argument. Only one index at the time should be upgraded:
@ -32,7 +32,7 @@ include-tagged::{doc-tests-file}[{api}-execute]
The returned +{response}+ contains information about the executed operation
-[[java-rest-high-migraton-async-upgrade-request]]
+[[java-rest-high-migration-async-upgrade-request]]
==== Asynchronous Execution
The asynchronous execution of an upgrade request requires both the +{request}+

View File

@ -82,7 +82,7 @@ include-tagged::{doc-tests}/SearchDocumentationIT.java[rank-eval-response]
<2> Partial results that are keyed by their query id
<3> The metric score for each partial result
<4> Rated search hits contain a fully fledged `SearchHit`
-<5> Rated search hits also contain an `Optional<Interger>` rating that
+<5> Rated search hits also contain an `Optional<Integer>` rating that
is not present if the document did not get a rating in the request
<6> Metric details are named after the metric used in the request
<7> After casting to the metric used in the request, the

View File

@ -2,7 +2,7 @@
--
:api: get-privileges
:request: GetPrivilegesRequest
-:respnse: GetPrivilegesResponse
+:response: GetPrivilegesResponse
--
[id="{upid}-{api}"]

View File

@ -2,7 +2,7 @@
--
:api: get-roles
:request: GetRolesRequest
-:respnse: GetRolesResponse
+:response: GetRolesResponse
--
[id="{upid}-{api}"]

View File

@ -2,7 +2,7 @@
--
:api: get-users
:request: GetUsersRequest
-:respnse: GetUsersResponse
+:response: GetUsersResponse
--
[id="{upid}-{api}"]

View File

@ -1,6 +1,6 @@
--
:api: deactivate-watch
-:request: deactivateWatchRequet
+:request: deactivateWatchRequest
:response: deactivateWatchResponse
:doc-tests-file: {doc-tests}/WatcherDocumentationIT.java
--

View File

@ -328,7 +328,7 @@ include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-options-cus
The client is quite happy to execute many actions in parallel. The following
example indexes many documents in parallel. In a real world scenario you'd
-probably want to use the `_bulk` API instead, but the example is illustative.
+probably want to use the `_bulk` API instead, but the example is illustrative.
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------

View File

@ -65,7 +65,7 @@ are available in the script being tested.
The following parameters may be specified in `context_setup` for a filter context:
document:: Contains the document that will be temporarily indexed in-memory and is accessible from the script.
-index:: The name of an index containing a mapping that is compatable with the document being indexed.
+index:: The name of an index containing a mapping that is compatible with the document being indexed.
*Example*
@ -122,7 +122,7 @@ The `score` context executes scripts as if they were executed inside a `script_s
The following parameters may be specified in `context_setup` for a score context:
document:: Contains the document that will be temporarily indexed in-memory and is accessible from the script.
-index:: The name of an index containing a mapping that is compatable with the document being indexed.
+index:: The name of an index containing a mapping that is compatible with the document being indexed.
query:: If `_score` is used in the script then a query can specified that will be used to compute a score.
*Example*

View File

@ -28,7 +28,7 @@ This client provides:
* Logging support via Log::Any
-* Compatibility with the official clients for Python, Ruby, PHP and Javascript
+* Compatibility with the official clients for Python, Ruby, PHP and JavaScript
* Easy extensibility

View File

@ -93,6 +93,6 @@ supported:
`languageset`::
An array of languages to check. If not specified, then the language will
-be guessed. Accepts: `any`, `comomon`, `cyrillic`, `english`, `french`,
+be guessed. Accepts: `any`, `common`, `cyrillic`, `english`, `french`,
`german`, `hebrew`, `hungarian`, `polish`, `romanian`, `russian`,
`spanish`.

View File

@ -33,7 +33,7 @@ Issues and bug reports can usually be reported on the community plugin's web sit
For advice on writing your own plugin, see <<plugin-authors>>.
-IMPORTANT: Site plugins -- plugins containing HTML, CSS and Javascript -- are
+IMPORTANT: Site plugins -- plugins containing HTML, CSS and JavaScript -- are
no longer supported.
include::plugin-script.asciidoc[]

View File

@ -191,7 +191,7 @@ releases 2.0 and later do not support rivers.
==== Supported by the community:
* https://github.com/kodcu/pes[Pes]:
-A pluggable elastic Javascript query DSL builder for Elasticsearch
+A pluggable elastic JavaScript query DSL builder for Elasticsearch
* https://www.wireshark.org/[Wireshark]:
Protocol dissection for Zen discovery, HTTP and the binary protocol

View File

@ -139,7 +139,7 @@ Some examples, using scripts:
[source,js]
----
-# The simpliest one
+# The simplest one
PUT _snapshot/my_backup1
{
"type": "azure"

View File

@ -78,7 +78,7 @@ The following settings are supported:
[[repository-hdfs-availability]]
[float]
-===== A Note on HDFS Availablility
+===== A Note on HDFS Availability
When you initialize a repository, its settings are persisted in the cluster state. When a node comes online, it will
attempt to initialize all repositories for which it has settings. If your cluster has an HDFS repository configured, then
all nodes in the cluster must be able to reach HDFS when starting. If not, then the node will fail to initialize the

View File

@ -263,7 +263,7 @@ image::images/pipeline_movavg/linear_100window.png[]
The `ewma` model (aka "single-exponential") is similar to the `linear` model, except older data-points become exponentially less important,
rather than linearly less important. The speed at which the importance decays can be controlled with an `alpha`
setting. Small values make the weight decay slowly, which provides greater smoothing and takes into account a larger
-portion of the window. Larger valuers make the weight decay quickly, which reduces the impact of older values on the
+portion of the window. Larger values make the weight decay quickly, which reduces the impact of older values on the
moving average. This tends to make the moving average track the data more closely but with less smoothing.
The default value of `alpha` is `0.3`, and the setting accepts any float from 0-1 inclusive.
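For context, a `moving_avg` aggregation that selects the `ewma` model and tunes `alpha` is sketched below; the index, histogram, and metric names are illustrative rather than taken from this page.

[source,js]
--------------------------------------------------
POST /_search
{
  "size": 0,
  "aggs": {
    "sales_per_day": {
      "date_histogram": { "field": "date", "interval": "day" },
      "aggs": {
        "daily_total": { "sum": { "field": "price" } },
        "smoothed_total": {
          "moving_avg": {
            "buckets_path": "daily_total",
            "model": "ewma",
            "settings": { "alpha": 0.3 }
          }
        }
      }
    }
  }
}
--------------------------------------------------

Lowering `alpha` toward 0 smooths more aggressively, while raising it toward 1 tracks the raw series more closely.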

View File

@ -449,7 +449,7 @@ The `ewma` function (aka "single-exponential") is similar to the `linearMovAvg`
except older data-points become exponentially less important,
rather than linearly less important. The speed at which the importance decays can be controlled with an `alpha`
setting. Small values make the weight decay slowly, which provides greater smoothing and takes into account a larger
-portion of the window. Larger valuers make the weight decay quickly, which reduces the impact of older values on the
+portion of the window. Larger values make the weight decay quickly, which reduces the impact of older values on the
moving average. This tends to make the moving average track the data more closely but with less smoothing.
`null` and `NaN` values are ignored; the average is only calculated over the real values. If the window is empty, or all values are

View File

@ -58,7 +58,7 @@ The `fingerprint` analyzer accepts the following parameters:
[horizontal]
`separator`::
-The character to use to concate the terms. Defaults to a space.
+The character to use to concatenate the terms. Defaults to a space.
`max_output_size`::

View File

@ -15,7 +15,7 @@ The `char_group` tokenizer accepts one parameter:
`tokenize_on_chars`::
A list containing a list of characters to tokenize the string on. Whenever a character
from this list is encountered, a new token is started. This accepts either single
-characters like eg. `-`, or character groups: `whitespace`, `letter`, `digit`,
+characters like e.g. `-`, or character groups: `whitespace`, `letter`, `digit`,
`punctuation`, `symbol`.
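As a quick illustration of the parameter (the sample text is made up), a `char_group` tokenizer that splits on whitespace, dashes, and newlines can be exercised through the `_analyze` API:

[source,js]
--------------------------------------------------
POST _analyze
{
  "tokenizer": {
    "type": "char_group",
    "tokenize_on_chars": [
      "whitespace",
      "-",
      "\n"
    ]
  },
  "text": "The QUICK brown-fox"
}
--------------------------------------------------

This request produces the tokens `The`, `QUICK`, `brown`, and `fox`.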

View File

@ -10,7 +10,7 @@ GET /_cat/nodeattrs?v
--------------------------------------------------
// CONSOLE
// TEST[s/\?v/\?v&s=node,attr/]
-// Sort the resulting attributes so we can assert on them more easilly
+// Sort the resulting attributes so we can assert on them more easily
Could look like:
@ -55,7 +55,7 @@ GET /_cat/nodeattrs?v&h=name,pid,attr,value
--------------------------------------------------
// CONSOLE
// TEST[s/,value/,value&s=node,attr/]
-// Sort the resulting attributes so we can assert on them more easilly
+// Sort the resulting attributes so we can assert on them more easily
Might look like:

View File

@ -12,7 +12,7 @@ GET /_cat/templates?v&s=name
// TEST[s/^/PUT _template\/template0\n{"index_patterns": "te*", "order": 0}\n/]
// TEST[s/^/PUT _template\/template1\n{"index_patterns": "tea*", "order": 1}\n/]
// TEST[s/^/PUT _template\/template2\n{"index_patterns": "teak*", "order": 2, "version": 7}\n/]
-// The substitions do two things:
+// The substitutions do two things:
// 1. Filter the response to just templates matching the te* pattern
// so that we only get the templates we expect regardless of which
// templates exist. If xpack is installed there will be unexpected

View File

@ -47,7 +47,7 @@ GET /<index>/_ccr/stats
// CONSOLE
// TEST[s/<index>/follower_index/]
-==== Path Parmeters
+==== Path Parameters
`index` ::
(string) a comma-delimited list of index patterns

View File

@ -50,7 +50,7 @@ POST /<follower_index>/_ccr/unfollow
// CONSOLE
// TEST[s/<follower_index>/follower_index/]
-==== Path Parmeters
+==== Path Parameters
`follower_index` (required)::
(string) the name of the follower index

View File

@ -3,10 +3,10 @@
The cluster nodes reload secure settings API is used to re-read the
local node's encrypted keystore. Specifically, it will prompt the keystore
-decryption and reading accross the cluster. The keystore's plain content is
+decryption and reading across the cluster. The keystore's plain content is
used to reinitialize all compatible plugins. A compatible plugin can be
-reinitilized without restarting the node. The operation is
-complete when all compatible plugins have finished reinitilizing. Subsequently,
+reinitialized without restarting the node. The operation is
+complete when all compatible plugins have finished reinitializing. Subsequently,
the keystore is closed and any changes to it will not be reflected on the node.
[source,js]

View File

@ -279,7 +279,7 @@ the operating system:
`os.cgroup.memory.limit_in_bytes`.
NOTE: For the cgroup stats to be visible, cgroups must be compiled into
-the kernal, the `cpu` and `cpuacct` cgroup subsystems must be
+the kernel, the `cpu` and `cpuacct` cgroup subsystems must be
configured and stats must be readable from `/sys/fs/cgroup/cpu`
and `/sys/fs/cgroup/cpuacct`.
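These cgroup values are reported under the `os` section of the node stats, so a request scoped to just that section is enough to inspect them:

[source,js]
--------------------------------------------------
GET /_nodes/stats/os
--------------------------------------------------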

View File

@ -119,7 +119,7 @@ Which returns the following information:
<9> the definition of the phase (in this case, the "warm" phase) that the index is currently on
The index here has been moved to the error step because the shrink definition in
-the policy is using an incorrect number of shards. So rectifing that in the
+the policy is using an incorrect number of shards. So rectifying that in the
policy entails updating the existing policy to use one instead of four for
the targeted number of shards.

View File

@ -18,7 +18,7 @@ resource usage.
You control when the rollover action is triggered by specifying one or more
rollover parameters. The rollover is performed once any of the criteria are
met. Because the criteria are checked periodically, the index might grow
-slightly beyond the specified threshold. To control how often the critera are
+slightly beyond the specified threshold. To control how often the criteria are
checked, specify the `indices.lifecycle.poll_interval` cluster setting.
IMPORTANT: New indices created via rollover will not automatically inherit the

View File

@ -10,7 +10,7 @@ Indices are sorted into priority order as follows:
This means that, by default, newer indices will be recovered before older indices.
-Use the per-index dynamically updateable `index.priority` setting to customise
+Use the per-index dynamically updatable `index.priority` setting to customise
the index prioritization order. For instance:
[source,js]

View File

@ -27,7 +27,7 @@ flush can be executed if another flush operation is already executing.
The default is `false` and will cause an exception to be thrown on
the shard level if another flush operation is already running.
-`force`:: Whether a flush should be forced even if it is not necessarily needed ie.
+`force`:: Whether a flush should be forced even if it is not necessarily needed i.e.
if no changes will be committed to the index. This is useful if transaction log IDs
should be incremented even if no uncommitted changes are present.
(This setting can be considered as internal)
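The `force` flag is passed as a request parameter on the flush call; a minimal example (the index name is illustrative) is:

[source,js]
--------------------------------------------------
POST /my-index/_flush?force=true
--------------------------------------------------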

View File

@ -88,7 +88,7 @@ GET /_stats/search?groups=group1,group2
The stats returned are aggregated on the index level, with
`primaries` and `total` aggregations, where `primaries` are the values for only the
-primary shards, and `total` are the cumulated values for both primary and replica shards.
+primary shards, and `total` are the accumulated values for both primary and replica shards.
In order to get back shard level stats, set the `level` parameter to `shards`.
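For instance, to retrieve the same statistics broken down per shard (the index name is illustrative):

[source,js]
--------------------------------------------------
GET /my-index/_stats?level=shards
--------------------------------------------------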

View File

@ -428,7 +428,7 @@ For example: `'Guest'.equalsIgnoreCase(ctx.network?.name)` is null safe because
since `ctx.network?.name` can return null.
Some situations require an explicit null check. In the following example there
-is not null safe alternative, so an explict null check is needed.
+is not null safe alternative, so an explicit null check is needed.
[source,js]
--------------------------------------------------

View File

@ -47,7 +47,7 @@ Later dissect matches the `[` and then `]` and then assigns `@timestamp` to ever
Paying special attention the parts of the string to discard will help build successful dissect patterns.
Successful matches require all keys in a pattern to have a value. If any of the `%{keyname}` defined in the pattern do
-not have a value, then an exception is thrown and may be handled by the <<handling-failure-in-pipelines,on_falure>> directive.
+not have a value, then an exception is thrown and may be handled by the <<handling-failure-in-pipelines,on_failure>> directive.
An empty key `%{}` or a <<dissect-modifier-named-skip-key, named skip key>> can be used to match values, but exclude the value from
the final document. All matched values are represented as string data types. The <<convert-processor, convert processor>>
may be used to convert to expected data type.
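A minimal ingest pipeline that pairs the `dissect` processor with an `on_failure` handler might look like the sketch below; the pipeline name, pattern, and field names are illustrative.

[source,js]
--------------------------------------------------
PUT _ingest/pipeline/dissect-example
{
  "processors": [
    {
      "dissect": {
        "field": "message",
        "pattern": "%{clientip} %{ident} %{auth} [%{@timestamp}]",
        "on_failure": [
          {
            "set": {
              "field": "error.message",
              "value": "dissect pattern did not match"
            }
          }
        ]
      }
    }
  ]
}
--------------------------------------------------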

View File

@ -5,7 +5,7 @@ Expands a field with dots into an object field. This processor allows fields
with dots in the name to be accessible by other processors in the pipeline.
Otherwise these <<accessing-data-in-pipelines,fields>> can't be accessed by any processor.
-[[dot-expender-options]]
+[[dot-expander-options]]
.Dot Expand Options
[options="header"]
|======

View File

@ -189,7 +189,7 @@ Which returns:
===== Recognizing Location as a Geopoint
Although this processor enriches your document with a `location` field containing
the estimated latitude and longitude of the IP address, this field will not be
-indexed as a {ref}/geo-point.html[`geo_point`] type in Elasticsearch without explicitely defining it
+indexed as a {ref}/geo-point.html[`geo_point`] type in Elasticsearch without explicitly defining it
as such in the mapping.
You can use the following mapping for the example index above:

View File

@ -17,7 +17,7 @@ If you need help building patterns to match your logs, you will find the {kibana
Grok sits on top of regular expressions, so any regular expressions are valid in grok as well.
The regular expression library is Oniguruma, and you can see the full supported regexp syntax
-https://github.com/kkos/oniguruma/blob/master/doc/RE[on the Onigiruma site].
+https://github.com/kkos/oniguruma/blob/master/doc/RE[on the Oniguruma site].
Grok works by leveraging this regular expression language to allow naming existing patterns and combining them into more
complex patterns that match your fields.

View File

@ -48,7 +48,7 @@ reordered or deleted after they were initially added.
The `match_mapping_type` is the datatype detected by the json parser. Since
JSON doesn't allow to distinguish a `long` from an `integer` or a `double` from
-a `float`, it will always choose the wider datatype, ie. `long` for integers
+a `float`, it will always choose the wider datatype, i.e. `long` for integers
and `double` for floating-point numbers.
The following datatypes may be automatically detected:

View File

@ -121,7 +121,7 @@ near perfect spatial resolution (down to 1e-7 decimal degree precision) since al
spatial relations are computed using an encoded vector representation of the
original shape instead of a raster-grid representation as used by the
<<prefix-trees>> indexing approach. Performance of the tessellator primarily
-depends on the number of vertices that define the polygon/multi-polyogn. While
+depends on the number of vertices that define the polygon/multi-polygon. While
this is the default indexing technique prefix trees can still be used by setting
the `tree` or `strategy` parameters according to the appropriate
<<geo-shape-mapping-options>>. Note that these parameters are now deprecated

View File

@ -106,7 +106,7 @@ keepalives cannot be configured.
==== Transport Compression
[float]
-===== Request Compresssion
+===== Request Compression
By default, the `transport.compress` setting is `false` and network-level
request compression is disabled between nodes in the cluster. This default

View File

@ -213,7 +213,7 @@ exponent. Scores are computed as `S^exp^ / (S^exp^ + pivot^exp^)`. Like for the
and scores are in +(0, 1)+.
`exponent` must be positive, but is typically in +[0.5, 1]+. A good value should
-be computed via traning. If you don't have the opportunity to do so, we recommend
+be computed via training. If you don't have the opportunity to do so, we recommend
that you stick to the `saturation` function instead.
[source,js]

View File

@ -26,7 +26,7 @@ Scripting::
Search::
* Remove the deprecated _termvector endpoint. {pull}36131[#36131] (issues: {issue}36098[#36098], {issue}8484[#8484])
* Remove deprecated Graph endpoints {pull}35956[#35956]
-* Validate metdata on `_msearch` {pull}35938[#35938] (issue: {issue}35869[#35869])
+* Validate metadata on `_msearch` {pull}35938[#35938] (issue: {issue}35869[#35869])
* Make hits.total an object in the search response {pull}35849[#35849] (issue: {issue}33028[#33028])
* Remove the distinction between query and filter context in QueryBuilders {pull}35354[#35354] (issue: {issue}35293[#35293])
* Throw a parsing exception when boost is set in span_or query (#28390) {pull}34112[#34112] (issue: {issue}28390[#28390])
@ -544,7 +544,7 @@ Search::
* Add a More Like This query routing requirement check (#29678) {pull}33974[#33974]
Security::
-* Remove license state listeners on closables {pull}36308[#36308] (issues: {issue}33328[#33328], {issue}35627[#35627], {issue}35628[#35628])
+* Remove license state listeners on closeables {pull}36308[#36308] (issues: {issue}33328[#33328], {issue}35627[#35627], {issue}35628[#35628])
Snapshot/Restore::
* Upgrade GCS Dependencies to 1.55.0 {pull}36634[#36634] (issues: {issue}35229[#35229], {issue}35459[#35459])

View File

@ -126,7 +126,7 @@ GET /my_index/_rank_eval
<1> the template id
<2> the template definition to use
-<3> a reference to a previously defined temlate
+<3> a reference to a previously defined template
<4> the parameters to use to fill the template
[float]

View File

@ -11,7 +11,7 @@ cluster where {xpack} is installed, then you must download and configure the
. Add the {xpack} transport JAR file to your *CLASSPATH*. You can download the {xpack}
distribution and extract the JAR file manually or you can get it from the
-https://artifacts.elastic.co/maven/org/elasticsearch/client/x-pack-transport/{version}/x-pack-transport-{version}.jar[Elasticsearc Maven repository].
+https://artifacts.elastic.co/maven/org/elasticsearch/client/x-pack-transport/{version}/x-pack-transport-{version}.jar[Elasticsearch Maven repository].
As with any dependency, you will also need its transitive dependencies. Refer to the
https://artifacts.elastic.co/maven/org/elasticsearch/client/x-pack-transport/{version}/x-pack-transport-{version}.pom[X-Pack POM file
for your version] when downloading for offline usage.

View File

@ -21,7 +21,7 @@ Add the {es} JDBC driver to DbVisualizer through *Tools* > *Driver Manager*:
image:images/sql/client-apps/dbvis-1-driver-manager.png[]
Create a new driver entry through *Driver* > *Create Driver* entry and add the JDBC driver in the files panel
-through the buttons on the right. Once specify, the driver class and its version should be automatically picked up - one can force the refresh through the *Find driver in liste locations* button, the second from the bottom on the right hand side:
+through the buttons on the right. Once specify, the driver class and its version should be automatically picked up - one can force the refresh through the *Find driver in listed locations* button, the second from the bottom on the right hand side:
image:images/sql/client-apps/dbvis-2-driver.png[]

View File

@ -80,7 +80,7 @@ image:images/sql/odbc/apps_microstrat_loadtable.png[]
+
. Data Access Mode
+
-Choose a table to load data from and press the _Finish_ button. When doing so, the application offers to choose a loading methdology.
+Choose a table to load data from and press the _Finish_ button. When doing so, the application offers to choose a loading methodology.
You can choose whichever, we'll exemplify the _Connect Live_ way:
+
[[apps_microstrat_live]]

View File

@ -45,7 +45,7 @@ tables will load a preview of the data within:
.Pick table to load
image:images/sql/odbc/apps_pbi_picktable.png[]
-Now tick the chosen table and click on the _Load_ button. Power BI will now load and anlyze the data, populating a list with the available
+Now tick the chosen table and click on the _Load_ button. Power BI will now load and analyze the data, populating a list with the available
columns. These can now be used to build the desired visualisation:
[[apps_pbi_loaded]]

View File

@ -160,7 +160,7 @@ security option and is the recommended setting for production deployments.
+
* Certificate File
+
-In case the server uses a certificate that is not part of the PKI, for example usaing a self-signed certificate, you can configure the path to a X.509 certificate file that will be used by the driver to validate server's offered certificate.
+In case the server uses a certificate that is not part of the PKI, for example using a self-signed certificate, you can configure the path to a X.509 certificate file that will be used by the driver to validate server's offered certificate.
+
The driver will only read the contents of the file just before a connection is attempted. See <<connection_testing>> section further on how to check the validity of the provided parameters.
+

View File

@ -76,7 +76,7 @@ If you encounter an error during installation we would encourage you to open an
[[installation-cmd]]
==== Installation using the command line
-NOTE: The examples given below apply to installation of the 64 bit MSI package. To acheive the same result with the 32 bit MSI package you would instead use the filename suffix `windows-x86.msi`
+NOTE: The examples given below apply to installation of the 64 bit MSI package. To achieve the same result with the 32 bit MSI package you would instead use the filename suffix `windows-x86.msi`
The `.msi` can also be installed via the command line. The simplest installation using the same defaults as the GUI is achieved by first navigating to the download directory, then running:

View File

@ -30,7 +30,7 @@ No need for additional hardware, processes, runtimes or libraries to query {es};
Lightweight and efficient::
-{es-sql} does not abstract {es} and its search capabilities - on the contrary, it embraces and exposes SQL to allow proper full-text search, in real-time, in the same declarative, succint fashion.
+{es-sql} does not abstract {es} and its search capabilities - on the contrary, it embraces and exposes SQL to allow proper full-text search, in real-time, in the same declarative, succinct fashion.

View File

@ -48,7 +48,7 @@ pre-5.x indices forward to 6.x. Data in time-based indices
generally becomes less useful as time passes and are
deleted as they age past your retention period.
-Unless you have an unusally long retention period, you can just
+Unless you have an unusually long retention period, you can just
wait to upgrade to 6.x until all of your pre-5.x indices have
been deleted.

View File

@ -2,7 +2,7 @@
================================================
When you extract the zip or tarball packages, the `elasticsearch-n.n.n`
-directory contains the Elasticsearh `config`, `data`, `logs` and
+directory contains the Elasticsearch `config`, `data`, `logs` and
`plugins` directories.
We recommend moving these directories out of the Elasticsearch directory

View File

@ -607,7 +607,7 @@ Currently, the circuit breaker protects against loading too much field data by e
Elasticsearch has moved from an object-based cache to a page-based cache recycler as described in issue {GIT}4557[#4557]. This change makes garbage collection easier by limiting fragmentation, since all pages have the same size and are recycled. It also allows managing the size of the cache not based on the number of objects it contains, but on the memory that it uses.
-These pages are used for two main purposes: implementing higher level data structures such as hash tables that are used internally by aggregations to eg. map terms to counts, as well as reusing memory in the translog/transport layer as detailed in issue {GIT}5691[#5691].
+These pages are used for two main purposes: implementing higher level data structures such as hash tables that are used internally by aggregations to e.g. map terms to counts, as well as reusing memory in the translog/transport layer as detailed in issue {GIT}5691[#5691].
[float]
=== Dedicated Master Nodes Resiliency (STATUS: DONE, v1.0.0)

View File

@ -101,7 +101,7 @@ persistent ("keep-alive") HTTP connections.
=== Extensions
The https://github.com/elastic/elasticsearch-ruby/tree/master/elasticsearch-extensions[`elasticsearch-extensions`]
-Rubygem provides a number of extensions to the core client, such as an API to programatically launch
+Rubygem provides a number of extensions to the core client, such as an API to programmatically launch
Elasticsearch clusters (eg. for testing purposes), and more.
Please see its

View File

@ -66,8 +66,8 @@ public class DocsClientYamlTestSuiteIT extends ESClientYamlSuiteTestCase {
entries.addAll(ExecutableSection.DEFAULT_EXECUTABLE_CONTEXTS);
entries.add(new NamedXContentRegistry.Entry(ExecutableSection.class,
new ParseField("compare_analyzers"), CompareAnalyzers::parse));
-NamedXContentRegistry executeableSectionRegistry = new NamedXContentRegistry(entries);
-return ESClientYamlSuiteTestCase.createParameters(executeableSectionRegistry);
+NamedXContentRegistry executableSectionRegistry = new NamedXContentRegistry(entries);
+return ESClientYamlSuiteTestCase.createParameters(executableSectionRegistry);
}
@Override

View File

@ -23,7 +23,7 @@ The {es} {security-features} provide two ways to persist audit logs:
index. The audit index can reside on the same cluster, or a separate cluster.
By default, only the `logfile` output is used when enabling auditing,
-implicitly outputing to both `<clustername>_audit.log` and `<clustername>_access.log`.
+implicitly outputting to both `<clustername>_audit.log` and `<clustername>_access.log`.
To facilitate browsing and analyzing the events, you can also enable
indexing by setting `xpack.security.audit.outputs` in `elasticsearch.yml`:

View File

@ -159,7 +159,7 @@ user: <1>
<1> The name of a role.
<2> The distinguished name (DN) of a PKI user.
-The disinguished name for a PKI user follows X.500 naming conventions which
+The distinguished name for a PKI user follows X.500 naming conventions which
place the most specific fields (like `cn` or `uid`) at the beginning of the
name, and the most general fields (like `o` or `dc`) at the end of the name.
Some tools, such as _openssl_, may print out the subject name in a different

View File

@ -251,7 +251,7 @@ additional names that can be used:
`NameID` elements are an optional, but frequently provided, field within a
SAML Assertion that the IdP may use to identify the Subject of that
Assertion. In some cases the `NameID` will relate to the user's login
-identifier (username) wihin the IdP, but in many cases they will be
+identifier (username) within the IdP, but in many cases they will be
internally generated identifiers that have no obvious meaning outside
of the IdP.
@ -531,7 +531,7 @@ The path to the PEM formatted certificate file. e.g. `saml/saml-sign.crt`
The path to the PEM formatted key file. e.g. `saml/saml-sign.key`
`signing.secure_key_passphrase`::
-The passphrase for the key, if the file is encypted. This is a
+The passphrase for the key, if the file is encrypted. This is a
{ref}/secure-settings.html[secure setting] that must be set with the
`elasticsearch-keystore` tool.
@ -545,7 +545,7 @@ The path to the PKCS#12 or JKS keystore. e.g. `saml/saml-sign.p12`
The alias of the key within the keystore. e.g. `signing-key`
`signing.keystore.secure_password`::
-The passphrase for the keystore, if the file is encypted. This is a
+The passphrase for the keystore, if the file is encrypted. This is a
{ref}/secure-settings.html[secure setting] that must be set with the
`elasticsearch-keystore` tool.
@ -582,7 +582,7 @@ The path to the PEM formatted certificate file. e.g. `saml/saml-crypt.crt`
The path to the PEM formatted key file. e.g. `saml/saml-crypt.key`
`encryption.secure_key_passphrase`::
-The passphrase for the key, if the file is encypted. This is a
+The passphrase for the key, if the file is encrypted. This is a
{ref}/secure-settings.html[secure setting] that must be set with the
`elasticsearch-keystore` tool.
@ -596,7 +596,7 @@ The path to the PKCS#12 or JKS keystore. e.g. `saml/saml-crypt.p12`
The alias of the key within the keystore. e.g. `encryption-key`
`encryption.keystore.secure_password`::
-The passphrase for the keystore, if the file is encypted. This is a
+The passphrase for the keystore, if the file is encrypted. This is a
{ref}/secure-settings.html[secure setting] that must be set with the
`elasticsearch-keystore` tool.
@ -731,7 +731,7 @@ the certificates that {es} has been configured to use.
SAML authentication in {kib} is also subject to the
`xpack.security.sessionTimeout` setting that is described in the {kib} security
-documentation, and you may wish to adjst this timeout to meet your local needs.
+documentation, and you may wish to adjust this timeout to meet your local needs.
The two additional settings that are required for SAML support are shown below:

View File

@ -56,7 +56,7 @@ http://elasticsearch-py.readthedocs.org/en/master/#ssl-and-authentication[Python
https://metacpan.org/pod/Search::Elasticsearch::Cxn::HTTPTiny#CONFIGURATION[Perl],
http://www.elastic.co/guide/en/elasticsearch/client/php-api/current/_security.html[PHP],
http://nest.azurewebsites.net/elasticsearch-net/security.html[.NET],
-http://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/auth-reference.html[Javascript]
+http://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/auth-reference.html[JavaScript]
////
Groovy - TODO link

View File

@ -198,7 +198,7 @@ image::images/action-throttling.jpg[align="center"]
When a watch is triggered, its condition determines whether or not to execute the
watch actions. Within each action, you can also add a condition per action. These
additional conditions enable a single alert to execute different actions depending
-on a their respective conditions. The following watch would alway send an email, when
+on a their respective conditions. The following watch would always send an email, when
hits are found from the input search, but only trigger the `notify_pager` action when
there are more than 5 hits in the search result.

View File

@ -49,7 +49,7 @@ initial payload.
A <<input-search, search>> input contains a `request` object that specifies the
indices you want to search, the {ref}/search-request-search-type.html[search type],
and the search request body. The `body` field of a search input is the same as
-the body of an Elasticsearch `_search` request, making the full Elaticsearch
+the body of an Elasticsearch `_search` request, making the full Elasticsearch
Query DSL available for you to use.
For example, the following `search` input loads the latest VIX quote:

View File

@ -121,7 +121,7 @@ March 30, 2016
.New Features
* Added <<actions-pagerduty, PagerDuty action>>
* Added support for adding <<configuring-email-attachments, attachments to emails>>
-via HTTP requests and superceding and deprecating the usage of `attach_data`
+via HTTP requests and superseding and deprecating the usage of `attach_data`
in order to use this feature
[float]
@ -143,7 +143,7 @@ February 2, 2016
February 2, 2016
.Enhancements
-* Adds support for Elasticssearch 2.1.2
+* Adds support for Elasticsearch 2.1.2
[float]
==== 2.1.1