[DOCS] Replace "// CONSOLE" comments with [source,console] (#46679)

James Rodewig 2019-09-13 11:23:53 -04:00
parent 0def429bc1
commit 2831535cf9
12 changed files with 30 additions and 56 deletions
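The pattern is uniform across all twelve files: a snippet tagged `[source,js]` and followed by a `// CONSOLE` magic comment becomes `[source,console]`, since the `console` language tag on its own marks the snippet as runnable in Console. A minimal before/after sketch of the conversion (using `GET /_search` as the request body, as in several hunks below):

```asciidoc
// Before: runnability signaled by a magic comment after the block
[source,js]
--------------------------------------------------
GET /_search
--------------------------------------------------
// CONSOLE

// After: the console language tag carries the same meaning by itself
[source,console]
--------------------------------------------------
GET /_search
--------------------------------------------------
```

`// TEST[...]` annotations are unaffected and stay below the block in both forms.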

View File

@@ -51,7 +51,7 @@ A `moving_avg` aggregation looks like this in isolation:
`moving_avg` aggregations must be embedded inside of a `histogram` or `date_histogram` aggregation. They can be
embedded like any other metric aggregation:
-[source,js]
+[source,console]
--------------------------------------------------
POST /_search
{
@@ -74,7 +74,6 @@ POST /_search
}
}
--------------------------------------------------
-// CONSOLE
// TEST[setup:sales]
// TEST[warning:The moving_avg aggregation has been deprecated in favor of the moving_fn aggregation.]
@@ -154,7 +153,7 @@ The `simple` model calculates the sum of all values in the window, then divides
a simple arithmetic mean of the window. The simple model does not perform any time-dependent weighting, which means
the values from a `simple` moving average tend to "lag" behind the real data.
-[source,js]
+[source,console]
--------------------------------------------------
POST /_search
{
@@ -181,7 +180,6 @@ POST /_search
}
}
--------------------------------------------------
-// CONSOLE
// TEST[setup:sales]
// TEST[warning:The moving_avg aggregation has been deprecated in favor of the moving_fn aggregation.]
@@ -208,7 +206,7 @@ The `linear` model assigns a linear weighting to points in the series, such that
the beginning of the window) contribute a linearly less amount to the total average. The linear weighting helps reduce
the "lag" behind the data's mean, since older points have less influence.
-[source,js]
+[source,console]
--------------------------------------------------
POST /_search
{
@@ -235,7 +233,6 @@ POST /_search
}
}
--------------------------------------------------
-// CONSOLE
// TEST[setup:sales]
// TEST[warning:The moving_avg aggregation has been deprecated in favor of the moving_fn aggregation.]
@@ -268,7 +265,7 @@ The default value of `alpha` is `0.3`, and the setting accepts any float from 0-
The EWMA model can be <<movavg-minimizer, Minimized>>
-[source,js]
+[source,console]
--------------------------------------------------
POST /_search
{
@@ -298,7 +295,6 @@ POST /_search
}
}
--------------------------------------------------
-// CONSOLE
// TEST[setup:sales]
// TEST[warning:The moving_avg aggregation has been deprecated in favor of the moving_fn aggregation.]
@@ -327,7 +323,7 @@ The default value of `alpha` is `0.3` and `beta` is `0.1`. The settings accept a
The Holt-Linear model can be <<movavg-minimizer, Minimized>>
-[source,js]
+[source,console]
--------------------------------------------------
POST /_search
{
@@ -358,7 +354,6 @@ POST /_search
}
}
--------------------------------------------------
-// CONSOLE
// TEST[setup:sales]
// TEST[warning:The moving_avg aggregation has been deprecated in favor of the moving_fn aggregation.]
@@ -416,7 +411,7 @@ The default value of `period` is `1`.
The additive Holt-Winters model can be <<movavg-minimizer, Minimized>>
-[source,js]
+[source,console]
--------------------------------------------------
POST /_search
{
@@ -450,7 +445,6 @@ POST /_search
}
}
--------------------------------------------------
-// CONSOLE
// TEST[setup:sales]
// TEST[warning:The moving_avg aggregation has been deprecated in favor of the moving_fn aggregation.]
@@ -477,7 +471,7 @@ the result, but only minimally. If your data is non-zero, or you prefer to see
you can disable this behavior with `pad: false`
======
-[source,js]
+[source,console]
--------------------------------------------------
POST /_search
{
@@ -512,7 +506,6 @@ POST /_search
}
}
--------------------------------------------------
-// CONSOLE
// TEST[setup:sales]
// TEST[warning:The moving_avg aggregation has been deprecated in favor of the moving_fn aggregation.]
@@ -527,7 +520,7 @@ Predictions are enabled by adding a `predict` parameter to any moving average ag
predictions you would like appended to the end of the series. These predictions will be spaced out at the same interval
as your buckets:
-[source,js]
+[source,console]
--------------------------------------------------
POST /_search
{
@@ -555,7 +548,6 @@ POST /_search
}
}
--------------------------------------------------
-// CONSOLE
// TEST[setup:sales]
// TEST[warning:The moving_avg aggregation has been deprecated in favor of the moving_fn aggregation.]
@@ -606,7 +598,7 @@ models.
Minimization is enabled/disabled via the `minimize` parameter:
-[source,js]
+[source,console]
--------------------------------------------------
POST /_search
{
@@ -637,7 +629,6 @@ POST /_search
}
}
--------------------------------------------------
-// CONSOLE
// TEST[setup:sales]
// TEST[warning:The moving_avg aggregation has been deprecated in favor of the moving_fn aggregation.]

View File

@@ -353,11 +353,10 @@ retrieved by using the following API:
==== Example
-[source,js]
+[source,console]
--------------------------------------------------
GET /_slm/stats
--------------------------------------------------
-// CONSOLE
// TEST[continued]
Which returns a response similar to:

View File

@@ -248,4 +248,3 @@ The API returns the following response:
}
}
--------------------------------------------------
-// TESTRESPONSE

View File

@@ -33,5 +33,4 @@ PUT tweets
}
}
--------------------------------------------------
-// CONSOLE
// TEST[warning:Index [tweets] uses the deprecated `enabled` setting for `_field_names`. Disabling _field_names is not necessary because it no longer carries a large index overhead. Support for this setting will be removed in a future major version. Please remove it from your mappings and templates.]

View File

@@ -65,7 +65,7 @@ Adaptive replica selection has been enabled by default. If you wish to return to
the older round robin of search requests, you can use the
`cluster.routing.use_adaptive_replica_selection` setting:
-[source,js]
+[source,console]
--------------------------------------------------
PUT /_cluster/settings
{
@@ -74,7 +74,7 @@ PUT /_cluster/settings
}
}
--------------------------------------------------
-// CONSOLE
[float]
[[search-api-returns-400-invalid-requests]]

View File

@@ -70,7 +70,7 @@ as stopwords without the need to maintain a manual list.
In this example, words that have a document frequency greater than 0.1%
(eg `"this"` and `"is"`) will be treated as _common terms_.
-[source,js]
+[source,console]
--------------------------------------------------
GET /_search
{
@@ -84,7 +84,6 @@ GET /_search
}
}
--------------------------------------------------
-// CONSOLE
// TEST[warning:Deprecated field [common] used, replaced by [[match] query which can efficiently skip blocks of documents if the total number of hits is not tracked]]
The number of terms which should match can be controlled with the
@@ -95,7 +94,7 @@ The number of terms which should match can be controlled with the
For low frequency terms, set the `low_freq_operator` to `"and"` to make
all terms required:
-[source,js]
+[source,console]
--------------------------------------------------
GET /_search
{
@@ -110,12 +109,11 @@ GET /_search
}
}
--------------------------------------------------
-// CONSOLE
// TEST[warning:Deprecated field [common] used, replaced by [[match] query which can efficiently skip blocks of documents if the total number of hits is not tracked]]
which is roughly equivalent to:
-[source,js]
+[source,console]
--------------------------------------------------
GET /_search
{
@@ -135,14 +133,13 @@ GET /_search
}
}
--------------------------------------------------
-// CONSOLE
Alternatively use
<<query-dsl-minimum-should-match,`minimum_should_match`>>
to specify a minimum number or percentage of low frequency terms which
must be present, for instance:
-[source,js]
+[source,console]
--------------------------------------------------
GET /_search
{
@@ -157,12 +154,11 @@ GET /_search
}
}
--------------------------------------------------
-// CONSOLE
// TEST[warning:Deprecated field [common] used, replaced by [[match] query which can efficiently skip blocks of documents if the total number of hits is not tracked]]
which is roughly equivalent to:
-[source,js]
+[source,console]
--------------------------------------------------
GET /_search
{
@@ -187,7 +183,6 @@ GET /_search
}
}
--------------------------------------------------
-// CONSOLE
A different
<<query-dsl-minimum-should-match,`minimum_should_match`>>
@@ -195,7 +190,7 @@ can be applied for low and high frequency terms with the additional
`low_freq` and `high_freq` parameters. Here is an example when providing
additional parameters (note the change in structure):
-[source,js]
+[source,console]
--------------------------------------------------
GET /_search
{
@@ -213,12 +208,11 @@ GET /_search
}
}
--------------------------------------------------
-// CONSOLE
// TEST[warning:Deprecated field [common] used, replaced by [[match] query which can efficiently skip blocks of documents if the total number of hits is not tracked]]
which is roughly equivalent to:
-[source,js]
+[source,console]
--------------------------------------------------
GET /_search
{
@@ -249,7 +243,6 @@ GET /_search
}
}
--------------------------------------------------
-// CONSOLE
In this case it means the high frequency terms have only an impact on
relevance when there are at least three of them. But the most
@@ -257,7 +250,7 @@ interesting use of the
<<query-dsl-minimum-should-match,`minimum_should_match`>>
for high frequency terms is when there are only high frequency terms:
-[source,js]
+[source,console]
--------------------------------------------------
GET /_search
{
@@ -275,12 +268,11 @@ GET /_search
}
}
--------------------------------------------------
-// CONSOLE
// TEST[warning:Deprecated field [common] used, replaced by [[match] query which can efficiently skip blocks of documents if the total number of hits is not tracked]]
which is roughly equivalent to:
-[source,js]
+[source,console]
--------------------------------------------------
GET /_search
{
@@ -297,7 +289,6 @@ GET /_search
}
}
--------------------------------------------------
-// CONSOLE
The high frequency generated query is then slightly less restrictive
than with an `AND`.

View File

@@ -279,7 +279,7 @@ documents if in the range `[0..1)` or absolute if greater or equal to
Here is an example showing a query composed of stopwords exclusively:
-[source,js]
+[source,console]
--------------------------------------------------
GET /_search
{
@@ -293,7 +293,6 @@ GET /_search
}
}
--------------------------------------------------
-// CONSOLE
// TEST[warning:Deprecated field [cutoff_frequency] used, replaced by [you can omit this option, the [match] query can skip block of documents efficiently if the total number of hits is not tracked]]
IMPORTANT: The `cutoff_frequency` option operates on a per-shard-level. This means

View File

@@ -5,7 +5,7 @@ deprecated[7.0.0,Types and the `type` query are deprecated and in the process of
Filters documents matching the provided document / mapping type.
-[source,js]
+[source,console]
--------------------------------------------------
GET /_search
{
@@ -16,4 +16,3 @@ GET /_search
}
}
--------------------------------------------------
-// CONSOLE

View File

@@ -102,7 +102,7 @@ To do that, it will search for the first `keyword` that it can find that is _not
Consider the following `string` mapping:
-[source, js]
+[source,js]
----
{
"first_name" : {

View File

@@ -83,7 +83,7 @@ If you want to use more complex queries, you can create your {dataframe} from a
If you prefer, you can use the
{ref}/preview-data-frame-transform.html[preview {dataframe-transforms} API]:
-[source,js]
+[source,console]
--------------------------------------------------
POST _data_frame/transforms/_preview
{
@@ -130,7 +130,6 @@ POST _data_frame/transforms/_preview
}
}
--------------------------------------------------
-// CONSOLE
// TEST[skip:set up sample data]
--
@@ -159,7 +158,7 @@ If you prefer, you can use the
{ref}/put-data-frame-transform.html[create {dataframe-transforms} API]. For
example:
-[source,js]
+[source,console]
--------------------------------------------------
PUT _data_frame/transforms/ecommerce-customer-transform
{
@@ -213,7 +212,6 @@ PUT _data_frame/transforms/ecommerce-customer-transform
}
}
--------------------------------------------------
-// CONSOLE
// TEST[skip:setup kibana sample data]
--
@@ -236,11 +234,10 @@ Alternatively, you can use the
{ref}/stop-data-frame-transform.html[stop {dataframe-transforms}] APIs. For
example:
-[source,js]
+[source,console]
--------------------------------------------------
POST _data_frame/transforms/ecommerce-customer-transform/_start
--------------------------------------------------
-// CONSOLE
// TEST[skip:setup kibana sample data]
--

View File

@@ -80,7 +80,7 @@ if _at least one_ of the member values matches. For example, the following rule
matches any user who is a member of the `admin` group, regardless of any
other groups they belong to:
-[source, js]
+[source,js]
------------------------------------------------------------
{ "field" : { "groups" : "admin" } }
------------------------------------------------------------

View File

@@ -22,7 +22,7 @@ You can use the {ref}/security-api-clear-cache.html[clear cache API] to force
the eviction of cached users . For example, the following request evicts all
users from the `ad1` realm:
-[source, js]
+[source,js]
------------------------------------------------------------
$ curl -XPOST 'http://localhost:9200/_security/realm/ad1/_clear_cache'
------------------------------------------------------------
@@ -30,7 +30,7 @@ $ curl -XPOST 'http://localhost:9200/_security/realm/ad1/_clear_cache'
To clear the cache for multiple realms, specify the realms as a comma-separated
list:
-[source, js]
+[source,js]
------------------------------------------------------------
$ curl -XPOST 'http://localhost:9200/_security/realm/ad1,ad2/_clear_cache'
------------------------------------------------------------