[DOCS] Replace `datatype` with `data type` (#58972) (#59184)

James Rodewig 2020-07-07 14:59:35 -04:00 committed by GitHub
parent 045b893dd1
commit 6ed356ffc3
57 changed files with 143 additions and 142 deletions

View File

@@ -9,7 +9,7 @@ You can add mappings at index creation time:
 include-tagged::{client-tests}/IndicesDocumentationIT.java[index-with-mapping]
 --------------------------------------------------
 <1> <<java-admin-indices-create-index,Creates an index>> called `twitter`
-<2> Add a `_doc` type with a field called `message` that has the datatype `text`.
+<2> Add a `_doc` type with a field called `message` that has the data type `text`.
 There are several variants of the above `addMapping` method, some taking an
 `XContentBuilder` or a `Map` with the mapping definition as arguments. Make sure

View File

@@ -1,7 +1,7 @@
 [[mapper]]
 == Mapper Plugins
-Mapper plugins allow new field datatypes to be added to Elasticsearch.
+Mapper plugins allow new field data types to be added to Elasticsearch.
 [float]
 === Core mapper plugins

View File

@@ -144,7 +144,7 @@ Character used to separate tokens from payloads. Defaults to `|`.
 +
 --
 (Optional, string)
-Datatype for the stored payload. Valid values are:
+Data type for the stored payload. Valid values are:
 `float`:::
 (Default) Float

View File

@@ -467,7 +467,7 @@ Contains statistics about <<mapping,field mappings>> in selected nodes.
 =====
 `field_types`::
 (array of objects)
-Contains statistics about <<mapping-types,field datatypes>> used in selected
+Contains statistics about <<mapping-types,field data types>> used in selected
 nodes.
 +
 .Properties of `field_types` objects
@@ -475,15 +475,15 @@ nodes.
 ======
 `name`::
 (string)
-Field datatype used in selected nodes.
+Field data type used in selected nodes.
 `count`::
 (integer)
-Number of fields mapped to the field datatype in selected nodes.
+Number of fields mapped to the field data type in selected nodes.
 `index_count`::
 (integer)
-Number of indices containing a mapping of the field datatype in selected nodes.
+Number of indices containing a mapping of the field data type in selected nodes.
 ======
 =====

View File

@@ -245,7 +245,7 @@ PUT /logs/_mapping
 ====
 Except for supported mapping parameters, we don't recommend you change the
-mapping or field datatype of existing fields, even in a data stream's matching
+mapping or field data type of existing fields, even in a data stream's matching
 index template or its backing indices. Changing the mapping of an existing
 field could invalidate any data that's already indexed.
@@ -378,7 +378,7 @@ new data stream and reindex your data into it. See
 === Use reindex to change mappings or settings
 You can use a reindex to change the mappings or settings of a data stream. This
-is often required to change the datatype of an existing field or update static
+is often required to change the data type of an existing field or update static
 index settings for backing indices.
 To reindex a data stream, first create or update an index template so that it
@@ -447,8 +447,8 @@ uses the `logs_data_stream` template as its basis, with the following changes:
 * The `index_patterns` wildcard pattern matches any index or data stream
 starting with `new_logs`.
-* The `@timestamp` field mapping uses the `date_nanos` field datatype rather
-than the `date` datatype.
+* The `@timestamp` field mapping uses the `date_nanos` field data type rather
+than the `date` data type.
 * The template includes `sort.field` and `sort.order` index settings, which were
 not in the original `logs_data_stream` template.
@@ -475,7 +475,7 @@ PUT /_index_template/new_logs_data_stream
 }
 }
 ----
-<1> Changes the `@timestamp` field mapping to the `date_nanos` field datatype.
+<1> Changes the `@timestamp` field mapping to the `date_nanos` field data type.
 <2> Adds the `sort.field` index setting.
 <3> Adds the `sort.order` index setting.
 ====

View File

@@ -17,7 +17,7 @@ the stream's backing indices. It contains:
 * A name or wildcard (`*`) pattern for the data stream.
 * The data stream's _timestamp field_. This field must be mapped as a
-<<date,`date`>> or <<date_nanos,`date_nanos`>> field datatype and must be
+<<date,`date`>> or <<date_nanos,`date_nanos`>> field data type and must be
 included in every document indexed to the data stream.
 * The mappings and settings applied to each backing index when it's created.

View File

@@ -74,7 +74,7 @@ Addend to add. If `null`, the function returns `null`.
 Two addends are required. No more than two addends can be provided.
 +
 If using a field as the argument, this parameter supports only
-<<number,`numeric`>> field datatypes.
+<<number,`numeric`>> field data types.
 *Returns:* integer, float, or `null`
 ====
@@ -131,7 +131,7 @@ Source string. Empty strings return an empty string (`""`), regardless of the
 `<left>` or `<right>` parameters. If `null`, the function returns `null`.
 If using a field as the argument, this parameter supports only the following
-field datatypes:
+field data types:
 * <<keyword,`keyword`>>
 * <<constant-keyword,`constant_keyword`>>
@@ -147,7 +147,7 @@ Text to the left of the substring to extract. This text should include
 whitespace.
 If using a field as the argument, this parameter supports only the following
-field datatypes:
+field data types:
 * <<keyword,`keyword`>>
 * <<constant-keyword,`constant_keyword`>>
@@ -163,7 +163,7 @@ Text to the right of the substring to extract. This text should include
 whitespace.
 If using a field as the argument, this parameter supports only the following
-field datatypes:
+field data types:
 * <<keyword,`keyword`>>
 * <<constant-keyword,`constant_keyword`>>
@@ -224,7 +224,7 @@ https://en.wikipedia.org/wiki/IPv6[IPv6] addresses. If `null`, the function
 returns `null`.
 +
 If using a field as the argument, this parameter supports only the <<ip,`ip`>>
-field datatype.
+field data type.
 `<cidr_block>`::
 (Required{multi-arg}, string or `null`)
@@ -279,7 +279,7 @@ concat(<value>[, <value>])
 Value to concatenate. If any of the arguments are `null`, the function returns `null`.
 +
 If using a field as the argument, this parameter does not support the
-<<text,`text`>> field datatype.
+<<text,`text`>> field data type.
 *Returns:* string or `null`
 ====
@@ -376,7 +376,7 @@ divide(<dividend>, <divisor>)
 Dividend to divide. If `null`, the function returns `null`.
 +
 If using a field as the argument, this parameter supports only
-<<number,`numeric`>> field datatypes.
+<<number,`numeric`>> field data types.
 `<divisor>`::
 (Required, integer or float or `null`)
@@ -384,7 +384,7 @@ Divisor to divide by. If `null`, the function returns `null`. This value cannot
 be zero (`0`).
 +
 If using a field as the argument, this parameter supports only
-<<number,`numeric`>> field datatypes.
+<<number,`numeric`>> field data types.
 *Returns:* integer, float, or `null`
 ====
@@ -434,7 +434,7 @@ endsWith(<source>, <substring>)
 Source string. If `null`, the function returns `null`.
 If using a field as the argument, this parameter supports only the following
-field datatypes:
+field data types:
 * <<keyword,`keyword`>>
 * <<constant-keyword,`constant_keyword`>>
@@ -449,7 +449,7 @@ field datatypes:
 Substring to search for. If `null`, the function returns `null`.
 If using a field as the argument, this parameter supports only the following
-field datatypes:
+field data types:
 * <<keyword,`keyword`>>
 * <<constant-keyword,`constant_keyword`>>
@@ -515,7 +515,7 @@ indexOf(<source>, <substring>[, <start_pos>])
 Source string. If `null`, the function returns `null`.
 If using a field as the argument, this parameter supports only the following
-field datatypes:
+field data types:
 * <<keyword,`keyword`>>
 * <<constant-keyword,`constant_keyword`>>
@@ -536,7 +536,7 @@ If the `<start_pos>` is positive, empty strings (`""`) return the `<start_pos>`.
 Otherwise, empty strings return `0`.
 If using a field as the argument, this parameter supports only the following
-field datatypes:
+field data types:
 * <<keyword,`keyword`>>
 * <<constant-keyword,`constant_keyword`>>
@@ -557,7 +557,7 @@ If this argument is `null` or higher than the length of the `<source>` string,
 the function returns `null`.
 If using a field as the argument, this parameter supports only the following
-<<number,numeric>> field datatypes:
+<<number,numeric>> field data types:
 * `long`
 * `integer`
@@ -605,7 +605,7 @@ String for which to return the character length. If `null`, the function returns
 `null`. Empty strings return `0`.
 If using a field as the argument, this parameter supports only the following
-field datatypes:
+field data types:
 * <<keyword,`keyword`>>
 * <<constant-keyword,`constant_keyword`>>
@@ -658,7 +658,7 @@ match(<source>, <reg_exp>[, ...])
 Source string. If `null`, the function returns `null`.
 If using a field as the argument, this parameter supports only the following
-field datatypes:
+field data types:
 * <<keyword,`keyword`>>
 * <<constant-keyword,`constant_keyword`>>
@@ -728,7 +728,7 @@ Dividend to divide. If `null`, the function returns `null`. Floating point
 numbers return `0`.
 +
 If using a field as the argument, this parameter supports only
-<<number,`numeric`>> field datatypes.
+<<number,`numeric`>> field data types.
 `<divisor>`::
 (Required, integer or float or `null`)
@@ -736,7 +736,7 @@ Divisor to divide by. If `null`, the function returns `null`. Floating point
 numbers return `0`. This value cannot be zero (`0`).
 +
 If using a field as the argument, this parameter supports only
-<<number,`numeric`>> field datatypes.
+<<number,`numeric`>> field data types.
 *Returns:* integer, float, or `null`
 ====
@@ -788,7 +788,7 @@ Factor to multiply. If `null`, the function returns `null`.
 Two factors are required. No more than two factors can be provided.
 If using a field as the argument, this parameter supports only
-<<number,`numeric`>> field datatypes.
+<<number,`numeric`>> field data types.
 --
 *Returns:* integer, float, or `null`
@@ -864,7 +864,7 @@ Strings that begin with `0x` are auto-detected as hexadecimal and use a default
 ignored. Empty strings (`""`) are not supported.
 If using a field as the argument, this parameter supports only the following
-field datatypes:
+field data types:
 * <<keyword,`keyword`>>
 * <<constant-keyword,`constant_keyword`>>
@@ -935,7 +935,7 @@ startsWith(<source>, <substring>)
 Source string. If `null`, the function returns `null`.
 If using a field as the argument, this parameter supports only the following
-field datatypes:
+field data types:
 * <<keyword,`keyword`>>
 * <<constant-keyword,`constant_keyword`>>
@@ -950,7 +950,7 @@ field datatypes:
 Substring to search for. If `null`, the function returns `null`.
 If using a field as the argument, this parameter supports only the following
-field datatypes:
+field data types:
 * <<keyword,`keyword`>>
 * <<constant-keyword,`constant_keyword`>>
@@ -994,7 +994,7 @@ string(<value>)
 Value to convert to a string. If `null`, the function returns `null`.
 +
 If using a field as the argument, this parameter does not support the
-<<text,`text`>> field datatype.
+<<text,`text`>> field data type.
 *Returns:* string or `null`
 ====
@@ -1040,7 +1040,7 @@ stringContains(<source>, <substring>)
 Source string to search. If `null`, the function returns `null`.
 If using a field as the argument, this parameter supports only the following
-field datatypes:
+field data types:
 * <<keyword,`keyword`>>
 * <<constant-keyword,`constant_keyword`>>
@@ -1052,7 +1052,7 @@ field datatypes:
 Substring to search for. If `null`, the function returns `null`.
 If using a field as the argument, this parameter supports only the following
-field datatypes:
+field data types:
 * <<keyword,`keyword`>>
 * <<constant-keyword,`constant_keyword`>>
@@ -1159,14 +1159,14 @@ subtract(<minuend>, <subtrahend>)
 Minuend to subtract from.
 +
 If using a field as the argument, this parameter supports only
-<<number,`numeric`>> field datatypes.
+<<number,`numeric`>> field data types.
 `<subtrahend>`::
 (Optional, integer or float or `null`)
 Subtrahend to subtract. If `null`, the function returns `null`.
 +
 If using a field as the argument, this parameter supports only
-<<number,`numeric`>> field datatypes.
+<<number,`numeric`>> field data types.
 *Returns:* integer, float, or `null`
 ====
@@ -1218,7 +1218,7 @@ wildcard(<source>, <wildcard_exp>[, ...])
 Source string. If `null`, the function returns `null`.
 If using a field as the argument, this parameter supports only the following
-field datatypes:
+field data types:
 * <<keyword,`keyword`>>
 * <<constant-keyword,`constant_keyword`>>
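As an aside, the `cidrMatch` behavior described above (an `ip` value tested against one or more CIDR blocks, with `null` propagating) can be sketched with Python's standard `ipaddress` module. This is an illustrative approximation of the documented semantics, not the EQL implementation:

```python
import ipaddress

def cidr_match(ip, *cidr_blocks):
    """Return True if `ip` falls inside any CIDR block; None propagates
    like EQL's `null`."""
    if ip is None:
        return None
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(block) for block in cidr_blocks)

print(cidr_match("10.6.48.157", "10.6.48.0/24"))  # True
print(cidr_match("10.6.49.157", "10.6.48.0/24"))  # False
```

Because `ipaddress` accepts both IPv4 and IPv6 literals, the same helper covers both address families mentioned in the docs.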

View File

@@ -36,7 +36,7 @@ mapped as a <<date,`date`>> field.
 [NOTE]
 ====
-You cannot use a <<nested,`nested`>> field datatype or the sub-fields of a
+You cannot use a <<nested,`nested`>> field data type or the sub-fields of a
 `nested` field data type as the timestamp or event category field. See
 <<eql-nested-fields>>.
 ====

View File

@@ -20,7 +20,7 @@ Each data stream requires an <<indices-templates,index template>> that contains:
 * A name or wildcard (`*`) pattern for the data stream.
 * The data stream's timestamp field. This field must be mapped as a
-<<date,`date`>> or <<date_nanos,`date_nanos`>> field datatype and must be
+<<date,`date`>> or <<date_nanos,`date_nanos`>> field data type and must be
 included in every document indexed to the data stream.
 * The mappings and settings applied to each backing index when it's created.

View File

@@ -76,7 +76,7 @@ include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms]
 fields, this mapping can include:
 * Field name
-* <<field-datatypes,Field datatype>>
+* <<field-datatypes,Field data type>>
 * <<mapping-params,Mapping parameters>>
 For existing fields, see <<updating-field-mappings>>.

View File

@@ -16,7 +16,7 @@ For example, if you have a log message which contains `ip=1.2.3.4 error=REFUSED`
 --------------------------------------------------
 // NOTCONSOLE
-TIP: Using the KV Processor can result in field names that you cannot control. Consider using the <<flattened>> datatype instead, which maps an entire object as a single field and allows for simple searches over its contents.
+TIP: Using the KV Processor can result in field names that you cannot control. Consider using the <<flattened>> data type instead, which maps an entire object as a single field and allows for simple searches over its contents.
 [[kv-options]]
 .KV Options
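The key-value extraction this processor performs on a line like `ip=1.2.3.4 error=REFUSED` can be sketched in a few lines of Python. The `field_split`/`value_split` parameter names mirror the processor's options, but this helper is only an illustration, not the ingest pipeline code:

```python
def kv_split(message, field_split=" ", value_split="="):
    """Split a log line into key/value pairs, roughly like the KV processor."""
    pairs = {}
    for token in message.split(field_split):
        if value_split in token:
            key, _, value = token.partition(value_split)
            pairs[key] = value
    return pairs

print(kv_split("ip=1.2.3.4 error=REFUSED"))
# {'ip': '1.2.3.4', 'error': 'REFUSED'}
```

The sketch also shows why the TIP above matters: the resulting field names come entirely from the input, so arbitrary keys produce arbitrary fields.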

View File

@@ -62,7 +62,7 @@ that might occur in a document. When dynamic mapping is enabled, {es}
 automatically detects and adds new fields to the index. This default
 behavior makes it easy to index and explore your data--just start
 indexing documents and {es} will detect and map booleans, floating point and
-integer values, dates, and strings to the appropriate {es} datatypes.
+integer values, dates, and strings to the appropriate {es} data types.
 Ultimately, however, you know more about your data and how you want to use it
 than {es} can. You can define rules to control dynamic mapping and explicitly

View File

@@ -32,7 +32,7 @@ For more details, please see <<removal-of-types>>.
 [float]
 [[field-datatypes]]
-== Field datatypes
+== Field data types
 Each field has a data `type` which can be:
@@ -51,7 +51,7 @@ the <<analysis-standard-analyzer,`standard` analyzer>>, the
 <<english-analyzer,`english`>> analyzer, and the
 <<french-analyzer,`french` analyzer>>.
-This is the purpose of _multi-fields_. Most datatypes support multi-fields
+This is the purpose of _multi-fields_. Most data types support multi-fields
 via the <<multi-fields>> parameter.
 [[mapping-limit-settings]]
@@ -86,7 +86,7 @@ limits the maximum number of <<query-dsl-bool-query,boolean clauses>> in a query
 +
 [TIP]
 ====
-If your field mappings contain a large, arbitrary set of keys, consider using the <<flattened,flattened>> datatype.
+If your field mappings contain a large, arbitrary set of keys, consider using the <<flattened,flattened>> data type.
 ====
 `index.mapping.depth.limit`::

View File

@@ -14,7 +14,7 @@ PUT data/_doc/1 <1>
 --------------------------------------------------
 <1> Creates the `data` index, the `_doc` mapping type, and a field
-called `count` with datatype `long`.
+called `count` with data type `long`.
 The automatic detection and addition of new fields is called
 _dynamic mapping_. The dynamic mapping rules can be customised to suit your

View File

@@ -8,10 +8,10 @@ setting the <<dynamic,`dynamic`>> parameter to `false` (to ignore new fields) or
 an exception if an unknown field is encountered).
 Assuming `dynamic` field mapping is enabled, some simple rules are used to
-determine which datatype the field should have:
+determine which data type the field should have:
 [horizontal]
-*JSON datatype*:: *Elasticsearch datatype*
+*JSON data type*:: *Elasticsearch data type*
 `null`:: No field is added.
 `true` or `false`:: <<boolean,`boolean`>> field
@@ -25,8 +25,8 @@ string:: Either a <<date,`date`>> field
 (if the value passes <<numeric-detection,numeric detection>>)
 or a <<text,`text`>> field, with a <<keyword,`keyword`>> sub-field.
-These are the only <<mapping-types,field datatypes>> that are dynamically
-detected. All other datatypes must be mapped explicitly.
+These are the only <<mapping-types,field data types>> that are dynamically
+detected. All other data types must be mapped explicitly.
 Besides the options listed below, dynamic field mapping rules can be further
 customised with <<dynamic-templates,`dynamic_templates`>>.
@@ -105,7 +105,7 @@ PUT my_index/_doc/1
 [[numeric-detection]]
 ==== Numeric detection
-While JSON has support for native floating point and integer datatypes, some
+While JSON has support for native floating point and integer data types, some
 applications or languages may sometimes render numbers as strings. Usually the
 correct solution is to map these fields explicitly, but numeric detection
 (which is disabled by default) can be enabled to do this automatically:
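The detection rules from this section, including the optional numeric detection of strings, can be sketched in Python. `infer_es_type` is a hypothetical helper that mirrors the JSON-to-Elasticsearch table above; it is not Elasticsearch code, and it omits date detection for brevity:

```python
def infer_es_type(value, numeric_detection=False):
    """Rough sketch of dynamic field mapping: JSON value -> ES data type."""
    if value is None:
        return None                      # `null`: no field is added
    if isinstance(value, bool):          # check bool before int (bool is an int subclass)
        return "boolean"
    if isinstance(value, float):
        return "float"
    if isinstance(value, int):
        return "long"                    # JSON integers map to the wider `long`
    if isinstance(value, dict):
        return "object"
    if isinstance(value, str):
        if numeric_detection:
            try:
                float(value)             # the string parses as a number
                return "long" if value.lstrip("-").isdigit() else "float"
            except ValueError:
                pass
        return "text"                    # with a `keyword` sub-field
    raise TypeError(f"unhandled JSON value: {value!r}")

print(infer_es_type(123))                               # long
print(infer_es_type("5.0", numeric_detection=True))     # float
```

Note the `numeric_detection=False` default, matching the docs' statement that numeric detection is disabled unless enabled explicitly.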

View File

@@ -4,11 +4,11 @@
 Dynamic templates allow you to define custom mappings that can be applied to
 dynamically added fields based on:
-* the <<dynamic-mapping,datatype>> detected by Elasticsearch, with <<match-mapping-type,`match_mapping_type`>>.
+* the <<dynamic-mapping,data type>> detected by Elasticsearch, with <<match-mapping-type,`match_mapping_type`>>.
 * the name of the field, with <<match-unmatch,`match` and `unmatch`>> or <<match-pattern,`match_pattern`>>.
 * the full dotted path to the field, with <<path-match-unmatch,`path_match` and `path_unmatch`>>.
-The original field name `{name}` and the detected datatype
+The original field name `{name}` and the detected data type
 `{dynamic_type}` <<template-variables,template variables>> can be used in
 the mapping specification as placeholders.
@@ -60,12 +60,12 @@ reordered or deleted after they were initially added.
 [[match-mapping-type]]
 ==== `match_mapping_type`
-The `match_mapping_type` is the datatype detected by the JSON parser. Since
+The `match_mapping_type` is the data type detected by the JSON parser. Since
 JSON doesn't distinguish a `long` from an `integer` or a `double` from
-a `float`, it will always choose the wider datatype, i.e. `long` for integers
+a `float`, it will always choose the wider data type, i.e. `long` for integers
 and `double` for floating-point numbers.
-The following datatypes may be automatically detected:
+The following data types may be automatically detected:
 - `boolean` when `true` or `false` are encountered.
 - `date` when <<date-detection,date detection>> is enabled and a string matching
@@ -75,7 +75,7 @@ The following datatypes may be automatically detected:
 - `object` for objects, also called hashes.
 - `string` for character strings.
-`*` may also be used in order to match all datatypes.
+`*` may also be used in order to match all data types.
 For example, if we wanted to map all integer fields as `integer` instead of
 `long`, and all `string` fields as both `text` and `keyword`, we

View File

@@ -5,7 +5,7 @@ The following pages provide detailed explanations of the various mapping
 parameters that are used by <<mapping-types,field mappings>>:
-The following mapping parameters are common to some or all field datatypes:
+The following mapping parameters are common to some or all field data types:
 * <<analyzer,`analyzer`>>
 * <<mapping-boost,`boost`>>

View File

@@ -7,7 +7,7 @@ be rendered as a string, e.g. `"5"`. Alternatively, a number that should be
 an integer might instead be rendered as a floating point, e.g. `5.0`, or even
 `"5.0"`.
-Coercion attempts to clean up dirty values to fit the datatype of a field.
+Coercion attempts to clean up dirty values to fit the data type of a field.
 For instance:
 * Strings will be coerced to numbers.
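As a loose analogy, coercion behaves like casting a dirty value into the field's declared type. The sketch below is hypothetical Python illustrating the examples from this section (`"5"`, `5.0`, `"5.0"` all landing in a `long` field), not {es} internals:

```python
def coerce_long(value):
    """Coerce strings and floats to an integer, as a `long` field with
    coerce enabled would: "5" -> 5, 5.0 -> 5, "5.0" -> 5."""
    return int(float(value))

print(coerce_long("5"), coerce_long(5.0), coerce_long("5.0"))  # 5 5 5
```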

View File

@@ -5,7 +5,7 @@ Sometimes you don't have much control over the data that you receive. One
 user may send a `login` field that is a <<date,`date`>>, and another sends a
 `login` field that is an email address.
-Trying to index the wrong datatype into a field throws an exception by
+Trying to index the wrong data type into a field throws an exception by
 default, and rejects the whole document. The `ignore_malformed` parameter, if
 set to `true`, allows the exception to be ignored. The malformed field is not
 indexed, but other fields in the document are processed normally.
@@ -100,15 +100,15 @@ have malformed fields by using `exists`,`term` or `terms` queries on the special
 [[json-object-limits]]
 ==== Limits for JSON Objects
-You can't use `ignore_malformed` with the following datatypes:
+You can't use `ignore_malformed` with the following data types:
-* <<nested, Nested datatype>>
-* <<object, Object datatype>>
-* <<range, Range datatypes>>
+* <<nested, Nested data type>>
+* <<object, Object data type>>
+* <<range, Range data types>>
 You also can't use `ignore_malformed` to ignore JSON objects submitted to fields
-of the wrong datatype. A JSON object is any data surrounded by curly brackets
-`"{}"` and includes data mapped to the nested, object, and range datatypes.
+of the wrong data type. A JSON object is any data surrounded by curly brackets
+`"{}"` and includes data mapped to the nested, object, and range data types.
 If you submit a JSON object to an unsupported field, {es} will return an error
 and reject the entire document regardless of the `ignore_malformed` setting.
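The contrast described above can be sketched as a tiny indexing loop: with strict typing a bad value rejects the whole document, while an `ignore_malformed`-style flag drops only the offending field. This is a hedged Python analogy with made-up field names, not Elasticsearch's mapper code:

```python
def index_document(doc, field_types, ignore_malformed=False):
    """Keep fields whose values coerce to the mapped type; on failure either
    skip just that field (ignore_malformed) or reject the whole document."""
    indexed = {}
    for field, value in doc.items():
        caster = field_types.get(field, str)
        try:
            indexed[field] = caster(value)
        except (TypeError, ValueError):
            if not ignore_malformed:
                raise ValueError(f"mapper_parsing_exception: {field}={value!r}")
    return indexed

types = {"count": int, "message": str}
print(index_document({"count": "oops", "message": "ok"}, types, ignore_malformed=True))
# {'message': 'ok'}
```

Without the flag, the same call raises and nothing from the document is indexed, mirroring the default behavior described above.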

View File

@@ -7,7 +7,7 @@ inverted index for search and highlighting purposes.
 [WARNING]
 ====
 The `index_options` parameter is intended for use with <<text,`text`>> fields
-only. Avoid using `index_options` with other field datatypes.
+only. Avoid using `index_options` with other field data types.
 ====
 It accepts the following values:

View File

@@ -46,7 +46,7 @@ GET my_index/_search
 <2> An empty array does not contain an explicit `null`, and so won't be replaced with the `null_value`.
 <3> A query for `NULL` returns document 1, but not document 2.
-IMPORTANT: The `null_value` needs to be the same datatype as the field. For
+IMPORTANT: The `null_value` needs to be the same data type as the field. For
 instance, a `long` field cannot have a string `null_value`.
 NOTE: The `null_value` only influences how data is indexed, it doesn't modify

View File

@@ -3,7 +3,7 @@
 Type mappings, <<object,`object` fields>> and <<nested,`nested` fields>>
 contain sub-fields, called `properties`. These properties may be of any
-<<mapping-types,datatype>>, including `object` and `nested`. Properties can
+<<mapping-types,data type>>, including `object` and `nested`. Properties can
 be added:
 * explicitly by defining them when <<indices-create-index,creating an index>>.

View File

@@ -1,11 +1,12 @@
 [[mapping-types]]
-== Field datatypes
+== Field data types
-Elasticsearch supports a number of different datatypes for the fields in a
+Elasticsearch supports a number of different data types for the fields in a
 document:
 [float]
-=== Core datatypes
+[[_core_datatypes]]
+=== Core data types
 string:: <<text,`text`>>, <<keyword,`keyword`>> and <<wildcard,`wildcard`>>
 <<number>>:: `long`, `integer`, `short`, `byte`, `double`, `float`, `half_float`, `scaled_float`
@@ -16,21 +17,21 @@ string:: <<text,`text`>>, <<keyword,`keyword`>> and <<wildcard,`wildcard
 <<range>>:: `integer_range`, `float_range`, `long_range`, `double_range`, `date_range`, `ip_range`
 [float]
-=== Complex datatypes
+=== Complex data types
 <<object>>:: `object` for single JSON objects
 <<nested>>:: `nested` for arrays of JSON objects
 [float]
-=== Geo datatypes
+=== Geo data types
 <<geo-point>>:: `geo_point` for lat/lon points
 <<geo-shape>>:: `geo_shape` for complex shapes like polygons
 [float]
-=== Specialised datatypes
+=== Specialised data types
 <<ip>>:: `ip` for IPv4 and IPv6 addresses
-<<completion-suggester,Completion datatype>>::
+<<completion-suggester,Completion data type>>::
 `completion` to provide auto-complete suggestions
 <<token-count>>:: `token_count` to count the number of tokens in a string
@@ -64,9 +65,9 @@ string:: <<text,`text`>>, <<keyword,`keyword`>> and <<wildcard,`wildcard
 [float]
 [[types-array-handling]]
 === Arrays
-In {es}, arrays do not require a dedicated field datatype. Any field can contain
+In {es}, arrays do not require a dedicated field data type. Any field can contain
 zero or more values by default, however, all values in the array must be of the
-same datatype. See <<array>>.
+same data type. See <<array>>.
 [float]
 === Multi-fields
@@ -79,7 +80,7 @@ the <<analysis-standard-analyzer,`standard` analyzer>>, the
 <<english-analyzer,`english`>> analyzer, and the
 <<french-analyzer,`french` analyzer>>.
-This is the purpose of _multi-fields_. Most datatypes support multi-fields
+This is the purpose of _multi-fields_. Most data types support multi-fields
 via the <<multi-fields>> parameter.
 include::types/alias.asciidoc[]

View File

@@ -1,5 +1,5 @@
 [[alias]]
-=== Alias datatype
+=== Alias data type
 ++++
 <titleabbrev>Alias</titleabbrev>
 ++++

View File

@@ -1,9 +1,9 @@
 [[array]]
 === Arrays
-In Elasticsearch, there is no dedicated `array` datatype. Any field can contain
+In Elasticsearch, there is no dedicated `array` data type. Any field can contain
 zero or more values by default, however, all values in the array must be of the
-same datatype. For instance:
+same data type. For instance:
 * an array of strings: [ `"one"`, `"two"` ]
 * an array of integers: [ `1`, `2` ]
@@ -16,19 +16,19 @@ same datatype. For instance:
 Arrays of objects do not work as you would expect: you cannot query each
 object independently of the other objects in the array. If you need to be
-able to do this then you should use the <<nested,`nested`>> datatype instead
-of the <<object,`object`>> datatype.
+able to do this then you should use the <<nested,`nested`>> data type instead
+of the <<object,`object`>> data type.
 This is explained in more detail in <<nested>>.
 ====================================================
 When adding a field dynamically, the first value in the array determines the
-field `type`. All subsequent values must be of the same datatype or it must
+field `type`. All subsequent values must be of the same data type or it must
 at least be possible to <<coerce,coerce>> subsequent values to the same
-datatype.
+data type.
-Arrays with a mixture of datatypes are _not_ supported: [ `10`, `"some string"` ]
+Arrays with a mixture of data types are _not_ supported: [ `10`, `"some string"` ]
 An array may contain `null` values, which are either replaced by the
 configured <<null-value,`null_value`>> or skipped entirely. An empty array
@@ -92,7 +92,7 @@ big block of text, Lucene tokenizes the text into individual terms, and
 adds each term to the inverted index separately.
 This means that even a simple text field must be able to support multiple
-values by default. When other datatypes were added, such as numbers and
+values by default. When other data types were added, such as numbers and
 dates, they used the same data structure as strings, and so got multi-values
 for free.
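The first-value-determines-type rule above can be sketched in Python: the first array element fixes the type, later elements must match it or coerce to it, and a mixed array like `[10, "some string"]` is rejected. A hypothetical illustration, not Elasticsearch code:

```python
def array_field_type(values):
    """First value fixes the field type; later values must match or coerce."""
    first_type = type(values[0])
    coerced = []
    for v in values:
        try:
            coerced.append(first_type(v))   # e.g. int("42") coerces a string
        except (TypeError, ValueError):
            raise TypeError(f"mapper_parsing_exception: cannot coerce {v!r}")
    return first_type.__name__, coerced

print(array_field_type([10, "42"]))   # ('int', [10, 42])
```

Calling `array_field_type([10, "some string"])` raises, mirroring the unsupported mixed-type array in the text.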

View File

@@ -1,5 +1,5 @@
 [[binary]]
-=== Binary datatype
+=== Binary data type
 ++++
 <titleabbrev>Binary</titleabbrev>
 ++++

View File

@@ -1,5 +1,5 @@
 [[boolean]]
-=== Boolean datatype
+=== Boolean data type
 ++++
 <titleabbrev>Boolean</titleabbrev>
 ++++

View File

@@ -2,7 +2,7 @@
 [testenv="basic"]
 [[constant-keyword]]
-=== Constant keyword datatype
+=== Constant keyword data type
 ++++
 <titleabbrev>Constant keyword</titleabbrev>
 ++++

View File

@@ -1,10 +1,10 @@
 [[date]]
-=== Date datatype
+=== Date data type
 ++++
 <titleabbrev>Date</titleabbrev>
 ++++
-JSON doesn't have a date datatype, so dates in Elasticsearch can either be:
+JSON doesn't have a date data type, so dates in Elasticsearch can either be:
 * strings containing formatted dates, e.g. `"2015-01-01"` or `"2015/01/01 12:10:30"`.
 * a long number representing _milliseconds-since-the-epoch_.

View File

@@ -1,11 +1,11 @@
 [[date_nanos]]
-=== Date nanoseconds datatype
+=== Date nanoseconds data type
 ++++
 <titleabbrev>Date nanoseconds</titleabbrev>
 ++++
-This datatype is an addition to the `date` datatype. However there is an
-important distinction between the two. The existing `date` datatype stores
+This data type is an addition to the `date` data type. However there is an
+important distinction between the two. The existing `date` data type stores
 dates in millisecond resolution. The `date_nanos` data type stores dates
 in nanosecond resolution, which limits its range of dates from roughly
 1970 to 2262, as dates are still stored as a long representing nanoseconds
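The roughly-1970-to-2262 limit follows directly from storing nanoseconds-since-the-epoch in a signed 64-bit long. A quick back-of-the-envelope check in Python (not Elasticsearch code):

```python
# A signed 64-bit long holds at most 2**63 - 1 nanoseconds since the epoch.
max_ns = 2**63 - 1
seconds_per_year = 365.25 * 24 * 3600
years = max_ns / 1e9 / seconds_per_year

print(round(1970 + years))  # ≈ 2262
```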

View File

@@ -1,7 +1,7 @@
 [role="xpack"]
 [testenv="basic"]
 [[dense-vector]]
-=== Dense vector datatype
+=== Dense vector data type
 ++++
 <titleabbrev>Dense vector</titleabbrev>
 ++++

View File

@@ -2,7 +2,7 @@
 [testenv="basic"]
 [[flattened]]
-=== Flattened datatype
+=== Flattened data type
 ++++
 <titleabbrev>Flattened</titleabbrev>
 ++++

View File

@@ -1,5 +1,5 @@
 [[geo-point]]
-=== Geo-point datatype
+=== Geo-point data type
 ++++
 <titleabbrev>Geo-point</titleabbrev>
 ++++

View File

@@ -1,10 +1,10 @@
 [[geo-shape]]
-=== Geo-shape datatype
+=== Geo-shape data type
 ++++
 <titleabbrev>Geo-shape</titleabbrev>
 ++++
-The `geo_shape` datatype facilitates the indexing of and searching
+The `geo_shape` data type facilitates the indexing of and searching
 with arbitrary geo shapes such as rectangles and polygons. It should be
 used when either the data being indexed or the queries being executed
 contain shapes other than just points.

View File

@@ -1,7 +1,7 @@
 [role="xpack"]
 [testenv="basic"]
 [[histogram]]
-=== Histogram datatype
+=== Histogram data type
 ++++
 <titleabbrev>Histogram</titleabbrev>
 ++++

View File

@@ -1,5 +1,5 @@
 [[ip]]
-=== IP datatype
+=== IP data type
 ++++
 <titleabbrev>IP</titleabbrev>
 ++++
@@ -36,7 +36,7 @@ GET my_index/_search
 --------------------------------------------------
 // TESTSETUP
-NOTE: You can also store ip ranges in a single field using an <<range,ip_range datatype>>.
+NOTE: You can also store ip ranges in a single field using an <<range,ip_range data type>>.
 [[ip-params]]
 ==== Parameters for `ip` fields

View File

@@ -1,5 +1,5 @@
 [[keyword]]
-=== Keyword datatype
+=== Keyword data type
 ++++
 <titleabbrev>Keyword</titleabbrev>
 ++++

View File

@@ -1,15 +1,15 @@
 [[nested]]
-=== Nested datatype
+=== Nested data type
 ++++
 <titleabbrev>Nested</titleabbrev>
 ++++
-The `nested` type is a specialised version of the <<object,`object`>> datatype
+The `nested` type is a specialised version of the <<object,`object`>> data type
 that allows arrays of objects to be indexed in a way that they can be queried
 independently of each other.
-TIP: When ingesting key-value pairs with a large, arbitrary set of keys, you might consider modeling each key-value pair as its own nested document with `key` and `value` fields. Instead, consider using the <<flattened,flattened>> datatype, which maps an entire object as a single field and allows for simple searches over its contents.
-Nested documents and queries are typically expensive, so using the `flattened` datatype for this use case is a better option.
+TIP: When ingesting key-value pairs with a large, arbitrary set of keys, you might consider modeling each key-value pair as its own nested document with `key` and `value` fields. Instead, consider using the <<flattened,flattened>> data type, which maps an entire object as a single field and allows for simple searches over its contents.
+Nested documents and queries are typically expensive, so using the `flattened` data type for this use case is a better option.
 [[nested-arrays-flattening-objects]]
 ==== How arrays of objects are flattened
@@ -74,8 +74,8 @@ GET my_index/_search
 ==== Using `nested` fields for arrays of objects
 If you need to index arrays of objects and to maintain the independence of
-each object in the array, use the `nested` datatype instead of the
-<<object,`object`>> datatype.
+each object in the array, use the `nested` data type instead of the
+<<object,`object`>> data type.
 Internally, nested objects index each object in
 the array as a separate hidden document, meaning that each nested object can be
@@ -198,7 +198,7 @@ nested object. Accepts `true` (default), `false` and `strict`.
 <<properties,`properties`>>::
 (Optional, object)
 The fields within the nested object, which can be of any
-<<mapping-types,datatype>>, including `nested`. New properties
+<<mapping-types,data type>>, including `nested`. New properties
 may be added to an existing nested object.
 [[nested-include-in-parent-parm]]
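The `nested` guidance edited above can be illustrated by a minimal mapping in which each array element is indexed as an independent hidden document (index and field names are hypothetical, not from this commit):

```console
PUT my_index
{
  "mappings": {
    "properties": {
      "user": {
        "type": "nested",
        "properties": {
          "first": { "type": "keyword" },
          "last":  { "type": "keyword" }
        }
      }
    }
  }
}
```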

View File

@@ -1,5 +1,5 @@
 [[number]]
-=== Numeric datatypes
+=== Numeric data types
 ++++
 <titleabbrev>Numeric</titleabbrev>
 ++++
@@ -84,7 +84,7 @@ to help make a decision.
 .Mapping numeric identifiers
 ====
 // tag::map-ids-as-keyword[]
-Not all numeric data should be mapped as a <<number,numeric>> field datatype.
+Not all numeric data should be mapped as a <<number,numeric>> field data type.
 {es} optimizes numeric fields, such as `integer` or `long`, for
 <<query-dsl-range-query,`range`>> queries. However, <<keyword,`keyword`>> fields
 are better for <<query-dsl-term-query,`term`>> and other
@@ -101,7 +101,7 @@ Consider mapping a numeric identifier as a `keyword` if:
 often faster than `term` searches on numeric fields.
 If you're unsure which to use, you can use a <<multi-fields,multi-field>> to map
-the data as both a `keyword` _and_ a numeric datatype.
+the data as both a `keyword` _and_ a numeric data type.
 // end::map-ids-as-keyword[]
 ====
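The multi-field advice changed above translates to a mapping along these lines, indexing an identifier as `keyword` with a numeric sub-field (a sketch with a hypothetical `product_id` field, not part of this commit):

```console
PUT my_index
{
  "mappings": {
    "properties": {
      "product_id": {
        "type": "keyword",
        "fields": {
          "numeric": { "type": "long" }
        }
      }
    }
  }
}
```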

View File

@@ -1,5 +1,5 @@
 [[object]]
-=== Object datatype
+=== Object data type
 ++++
 <titleabbrev>Object</titleabbrev>
 ++++
@@ -93,7 +93,7 @@ The following parameters are accepted by `object` fields:
 <<properties,`properties`>>::
 The fields within the object, which can be of any
-<<mapping-types,datatype>>, including `object`. New properties
+<<mapping-types,data type>>, including `object`. New properties
 may be added to an existing object.
 IMPORTANT: If you need to index arrays of objects instead of single objects,

View File

@@ -1,10 +1,10 @@
 [[parent-join]]
-=== Join datatype
+=== Join data type
 ++++
 <titleabbrev>Join</titleabbrev>
 ++++
-The `join` datatype is a special field that creates
+The `join` data type is a special field that creates
 parent/child relation within documents of the same index.
 The `relations` section defines a set of possible relations within the documents,
 each relation being a parent name and a child name.
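The `relations` section described above is declared in the field mapping, one parent name per child name, for example (hypothetical index, field, and relation names):

```console
PUT my_index
{
  "mappings": {
    "properties": {
      "my_join_field": {
        "type": "join",
        "relations": {
          "question": "answer"
        }
      }
    }
  }
}
```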

View File

@@ -1,12 +1,12 @@
 [[point]]
 [role="xpack"]
 [testenv="basic"]
-=== Point datatype
+=== Point data type
 ++++
 <titleabbrev>Point</titleabbrev>
 ++++
-The `point` datatype facilitates the indexing of and searching
+The `point` data type facilitates the indexing of and searching
 arbitrary `x, y` pairs that fall in a 2-dimensional planar
 coordinate system.

View File

@@ -1,5 +1,5 @@
 [[range]]
-=== Range datatypes
+=== Range data types
 ++++
 <titleabbrev>Range</titleabbrev>
 ++++

View File

@@ -1,5 +1,5 @@
 [[rank-feature]]
-=== Rank feature datatype
+=== Rank feature data type
 ++++
 <titleabbrev>Rank feature</titleabbrev>
 ++++

View File

@@ -1,5 +1,5 @@
 [[rank-features]]
-=== Rank features datatype
+=== Rank features data type
 ++++
 <titleabbrev>Rank features</titleabbrev>
 ++++
@@ -8,7 +8,7 @@ A `rank_features` field can index numeric feature vectors, so that they can
 later be used to boost documents in queries with a
 <<query-dsl-rank-feature-query,`rank_feature`>> query.
-It is analogous to the <<rank-feature,`rank_feature`>> datatype but is better suited
+It is analogous to the <<rank-feature,`rank_feature`>> data type but is better suited
 when the list of features is sparse so that it wouldn't be reasonable to add
 one field to the mappings for each of them.

View File

@@ -1,5 +1,5 @@
 [[search-as-you-type]]
-=== Search-as-you-type datatype
+=== Search-as-you-type data type
 ++++
 <titleabbrev>Search-as-you-type</titleabbrev>
 ++++
@@ -181,7 +181,7 @@ More subfields enables more specific queries but increases index size.
 The following parameters are accepted in a mapping for the `search_as_you_type`
 field due to its nature as a text-like field, and behave similarly to their
-behavior when configuring a field of the <<text,`text`>> datatype. Unless
+behavior when configuring a field of the <<text,`text`>> data type. Unless
 otherwise noted, these options configure the root fields subfields in
 the same way.

View File

@@ -1,12 +1,12 @@
 [[shape]]
 [role="xpack"]
 [testenv="basic"]
-=== Shape datatype
+=== Shape data type
 ++++
 <titleabbrev>Shape</titleabbrev>
 ++++
-The `shape` datatype facilitates the indexing of and searching
+The `shape` data type facilitates the indexing of and searching
 with arbitrary `x, y` cartesian shapes such as rectangles and polygons. It can be
 used to index and query geometries whose coordinates fall in a 2-dimensional planar
 coordinate system.

View File

@@ -1,7 +1,7 @@
 [role="xpack"]
 [testenv="basic"]
 [[sparse-vector]]
-=== Sparse vector datatype
+=== Sparse vector data type
 ++++
 <titleabbrev>Sparse vector</titleabbrev>
 ++++

View File

@@ -1,5 +1,5 @@
 [[text]]
-=== Text datatype
+=== Text data type
 ++++
 <titleabbrev>Text</titleabbrev>
 ++++

View File

@@ -1,5 +1,5 @@
 [[token-count]]
-=== Token count datatype
+=== Token count data type
 ++++
 <titleabbrev>Token count</titleabbrev>
 ++++

View File

@@ -1,7 +1,7 @@
 [role="xpack"]
 [testenv="basic"]
 [[wildcard]]
-=== Wildcard datatype
+=== Wildcard data type
 ++++
 <titleabbrev>Wildcard</titleabbrev>
 ++++

View File

@@ -71,7 +71,7 @@ For example, JSON data might contain the following transaction coordinates:
 // NOTCONSOLE
 In {es}, location data is likely to be stored in `geo_point` fields. For more
-information, see {ref}/geo-point.html[Geo-point datatype]. This data type is
+information, see {ref}/geo-point.html[Geo-point data type]. This data type is
 supported natively in {ml-features}. Specifically, {dfeed} when pulling data from
 a `geo_point` field, will transform the data into the appropriate `lat,lon` string
 format before sending to the {anomaly-job}.

View File

@@ -142,7 +142,7 @@ Range queries on <<text, `text`>> or <<keyword, `keyword`>> files will not be ex
 [[ranges-on-dates]]
 ===== Using the `range` query with `date` fields
-When the `<field>` parameter is a <<date,`date`>> field datatype, you can use
+When the `<field>` parameter is a <<date,`date`>> field data type, you can use
 <<date-math,date math>> with the following parameters:
 * `gt`
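The date-math usage referenced above looks like the following in practice, here matching documents from the previous day (index and field names are illustrative, not from this commit):

```console
GET my_index/_search
{
  "query": {
    "range": {
      "timestamp": {
        "gte": "now-1d/d",
        "lt": "now/d"
      }
    }
  }
}
```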

View File

@@ -572,7 +572,7 @@ tag::mappings[]
 specified, this mapping can include:
 * Field names
-* <<mapping-types,Field datatypes>>
+* <<mapping-types,Field data types>>
 * <<mapping-params,Mapping parameters>>
 See <<mapping>>.

View File

@@ -175,7 +175,7 @@ GET /_search
 format for the field's returned doc values. <<date,Date fields>> support a
 <<mapping-date-format,date `format`>>. <<number,Numeric fields>> support a
 https://docs.oracle.com/javase/8/docs/api/java/text/DecimalFormat.html[DecimalFormat
-pattern]. Other field datatypes do not support the `format` parameter.
+pattern]. Other field data types do not support the `format` parameter.
 ====
 TIP: You cannot use the `docvalue_fields` parameter to retrieve doc values for
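A `docvalue_fields` request using the `format` parameter discussed above might look like this sketch, formatting a date field's doc values as epoch milliseconds (field name is hypothetical):

```console
GET my_index/_search
{
  "docvalue_fields": [
    { "field": "my_date_field", "format": "epoch_millis" }
  ]
}
```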

View File

@@ -300,7 +300,7 @@ For <<date,date fields>>, you can specify a date <<mapping-date-format,date
 https://docs.oracle.com/javase/8/docs/api/java/text/DecimalFormat.html[DecimalFormat
 pattern].
 +
-For other field datatypes, this parameter is not supported.
+For other field data types, this parameter is not supported.
 ====
 [[request-body-search-explain]]

View File

@@ -481,7 +481,7 @@ Compares two numeric values, eg:
 === `length`
-This depends on the datatype of the value being examined, eg:
+This depends on the data type of the value being examined, eg:
 ....
 - length: { _id: 22 } # the `_id` string is 22 chars long