[[ingest]]
= Ingest node
[partintro]
--
Use an ingest node to pre-process documents before the actual document indexing happens.
The ingest node intercepts bulk and index requests, applies transformations, and then
passes the documents back to the index or bulk APIs.
All nodes enable ingest by default, so any node can handle ingest tasks. You can also create
dedicated ingest nodes. To disable ingest for a node, configure the following setting in the
`elasticsearch.yml` file:
[source,yaml]
--------------------------------------------------
node.ingest: false
--------------------------------------------------
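
To run a dedicated ingest node instead, one possible approach is to disable the other
node roles in the same file. This is a sketch rather than a required configuration;
adjust it to the roles your cluster actually needs:

[source,yaml]
--------------------------------------------------
node.master: false
node.data: false
node.ingest: true
--------------------------------------------------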
To pre-process documents before indexing, <<pipeline,define a pipeline>> that specifies a series of
<<ingest-processors,processors>>. Each processor transforms the document in some specific way. For example, a
pipeline might have one processor that removes a field from the document, followed by
another processor that renames a field. The <<cluster-state,cluster state>> then stores
the configured pipelines.
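
As an illustration, a pipeline matching that example could be created through the
ingest APIs. The pipeline ID and field names below are placeholders:

[source,js]
--------------------------------------------------
PUT _ingest/pipeline/my_pipeline_id
{
  "description": "Example: drop one field, then rename another",
  "processors": [
    {
      "remove": {
        "field": "unwanted_field"
      }
    },
    {
      "rename": {
        "field": "old_name",
        "target_field": "new_name"
      }
    }
  ]
}
--------------------------------------------------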
To use a pipeline, specify the `pipeline` parameter on an index or bulk request. This
way, the ingest node knows which pipeline to use. For example:
[source,js]
--------------------------------------------------
PUT my-index/_doc/my-id?pipeline=my_pipeline_id
{
"foo": "bar"
}
--------------------------------------------------
// CONSOLE
// TEST[catch:bad_request]
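
The `pipeline` parameter works on a bulk request in the same way. A sketch, again
using the placeholder pipeline ID:

[source,js]
--------------------------------------------------
POST _bulk?pipeline=my_pipeline_id
{ "index" : { "_index" : "my-index", "_id" : "my-id" } }
{ "foo" : "bar" }
--------------------------------------------------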
See <<ingest-apis,Ingest APIs>> for more information about creating, adding, and deleting pipelines.
--
include::ingest/ingest-node.asciidoc[]