If the source and target are the same in `enrich_values`, then
a string array can be specified instead.
For example, instead of this:
```
PUT /_ingest/pipeline/my-pipeline
{
  "processors": [
    {
      "enrich": {
        "policy_name": "my-policy",
        "enrich_values": [
          {
            "source": "first_name",
            "target": "first_name"
          },
          {
            "source": "last_name",
            "target": "last_name"
          },
          {
            "source": "address",
            "target": "address"
          },
          {
            "source": "city",
            "target": "city"
          },
          {
            "source": "state",
            "target": "state"
          },
          {
            "source": "zip",
            "target": "zip"
          }
        ]
      }
    }
  ]
}
```
The following more compact format can be used:
```
PUT /_ingest/pipeline/my-pipeline
{
  "processors": [
    {
      "enrich": {
        "policy_name": "my-policy",
        "targets": [
          "first_name",
          "last_name",
          "address",
          "city",
          "state",
          "zip"
        ]
      }
    }
  ]
}
```
In addition, the `enrich_values` key has been renamed to `set_from`.
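With that rename applied, the verbose form above would presumably look like this (a sketch; the exact shape may have shifted during development):
```
PUT /_ingest/pipeline/my-pipeline
{
  "processors": [
    {
      "enrich": {
        "policy_name": "my-policy",
        "set_from": [
          {
            "source": "first_name",
            "target": "first_name"
          }
        ]
      }
    }
  ]
}
```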
Relates to #32789
Currently the msearch api is used to execute buffered search requests;
however, the msearch api doesn't deal with the search requests in an intelligent way;
it basically executes each search separately in a concurrent manner.
The new api reuses the msearch request and response classes and executes
the searches as one request on the node holding the enrich index shard.
Things like engine.searcher and the query shard context are only created once,
and there are fewer layers involved than when executing a regular msearch
request. This results in a significant speedup.
Without this change, in a single node cluster, enriching documents
with a bulk size of 5000 items, the ingest time in each bulk response
varied from 174ms to 822ms. With this change the ingest time in each
bulk response varied from 54ms to 109ms.
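For reference, a measurement like this can be driven with bulk requests that send documents through the enrich pipeline; the index name, pipeline name, and document field below are illustrative:
```
POST /my-index/_bulk?pipeline=my-pipeline
{ "index": {} }
{ "host_name": "elastic.co" }
{ "index": {} }
{ "host_name": "github.com" }
```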
I think we should add a change like this based on this improvement in ingest time.
However, I do wonder if instead of making this change, we should improve
the msearch api to execute more efficiently. That would be more complicated
than this change, because the custom api here can only search
enrich index shards, and these are special because they always have a single
primary shard. If the msearch api is to be improved, then that should work for
any search request to any index. Making the same optimization for
indices with more than one primary shard requires much more work.
The current change is isolated in the enrich plugin and its LOC / complexity
is small, so this is good enough for now.
Adds a global soft limit on the number of concurrently executing enrich policies.
Since an enrich policy is run on the generic thread pool, this is meant to limit
policy runs separately from the generic thread pool capacity.
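As a sketch, assuming the limit is exposed as a node setting (the setting name below is an assumption, not confirmed by this change), it could be configured like this:
```
# elasticsearch.yml -- setting name is an assumption
enrich.max_concurrent_policy_executions: 20
```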
This PR adds a background maintenance task that is scheduled on the master node only.
An index is deleted if it is not linked to a policy or if the enrich alias is not
currently pointing at it. Synchronization has been added to make sure that no policy
executions are running at the time of cleanup; if any executions do occur, the marking
process delays cleanup until the next run.
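For illustration, assuming the generated enrich indices carry the `.enrich-` prefix, the indices that the maintenance task considers for cleanup can be listed with:
```
GET /_cat/indices/.enrich-*?v
```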
This commit adds permissions validation on the indices provided in the
enrich policy. These indices should be validated at store time so as not
to have cryptic error messages in the event the user does not have
permissions to access said indices.
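For example, a policy of roughly the following shape names source indices (here `users`) that are now checked against the user's permissions at store time; the policy field names are illustrative and changed during development:
```
PUT /_enrich/policy/my-policy
{
  "match": {
    "indices": "users",
    "match_field": "email",
    "enrich_fields": ["first_name", "last_name"]
  }
}
```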
Introduced a proxy api to handle the search request load that originates
from the enrich processor. The enrich processor can execute many search
requests asynchronously in parallel, which can easily overwhelm
the search thread pool on nodes. To protect against this, the Coordinator
queues the search requests and only executes a fixed number of search requests
in parallel.
Besides this, the Coordinator tries to include as many search requests as possible
(up to a defined maximum) inside a multi search request in order to reduce the number
of remote api calls made from the node that performs ingestion.
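As a sketch, assuming these limits surface as node settings, the knobs could look like this (the setting names and values below are assumptions):
```
# elasticsearch.yml -- setting names are assumptions
enrich.coordinator_proxy.max_concurrent_requests: 8
enrich.coordinator_proxy.max_lookups_per_request: 128
```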
This test verifies that enrich policies still exist after a full
cluster restart. If EnrichPolicy is not registered as named xcontent
in the EnrichPlugin class, then this test fails.
Added an additional method to the Processor interface to allow a
processor implementation to make a non-blocking call.
Also added a semaphore in order to prevent the search thread pool from rejecting
search requests originating from the match processor. This is a temporary workaround.
* Add the client to the processor parameters in the ingest service.
* Remove the search provider function from the processor parameters.
* Convert ExactMatchProcessor and its Factory to use the client.
* Remove test cases from the processor that are no longer applicable.
For now the test exercises the enrich APIs in a multi-node environment.
Picked an EsIntegTestCase based test over a real qa module in order to avoid
adding another module that starts a test cluster.
This commit adds the manage_enrich privilege, which grants access to all
of the enrich processor lifecycle actions. In addition, this commit
creates a role which grants access to the generated indices.
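As an illustration, a custom role built on the new privilege might look like this (the role name and index pattern are assumptions):
```
PUT /_security/role/enrich_admin
{
  "cluster": [ "manage_enrich" ],
  "indices": [
    {
      "names": [ ".enrich-*" ],
      "privileges": [ "read" ]
    }
  ]
}
```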
Relates #41939
Co-authored-by: Martijn van Groningen <martijn.v.groningen@gmail.com>
Ensures that fields retained in an enrich index from a source index are not contained
under a nested field. It additionally makes sure that key fields exist, and that
value fields are validated if they are present. The policy runner test has also
been expanded with some faulty-mapping test cases.
Add support for components used by processor factories to be updated
before the processor factories create new processor instances.
Components can register via `IngestService#addIngestClusterStateListener(...)`;
then, if the internal representation of ingest pipelines gets updated,
these components are updated with the current cluster state before the
pipelines are updated.
Registered EnrichProcessorFactory as an ingest cluster state listener, so
that it always has an up-to-date view of the active enrich policies.
The enrich key field is now tracked in the enrich index's _meta field by the policy runner.
The ingest processor uses the field name defined in the enrich index's _meta field and
not the one in the policy. This avoids problems if the policy is changed without
a new enrich index being created.
This also completely decouples EnrichPolicy from ExactMatchProcessor.
The following scenario results in failure without this change:
1) Create policy
2) Execute policy
3) Create pipeline with enrich processor
4) Use pipeline
5) Update enrich key in policy
6) Use pipeline, which then fails.
Reindex uses scroll searches to read the source data. It is more efficient
to read more data in one scroll round than in several. I think 10000
is a good sweet spot.
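If this batch size ever needs tuning, it could plausibly be exposed as a node setting; the setting name below is an assumption:
```
# elasticsearch.yml -- setting name is an assumption
enrich.fetch_size: 10000
```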
Relates to #32789
Extracted the logic that determines the alias / policy base name into
its own helper method.
This way both the enrich processor and the policy runner use the same logic
to determine the alias to use.
Relates to #32789
Backports #41088
Adds the foundation of the execution logic for running an enrich policy. It validates
that the source index exists and that its mappings are valid, creates a new enrich
index for the policy, reindexes the source index into the new enrich index, and swaps
the enrich alias for the policy over to the new index.
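For illustration, this flow would be triggered by an execute-policy call along these lines (the endpoint shape is an assumption at this stage of development):
```
PUT /_enrich/policy/my-policy/_execute
```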
The enrich processor performs a lookup in a locally allocated
enrich index shard using a field value from the document being enriched.
If there is a match then the _source of the enrich document is fetched.
The document being enriched then gets the decorate values from the
enrich document based on the configured decorate fields in the pipeline.
Note that the usage of the _source field is temporary until the enrich
source field that is part of #41521 is merged into the enrich branch.
Using the _source field involves significant decompression, which is not
desired for enrich use cases.
The policy contains the information about which field in the enrich index
to query and which fields are available to decorate a document being
enriched.
The enrich processor has the following configuration options:
* `policy_name` - the name of the policy this processor should use
* `enrich_key` - the field in the document being enriched that holds the lookup value
* `ignore_missing` - Whether to allow the key field to be missing
* `enrich_values` - a list of fields to decorate the document being enriched with.
Each entry holds a source field and a target field.
The source field indicates what decorate field to use that is available in the policy.
The target field controls the field name to use in the document being enriched.
The source and target fields can be the same.
Example pipeline config:
```
{
  "processors": [
    {
      "enrich": {
        "policy_name": "my_policy",
        "enrich_key": "host_name",
        "enrich_values": [
          {
            "source": "globalRank",
            "target": "global_rank"
          }
        ]
      }
    }
  ]
}
```
In the above example, documents are enriched with a global rank value.
Each document that has a match in the enrich index based on its host_name field
gets a global rank value, which is fetched from the `globalRank`
field in the enrich index and saved as `global_rank` in the document being enriched.
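As a usage sketch, the pipeline configuration above can be tried out against a sample document with the simulate API (the sample document is illustrative):
```
POST /_ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      {
        "enrich": {
          "policy_name": "my_policy",
          "enrich_key": "host_name",
          "enrich_values": [
            {
              "source": "globalRank",
              "target": "global_rank"
            }
          ]
        }
      }
    ]
  },
  "docs": [
    {
      "_source": {
        "host_name": "elastic.co"
      }
    }
  ]
}
```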
This PR is part one of #41521.
Moved the put policy api yaml test to this rest module.
The main benefit is that all tests will then be run when running:
`./gradlew -p x-pack/plugin/enrich check`
The rest qa module starts a node with the default distribution and a basic
license.
This qa module will also be used for adding different rest tests (not yaml),
for example the rest tests needed for #41532.
When we start working on the security integration we can
add a security qa module under the qa folder. At some point
we should also add a multi-node qa module.
There is no need to create an enrich store component for the transport
layer, since the inner components of the store are either present in the
master node calls or available via an already injected ClusterService. This commit
cleans up the class, adds the forthcoming delete call, and tests the new
code.