Currently the policy config is placed directly in each JSON object
of the top-level `policies` array field. For example:
```
{
  "policies": [
    {
      "match": {
        "name": "my-policy",
        "indices": ["users"],
        "match_field": "email",
        "enrich_fields": [
          "first_name",
          "last_name",
          "city",
          "zip",
          "state"
        ]
      }
    }
  ]
}
```
This change adds a `config` field to each policy JSON object:
```
{
  "policies": [
    {
      "config": {
        "match": {
          "name": "my-policy",
          "indices": ["users"],
          "match_field": "email",
          "enrich_fields": [
            "first_name",
            "last_name",
            "city",
            "zip",
            "state"
          ]
        }
      }
    }
  ]
}
```
This allows us to add other information about policies to the get
policy API response in the future.
The UI will consume this API to build an overview of all policies.
The UI may in the future include additional information about a policy,
and the plan is to include that in the get policy API, so that this
information can be gathered in a single API call.
An example of the information that is likely to be added is:
* Last policy execution time
* The status of a policy (executing, executed, unexecuted)
* Information about the last failure, if one exists
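For illustration, a future response might then look roughly like this (the extra field names here are hypothetical):
```
{
  "policies": [
    {
      "config": {
        "match": {
          ...
        }
      },
      "last_execution_time": "2019-05-28T10:00:00Z",
      "status": "executed"
    }
  ]
}
```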
This change also slightly modifies the stats response,
so that it can be consumed more easily by monitoring and other
users. (Coordinator stats are now in a list instead of
a map, and each entry has an additional field for the node ID.)
Relates to #32789
This commit changes the GET REST API so it will accept an optional
comma-separated list of enrich policy IDs. This change also modifies the
behavior of the GET API in that it will no longer error if it is passed a bad
enrich ID, but will instead just return an empty list.
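For example, assuming the policy endpoint has the form `/_enrich/policy/<name>`, fetching two policies by name could look like this (policy names made up):
```
GET /_enrich/policy/policy-one,policy-two
```
A request for an unknown name now simply yields an empty `policies` array instead of an error.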
The enrich stats API returns enrich coordinator stats and
information about currently executing enrich policies.
The coordinator stats include per ingest node:
* The current number of search requests in the queue.
* The total number of remote requests that
have been executed since node startup. Each remote
request is likely to include multiple search requests,
depending on how many search requests are in the
queue at the time the remote request is performed.
* The current number of outstanding remote requests.
* The total number of search requests that `enrich`
processors have executed since node startup.
The currently executing policies stats include:
* The name of the policy that is executing
* A full-blown task info object for the task that is executing the policy.
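As a sketch, a stats response carrying the fields described above could look roughly like this (exact field names and all values are illustrative):
```
{
  "executing_policies": [
    {
      "name": "my-policy",
      "task": { ... }
    }
  ],
  "coordinator_stats": [
    {
      "node_id": "1sFM8cmSROZYhPxVsiWewg",
      "queue_size": 0,
      "remote_requests_current": 3,
      "remote_requests_total": 4200,
      "executed_searches_total": 125000
    }
  ]
}
```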
Relates to #32789
The previous transport action was a read action, which under the right
set of circumstances can execute on a coordinating node. This commit
ensures that cannot happen.
Besides a rename, this change allows the processor to attach multiple
enrich docs to the document being ingested.
Also, in order to control the maximum number of enrich docs to be
included in the document being ingested, the `max_matches` setting
is added to the enrich processor.
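A minimal pipeline sketch using `max_matches` (option names as introduced earlier in this branch; the policy, field, and target names are made up):
```
PUT /_ingest/pipeline/my-pipeline
{
  "processors": [
    {
      "enrich": {
        "policy_name": "my-policy",
        "field": "email",
        "target_field": "enriched",
        "max_matches": 8
      }
    }
  ]
}
```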
Relates #32789
When a policy is deleted, the enrich indices that are backing the policy
alias should also be deleted. This commit does that work and cleans up
the transport action a bit so that the lock release is easier to see, as
well as to ensure that any action carried out, regardless of exception,
unlocks the policy.
This commit changes the enrich processor factory to read the required
configuration from the current enrich index (from meta mapping field)
in order to create the processor.
Before this change the required config was read from the enrich policy
in the cluster state. Enrich policies are going to be stored in an
index (instead of the cluster state). In a processor factory there isn't
a way to load something from an index, so with this change we read
the required config / info from the enrich index (which is derived
from the enrich policy), which then allows us to move enrich policies
to an index.
With this change it is required to execute a policy before creating a
pipeline. Otherwise there is no enrich index, and then there is no way
to validate that a policy exists or to retrieve its type and match field.
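In other words, the expected order of operations is now roughly the following (assuming endpoints of the form `/_enrich/policy/<name>`; policy and pipeline names are made up):
```
PUT /_enrich/policy/my-policy
{ ... }

POST /_enrich/policy/my-policy/_execute

PUT /_ingest/pipeline/my-pipeline
{ ... }
```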
Relates to #32789
A policy type controls how the enrich index is created and
the query executed against the match field. Currently there
is a single policy type (`exact_match`). In the near future
more policy types will be added, and different policy types may have
different configuration options.
For this reason type should be a json object instead of a string field:
```
{
  "exact_match": {
    ...
  }
}
```
instead of:
```
{
  "type": "exact_match",
  ...
}
```
This will make streaming parsing of enrich policies easier, as in the
new format the parsing code knows ahead of time what configuration fields
to expect. In the latter format that is not possible if the type field
does not appear as the first field.
Relates to #32789
Enrich processor configuration changes:
* Renamed `enrich_key` option to `field` option.
* Replaced `set_from` and `targets` options with `target_field`.
The `target_field` option behaves differently from how `set_from` and
`targets` worked: `target_field` is the field that will contain
the looked up document.
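A pipeline sketch with the renamed options (policy and field names are made up):
```
PUT /_ingest/pipeline/my-pipeline
{
  "processors": [
    {
      "enrich": {
        "policy_name": "my-policy",
        "field": "email",
        "target_field": "user"
      }
    }
  ]
}
```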
Relates to #32789
The get and list APIs become a single API in this commit. Whether
requesting one named policy or all policies, a list of policies is
returned. The list API code has all been removed, and the GET API is
what remains, which contains much of the list response code.
This commit adds a lock to the delete policy, in the same way that the
locking is done for policy execution. It also creates a test to exercise
the delete transport action, and modifies an existing test to provide a
common set of functions for saving and deleting policies.
The delete policy had a subtle bug in that it would still delete the
policy if pipelines were accessing it, after giving the client back an
error. This commit fixes that and ensures it does not happen by adding
verification in the test.
If a pipeline that references the policy exists, we should not allow the
policy to be deleted. The user will need to remove the processor from
the pipeline before deleting the policy. This commit adds a check to
ensure that the policy cannot be deleted if it is referenced by any
pipeline in the system.
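For example, assuming a delete endpoint of the form `/_enrich/policy/<name>`, the following now fails as long as a pipeline still references `my-policy`:
```
DELETE /_enrich/policy/my-policy
```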
The policy name is used to generate the enrich index name.
For this reason, a policy name should be validated in the same way
as index names.
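For example, a name that would be invalid as an index name, such as one containing `#` or uppercase characters, is now rejected at put policy time as well (hypothetical request):
```
PUT /_enrich/policy/My#Policy
{ ... }
```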
Relates to #32789
In the case that source and target are the same in `enrich_values`,
a string array can be specified.
For example instead of this:
```
PUT /_ingest/pipeline/my-pipeline
{
  "processors": [
    {
      "enrich": {
        "policy_name": "my-policy",
        "enrich_values": [
          {
            "source": "first_name",
            "target": "first_name"
          },
          {
            "source": "last_name",
            "target": "last_name"
          },
          {
            "source": "address",
            "target": "address"
          },
          {
            "source": "city",
            "target": "city"
          },
          {
            "source": "state",
            "target": "state"
          },
          {
            "source": "zip",
            "target": "zip"
          }
        ]
      }
    }
  ]
}
```
This more compact format can be specified:
```
PUT /_ingest/pipeline/my-pipeline
{
  "processors": [
    {
      "enrich": {
        "policy_name": "my-policy",
        "targets": [
          "first_name",
          "last_name",
          "address",
          "city",
          "state",
          "zip"
        ]
      }
    }
  ]
}
```
And the `enrich_values` key has been renamed to `set_from`.
Relates to #32789
Currently the msearch API is used to execute buffered search requests;
however the msearch API doesn't deal with search requests in an intelligent way.
It basically executes each search separately in a concurrent manner.
This API reuses the msearch request and response classes and executes
the searches as one request on the node holding the enrich index shard.
Things like engine.searcher and query shard context are only created once.
Also there are fewer layers than when executing a regular msearch request. This
results in an interesting speedup.
Without this change, in a single node cluster, enriching documents
with a bulk size of 5000 items, the ingest time in each bulk response
varied from 174ms to 822ms. With this change the ingest time in each
bulk response varied from 54ms to 109ms.
I think we should add a change like this based on this improvement in ingest time.
However, I do wonder whether instead of making this change, we should improve
the msearch API to execute more efficiently. That would be more complicated
than this change, because in this change the custom API can only search
enrich index shards, and these are special because they always have a single
primary shard. If the msearch API is to be improved, then that should work for
any search request to any indices. Making the same optimization for
indices with more than one primary shard requires much more work.
The current change is isolated in the enrich plugin, and its LOC / complexity
is small. So this is good enough for now.
Adds a global soft limit on the number of concurrently executing enrich policies.
Since an enrich policy is run on the generic thread pool, this is meant to limit
policy runs separately from the generic thread pool capacity.
This PR adds a background maintenance task that is scheduled on the master node only.
An index is deleted if it is not linked to a policy or if the enrich alias is not
currently pointing at it. Synchronization has been added to make sure that no policy
executions are running at the time of cleanup, and if any executions do occur, the marking
process delays cleanup until the next run.
This commit adds permissions validation on the indices provided in the
enrich policy. These indices should be validated at store time so as not
to have cryptic error messages in the event the user does not have
permissions to access said indices.
Introduced a proxy API to handle the search request load that originates
from the enrich processor. The enrich processor can execute many search
requests that run asynchronously in parallel, and these can easily overwhelm
the search thread pool on nodes. To protect against this, the Coordinator
queues the search requests and only executes a fixed number of search requests
in parallel.
Besides this, the Coordinator tries to include as many search requests as possible
(up to a defined maximum) inside a multi search request, in order to reduce the number
of remote API calls made from the node that performs ingestion.
This test verifies that enrich policies still exist after a full
cluster restart. If EnrichPolicy is not registered as named xcontent
in EnrichPlugin class then this test fails.
Added an additional method to the Processor interface to allow a
processor implementation to make a non-blocking call.
Also added a semaphore in order to keep the search thread pools from rejecting
search requests originating from the match processor. This is a temporary workaround.
Add client to processor parameters in the ingest service.
Remove the search provider function from the processor parameters.
ExactMatchProcessor and Factory converted to use client.
Remove test cases that are no longer applicable from processor.
The test, for now, tests the enrich APIs in a multi-node environment.
Picked an EsIntegTestCase test over a real qa module in order to avoid
adding another module that starts a test cluster.
This commit adds the manage_enrich privilege, which grants access to all
of the enrich processor lifecycle actions. In addition this commit also
creates a role which grants access to the generated indices.
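As a sketch, a custom role granting the new cluster privilege plus read access to the generated `.enrich-*` indices might look like this (the role name and exact index privileges here are made up):
```
POST /_security/role/enrich_admin
{
  "cluster": [ "manage_enrich" ],
  "indices": [
    {
      "names": [ ".enrich-*" ],
      "privileges": [ "read" ]
    }
  ]
}
```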
Relates #41939
Co-authored-by: Martijn van Groningen <martijn.v.groningen@gmail.com>
Ensures that fields retained in an enrich index from a source are not contained
under a nested field. It additionally makes sure that key fields exist, and that
value fields are checked if they are present. The policy runner test has also
been expanded with some faulty mapping test cases.
Add support for components used by processor factories to get updated
before processor factories create new processor instances.
Components can register via `IngestService#addIngestClusterStateListener(...)`;
then, if the internal representation of ingest pipelines gets updated,
these components get updated with the current cluster state before
the pipelines are updated.
Registered EnrichProcessorFactory as an ingest cluster state listener, so
that it always has an up-to-date view of the active enrich policies.
The enrich key field is kept track of in the _meta field by the policy runner.
The ingest processor uses the field name defined in the enrich index _meta field,
not the one in the policy. This will avoid problems if the policy is changed without
a new enrich index being created.
This also completely decouples EnrichPolicy from ExactMatchProcessor.
The following scenario results in failure without this change:
1) Create policy
2) Execute policy
3) Create pipeline with enrich processor
4) Use pipeline
5) Update enrich key in policy
6) Use pipeline, which then fails.
Reindex uses scroll searches to read the source data. It is more efficient
to read more data in one search scroll round than in several. I think 10000
is a good sweet spot.
Relates to #32789
Moved determining the alias / policy base name into its own helper method.
This way both the enrich processor and the policy runner use the same logic
to determine the alias to use.
Relates to #32789
Backports #41088
Adds the foundation of the execution logic to execute an enrich policy. Validates
the source index existence as well as mappings, creates a new enrich index for
the policy, reindexes the source index into the new enrich index, and swaps the
enrich alias for the policy to the new index.
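Execution is triggered per policy, e.g. (assuming an execute endpoint of the form `/_enrich/policy/<name>/_execute`; the policy name is made up):
```
POST /_enrich/policy/my-policy/_execute
```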
The enrich processor performs a lookup in a locally allocated
enrich index shard using a field value from the document being enriched.
If there is a match then the _source of the enrich document is fetched.
The document being enriched then gets the decorate values from the
enrich document based on the configured decorate fields in the pipeline.
Note that the usage of the _source field is temporary until the enrich
source field that is part of #41521 is merged into the enrich branch.
Using the _source field involves significant decompression, which is not
desired for enrich use cases.
The policy contains the information about which field in the enrich index
to query and which fields are available to decorate a document being
enriched with.
The enrich processor has the following configuration options:
* `policy_name` - the name of the policy this processor should use
* `enrich_key` - the field in the document being enriched that holds the lookup value
* `ignore_missing` - Whether to allow the key field to be missing
* `enrich_values` - a list of fields to decorate the document being enriched with.
Each entry holds a source field and a target field.
The source field indicates which of the decorate fields available in the policy to use.
The target field controls the field name to use in the document being enriched.
The source and target fields can be the same.
Example pipeline config:
```
{
  "processors": [
    {
      "enrich": {
        "policy_name": "my_policy",
        "enrich_key": "host_name",
        "enrich_values": [
          {
            "source": "globalRank",
            "target": "global_rank"
          }
        ]
      }
    }
  ]
}
```
In the above example documents are enriched with a global rank value.
For each document that has a match in the enrich index based on its host_name field,
the document gets a global rank field value, which is fetched from the `globalRank`
field in the enrich index and saved as `global_rank` in the document being enriched.
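To make that concrete, a sketch of a document before the processor runs (values made up):
```
{ "host_name": "elastic.co" }
```
becomes, after enrichment:
```
{ "host_name": "elastic.co", "global_rank": 25 }
```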
This PR is part one of #41521
Moved the put policy api yaml test to this rest module.
The main benefit is that all tests will then be run when running:
`./gradlew -p x-pack/plugin/enrich check`
The rest qa module starts a node with the default distribution and a basic
license.
This qa module will also be used for adding different rest tests (not yaml),
for example the rest tests needed for #41532.
Also, when we start working on the security integration we can
add a security qa module under the qa folder, and at some point
we should add a multi node qa module too.
There is no need to create an enrich store component for the transport
layer since the inner components of the store are either present in the
master node calls or via an already injected ClusterService. This commit
cleans up the class, adds the forthcoming delete call and tests the new
code.