diff --git a/nifi-extension-bundles/nifi-amqp-bundle/nifi-amqp-processors/src/main/resources/docs/org.apache.nifi.amqp.processors.ConsumeAMQP/additionalDetails.html b/nifi-extension-bundles/nifi-amqp-bundle/nifi-amqp-processors/src/main/resources/docs/org.apache.nifi.amqp.processors.ConsumeAMQP/additionalDetails.html
index 0659552991..9522df4da8 100644
--- a/nifi-extension-bundles/nifi-amqp-bundle/nifi-amqp-processors/src/main/resources/docs/org.apache.nifi.amqp.processors.ConsumeAMQP/additionalDetails.html
+++ b/nifi-extension-bundles/nifi-amqp-bundle/nifi-amqp-processors/src/main/resources/docs/org.apache.nifi.amqp.processors.ConsumeAMQP/additionalDetails.html
@@ -49,7 +49,7 @@ Configuring PublishAMQP:
- This processor collects various objects (eg. tasks, comments, etc...) from Asana via the specified
+ This processor collects various objects (e.g. tasks, comments, etc...) from Asana via the specified
AsanaClientService. When the processor started for the first time with a given configuration
it collects each of the objects matching the user specified criteria, and emits FlowFiles
of each on the NEW relationship. Then, it polls Asana in the frequency of the configured Run Schedule
diff --git a/nifi-extension-bundles/nifi-asn1-bundle/nifi-asn1-services/src/main/resources/docs/org.apache.nifi.jasn1.JASN1Reader/additionalDetails.html b/nifi-extension-bundles/nifi-asn1-bundle/nifi-asn1-services/src/main/resources/docs/org.apache.nifi.jasn1.JASN1Reader/additionalDetails.html
index 51ca11fdd9..23bca13fa0 100644
--- a/nifi-extension-bundles/nifi-asn1-bundle/nifi-asn1-services/src/main/resources/docs/org.apache.nifi.jasn1.JASN1Reader/additionalDetails.html
+++ b/nifi-extension-bundles/nifi-asn1-bundle/nifi-asn1-services/src/main/resources/docs/org.apache.nifi.jasn1.JASN1Reader/additionalDetails.html
@@ -58,7 +58,7 @@
It usually guesses the name of this class correctly from Root Model Name.
However there may be situations where this is not the case.
Should this happen, one can take use of the fact that NiFi logs the temporary directory where the compiled Java classes can be found.
- Once the proper class of the root model type is identified in that directory (should be easily done by looking for it by it's name)
+ Once the proper class of the root model type is identified in that directory (should be easily done by looking for it by its name)
it can be provided directly via the Root Model Class Name property.
(Note however that the service should be left Enabled while doing the search as it deletes the temporary directory when it is disabled.
To be able to set the property the service needs to be disabled in the end - and let it remove the directory,
diff --git a/nifi-extension-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/resources/docs/org.apache.nifi.processors.aws.dynamodb.PutDynamoDBRecord/additionalDetails.html b/nifi-extension-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/resources/docs/org.apache.nifi.processors.aws.dynamodb.PutDynamoDBRecord/additionalDetails.html
index 690864a5d2..98d546422b 100644
--- a/nifi-extension-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/resources/docs/org.apache.nifi.processors.aws.dynamodb.PutDynamoDBRecord/additionalDetails.html
+++ b/nifi-extension-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/resources/docs/org.apache.nifi.processors.aws.dynamodb.PutDynamoDBRecord/additionalDetails.html
@@ -31,7 +31,7 @@
- The list data types supported by DynamoDB does not fully overlaps with the capabilities of the Record data structure.
+ The list data types supported by DynamoDB does not fully overlap with the capabilities of the Record data structure.
Some conversions and simplifications are necessary during inserting the data. These are:
@@ -49,7 +49,7 @@
Working with DynamoDB when batch inserting comes with two inherit limitations. First, the number of inserted Items is limited to 25 in any case.
- In order to overcome this, during one execution, depending on the number or records in the incoming FlowFile, PutDynamoDBRecord might attempt multiple
+ In order to overcome this, during one execution, depending on the number of records in the incoming FlowFile, PutDynamoDBRecord might attempt multiple
insert calls towards the database server. Using this approach, the flow does not have to work with this limitation in most cases.
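The 25-item ceiling this hunk describes is DynamoDB's per-call batch write limit; the splitting behaviour it implies can be pictured with a short sketch (illustrative Java, not NiFi's implementation):

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative sketch of the chunking described above: a FlowFile with
    // more than 25 records is split into multiple batch-sized insert calls.
    class BatchChunking {
        static <T> List<List<T>> chunk(List<T> records, int maxPerBatch) {
            List<List<T>> batches = new ArrayList<>();
            for (int i = 0; i < records.size(); i += maxPerBatch) {
                batches.add(records.subList(i, Math.min(i + maxPerBatch, records.size())));
            }
            return batches;
        }

        public static void main(String[] args) {
            List<Integer> records = new ArrayList<>();
            for (int i = 0; i < 60; i++) records.add(i);
            System.out.println(chunk(records, 25).size()); // 3 calls: 25 + 25 + 10 records
        }
    }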
@@ -63,7 +63,7 @@
The most common reason for this behaviour comes from the other limitation the inserts have with DynamoDB: the database has a build in supervision over the amount of inserted data.
- When a client reaches the "throughput limit", the server refuses to process the insert request until a certain amount of time. More information on this might be find here.
+ When a client reaches the "throughput limit", the server refuses to process the insert request until a certain amount of time. More information here.
From the perspective of the PutDynamoDBRecord we consider these cases as temporary issues and the FlowFile will be transferred to the "unprocessed" Relationship
after which the processor will yield in order to avoid further throughput issues. (Other kinds of failures will result transfer to the "failure" Relationship)
@@ -87,13 +87,13 @@
- The processors assigns one of the record fields as partition key. The name of the record field is specified by the "Partition Key Field" property and the value will be the value of the record field with the same name.
+ The processors assign one of the record fields as partition key. The name of the record field is specified by the "Partition Key Field" property and the value will be the value of the record field with the same name.
- The processor assigns the value of a FlowFile attribute as partition key. With this strategy all the Items within a FlowFile will share the same partition key value and it is suggested to use for tables also having a sort key in order to meet the primary key requirements of the DynamoDB.
+ The processor assigns the value of a FlowFile attribute as partition key. With this strategy all the Items within a FlowFile will share the same partition key value, and it is suggested to use for tables also having a sort key in order to meet the primary key requirements of the DynamoDB.
The property "Partition Key Field" defines the name of the Item field and the property "Partition Key Attribute" will specify which attribute's value will be assigned to the partition key.
With this strategy the "Partition Key Field" must be different from the fields consisted by the incoming records.
@@ -118,7 +118,7 @@
- The processors assigns one of the record fields as sort key. The name of the record field is specified by the "Sort Key Field" property and the value will be the value of the record field with the same name.
+ The processors assign one of the record fields as sort key. The name of the record field is specified by the "Sort Key Field" property and the value will be the value of the record field with the same name.
With this strategy the "Sort Key Field" must be different from the fields consisted by the incoming records.
diff --git a/nifi-extension-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/resources/docs/org.apache.nifi.processors.aws.ml.transcribe.GetAwsTranscribeJobStatus/additionalDetails.html b/nifi-extension-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/resources/docs/org.apache.nifi.processors.aws.ml.transcribe.GetAwsTranscribeJobStatus/additionalDetails.html
index 25e4f3513d..42a855390e 100644
--- a/nifi-extension-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/resources/docs/org.apache.nifi.processors.aws.ml.transcribe.GetAwsTranscribeJobStatus/additionalDetails.html
+++ b/nifi-extension-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/resources/docs/org.apache.nifi.processors.aws.ml.transcribe.GetAwsTranscribeJobStatus/additionalDetails.html
@@ -25,11 +25,6 @@ Automatically convert speech to text
- Automatically convert speech to text
Amazon Translate is a neural machine translation service for translating text to and from English across a breadth of supported languages.
Powered by deep-learning technologies, Amazon Translate delivers fast, high-quality, and affordable language translation.
- It provides a managed, continually trained solution so you can easily translate company and user-authored content or build applications that require support across multiple languages.
+ It provides a managed, continually trained solution, so you can easily translate company and user-authored content or build applications that require support across multiple languages.
The machine translation engine has been trained on a wide variety of content across different domains to produce quality translations that serve any industry need.
diff --git a/nifi-extension-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/resources/docs/org.apache.nifi.processors.aws.ml.translate.StartAwsTranslateJob/additionalDetails.html b/nifi-extension-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/resources/docs/org.apache.nifi.processors.aws.ml.translate.StartAwsTranslateJob/additionalDetails.html
index ae02e9a397..6c7ee898ec 100644
--- a/nifi-extension-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/resources/docs/org.apache.nifi.processors.aws.ml.translate.StartAwsTranslateJob/additionalDetails.html
+++ b/nifi-extension-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/resources/docs/org.apache.nifi.processors.aws.ml.translate.StartAwsTranslateJob/additionalDetails.html
@@ -26,7 +26,7 @@
Amazon Translate is a neural machine translation service for translating text to and from English across a breadth of supported languages.
Powered by deep-learning technologies, Amazon Translate delivers fast, high-quality, and affordable language translation.
- It provides a managed, continually trained solution so you can easily translate company and user-authored content or build applications that require support across multiple languages.
+ It provides a managed, continually trained solution, so you can easily translate company and user-authored content or build applications that require support across multiple languages.
The machine translation engine has been trained on a wide variety of content across different domains to produce quality translations that serve any industry need.
diff --git a/nifi-extension-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/resources/docs/org.apache.nifi.processors.aws.s3.ListS3/additionalDetails.html b/nifi-extension-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/resources/docs/org.apache.nifi.processors.aws.s3.ListS3/additionalDetails.html
index c717e98e49..eb0c5bd48a 100644
--- a/nifi-extension-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/resources/docs/org.apache.nifi.processors.aws.s3.ListS3/additionalDetails.html
+++ b/nifi-extension-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/resources/docs/org.apache.nifi.processors.aws.s3.ListS3/additionalDetails.html
@@ -47,7 +47,7 @@
To solve this, the ListS3 Processor can optionally be configured with a Record Writer. When a Record Writer is configured, a single
FlowFile will be created that will contain a Record for each object in the bucket, instead of a separate FlowFile per object.
- See the documentation for ListFile for an example of how to build a dataflow that allows for processing all of the objects before proceeding
+ See the documentation for ListFile for an example of how to build a dataflow that allows for processing all the objects before proceeding
with any other step.
diff --git a/nifi-extension-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-client-service/src/main/resources/docs/org.apache.nifi.elasticsearch.ElasticSearchClientServiceImpl/additionalDetails.html b/nifi-extension-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-client-service/src/main/resources/docs/org.apache.nifi.elasticsearch.ElasticSearchClientServiceImpl/additionalDetails.html
index 5a563e7554..395c629488 100644
--- a/nifi-extension-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-client-service/src/main/resources/docs/org.apache.nifi.elasticsearch.ElasticSearchClientServiceImpl/additionalDetails.html
+++ b/nifi-extension-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-client-service/src/main/resources/docs/org.apache.nifi.elasticsearch.ElasticSearchClientServiceImpl/additionalDetails.html
@@ -56,7 +56,7 @@ This Elasticsearch client relies on a RestClient
using the Apache HTTP Async Client. By default, it will start one
dispatcher thread, and a number of worker threads used by the connection manager. There will be as many worker thread as the number
of locally detected processors/cores on the NiFi host. Consequently, it is highly recommended to have only one instance of this
- controller service per remote Elasticsearch destination and have this controller service shared across all of the Elasticsearch
+ controller service per remote Elasticsearch destination and have this controller service shared across all the Elasticsearch
processors of the NiFi flows. Having a very high number of instances could lead to resource starvation and result in OOM errors.
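The worker-thread count mentioned above tracks the locally detected processor count; in Java that value comes from the standard runtime call shown in this small sketch (for illustration only):

    // Prints the processor count that drives the worker-thread sizing
    // described above (standard JDK call).
    public class CoreCount {
        public static void main(String[] args) {
            System.out.println(Runtime.getRuntime().availableProcessors());
        }
    }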
diff --git a/nifi-extension-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-client-service/src/main/resources/docs/org.apache.nifi.elasticsearch.ElasticSearchLookupService/additionalDetails.html b/nifi-extension-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-client-service/src/main/resources/docs/org.apache.nifi.elasticsearch.ElasticSearchLookupService/additionalDetails.html
index 3b95430e33..b900a5c886 100644
--- a/nifi-extension-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-client-service/src/main/resources/docs/org.apache.nifi.elasticsearch.ElasticSearchLookupService/additionalDetails.html
+++ b/nifi-extension-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-client-service/src/main/resources/docs/org.apache.nifi.elasticsearch.ElasticSearchLookupService/additionalDetails.html
@@ -35,7 +35,7 @@
name.
- The query that is assembled from these is a boolean query where all of the criteria are under the must list.
+ The query that is assembled from these is a boolean query where all the criteria are under the must list.
In addition, wildcards are not supported right now and all criteria are translated into literal match queries.
-The following is an example query that would be created for tracking an "@timestamp" field:
+The following is an example query that would be created for tracking a "@timestamp" field:
{ "query": { diff --git a/nifi-extension-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-restapi-processors/src/main/resources/docs/org.apache.nifi.processors.elasticsearch.DeleteByQueryElasticsearch/additionalDetails.html b/nifi-extension-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-restapi-processors/src/main/resources/docs/org.apache.nifi.processors.elasticsearch.DeleteByQueryElasticsearch/additionalDetails.html index 05c19ad2c7..e27076c6f5 100644 --- a/nifi-extension-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-restapi-processors/src/main/resources/docs/org.apache.nifi.processors.elasticsearch.DeleteByQueryElasticsearch/additionalDetails.html +++ b/nifi-extension-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-restapi-processors/src/main/resources/docs/org.apache.nifi.processors.elasticsearch.DeleteByQueryElasticsearch/additionalDetails.html @@ -31,7 +31,7 @@ } }-
-To delete all of the contents of an index, this could be used:
+To delete all the contents of an index, this could be used:
{ "query": { diff --git a/nifi-extension-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-restapi-processors/src/main/resources/docs/org.apache.nifi.processors.elasticsearch.PutElasticsearchRecord/additionalDetails.html b/nifi-extension-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-restapi-processors/src/main/resources/docs/org.apache.nifi.processors.elasticsearch.PutElasticsearchRecord/additionalDetails.html index 231c939e61..80a5717a1f 100644 --- a/nifi-extension-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-restapi-processors/src/main/resources/docs/org.apache.nifi.processors.elasticsearch.PutElasticsearchRecord/additionalDetails.html +++ b/nifi-extension-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-restapi-processors/src/main/resources/docs/org.apache.nifi.processors.elasticsearch.PutElasticsearchRecord/additionalDetails.html @@ -44,7 +44,7 @@ record path operations that find an index or type value in the record set. The ID and operation type (create, index, update, upsert or delete) can also be extracted in a similar fashion from the record set. - An "@timestamp" field can be added to the data either using a default or by extracting it from the record set. + A "@timestamp" field can be added to the data either using a default or by extracting it from the record set. This is useful if the documents are being indexed into an Elasticsearch Data Stream.-Example - per-record actions
diff --git a/nifi-extension-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-restapi-processors/src/main/resources/docs/org.apache.nifi.processors.elasticsearch.UpdateByQueryElasticsearch/additionalDetails.html b/nifi-extension-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-restapi-processors/src/main/resources/docs/org.apache.nifi.processors.elasticsearch.UpdateByQueryElasticsearch/additionalDetails.html
index f747a30a4c..b65ace6735 100644
--- a/nifi-extension-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-restapi-processors/src/main/resources/docs/org.apache.nifi.processors.elasticsearch.UpdateByQueryElasticsearch/additionalDetails.html
+++ b/nifi-extension-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-restapi-processors/src/main/resources/docs/org.apache.nifi.processors.elasticsearch.UpdateByQueryElasticsearch/additionalDetails.html
@@ -36,7 +36,7 @@
    }
}
-To update all of the contents of an index, this could be used:
+To update all the contents of an index, this could be used:
{ "query": { diff --git a/nifi-extension-bundles/nifi-email-bundle/nifi-email-processors/src/main/resources/docs/org.apache.nifi.processors.email.ConsumeIMAP/additionalDetails.html b/nifi-extension-bundles/nifi-email-bundle/nifi-email-processors/src/main/resources/docs/org.apache.nifi.processors.email.ConsumeIMAP/additionalDetails.html index dd51b768ec..351e32188a 100644 --- a/nifi-extension-bundles/nifi-email-bundle/nifi-email-processors/src/main/resources/docs/org.apache.nifi.processors.email.ConsumeIMAP/additionalDetails.html +++ b/nifi-extension-bundles/nifi-email-bundle/nifi-email-processors/src/main/resources/docs/org.apache.nifi.processors.email.ConsumeIMAP/additionalDetails.html @@ -49,7 +49,7 @@- Another useful property is mail.debug which allows Java Mail API to print protocol messages to the console helping you to both understand what's going on as well as debug issues. + Another useful property is mail.debug which allows Java Mail API to print protocol messages to the console helping you to both understand what's going on and debug issues.
For the full list of available Java Mail properties please refer to here
diff --git a/nifi-extension-bundles/nifi-email-bundle/nifi-email-processors/src/main/resources/docs/org.apache.nifi.processors.email.ConsumePOP3/additionalDetails.html b/nifi-extension-bundles/nifi-email-bundle/nifi-email-processors/src/main/resources/docs/org.apache.nifi.processors.email.ConsumePOP3/additionalDetails.html
index 40de3ba0e0..f408d1bf47 100644
--- a/nifi-extension-bundles/nifi-email-bundle/nifi-email-processors/src/main/resources/docs/org.apache.nifi.processors.email.ConsumePOP3/additionalDetails.html
+++ b/nifi-extension-bundles/nifi-email-bundle/nifi-email-processors/src/main/resources/docs/org.apache.nifi.processors.email.ConsumePOP3/additionalDetails.html
@@ -48,7 +48,7 @@
- Another useful property is mail.debug which allows Java Mail API to print protocol messages to the console helping you to both understand what's going on as well as debug issues.
+ Another useful property is mail.debug which allows Java Mail API to print protocol messages to the console helping you to both understand what's going on and debug issues.
For the full list of available Java Mail properties please refer to here
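As a rough sketch of what mail.debug does, the property maps onto the standard Java Mail debug switch (the package may be javax.mail or jakarta.mail depending on the Mail API version in use):

    import java.util.Properties;
    import javax.mail.Session;

    // Minimal sketch: mail.debug=true makes the Mail API log protocol
    // exchanges to the console, as the docs above describe.
    public class MailDebugExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("mail.debug", "true");
            Session session = Session.getInstance(props);
            System.out.println("debug enabled: " + session.getDebug());
        }
    }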
diff --git a/nifi-extension-bundles/nifi-gcp-bundle/nifi-gcp-processors/src/main/java/org/apache/nifi/processors/gcp/bigquery/AbstractBigQueryProcessor.java b/nifi-extension-bundles/nifi-gcp-bundle/nifi-gcp-processors/src/main/java/org/apache/nifi/processors/gcp/bigquery/AbstractBigQueryProcessor.java
index 2012c4c474..3c0e470206 100644
--- a/nifi-extension-bundles/nifi-gcp-bundle/nifi-gcp-processors/src/main/java/org/apache/nifi/processors/gcp/bigquery/AbstractBigQueryProcessor.java
+++ b/nifi-extension-bundles/nifi-gcp-bundle/nifi-gcp-processors/src/main/java/org/apache/nifi/processors/gcp/bigquery/AbstractBigQueryProcessor.java
@@ -168,7 +168,7 @@ public abstract class AbstractBigQueryProcessor extends AbstractGCPProcessor
results) {
diff --git a/nifi-extension-bundles/nifi-gcp-bundle/nifi-gcp-processors/src/main/resources/docs/org.apache.nifi.processors.gcp.storage.ListGCSBucket/additionalDetails.html b/nifi-extension-bundles/nifi-gcp-bundle/nifi-gcp-processors/src/main/resources/docs/org.apache.nifi.processors.gcp.storage.ListGCSBucket/additionalDetails.html
index 27aa2af2ce..37491482b7 100644
--- a/nifi-extension-bundles/nifi-gcp-bundle/nifi-gcp-processors/src/main/resources/docs/org.apache.nifi.processors.gcp.storage.ListGCSBucket/additionalDetails.html
+++ b/nifi-extension-bundles/nifi-gcp-bundle/nifi-gcp-processors/src/main/resources/docs/org.apache.nifi.processors.gcp.storage.ListGCSBucket/additionalDetails.html
@@ -47,7 +47,7 @@
To solve this, the ListGCSBucket Processor can optionally be configured with a Record Writer. When a Record Writer is configured, a single
FlowFile will be created that will contain a Record for each object in the bucket, instead of a separate FlowFile per object.
- See the documentation for ListFile for an example of how to build a dataflow that allows for processing all of the objects before proceeding
+ See the documentation for ListFile for an example of how to build a dataflow that allows for processing all the objects before proceeding
with any other step.
diff --git a/nifi-extension-bundles/nifi-gcp-bundle/nifi-gcp-processors/src/main/resources/docs/org.apache.nifi.processors.gcp.vision.GetGcpVisionAnnotateFilesOperationStatus/additionalDetails.html b/nifi-extension-bundles/nifi-gcp-bundle/nifi-gcp-processors/src/main/resources/docs/org.apache.nifi.processors.gcp.vision.GetGcpVisionAnnotateFilesOperationStatus/additionalDetails.html
index f3bd5c178c..fc1bc6b964 100644
--- a/nifi-extension-bundles/nifi-gcp-bundle/nifi-gcp-processors/src/main/resources/docs/org.apache.nifi.processors.gcp.vision.GetGcpVisionAnnotateFilesOperationStatus/additionalDetails.html
+++ b/nifi-extension-bundles/nifi-gcp-bundle/nifi-gcp-processors/src/main/resources/docs/org.apache.nifi.processors.gcp.vision.GetGcpVisionAnnotateFilesOperationStatus/additionalDetails.html
@@ -27,7 +27,7 @@ Usage
GetGcpVisionAnnotateFilesOperationStatus is designed to periodically check the statuses of file annotation operations.
This processor should be used in pair with StartGcpVisionAnnotateFilesOperation Processor.
- An outgoing FlowFile contains the raw response returned by the Vision server. This response is in JSON json format and contains a google storage reference where the result is located, as well as additional metadata, as written in the Google Vision API Reference document.
+ An outgoing FlowFile contains the raw response returned by the Vision server. This response is in JSON format and contains a Google storage reference where the result is located, as well as additional metadata, as written in the Google Vision API Reference document.
-This is a grooviest groovy script :)
+This is the grooviest groovy script :)
variable | type | description |
---|---|---|
If the Grouping Attribute property is specified, all rates are accumulated separately for unique values of the specified attribute. For example, assume Grouping Attribute property is
- specified and the its value is "city". All FlowFiles containing a "city" attribute with value "Albuquerque" will have an accumulated rate calculated. A separate rate will be calculated
+ specified and its value is "city". All FlowFiles containing a "city" attribute with value "Albuquerque" will have an accumulated rate calculated. A separate rate will be calculated
for all FlowFiles containing a "city" attribute with a value "Boston". In other words, separate rate calculations will be accumulated for all unique values of the Grouping Attribute.
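The per-group accumulation described here can be sketched with a map keyed by the grouping attribute's value (illustrative Java, not NiFi's internals):

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Illustrative sketch: one accumulated total per distinct value of the
    // grouping attribute ("city" in the example above).
    class GroupedRates {
        private final Map<String, Long> totals = new LinkedHashMap<>();

        void accumulate(String groupValue, long amount) {
            totals.merge(groupValue, amount, Long::sum);
        }

        public static void main(String[] args) {
            GroupedRates rates = new GroupedRates();
            rates.accumulate("Albuquerque", 3);
            rates.accumulate("Boston", 5);
            rates.accumulate("Albuquerque", 2);
            System.out.println(rates.totals); // {Albuquerque=5, Boston=5}
        }
    }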
diff --git a/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.DebugFlow/additionalDetails.html b/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.DebugFlow/additionalDetails.html
index 9981b4a5e1..f3fa6353b1 100644
--- a/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.DebugFlow/additionalDetails.html
+++ b/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.DebugFlow/additionalDetails.html
@@ -23,8 +23,8 @@
- When triggered, the processor loops through the appropriate response list (based on whether or not it
- received a FlowFile). A response is produced the configured number of times for each pass through its
+ When triggered, the processor loops through the appropriate response list.
+ A response is produced the configured number of times for each pass through its
response list, as long as the processor is running.
Triggered by a FlowFile, the processor can produce the following responses.
diff --git a/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.JoinEnrichment/additionalDetails.html b/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.JoinEnrichment/additionalDetails.html
index f4ac29856b..54c008e30b 100644
--- a/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.JoinEnrichment/additionalDetails.html
+++ b/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.JoinEnrichment/additionalDetails.html
@@ -155,7 +155,7 @@ This strategy would produce output the looks like this (assuming a JSON Writer):
- The "Insert Enrichment Fields" strategy inserts all of the fields of the "enrichment" record into the original record. The records are correlated by their index in the FlowFile. That is, + The "Insert Enrichment Fields" strategy inserts all the fields of the "enrichment" record into the original record. The records are correlated by their index in the FlowFile. That is, the first record in the "enrichment" FlowFile is inserted into the first record in the "original" FlowFile. The second record of the "enrichment" FlowFile is inserted into the second record of the "original" FlowFile and so on.
@@ -323,7 +323,7 @@ FlowFile as its own table with the name "original" while we treat the enrichment
- Given this, we might combine all of the data using a simple query such as:
+ Given this, we might combine all the data using a simple query such as:
SELECT o.*, e.*
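The query is truncated here; a representative completion, assuming a hypothetical id correlation column (the table names original and enrichment are the ones the surrounding text establishes), would be:

    SELECT o.*, e.*
    FROM original o
    JOIN enrichment e ON o.id = e.id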
@@ -445,7 +445,7 @@ using this Processor.
small attributes on a FlowFile is perfectly fine. Storing 300 attributes, on the other hand, may occupy a significant amount of heap.
- Limit backpressure. The JoinEnrichment Processor will pull into its own memory all of the incoming FlowFiles. As a result, it will be helpful to avoid providing a huge number of FlowFiles
+ Limit backpressure. The JoinEnrichment Processor will pull into its own memory all the incoming FlowFiles. As a result, it will be helpful to avoid providing a huge number of FlowFiles
to the Processor at any given time. This can be done by setting the backpressure limits to a smaller value. For example, in our example above, the ForkEnrichment Processor is connected
directly to the JoinEnrichment Processor. We may want to limit the backpressure on this connection to 500 or 1,000 instead of the default 10,000. Doing so will limit the number of FlowFiles
that are allowed to be loaded into the JoinEnrichment Processor at one time.
@@ -456,7 +456,7 @@ using this Processor.
More Complex Joining Strategies
This Processor offers several strategies that can be used for correlating data together and joining records from two different FlowFiles into a single FlowFile. However, there are times
-when users may require more powerful capabilities than what is offered. We might, for example, want to use the information in an enrichment record to determine whether or not to null out a value in
+when users may require more powerful capabilities than what is offered. We might, for example, want to use the information in an enrichment record to determine whether to null out a value in
the corresponding original records.
diff --git a/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.ListFTP/additionalDetails.html b/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.ListFTP/additionalDetails.html
index 462d191848..7b19c751a4 100644
--- a/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.ListFTP/additionalDetails.html
+++ b/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.ListFTP/additionalDetails.html
@@ -52,7 +52,7 @@
We can still accomplish the desired use case of waiting until all files in the directory have been processed by splitting apart the FlowFile
- and processing all of the data within a Process Group. Configuring the Process Group with a FlowFile Concurrency of "Single FlowFile per Node"
+ and processing all the data within a Process Group. Configuring the Process Group with a FlowFile Concurrency of "Single FlowFile per Node"
means that only one FlowFile will be brought into the Process Group. Once that happens, the FlowFile can be split apart and each part processed.
Configuring the Process Group with an Outbound Policy of "Batch Output" means that none of the FlowFiles will leave the Process Group until all have
finished processing. As a result, we can build a flow like the following:
diff --git a/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.ListFile/additionalDetails.html b/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.ListFile/additionalDetails.html
index d4a1f691d9..1483e609b5 100644
--- a/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.ListFile/additionalDetails.html
+++ b/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.ListFile/additionalDetails.html
@@ -52,7 +52,7 @@
We can still accomplish the desired use case of waiting until all files in the directory have been processed by splitting apart the FlowFile
- and processing all of the data within a Process Group. Configuring the Process Group with a FlowFile Concurrency of "Single FlowFile per Node"
+ and processing all the data within a Process Group. Configuring the Process Group with a FlowFile Concurrency of "Single FlowFile per Node"
means that only one FlowFile will be brought into the Process Group. Once that happens, the FlowFile can be split apart and each part processed.
Configuring the Process Group with an Outbound Policy of "Batch Output" means that none of the FlowFiles will leave the Process Group until all have
finished processing. As a result, we can build a flow like the following:
diff --git a/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.ListSFTP/additionalDetails.html b/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.ListSFTP/additionalDetails.html
index 624099e119..7370f255bd 100644
--- a/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.ListSFTP/additionalDetails.html
+++ b/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.ListSFTP/additionalDetails.html
@@ -52,7 +52,7 @@
We can still accomplish the desired use case of waiting until all files in the directory have been processed by splitting apart the FlowFile
- and processing all of the data within a Process Group. Configuring the Process Group with a FlowFile Concurrency of "Single FlowFile per Node"
+ and processing all the data within a Process Group. Configuring the Process Group with a FlowFile Concurrency of "Single FlowFile per Node"
means that only one FlowFile will be brought into the Process Group. Once that happens, the FlowFile can be split apart and each part processed.
Configuring the Process Group with an Outbound Policy of "Batch Output" means that none of the FlowFiles will leave the Process Group until all have
finished processing. As a result, we can build a flow like the following:
diff --git a/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.MergeContent/additionalDetails.html b/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.MergeContent/additionalDetails.html
index 6582620f33..e471574c31 100644
--- a/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.MergeContent/additionalDetails.html
+++ b/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.MergeContent/additionalDetails.html
@@ -49,8 +49,8 @@
How the Processor determines which bin to place a FlowFile in depends on a few different configuration options. Firstly, the Merge Strategy
is considered. The Merge Strategy can be set to one of two options: "Bin Packing Algorithm," or "Defragment". When the goal is to simply combine
- smaller FlowFiles into one larger FlowFile, the Bin Packing Algorithm should be used. This algorithm picks a bin based on whether or not the FlowFile
- can fit in the bin according to its size and the <Maximum Bin Size> property and whether or not the FlowFile is 'like' the other FlowFiles in
+ smaller FlowFiles into one larger FlowFile, the Bin Packing Algorithm should be used. This algorithm picks a bin based on whether the FlowFile
+ can fit in the bin according to its size and the <Maximum Bin Size> property and whether the FlowFile is 'like' the other FlowFiles in
the bin. What it means for two FlowFiles to be 'like FlowFiles' is discussed at the end of this section.
@@ -62,7 +62,7 @@
so that the FlowFiles can be ordered correctly. For a given "fragment.identifier", at least one FlowFile must have the "fragment.count" attribute
(which indicates how many FlowFiles belong in the bin). Other FlowFiles with the same identifier must have the same value for the "fragment.count" attribute,
or they can omit this attribute.
- NOTE: while there are valid use cases for breaking apart FlowFiles and later re-merging them, it is an anti-pattern to take a larger FlowFile,
+ NOTE: while there are valid use cases for breaking apart FlowFiles and later re-merging them, it is an antipattern to take a larger FlowFile,
break it into a million tiny FlowFiles, and then re-merge them. Doing so can result in using huge amounts of Java heap and can result in Out Of Memory Errors.
Additionally, it adds large amounts of load to the NiFi framework. This can result in increased CPU and disk utilization and often times can be an order of magnitude
lower throughput and an order of magnitude higher latency. As an alternative, whenever possible, dataflows should be built to make use of Record-oriented processors,
@@ -84,8 +84,7 @@
When a Bin is Merged
Above, we discussed how a bin is chosen for a given FlowFile. Once a bin has been created and FlowFiles added to it, we must have some way to determine
- when a bin is "full" so that we can bin those FlowFiles together into a "merged" FlowFile. There are a few criteria that are used to make a determination as
- to whether or not a bin should be merged.
+ when a bin is "full" so that we can bin those FlowFiles together into a "merged" FlowFile.
@@ -112,7 +111,7 @@
If the <Merge Strategy> property is set to "Defragment", then a bin is full only when the number of FlowFiles in the bin is equal to the number specified
by the "fragment.count" attribute of one of the FlowFiles in the bin. All FlowFiles that have this attribute must have the same value for this attribute,
or else they will be routed to the "failure" relationship. It is not necessary that all FlowFiles have this value, but at least one FlowFile in the bin must have
- this value or the bin will never be complete. If all of the necessary FlowFiles are not binned together by the point at which the bin times amount
+ this value or the bin will never be complete. If all the necessary FlowFiles are not binned together by the point at which the bin times amount
(as specified by the <Max Bin Age> property), then the FlowFiles will all be routed to the 'failure' relationship instead of being merged together.
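The Defragment completeness rule above reduces to a simple count comparison, sketched below (illustrative Java; the attribute names fragment.identifier and fragment.count are the ones the docs define):

    import java.util.List;
    import java.util.Map;

    // Illustrative sketch: a Defragment bin is complete once the number of
    // binned FlowFiles equals the declared fragment.count.
    class DefragmentCheck {
        static boolean isComplete(List<Map<String, String>> binnedAttributes) {
            Integer declared = null;
            for (Map<String, String> attrs : binnedAttributes) {
                String count = attrs.get("fragment.count");
                if (count != null) declared = Integer.valueOf(count);
            }
            return declared != null && binnedAttributes.size() == declared;
        }

        public static void main(String[] args) {
            System.out.println(isComplete(List.of(
                    Map.of("fragment.identifier", "abc", "fragment.count", "2"),
                    Map.of("fragment.identifier", "abc")))); // true: 2 of 2 present
        }
    }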
@@ -150,7 +149,7 @@
Bin has not yet reached either of the minimum thresholds. Note that the age here is determined by when the Bin was created, NOT the age of the FlowFiles that reside within those
Bins. As a result, if the Processor is stopped until it has 1 million FlowFiles queued, each one being 10 days old, but the Max Bin Age is set to "1 day," the Max Bin Age will not
be met for at least one full day, even though the FlowFiles themselves are much older than this threshold. If the Processor is stopped and restarted, all Bins are destroyed and
- recreated, so the timer is reset.
+ recreated, and the timer is reset.
BIN_MANAGER_FULL
diff --git a/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.MergeRecord/additionalDetails.html b/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.MergeRecord/additionalDetails.html
index 4a34d3e481..37d1e8f639 100644
--- a/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.MergeRecord/additionalDetails.html
+++ b/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.MergeRecord/additionalDetails.html
@@ -50,8 +50,8 @@
How the Processor determines which bin to place a FlowFile in depends on a few different configuration options. Firstly, the Merge Strategy
is considered. The Merge Strategy can be set to one of two options: Bin Packing Algorithm, or Defragment. When the goal is to simply combine
- smaller FlowFiles into one larger FlowFiles, the Bin Packing Algorithm should be used. This algorithm picks a bin based on whether or not the FlowFile
- can fit in the bin according to its size and the <Maximum Bin Size> property and whether or not the FlowFile is 'like' the other FlowFiles in
+ smaller FlowFiles into one larger FlowFiles, the Bin Packing Algorithm should be used. This algorithm picks a bin based on whether the FlowFile
+ can fit in the bin according to its size and the <Maximum Bin Size> property and whether the FlowFile is 'like' the other FlowFiles in
the bin. What it means for two FlowFiles to be 'like FlowFiles' is discussed at the end of this section.
@@ -87,12 +87,11 @@
When a Bin is Merged
Above, we discussed how a bin is chosen for a given FlowFile. Once a bin has been created and FlowFiles added to it, we must have some way to determine
- when a bin is "full" so that we can bin those FlowFiles together into a "merged" FlowFile. There are a few criteria that are used to make a determination as
- to whether or not a bin should be merged.
+ when a bin is "full" so that we can bin those FlowFiles together into a "merged" FlowFile.
- If the <Merge Strategy> property is set to "Bin Packing Algorithm" then then the following rules will be evaluated.
+ If the <Merge Strategy> property is set to "Bin Packing Algorithm" then the following rules will be evaluated.
Firstly, in order for a bin to be full, both of the thresholds specified by the <Minimum Bin Size> and the <Minimum Number of Records> properties
must be satisfied. If one of these properties is not set, then it is ignored. Secondly, if either the <Maximum Bin Size> or the <Maximum Number of
Records> property is reached, then the bin is merged. That is, both of the minimum values must be reached but only one of the maximum values need be reached.
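The rule in the preceding paragraph (both minimums required, either maximum sufficient) can be stated compactly; this is an illustrative Java sketch with hypothetical names, not NiFi's code (an unset minimum would simply be treated as satisfied):

    // Illustrative sketch of the bin-is-full rule: both configured minimums
    // must be met, while reaching either maximum alone is enough.
    class BinFullRule {
        static boolean isFull(long size, int records,
                              long minSize, int minRecords,
                              long maxSize, int maxRecords) {
            boolean minimumsMet = size >= minSize && records >= minRecords;
            boolean maximumReached = size >= maxSize || records >= maxRecords;
            return minimumsMet || maximumReached;
        }

        public static void main(String[] args) {
            System.out.println(isFull(10_000, 3, 5_000, 3, 1_000_000, 1_000)); // true: both minimums met
            System.out.println(isFull(100, 1, 5_000, 3, 1_000_000, 1_000));    // false: under both minimums
        }
    }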
@@ -109,7 +108,7 @@
If the <Merge Strategy> property is set to "Defragment" then a bin is full only when the number of FlowFiles in the bin is equal to the number specified
by the "fragment.count" attribute of one of the FlowFiles in the bin. All FlowFiles that have this attribute must have the same value for this attribute,
or else they will be routed to the "failure" relationship. It is not necessary that all FlowFiles have this value, but at least one FlowFile in the bin must have
- this value or the bin will never be complete. If all of the necessary FlowFiles are not binned together by the point at which the bin times amount
+ this value or the bin will never be complete. If all the necessary FlowFiles are not binned together by the point at which the bin times amount
(as specified by the <Max Bin Age> property), then the FlowFiles will all be routed to the 'failure' relationship instead of being merged together.
@@ -117,7 +116,7 @@
Once a bin is merged into a single FlowFile, it can sometimes be useful to understand why exactly the bin was merged when it was. For example, if the maximum number
of allowable bins is reached, a merged FlowFile may consist of far fewer records than expected. In order to help understand the behavior, the Processor will emit
a JOIN Provenance Events when creating the merged FlowFile, and the JOIN event will include in it a "Details" field that explains why the bin was merged when it was.
- For example, the event will indicate "Records Merged due to: Bin is full" if the bin reached its minimum thresholds and no more subsequent FlowFiles were able to be
+ For example, the event will indicate "Records Merged due to: Bin is full" if the bin reached its minimum thresholds and no more subsequent FlowFiles were
added to it. Or it may indicate "Records Merged due to: Maximum number of bins has been exceeded" if the bin was merged due to the configured maximum number of bins
being filled and needing to free up space for a new bin.
@@ -125,8 +124,8 @@
When a Failure Occurs
- When a bin is filled, the Processor is responsible for merging together all of the records in those FlowFiles into a single FlowFile. If the Processor fails
- to do so for any reason (for example, a Record cannot be read from an input FlowFile), then all of the FlowFiles in that bin are routed to the 'failure'
+ When a bin is filled, the Processor is responsible for merging together all the records in those FlowFiles into a single FlowFile. If the Processor fails
+ to do so for any reason (for example, a Record cannot be read from an input FlowFile), then all the FlowFiles in that bin are routed to the 'failure'
Relationship. The Processor does not skip the single problematic FlowFile and merge the others. This behavior was chosen because of two different considerations.
Firstly, without those problematic records, the bin may not truly be full, as the minimum bin size may not be reached without those records.
Secondly, and more importantly, if the problematic FlowFile contains 100 "good" records before the problematic ones, those 100 records would already have been
@@ -205,7 +204,7 @@
In this, because we have not configured a Correlation Attribute, and because all FlowFiles have the same schema, the Processor
will attempt to add all of these FlowFiles to the same bin. Because the Minimum Number of Records is 3 and the Maximum Number of Records is 5,
- all of the FlowFiles will be added to the same bin. The output, then, is a single FlowFile with the following content:
+ all the FlowFiles will be added to the same bin. The output, then, is a single FlowFile with the following content:
@@ -219,7 +218,7 @@ Jan, 2
- When the Processor runs, it will bin all of the FlowFiles that it can get from the queue. After that, it will merge any bin that is "full enough."
+ When the Processor runs, it will bin all the FlowFiles that it can get from the queue. After that, it will merge any bin that is "full enough."
So if we had only 3 FlowFiles on the queue, those 3 would have been added, and a new bin would have been created in the next iteration, once the
4th FlowFile showed up. However, if we had 8 FlowFiles queued up, only 5 would have been added to the first bin. The other 3 would have been added
to a second bin, and that bin would then be merged since it reached the minimum threshold of 3 also.
diff --git a/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.PartitionRecord/additionalDetails.html b/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.PartitionRecord/additionalDetails.html
index c41280fee4..8fb5701979 100644
--- a/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.PartitionRecord/additionalDetails.html
+++ b/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.PartitionRecord/additionalDetails.html
@@ -32,7 +32,7 @@
In order to make the Processor valid, at least one user-defined property must be added to the Processor.
The value of the property must be a valid RecordPath. Expression Language is supported and will be evaluated before
attempting to compile the RecordPath. However, if Expression Language is used, the Processor is not able to validate
- the RecordPath before-hand and may result in having FlowFiles fail processing if the RecordPath is not valid when being
+ the RecordPath beforehand and may result in having FlowFiles fail processing if the RecordPath is not valid when being
used.
@@ -46,7 +46,7 @@
- Once a FlowFile has been written, we know that all of the Records within that FlowFile have the same value for the fields that are
+ Once a FlowFile has been written, we know that all the Records within that FlowFile have the same value for the fields that are
described by the configured RecordPath's. As a result, this means that we can promote those values to FlowFile Attributes. We do so
by looking at the name of the property to which each RecordPath belongs. For example, if we have a property named country
with a value of /geo/country/name, then each outbound FlowFile will have an attribute named country with the
@@ -142,7 +142,7 @@
Example 1 - Partition By Simple Field
- For a simple case, let's partition all of the records based on the state that they live in.
+ For a simple case, let's partition all the records based on the state that they live in.
We can add a property named state with a value of /locations/home/state.
The result will be that we will have two outbound FlowFiles. The first will contain an attribute with the name
state and a value of NY. This FlowFile will consist of 3 records: John Doe, Jane Doe, and Jacob Doe.
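As a sketch of this grouping behaviour (illustrative Java, not NiFi's record handling; the CA record is a hypothetical addition), partitioning on state amounts to:

    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    // Illustrative sketch: records are grouped by the value the RecordPath
    // points at, and each group would become one outbound FlowFile with that
    // value promoted to an attribute (here, "state").
    class PartitionSketch {
        public static void main(String[] args) {
            List<Map<String, String>> records = List.of(
                    Map.of("name", "John Doe", "state", "NY"),
                    Map.of("name", "Jane Doe", "state", "NY"),
                    Map.of("name", "Jacob Doe", "state", "NY"),
                    Map.of("name", "Janet Doe", "state", "CA")); // hypothetical extra record
            Map<String, List<Map<String, String>>> partitions = new LinkedHashMap<>();
            for (Map<String, String> record : records) {
                partitions.computeIfAbsent(record.get("state"), k -> new ArrayList<>()).add(record);
            }
            partitions.forEach((state, group) ->
                    System.out.println("state=" + state + " -> " + group.size() + " record(s)"));
        }
    }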
@@ -174,7 +174,7 @@
- This will result in three different FlowFiles being created. The first FlowFile will contain records for John Doe and Jane Doe. If will contain an attribute
+ This will result in three different FlowFiles being created. The first FlowFile will contain records for John Doe and Jane Doe. It will contain an attribute
named "favorite.food" with a value of "spaghetti." However, because the second RecordPath pointed to a Record field, no "home" attribute will be added.
In this case, both of these records have the same value for both the first element of the "favorites" array
and the same value for the home address. Janet Doe has the same value for the first element in the "favorites" array but has a different home address. Similarly,
diff --git a/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.QueryRecord/additionalDetails.html b/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.QueryRecord/additionalDetails.html
index e3157d71a3..5d3a466113 100644
--- a/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.QueryRecord/additionalDetails.html
+++ b/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.QueryRecord/additionalDetails.html
@@ -132,14 +132,14 @@
It is also worth noting that the outbound FlowFiles have two different schemas. The Engineers and Younger Than Average
FlowFiles contain 3 fields: name, age, and title while the VP FlowFile contains only the name field.
In most cases, the Record Writer is configured to use whatever Schema is provided to it by the Record
(this generally means that it is configured with a Schema Access Strategy of Inherit Record Schema). In such
- a case, this works well. However, if a Schema is supplied to the Record Writer explicitly, it is important to ensure that the Schema accounts for all fields. If not, then then the
+ a case, this works well. However, if a Schema is supplied to the Record Writer explicitly, it is important to ensure that the Schema accounts for all fields. If not, then the
fields that are missing from the Record Writer's schema will simply not be present in the output.
SQL Over Hierarchical Data
- One important detail that we must taken into account when evaluating SQL over streams of arbitrary data is how
+ One important detail that we must take into account when evaluating SQL over streams of arbitrary data is how
we can handle hierarchical data, such as JSON, XML, and Avro. Because SQL was developed originally for relational databases, which
represent "flat" data, it is easy to understand how this would map to other "flat" data like a CSV file. Or even
a "flat" JSON representation where all fields are primitive types. However, in many cases, users encounter cases where they would like to evaluate SQL
diff --git a/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.TailFile/additionalDetails.html b/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.TailFile/additionalDetails.html
index e5013b2d1b..17c6282265 100644
--- a/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.TailFile/additionalDetails.html
+++ b/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.TailFile/additionalDetails.html
@@ -44,7 +44,7 @@
Modes
This processor is used to tail a file or multiple files, depending on the configured mode. The
- mode to choose depends of the logging pattern followed by the file(s) to tail. In any case, if there
+ mode to choose depends on the logging pattern followed by the file(s) to tail. In any case, if there
is a rolling pattern, the rolling files must be plain text files (compression is not supported at
the moment).
@@ -171,7 +171,7 @@
- Additionally, we run the chance of the Regular Expression not matching the data in the file. This could result in buffering all of the file's content, which could cause NiFi
+ Additionally, we run the chance of the Regular Expression not matching the data in the file. This could result in buffering all the file's content, which could cause NiFi
to run out of memory. To avoid this, the <Max Buffer Size> property limits the amount of data that can be buffered. If this amount of data is buffered, it will be flushed
to the FlowFile, even if another message hasn't been encountered.
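The safeguard described above amounts to a bounded buffer that flushes when the limit is hit, sketched here (illustrative Java, not TailFile's implementation):

    // Illustrative sketch: data is buffered until the delimiter matches, but
    // never beyond the configured maximum, at which point it is flushed as-is.
    class BoundedBuffer {
        private final StringBuilder buffer = new StringBuilder();
        private final int maxBufferSize;

        BoundedBuffer(int maxBufferSize) { this.maxBufferSize = maxBufferSize; }

        /** Returns flushed content when the buffer limit is reached, else null. */
        String append(String data) {
            buffer.append(data);
            if (buffer.length() >= maxBufferSize) {
                String flushed = buffer.toString();
                buffer.setLength(0);
                return flushed; // flushed even though no message boundary was seen
            }
            return null;
        }

        public static void main(String[] args) {
            BoundedBuffer b = new BoundedBuffer(8);
            System.out.println(b.append("12345"));  // null: under the limit
            System.out.println(b.append("6789"));   // "123456789": limit reached, flushed
        }
    }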
diff --git a/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.UpdateRecord/additionalDetails.html b/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.UpdateRecord/additionalDetails.html
index c76a6ed192..b9c1c8717d 100644
--- a/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.UpdateRecord/additionalDetails.html
+++ b/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.UpdateRecord/additionalDetails.html
@@ -51,7 +51,7 @@
- Below, we lay out some examples in order to provide clarity about the Processor's behavior. For all of
+ Below, we lay out some examples in order to provide clarity about the Processor's behavior. For all
the examples below, consider the example to operate on the following set of 2 (JSON) records:
@@ -210,7 +210,7 @@
In the above example, we replaced the value of field based on another RecordPath. That RecordPath was an "absolute RecordPath,"
- meaning that it starts with a "slash" character (/) and therefore it specifies the path from the "root" or "outer most" element.
+ meaning that it starts with a "slash" character (/) and therefore it specifies the path from the "root" or "outermost" element.
However, sometimes we want to reference a field in such a way that we defined the RecordPath relative to the field being updated. This example
does just that. For each of the siblings given in the "siblings" array, we will replace the sibling's name with their id's. To do so, we will
configure the processor with the following properties:
diff --git a/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.ValidateCsv/additionalDetails.html b/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.ValidateCsv/additionalDetails.html
index 530467880f..f70cdb4005 100644
--- a/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.ValidateCsv/additionalDetails.html
+++ b/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.ValidateCsv/additionalDetails.html
@@ -95,7 +95,7 @@
Schema property: Unique(), UniqueHashCode()
Meaning: the input CSV has two columns. All the values of the first column must be unique (all the values are stored in
- memory and this can be consuming depending of the input). All the values of the second column must be unique (only hash
+ memory). All the values of the second column must be unique (only hash
codes of the input values are stored to ensure uniqueness).
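The memory trade-off between the two constraints can be sketched as follows (illustrative Java, not the validator's actual code): storing full values guarantees correctness, while storing only hash codes uses far less memory at the cost of a small false-duplicate risk on hash collisions.

    import java.util.HashSet;
    import java.util.Set;

    // Illustrative sketch of Unique() vs UniqueHashCode().
    class UniquenessCheck {
        private final Set<String> values = new HashSet<>();   // Unique(): full values in memory
        private final Set<Integer> hashes = new HashSet<>();  // UniqueHashCode(): hashes only

        boolean addUnique(String v)         { return values.add(v); }
        boolean addUniqueHashCode(String v) { return hashes.add(v.hashCode()); }

        public static void main(String[] args) {
            UniquenessCheck c = new UniquenessCheck();
            System.out.println(c.addUnique("a"));          // true: first occurrence
            System.out.println(c.addUnique("a"));          // false: duplicate value
            System.out.println(c.addUniqueHashCode("b"));  // true
            System.out.println(c.addUniqueHashCode("b"));  // false: duplicate hash
        }
    }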
diff --git a/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-reporting-tasks/src/main/resources/docs/org.apache.nifi.controller.ControllerStatusReportingTask/additionalDetails.html b/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-reporting-tasks/src/main/resources/docs/org.apache.nifi.controller.ControllerStatusReportingTask/additionalDetails.html
index 76c32fe320..f04e7d9bee 100644
--- a/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-reporting-tasks/src/main/resources/docs/org.apache.nifi.controller.ControllerStatusReportingTask/additionalDetails.html
+++ b/nifi-extension-bundles/nifi-standard-bundle/nifi-standard-reporting-tasks/src/main/resources/docs/org.apache.nifi.controller.ControllerStatusReportingTask/additionalDetails.html
@@ -55,7 +55,7 @@
- If may be convenient to redirect the logging output of this ReportingTask to a separate log file than the typical application log.
+ It may be convenient to redirect the logging output of this ReportingTask to a log file separate from the typical application log.
This can be accomplished by modifying the logback.xml file in the NiFi conf/ directory such that a logger with the name
org.apache.nifi.controller.ControllerStatusReportingTask
is configured to write to a separate log.
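For reference, a minimal logback.xml sketch of such a dedicated logger follows; the appender name, file paths, pattern, and rollover settings are illustrative assumptions, not the NiFi defaults:

<appender name="STATUS_LOG_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <!-- illustrative target file for the status reports -->
    <file>logs/nifi-status.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <fileNamePattern>logs/nifi-status_%d.log</fileNamePattern>
        <maxHistory>5</maxHistory>
    </rollingPolicy>
    <encoder>
        <pattern>%date %level %msg%n</pattern>
    </encoder>
</appender>
<!-- additivity="false" keeps these messages out of the main application log -->
<logger name="org.apache.nifi.controller.ControllerStatusReportingTask" level="INFO" additivity="false">
    <appender-ref ref="STATUS_LOG_FILE"/>
</logger>

Setting additivity to false is what prevents the same lines from also being written to the regular application log.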
diff --git a/nifi-extension-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/resources/docs/org.apache.nifi.cef.CEFReader/additionalDetails.html b/nifi-extension-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/resources/docs/org.apache.nifi.cef.CEFReader/additionalDetails.html
index 6d8ec60a1e..07611e4b3d 100644
--- a/nifi-extension-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/resources/docs/org.apache.nifi.cef.CEFReader/additionalDetails.html
+++ b/nifi-extension-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/resources/docs/org.apache.nifi.cef.CEFReader/additionalDetails.html
@@ -103,7 +103,7 @@ Oct 12 04:16:11 localhost CEF:0|Company|Product|1.2.3|audit-login|Successful log
A common concern when inferring schemas is how to handle the condition of two values that have different types. For example, a custom extension field might
- have a Float value in one record and String in an other. In these cases, the inferred will contain a CHOICE data type with FLOAT and STRING options. Records will
+ have a Float value in one record and a String in another. In these cases, the inferred schema will contain a CHOICE data type with FLOAT and STRING options. Records will
be allowed to have either value for the particular field.
@@ -111,7 +111,7 @@ Oct 12 04:16:11 localhost CEF:0|Company|Product|1.2.3|audit-login|Successful log
The CEF format specifies not only the message format but also directives for the content. Because of this, the data types of some
fields are not determined by the actual value(s) in the FlowFile but by the CEF format. This includes header fields, which always have to appear and
comply with the data types defined in the CEF format. Also, extension fields from the Extension Dictionary might or might not appear in the generated
- schema based on the FlowFile content but in case an extension field is added it's data type is bound by the CEF format. Custom extensions have no similar
+ schema based on the FlowFile content, but if an extension field is added, its data type is bound by the CEF format. Custom extensions have no similar
restrictions; their presence in the schema depends entirely on the FlowFile content.
diff --git a/nifi-extension-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/resources/docs/org.apache.nifi.csv.CSVReader/additionalDetails.html b/nifi-extension-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/resources/docs/org.apache.nifi.csv.CSVReader/additionalDetails.html
index c3c903e92d..850a659d70 100644
--- a/nifi-extension-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/resources/docs/org.apache.nifi.csv.CSVReader/additionalDetails.html
+++ b/nifi-extension-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/resources/docs/org.apache.nifi.csv.CSVReader/additionalDetails.html
@@ -108,10 +108,10 @@ Jane, Ten
When two values are encountered for the same field in two different records (or two values are encountered for an ARRAY type), the inference engine prefers
to use a "wider" data type over using a CHOICE data type. A data type "A" is said to be wider than data type "B" if and only if data type "A" encompasses all
values of "B" in addition to other values. For example, the LONG type is wider than the INT type but not wider than the BOOLEAN type (and BOOLEAN is also not wider
- than LONG). INT is wider than SHORT. The STRING type is considered wider than all other types with the Exception of MAP, RECORD, ARRAY, and CHOICE.
+ than LONG). INT is wider than SHORT. The STRING type is considered wider than all other types except MAP, RECORD, ARRAY, and CHOICE.
- Before inferring the type of a value, leading and trailing whitespace are removed. Additionally, if the value is surrounded by double-quotes ("), the double-quotes
+ Before inferring the type of a value, leading and trailing whitespace are removed. Additionally, if the value is surrounded by double-quotes ("), the double-quotes
are removed. Therefore, the value 16 is interpreted the same as "16". Both will be interpreted as an INT. However, the value
" 16" will be inferred as a STRING type because the white space is enclosed within double-quotes, which means that the white space is considered
part of the value.
diff --git a/nifi-extension-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/resources/docs/org.apache.nifi.grok.GrokReader/additionalDetails.html b/nifi-extension-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/resources/docs/org.apache.nifi.grok.GrokReader/additionalDetails.html
index e800fc272a..50794116f1 100644
--- a/nifi-extension-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/resources/docs/org.apache.nifi.grok.GrokReader/additionalDetails.html
+++ b/nifi-extension-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/resources/docs/org.apache.nifi.grok.GrokReader/additionalDetails.html
@@ -30,7 +30,7 @@
a file that contains Grok Patterns that can be used for parsing log data. If not specified, a default
patterns file will be used. Its contents are provided below. There are also properties for specifying
the schema to use when parsing data. The schema is not required. However, when data is parsed
- a Record is created that contains all of the fields present in the Grok Expression (explained below),
+ a Record is created that contains all the fields present in the Grok Expression (explained below),
and all fields are of type String. If a schema is chosen, the field can be declared to be a different,
compatible type, such as number. Additionally, if the schema does not contain one of the fields in the
parsed data, that field will be ignored. This can be used to filter out fields that are not of interest.
diff --git a/nifi-extension-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/resources/docs/org.apache.nifi.json.JsonPathReader/additionalDetails.html b/nifi-extension-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/resources/docs/org.apache.nifi.json.JsonPathReader/additionalDetails.html
index 87ca8c3229..6ee6c581b7 100644
--- a/nifi-extension-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/resources/docs/org.apache.nifi.json.JsonPathReader/additionalDetails.html
+++ b/nifi-extension-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/resources/docs/org.apache.nifi.json.JsonPathReader/additionalDetails.html
@@ -46,7 +46,7 @@
This Controller Service must be configured with a schema. Each JSON Path that is evaluated and is found in the "root level"
of the schema will produce a Field in the Record. I.e., the schema should match the Record that is created by evaluating all
- of the JSON Paths. It should not match the "incoming JSON" that is read from the FlowFile.
+ the JSON Paths. It should not match the "incoming JSON" that is read from the FlowFile.
@@ -130,7 +130,7 @@
When two values are encountered for the same field in two different records (or two values are encountered for an ARRAY type), the inference engine prefers
to use a "wider" data type over using a CHOICE data type. A data type "A" is said to be wider than data type "B" if and only if data type "A" encompasses all
values of "B" in addition to other values. For example, the LONG type is wider than the INT type but not wider than the BOOLEAN type (and BOOLEAN is also not wider
- than LONG). INT is wider than SHORT. The STRING type is considered wider than all other types with the Exception of MAP, RECORD, ARRAY, and CHOICE.
+ than LONG). INT is wider than SHORT. The STRING type is considered wider than all other types except MAP, RECORD, ARRAY, and CHOICE.
If two values are encountered for the same field in two different records (or two values are encountered for an ARRAY type), but neither value is of a type that
diff --git a/nifi-extension-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/resources/docs/org.apache.nifi.json.JsonTreeReader/additionalDetails.html b/nifi-extension-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/resources/docs/org.apache.nifi.json.JsonTreeReader/additionalDetails.html
index d80f371913..ac6e8f8f33 100644
--- a/nifi-extension-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/resources/docs/org.apache.nifi.json.JsonTreeReader/additionalDetails.html
+++ b/nifi-extension-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/resources/docs/org.apache.nifi.json.JsonTreeReader/additionalDetails.html
@@ -118,7 +118,7 @@
When two values are encountered for the same field in two different records (or two values are encountered for an ARRAY type), the inference engine prefers
to use a "wider" data type over using a CHOICE data type. A data type "A" is said to be wider than data type "B" if and only if data type "A" encompasses all
values of "B" in addition to other values. For example, the LONG type is wider than the INT type but not wider than the BOOLEAN type (and BOOLEAN is also not wider
- than LONG). INT is wider than SHORT. The STRING type is considered wider than all other types with the Exception of MAP, RECORD, ARRAY, and CHOICE.
+ than LONG). INT is wider than SHORT. The STRING type is considered wider than all other types except MAP, RECORD, ARRAY, and CHOICE.
If two values are encountered for the same field in two different records (or two values are encountered for an ARRAY type), but neither value is of a type that
diff --git a/nifi-extension-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/resources/docs/org.apache.nifi.xml.XMLReader/additionalDetails.html b/nifi-extension-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/resources/docs/org.apache.nifi.xml.XMLReader/additionalDetails.html
index 8b4c5b4aaf..8e6ec3113b 100755
--- a/nifi-extension-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/resources/docs/org.apache.nifi.xml.XMLReader/additionalDetails.html
+++ b/nifi-extension-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/resources/docs/org.apache.nifi.xml.XMLReader/additionalDetails.html
@@ -75,7 +75,7 @@
- This record can be described by a schema containing one field (e. g. of type string). By providing this schema,
+ This record can be described by a schema containing one field (e.g. of type string). By providing this schema,
the reader expects zero or one occurrences of "simple_field" in the record.
@@ -584,7 +584,7 @@
The "Field Name for Content" property is not set, and the XML element has a sub-element named "value". The name of the sub-element clashes with the
default field name added to the schema by the Schema Inference logic (see Example 2). As seen in the output data, the input XML attribute's value
is added to the record just like in the previous examples. The value of the <value> element is retained, but the content of the
- <field_with_attribute> that was outside of the sub-element, is lost.
+ <field_with_attribute> that was outside the sub-element is lost.
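To make the behavior concrete, consider a hypothetical input element of this shape (the element names mirror the discussion above; the attribute name and the values are invented for illustration):

<field_with_attribute attr="attribute content">
    text outside the sub-element
    <value>123</value>
</field_with_attribute>

Under the conditions described above, the attribute's value and the content of the value sub-element are carried into the record, while the text outside the sub-element is dropped.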
XML Attributes and Schema Inference Example 5
@@ -907,7 +907,7 @@
When two values are encountered for the same field in two different records (or two values are encountered for an ARRAY type), the inference engine prefers
to use a "wider" data type over using a CHOICE data type. A data type "A" is said to be wider than data type "B" if and only if data type "A" encompasses all
values of "B" in addition to other values. For example, the LONG type is wider than the INT type but not wider than the BOOLEAN type (and BOOLEAN is also not wider
- than LONG). INT is wider than SHORT. The STRING type is considered wider than all other types with the Exception of MAP, RECORD, ARRAY, and CHOICE.
+ than LONG). INT is wider than SHORT. The STRING type is considered wider than all other types except MAP, RECORD, ARRAY, and CHOICE.
If two values are encountered for the same field in two different records (or two values are encountered for an ARRAY type), but neither value is of a type that
diff --git a/nifi-extension-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/resources/docs/org.apache.nifi.yaml.YamlTreeReader/additionalDetails.html b/nifi-extension-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/resources/docs/org.apache.nifi.yaml.YamlTreeReader/additionalDetails.html
index ece3862e05..7af5e53042 100644
--- a/nifi-extension-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/resources/docs/org.apache.nifi.yaml.YamlTreeReader/additionalDetails.html
+++ b/nifi-extension-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/resources/docs/org.apache.nifi.yaml.YamlTreeReader/additionalDetails.html
@@ -118,7 +118,7 @@
When two values are encountered for the same field in two different records (or two values are encountered for an ARRAY type), the inference engine prefers
to use a "wider" data type over using a CHOICE data type. A data type "A" is said to be wider than data type "B" if and only if data type "A" encompasses all
values of "B" in addition to other values. For example, the LONG type is wider than the INT type but not wider than the BOOLEAN type (and BOOLEAN is also not wider
- than LONG). INT is wider than SHORT. The STRING type is considered wider than all other types with the Exception of MAP, RECORD, ARRAY, and CHOICE.
+ than LONG). INT is wider than SHORT. The STRING type is considered wider than all other types except MAP, RECORD, ARRAY, and CHOICE.
If two values are encountered for the same field in two different records (or two values are encountered for an ARRAY type), but neither value is of a type that
diff --git a/nifi-extension-bundles/nifi-update-attribute-bundle/nifi-update-attribute-processor/src/main/resources/docs/org.apache.nifi.processors.attributes.UpdateAttribute/additionalDetails.html b/nifi-extension-bundles/nifi-update-attribute-bundle/nifi-update-attribute-processor/src/main/resources/docs/org.apache.nifi.processors.attributes.UpdateAttribute/additionalDetails.html
index b7c40a2e73..b4220baf3b 100644
--- a/nifi-extension-bundles/nifi-update-attribute-bundle/nifi-update-attribute-processor/src/main/resources/docs/org.apache.nifi.processors.attributes.UpdateAttribute/additionalDetails.html
+++ b/nifi-extension-bundles/nifi-update-attribute-bundle/nifi-update-attribute-processor/src/main/resources/docs/org.apache.nifi.processors.attributes.UpdateAttribute/additionalDetails.html
@@ -342,7 +342,7 @@
In the event that the processor is unable to get the state at the beginning of the onTrigger, the FlowFile will be pushed back to the originating relationship and the processor will yield.
If the processor is able to get the state at the beginning of the onTrigger but unable to set the state after adding attributes to the FlowFile, the FlowFile will be transferred to
- "set state fail". This is normally due to the state not being the most up to date version (another thread has replaced the state with another version). In most use-cases this relationship
+ "set state fail". This is normally due to the state not being the most recent version (another thread has replaced the state with another version). In most use-cases this relationship
should loop back to the processor since the only affected attributes will be overwritten.
Note: Currently the only "stateful" option is to store state locally. This is done because the current implementation of clustered state relies on Zookeeper and Zookeeper isn't designed
@@ -367,7 +367,7 @@
Notes about Concurrency and Stateful Usage
When using the stateful option, concurrent tasks should be used with caution. If every incoming FlowFile will update state then it will be much more efficient to have only one
- task. This is because the first thing the onTrigger does is get the state and the last thing it does is store the state if there are an updates. If it does not have the most up to date
+ task. This is because the first thing the onTrigger does is get the state and the last thing it does is store the state if there are any updates. If it does not have the most recent
initial state when it goes to update, it will fail and send the FlowFile to "set state fail". This is done so that an update succeeds only when it was made with the most recent information.
If it didn't do it in this mock-atomic way, there'd be no guarantee that the state is accurate.
diff --git a/nifi-framework-bundle/nifi-framework/nifi-framework-core-api/src/main/java/org/apache/nifi/controller/AbstractComponentNode.java b/nifi-framework-bundle/nifi-framework/nifi-framework-core-api/src/main/java/org/apache/nifi/controller/AbstractComponentNode.java
index cae39a90a4..dc188e223d 100644
--- a/nifi-framework-bundle/nifi-framework/nifi-framework-core-api/src/main/java/org/apache/nifi/controller/AbstractComponentNode.java
+++ b/nifi-framework-bundle/nifi-framework/nifi-framework-core-api/src/main/java/org/apache/nifi/controller/AbstractComponentNode.java
@@ -208,7 +208,7 @@ public abstract class AbstractComponentNode implements ComponentNode {
* configured set of properties
*/
protected boolean isClasspathDifferent(final Map<PropertyDescriptor, String> properties) {
- // If any property in the given map modifies classpath and is different than the currently configured value,
+ // If any property in the given map modifies classpath and is different from the currently configured value,
// the given properties will require a different classpath.
for (final Map.Entry<PropertyDescriptor, String> entry : properties.entrySet()) {
final PropertyDescriptor descriptor = entry.getKey();
@@ -308,7 +308,7 @@ public abstract class AbstractComponentNode implements ComponentNode {
if (propertyName != null && entry.getValue() == null) {
removeProperty(propertyName, allowRemovalOfRequiredProperties);
} else if (propertyName != null) {
- // Use the EL-Agnostic Parameter Parser to gather the list of referenced Parameters. We do this because we want to to keep track of which parameters
+ // Use the EL-Agnostic Parameter Parser to gather the list of referenced Parameters. We do this because we want to keep track of which parameters
// are referenced, regardless of whether they are referenced from within an EL Expression. However, we will also need to derive a different ParameterTokenList
// that we can provide to the PropertyConfiguration, so that when compiling the Expression Language Expressions, we are able to keep the Parameter Reference within
// the Expression's text.
diff --git a/nifi-framework-bundle/nifi-framework/nifi-resources/src/main/resources/conf/authorizers.xml b/nifi-framework-bundle/nifi-framework/nifi-resources/src/main/resources/conf/authorizers.xml
index 190a79f589..b12ad65b11 100644
--- a/nifi-framework-bundle/nifi-framework/nifi-resources/src/main/resources/conf/authorizers.xml
+++ b/nifi-framework-bundle/nifi-framework/nifi-resources/src/main/resources/conf/authorizers.xml
@@ -71,7 +71,7 @@
'TLS - Client Auth' - Client authentication policy when connecting to LDAP using LDAPS or START_TLS.
Possible values are REQUIRED, WANT, NONE.
'TLS - Protocol' - Protocol to use when connecting to LDAP using LDAPS or START_TLS. (e.g. TLS,
- TLSv1.1, TLSv1.2, etc).
+ TLSv1.1, TLSv1.2, etc.).
'TLS - Shutdown Gracefully' - Specifies whether the TLS should be shut down gracefully
before the target context is closed. Defaults to false.
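In the provider definition itself, these settings appear as property elements; a hedged sketch with illustrative values:

<property name="TLS - Client Auth">NONE</property>
<property name="TLS - Protocol">TLSv1.2</property>
<property name="TLS - Shutdown Gracefully">false</property>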
@@ -255,7 +255,7 @@
The FileAccessPolicyProvider will provide support for managing access policies, which are backed by a file
on the local file system.
- - User Group Provider - The identifier for an User Group Provider defined above that will be used to access
+ - User Group Provider - The identifier for a User Group Provider defined above that will be used to access
users and groups for use in the managed access policies.
- Authorizations File - The file where the FileAccessPolicyProvider will store policies.
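As a rough sketch, the wiring described above looks like the following; the identifiers and the file location are illustrative:

<accessPolicyProvider>
    <identifier>file-access-policy-provider</identifier>
    <class>org.apache.nifi.authorization.FileAccessPolicyProvider</class>
    <!-- references the identifier of a User Group Provider defined earlier in this file -->
    <property name="User Group Provider">file-user-group-provider</property>
    <property name="Authorizations File">./conf/authorizations.xml</property>
</accessPolicyProvider>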
diff --git a/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/ApplicationResource.java b/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/ApplicationResource.java
index a896aaa6f0..cd815d8195 100644
--- a/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/ApplicationResource.java
+++ b/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/ApplicationResource.java
@@ -355,7 +355,7 @@ public abstract class ApplicationResource {
* When a two-phase commit style request is used, the first phase (generally referred to
* as the "commit-request stage") is intended to validate that the request can be completed.
* In NiFi, we use this phase to validate that the request can complete. This method determines
- * whether or not the request is the first phase of a two-phase commit.
+ * whether the request is the first phase of a two-phase commit.
*
* @param httpServletRequest the request
* @return true if the request represents a two-phase commit style request and is the
@@ -374,7 +374,7 @@ public abstract class ApplicationResource {
}
/**
- * Checks whether or not the request should be replicated to the cluster
+ * Checks whether the request should be replicated to the cluster
*
* @return true if the request should be replicated, false otherwise
*/
@@ -847,7 +847,7 @@ public abstract class ApplicationResource {
* @throws UnknownNodeException if the nodeUuid given does not map to any node in the cluster
*/
protected Response replicate(final URI path, final String method, final Object entity, final String nodeUuid, final Map<String, String> headersToOverride) {
- // since we're cluster we must specify the cluster node identifier
+ // since we're in a cluster we must specify the cluster node identifier
if (nodeUuid == null) {
throw new IllegalArgumentException("The cluster node identifier must be specified.");
}
diff --git a/nifi-manifest/nifi-runtime-manifest/pom.xml b/nifi-manifest/nifi-runtime-manifest/pom.xml
index c6c0f8eb4c..3247885621 100644
--- a/nifi-manifest/nifi-runtime-manifest/pom.xml
+++ b/nifi-manifest/nifi-runtime-manifest/pom.xml
@@ -150,7 +150,7 @@
-
+
build-info-no-git
diff --git a/nifi-registry/nifi-registry-core/nifi-registry-resources/src/main/resources/conf/authorizers.xml b/nifi-registry/nifi-registry-core/nifi-registry-resources/src/main/resources/conf/authorizers.xml
index ff01c8f073..b36893924c 100644
--- a/nifi-registry/nifi-registry-core/nifi-registry-resources/src/main/resources/conf/authorizers.xml
+++ b/nifi-registry/nifi-registry-core/nifi-registry-resources/src/main/resources/conf/authorizers.xml
@@ -224,7 +224,7 @@
The FileAccessPolicyProvider will provide support for managing access policies, which are backed by a file
on the local file system.
- - User Group Provider - The identifier for an User Group Provider defined above that will be used to access
+ - User Group Provider - The identifier for a User Group Provider defined above that will be used to access
users and groups for use in the managed access policies.
- Authorizations File - The file where the FileAccessPolicyProvider will store policies.
diff --git a/nifi-registry/nifi-registry-core/nifi-registry-revision/nifi-registry-revision-common/pom.xml b/nifi-registry/nifi-registry-core/nifi-registry-revision/nifi-registry-revision-common/pom.xml
index 4e75e6f0b6..260bb42b86 100644
--- a/nifi-registry/nifi-registry-core/nifi-registry-revision/nifi-registry-revision-common/pom.xml
+++ b/nifi-registry/nifi-registry-core/nifi-registry-revision/nifi-registry-revision-common/pom.xml
@@ -23,7 +23,7 @@
nifi-registry-revision-common
jar
-
+
org.slf4j
diff --git a/nifi-toolkit/nifi-toolkit-cli/src/main/java/org/apache/nifi/toolkit/cli/impl/command/AbstractCommand.java b/nifi-toolkit/nifi-toolkit-cli/src/main/java/org/apache/nifi/toolkit/cli/impl/command/AbstractCommand.java
index 72a439c2db..17fd78a6a1 100644
--- a/nifi-toolkit/nifi-toolkit-cli/src/main/java/org/apache/nifi/toolkit/cli/impl/command/AbstractCommand.java
+++ b/nifi-toolkit/nifi-toolkit-cli/src/main/java/org/apache/nifi/toolkit/cli/impl/command/AbstractCommand.java
@@ -95,7 +95,7 @@ public abstract class AbstractCommand implements Command {
}
protected void doInitialize(final Context context) {
- // sub-classes can override to do additional things like add options
+ // subclasses can override to do additional things like add options
}
protected void addOption(final Option option) {
diff --git a/nifi-toolkit/nifi-toolkit-cli/src/main/java/org/apache/nifi/toolkit/cli/impl/command/AbstractCommandGroup.java b/nifi-toolkit/nifi-toolkit-cli/src/main/java/org/apache/nifi/toolkit/cli/impl/command/AbstractCommandGroup.java
index 2b6792d8e4..9a3a7dad0d 100644
--- a/nifi-toolkit/nifi-toolkit-cli/src/main/java/org/apache/nifi/toolkit/cli/impl/command/AbstractCommandGroup.java
+++ b/nifi-toolkit/nifi-toolkit-cli/src/main/java/org/apache/nifi/toolkit/cli/impl/command/AbstractCommandGroup.java
@@ -51,7 +51,7 @@ public abstract class AbstractCommandGroup implements CommandGroup {
}
/**
- * Sub-classes override to provide the appropriate commands for the given group.
+ * Subclasses override to provide the appropriate commands for the given group.
*
* @return the list of commands for this group
*/