- Upgraded the immediately actionable dependency versions from the Meterian report.
- Upgraded jackson-core test dependencies for the HBase and Elasticsearch modules.
- Only 3 instances of jackson-core < 2.8.6 remain (Google Cloud Platform and Spark Receiver modules).
- Upgraded version of poi dependency in nifi-email-processors to 3.16.
- Resolving dependency issues after rebasing against 1.5.0-SNAPSHOT.
- Removed jackson-databind from <dependencyManagement> block in nifi/pom.xml and added explicit reference to ${jackson.version} in all referenced artifacts.
- Removed jackson-mapper-asl from <dependencyManagement> block in nifi/pom.xml and added explicit reference to ${jackson.old.version} in all referenced artifacts.
- Removed Jasypt from <dependencyManagement> and added explicit version in test dependency for legacy compatibility.
- This closes #2084
- Support counters at Wait/Notify processors so that a NiFi flow can be configured to wait for N signals (see the sketch below)
- Extract Wait/Notify logic into WaitNotifyProtocol
- Added FragmentAttributes to manage commonly used fragment attributes
- Changed existing split processors to set 'fragment.identifier' and 'fragment.count', so that Wait can use those attributes to wait until all splits are processed
This closes #1420.
Signed-off-by: Bryan Bende <bbende@apache.org>
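The counter idea can be pictured with a small illustrative Java sketch. This is not the actual WaitNotifyProtocol API; it only shows the principle: a Notify step increments a counter keyed by the signal identifier (e.g. 'fragment.identifier'), and a Wait step releases only once the counter reaches the expected total (e.g. 'fragment.count').

```java
// Illustrative sketch only, not the NiFi WaitNotifyProtocol implementation.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class SignalCounterSketch {

    private final Map<String, AtomicLong> counters = new ConcurrentHashMap<>();

    /** Notify side: increment the counter for the given signal id (e.g. fragment.identifier). */
    public long notifySignal(String signalId, long delta) {
        return counters.computeIfAbsent(signalId, k -> new AtomicLong()).addAndGet(delta);
    }

    /** Wait side: release only when the counter has reached the expected count (e.g. fragment.count). */
    public boolean canRelease(String signalId, long expectedCount) {
        AtomicLong counter = counters.get(signalId);
        return counter != null && counter.get() >= expectedCount;
    }

    public static void main(String[] args) {
        SignalCounterSketch protocol = new SignalCounterSketch();
        String signalId = "fragment-1234";   // hypothetical fragment.identifier of the original FlowFile
        long expected = 3;                   // fragment.count written by the split processor

        for (int i = 0; i < expected; i++) {
            System.out.println("released=" + protocol.canRelease(signalId, expected)); // false until all signals arrive
            protocol.notifySignal(signalId, 1);
        }
        System.out.println("released=" + protocol.canRelease(signalId, expected));     // true
    }
}
```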
Added a JSON container option to the Avro-to-JSON conversion that controls whether the output is written as every Object on a new line, or as an array of Objects.
Let's assume your Avro content is a stream of records (record1, record2, ...). If JSON container is "none", the converter will expose the records as a sequence of single JSON objects:
record1
record2
...
recordN
Please bear in mind that the final output is not valid JSON content. You can then forward this content, e.g. to Kafka, where every record will be a single Kafka message.
If JSON container is "array", the output looks like this:
[record1,record2,...,recordN]
This is useful when you want to convert your Avro content to a valid JSON array.
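As an illustration (an assumed helper, not the processor's actual code), the following Java sketch produces either output style using the Apache Avro GenericRecord JSON rendering: with "none" each record becomes its own line, with "array" the records are wrapped in a single JSON array.

```java
// Minimal sketch of the two JSON container modes, assuming a hypothetical helper class.
import java.io.IOException;
import java.io.InputStream;

import org.apache.avro.file.DataFileStream;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;

public class AvroToJsonSketch {

    /** Reads an Avro data file stream and emits JSON, either one object per line or as one array. */
    public static String convert(InputStream avroIn, boolean wrapInArray) throws IOException {
        StringBuilder out = new StringBuilder();
        try (DataFileStream<GenericRecord> reader =
                 new DataFileStream<GenericRecord>(avroIn, new GenericDatumReader<GenericRecord>())) {
            if (wrapInArray) {
                out.append('[');
            }
            boolean first = true;
            while (reader.hasNext()) {
                if (!first) {
                    // "array" separates records with commas; "none" puts each record on a new line.
                    out.append(wrapInArray ? "," : "\n");
                }
                out.append(reader.next().toString());   // GenericRecord#toString() renders the record as JSON
                first = false;
            }
            if (wrapInArray) {
                out.append(']');
            }
        }
        return out.toString();
    }
}
```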
This closes #88
Reviewed and Amended (amendments reviewed by original patch author on github) by Tony Kurc (tkurc@apache.org)
- Adding documentation about bare record use, renaming Split Size to Output Size, and adding a test case with 0 records
- Removing validators on properties that have allowable values, using positive integer validator for Output Size, and fixing typo in processor description
- Adding optional ability to extract record count
- Renaming record.count to item.count for clarity, and updating documentation (see the sketch after this list)
- Adding a test case with 0 records
- Removing validators from properties that use allowable values
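For illustration, a minimal Java sketch of the Output Size / item.count behaviour (a hypothetical helper, not the processor implementation): the input records are partitioned into batches of at most Output Size elements, and each batch's size is what would be written as the 'item.count' attribute on the corresponding split.

```java
// Hypothetical sketch of splitting into batches of at most "Output Size" records.
import java.util.ArrayList;
import java.util.List;

public class SplitSketch {

    /** Splits records into batches of at most outputSize elements. */
    static <T> List<List<T>> split(List<T> records, int outputSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < records.size(); i += outputSize) {
            batches.add(records.subList(i, Math.min(i + outputSize, records.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Integer> records = List.of(1, 2, 3, 4, 5);
        for (List<Integer> batch : split(records, 2)) {
            // Each batch's size corresponds to the 'item.count' attribute of that split.
            System.out.println("item.count=" + batch.size() + " -> " + batch);
        }
        // Zero input records yield no splits, matching the "0 records" test case.
        System.out.println(split(List.of(), 2));  // []
    }
}
```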