Before this fix, PublishKafka (0.9) and PublishKafka_0_10 fail on empty incoming FlowFiles with a 'transfer relationship not specified' error.
This happens because the internal 'publish' method is never called: StreamDemarcator emits no token for empty content, regardless of whether a demarcator is set.
As for PublishKafka_0_11 and PublishKafka_1_0, empty FlowFiles are transferred to the 'success' relationship, but no message is actually sent to Kafka.
Since Kafka allows messages with a zero-byte body, NiFi should be able to send them, too.
This commit changes the above behavior as follows, for all PublishKafka_* processors (sketched below):
- If a demarcator is not set, publish the incoming FlowFile content as it is. This enables sending an empty Kafka message.
- If a demarcator is set, send each token as a separate message.
  Even if no token is found (an empty incoming FlowFile), transfer the FlowFile to 'success'.
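A minimal sketch of the resulting publish logic (illustrative only; the real processors route this through their publishing helpers and also handle keys, attributes, and failures):

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;

    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.nifi.stream.io.util.StreamDemarcator;

    class PublishSketch {
        static void publish(final Producer<byte[], byte[]> producer, final String topic,
                            final InputStream content, final byte[] demarcatorBytes) throws IOException {
            if (demarcatorBytes == null) {
                // No demarcator: the whole FlowFile content becomes one message,
                // including the zero-byte case that Kafka itself permits.
                final ByteArrayOutputStream bytes = new ByteArrayOutputStream();
                final byte[] buffer = new byte[8192];
                int len;
                while ((len = content.read(buffer)) != -1) {
                    bytes.write(buffer, 0, len);
                }
                producer.send(new ProducerRecord<>(topic, bytes.toByteArray()));
            } else {
                // Demarcator set: each token becomes its own message. An empty
                // FlowFile yields no tokens, hence no messages, but the caller
                // still routes the FlowFile to 'success'.
                final StreamDemarcator demarcator = new StreamDemarcator(content, demarcatorBytes, 1024 * 1024);
                byte[] token;
                while ((token = demarcator.nextToken()) != null) {
                    producer.send(new ProducerRecord<>(topic, token));
                }
            }
        }
    }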
This closes #2362.
Signed-off-by: Mark Payne <markap14@hotmail.com>
- Upgrading to Jersey 2.x.
- Updating NOTICE files where necessary.
- Fixing checkstyle issues.
This closes #2206.
Signed-off-by: Andy LoPresto <alopresto@apache.org>
It is possible for null values to be stored in Kafka topics. Fixed handling of this scenario (see the sketch below).
Note that without this fix, the consumer is unable to consume further messages (at least
not without removing messages from the queue).
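A hedged sketch of the guard (the actual handling lives in the consumer code; substituting an empty byte array is just one reasonable strategy):

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;

    class NullValueSketch {
        static void handle(final ConsumerRecords<byte[], byte[]> records) {
            for (final ConsumerRecord<byte[], byte[]> record : records) {
                // A record's value may legitimately be null (e.g. a tombstone on a
                // compacted topic). Guard before dereferencing; otherwise the same
                // message fails on every poll and the consumer cannot make progress.
                final byte[] value = record.value() == null ? new byte[0] : record.value();
                process(value);
            }
        }

        static void process(final byte[] value) {
            // ... write the value into FlowFile content, set attributes, etc.
        }
    }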
- Removed FlowFile from RecordReaderFactory, RecordSetWriterFactory and SchemaAccessStrategy.
- Renamed variable 'allowableValue' to 'strategy' to represent its meaning better.
- Removed creation of a temporary FlowFile to resolve the Record Schema from ConsumerLease.
- Removed unnecessary 'InputStream content' argument from
RecordSetWriterFactory.getSchema method.
This closes #1877.
Also updated record writers to ensure that they write the schema as appropriate if not using a RecordSet. Updated ConsumeKafkaRecord to allow multiple schemas on the same topic and partition (see the sketch below).
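Approximate post-change usage from a processor's perspective (signatures paraphrased from RecordSetWriterFactory; all variable names are stand-ins):

    import java.io.OutputStream;
    import java.util.Map;

    import org.apache.nifi.flowfile.FlowFile;
    import org.apache.nifi.logging.ComponentLog;
    import org.apache.nifi.processor.ProcessSession;
    import org.apache.nifi.serialization.RecordSetWriter;
    import org.apache.nifi.serialization.RecordSetWriterFactory;
    import org.apache.nifi.serialization.record.Record;
    import org.apache.nifi.serialization.record.RecordSchema;

    class WriterUsageSketch {
        void writeRecord(final ProcessSession session, final FlowFile flowFile,
                         final RecordSetWriterFactory writerFactory, final RecordSchema readerSchema,
                         final Record record, final ComponentLog logger) throws Exception {
            // Schema resolution now needs only the attribute map: no FlowFile
            // object and no content InputStream are passed to the factory.
            final Map<String, String> attributes = flowFile.getAttributes();
            final RecordSchema writeSchema = writerFactory.getSchema(attributes, readerSchema);
            try (final OutputStream out = session.write(flowFile);
                 final RecordSetWriter writer = writerFactory.createWriter(logger, writeSchema, out)) {
                // Writing a single Record (rather than a RecordSet); the writer is
                // expected to emit the schema itself where appropriate.
                writer.write(record);
            }
        }
    }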
Signed-off-by: joewitt <joewitt@apache.org>
NIFI-3838: Updated version from 1.2.0-SNAPSHOT to 1.3.0-SNAPSHOT; removed unneeded value from AttributeExpression.ResultType enum
NIFI-3838: Addressed PR Review feedback
NIFI-3838: Allow schemas to be merged together for a record; refactored RecordSetWriterFactory so that there is a method to obtain the schema, and the writer is then created with that schema. Added additional unit tests.
NIFI-3838: Addressed problems with documentation based on PR Review
NIFI-3838: Fixed checkstyle violation
NIFI-3838: Addressed issue of comparing different types of Number objects
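For context on the Number fix: two boxed Numbers of different types are never equal via equals(), even when numerically identical, so values must be compared numerically. A minimal illustration (not the committed code):

    public class NumberCompareDemo {
        public static void main(final String[] args) {
            final Number a = 42;   // autoboxed to Integer
            final Number b = 42L;  // autoboxed to Long
            System.out.println(a.equals(b));                    // false: different boxed types
            System.out.println(a.longValue() == b.longValue()); // true: numeric comparison
        }
    }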
Signed-off-by: Matt Burgess <mattyb149@apache.org>
This closes #1772
Enabled the ability to specify wildcard topics as a regular expression
as supported in the Kafka client library.
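In client terms, the processor hands the topic property to the consumer roughly like this (sketch; the listener callbacks are elided):

    import java.util.Collection;
    import java.util.regex.Pattern;

    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    class RegexSubscribeSketch {
        static void subscribe(final KafkaConsumer<byte[], byte[]> consumer, final String topicRegex) {
            // Pattern-based subscription: every topic matching the expression is
            // consumed, including topics created after the subscription is made.
            consumer.subscribe(Pattern.compile(topicRegex), new ConsumerRebalanceListener() {
                @Override
                public void onPartitionsRevoked(final Collection<TopicPartition> partitions) {
                }

                @Override
                public void onPartitionsAssigned(final Collection<TopicPartition> partitions) {
                }
            });
        }
    }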
Signed-off-by: joewitt <joewitt@apache.org>
Currently, NiFi Kafka consumer processors have the following issue.
While downstream connections are full, ConsumeKafka is not scheduled to run onTrigger.
It stops calling poll, which is what tells the Kafka server that this client is still alive.
Thus, after a while in that situation, the Kafka server rebalances the consumer group and drops the client.
When downstream connections are back to normal, ConsumeKafka is scheduled again,
but the client is no longer part of the consumer group.
If this happens, the Kafka client still succeeds in polling messages when the ConsumeKafka processor resumes, but fails to commit offsets.
The received messages are already committed into the NiFi flow, but since the consumer offset is not updated, they will be consumed again, producing duplicates.
In order to address the above issue:
- For ConsumeKafka_0_10, use the latest client library.
  The above issue has been addressed by KIP-62:
  the latest Kafka consumer poll checks whether the client instance is still valid and rejoins the group if not, before consuming messages.
- For ConsumeKafka (0.9), added manual retention logic using pause/resume (see the sketch below).
  The Kafka 0.9 client has no background heartbeat thread, so a similar mechanism is added manually.
  The Kafka pause/resume consumer API is used to tell the Kafka server that the client has stopped consuming messages but is still alive.
  A separate internal thread performs a paused poll periodically, based on the time elapsed since the last onTrigger (poll) was executed.
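A sketch of the 0.9 keep-alive idea, assuming the 0.9 client's varargs pause/resume signatures (the real logic runs in a dedicated thread inside the processor):

    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    class KeepAliveSketch {
        // Called periodically while onTrigger is not running (downstream full).
        static void heartbeat(final KafkaConsumer<byte[], byte[]> consumer) {
            final TopicPartition[] assigned = consumer.assignment().toArray(new TopicPartition[0]);
            // Pause fetching, then poll: the poll sends the heartbeat that keeps
            // the client in the consumer group, but returns no records because
            // every assigned partition is paused.
            consumer.pause(assigned);
            consumer.poll(0);
        }

        // Called when onTrigger resumes, so that records flow again.
        static void resumeConsumption(final KafkaConsumer<byte[], byte[]> consumer) {
            consumer.resume(consumer.assignment().toArray(new TopicPartition[0]));
        }
    }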
This closes #1527.
Signed-off-by: Bryan Bende <bbende@apache.org>