Commit Graph

15 Commits

Author SHA1 Message Date
Koji Kawamura e5ed62a98f NIFI-4724: Support 0 byte message with PublishKafka
Before this fix, PublishKafka (0.9) and PublishKafka_0_10 fail on empty incoming FlowFiles with a 'transfer relationship not specified' error.
This happens because the internal 'publish' method is never called: StreamDemarcator does not emit any token, regardless of whether a demarcator is set.

As for PublishKafka_0_11 and PublishKafka_1_0, empty FlowFiles are transferred to the 'success' relationship, but no message is sent to Kafka.

Since Kafka allows messages with an empty (0 byte) body, NiFi should be able to send them, too.

This commit changes the current behavior of all PublishKafka_* processors as follows:

- If no demarcator is set, publish the incoming FlowFile content as-is. This enables sending an empty Kafka message (a minimal producer sketch follows this entry).
- If a demarcator is set, send each token as a separate message.
  Even if no token is found (i.e. the incoming FlowFile is empty), transfer the FlowFile to 'success'.

This closes #2362.

Signed-off-by: Mark Payne <markap14@hotmail.com>
2018-01-05 10:42:58 -05:00
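The fix above relies on Kafka accepting a record whose value is present but zero bytes long. A minimal stand-alone sketch (not NiFi's code) using the plain Kafka producer client; the broker address and topic name are assumptions for illustration:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;

public class EmptyMessageProducer {
    public static void main(String[] args) {
        final Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("key.serializer", ByteArraySerializer.class.getName());
        props.put("value.serializer", ByteArraySerializer.class.getName());

        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            // A zero-byte (but non-null) value: Kafka treats this as a valid, empty message body.
            producer.send(new ProducerRecord<>("demo-topic", new byte[0]));
        }
    }
}
```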
jknulst d543cfde25 NIFI-4675 Lifted restriction on demarcator and kafka.key usage together. This closes #2326. 2017-12-14 15:15:13 -05:00
Mark Payne c138987bb4 NIFI-4656, NIFI-4680: This closes #2330. Fix error handling in consume/publish Kafka processors. Address issue with HortonworksSchemaRegistry throwing RuntimeException when it should be IOException. Fixed bug in ConsumerLease/ConsumeKafkaRecord that caused it to report too many records received.
Signed-off-by: joewitt <joewitt@apache.org>
2017-12-08 16:01:14 -05:00
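The HortonworksSchemaRegistry part of the commit above is about surfacing registry failures as checked I/O errors rather than unchecked exceptions. A hedged sketch of that general pattern, using a hypothetical RegistryClient interface in place of the real schema registry client:

```java
import java.io.IOException;

public class SchemaLookup {

    /** Hypothetical minimal client interface, used only for this sketch. */
    public interface RegistryClient {
        String getSchemaText(String schemaName);
    }

    public String fetchSchema(final RegistryClient client, final String schemaName) throws IOException {
        try {
            return client.getSchemaText(schemaName);
        } catch (final RuntimeException e) {
            // Re-throw as IOException so callers can treat registry communication
            // problems as retryable I/O failures instead of unexpected crashes.
            throw new IOException("Failed to retrieve schema '" + schemaName + "' from registry", e);
        }
    }
}
```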
matthew-silverman c9cc76b5c8 NIFI-4639: fresh writer for each output record 2017-12-08 08:39:22 -05:00
joewitt cdc1facf39
NIFI-4664, NIFI-4662, NIFI-4660, NIFI-4659 Moved tests that are timing-, threading-, or network-dependent and brittle to integration tests, and un-ignored tests that are ITs. Updated Travis to reduce impact on infra; AppVeyor now skips test runs and serves only to prove the build works on Windows. This closes #2319
2017-12-06 10:53:09 -05:00
Mark Payne 00b11e82b7 NIFI-4600: This closes #2312. Added nifi-kafka-1-0-nar and nifi-kafka-1-0-processors modules
Signed-off-by: joewitt <joewitt@apache.org>
2017-12-04 16:51:59 -05:00
Janosch Woschitz e8b2387cb2 NIFI-4623: This closes #2281. Removed obsolete instability warning in documentation of newer (>= 0_10) Kafka processors
Signed-off-by: joewitt <joewitt@apache.org>
2017-11-21 12:16:54 -05:00
Mark Payne 7ad7520150 NIFI-4437: This closes #2183. When using ConsumeKafka_0_11 and no message demarcator, ensure that we add FlowFile Attributes for any Message Header that matches the 'Headers to Add as Attributes (Regex)' property
Signed-off-by: joewitt <joewitt@apache.org>
2017-10-06 15:06:32 -04:00
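The header-to-attribute behavior described in the commit above can be illustrated with the Kafka 0.11+ consumer Headers API. A sketch that copies matching headers into a plain map (the analogue of FlowFile attributes); the regex matching semantics and UTF-8 decoding are assumptions:

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Pattern;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.header.Header;

public class HeaderAttributeMapper {

    /** Copies every header whose key matches the pattern into a map of attribute name to value. */
    public static Map<String, String> headersAsAttributes(final ConsumerRecord<?, ?> record, final Pattern headerNamePattern) {
        final Map<String, String> attributes = new HashMap<>();
        for (final Header header : record.headers()) {
            if (headerNamePattern.matcher(header.key()).matches()) {
                final byte[] value = header.value();
                attributes.put(header.key(), value == null ? "" : new String(value, StandardCharsets.UTF_8));
            }
        }
        return attributes;
    }
}
```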
Mark Payne 582df7f4e8 NIFI-4008: This closes #2189. Update ConsumeKafkaRecord 0.11 so that it can consume multiple records from a single Kafka message
NIFI-4008: Ensure that we always check if a Kafka message's value is null before dereferencing it

Signed-off-by: joewitt <joewitt@apache.org>
2017-10-06 15:03:37 -04:00
Jeff Storck 2694adcca9 Merge branch 'NIFI-4412-RC2' 2017-10-02 13:58:54 -04:00
Mark Payne b3be2459e4 NIFI-4330: Fixed checkstyle violations (tabs instead of spaces). This closes #2185. 2017-09-29 10:33:56 -04:00
gardellajuanpablo 2d5b8c7267 NIFI-4330 ConsumeKafka* processors throw NullPointerException if a Kafka message has a null value
It is possible for null values to be stored in Kafka topics. Fixed handling of this scenario.
Note that without this fix, the consumer is unable to consume any further messages (at least
not without removing messages from the queue).
2017-09-29 10:57:53 -03:00
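Null values are legal in Kafka (e.g. tombstones on compacted topics), so a consumer has to guard before dereferencing a record's value. A minimal sketch of the guard described above, against the standard consumer API:

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;

public class NullSafeValue {

    /**
     * Kafka permits records with a null value, so check before use
     * to avoid a NullPointerException.
     */
    public static byte[] valueOrEmpty(final ConsumerRecord<byte[], byte[]> record) {
        final byte[] value = record.value();
        return value == null ? new byte[0] : value;
    }
}
```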
Jeff Storck a57911d3db NIFI-4412-RC2 prepare for next development iteration 2017-09-28 13:45:36 -04:00
Jeff Storck e6508ba7d3 NIFI-4412-RC2 prepare release nifi-1.4.0-RC2 2017-09-28 13:45:21 -04:00
Mark Payne 3fb704c58f NIFI-4201: This closes #2024. Implementation of processors for interacting with Kafka 0.11
Signed-off-by: joewitt <joewitt@apache.org>
2017-09-22 22:08:19 -04:00