Before this fix, PublishKafka (0.9) and PublishKafka_0_10 failed on empty incoming FlowFiles with a 'transfer relationship not specified' error.
This happened because the internal 'publish' method was never called: StreamDemarcator does not emit any token for empty content, regardless of whether a demarcator is set.
As for PublishKafka_0_11 and PublishKafka_1_0, empty FlowFiles were transferred to the 'success' relationship, but no message was sent to Kafka.
Since Kafka allows messages with a zero-byte body, NiFi should be able to send them, too.
This commit changes the above behavior for all PublishKafka_* processors as follows:
- If a demarcator is not set, publish the incoming FlowFile content as-is. This enables sending an empty Kafka message.
- If a demarcator is set, send each token as a separate message.
Even if no token is found (empty incoming FlowFile), transfer the FlowFile to 'success'.
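A rough sketch of the intended behavior (the real processors stream the FlowFile content through StreamDemarcator; the helper below is hypothetical and only illustrates the two cases against the plain Kafka producer API):

    import java.nio.charset.StandardCharsets;
    import java.util.regex.Pattern;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    class PublishSketch {
        // Hypothetical helper, not the actual PublisherLease code.
        static void publish(KafkaProducer<byte[], byte[]> producer, String topic,
                            byte[] content, String demarcator) {
            if (demarcator == null) {
                // No demarcator: send the content as-is; a zero-byte body is a valid Kafka message.
                producer.send(new ProducerRecord<>(topic, content));
                return;
            }
            // Demarcator set: each token becomes a message. Empty content yields zero
            // messages, and the FlowFile is still routed to 'success' by the caller.
            final String text = new String(content, StandardCharsets.UTF_8);
            for (String token : text.split(Pattern.quote(demarcator))) {
                if (!token.isEmpty()) {
                    producer.send(new ProducerRecord<>(topic, token.getBytes(StandardCharsets.UTF_8)));
                }
            }
        }
    }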
This closes #2362.
Signed-off-by: Mark Payne <markap14@hotmail.com>
- Upgrading to Jersey 2.x.
- Updating NOTICE files where necessary.
- Fixing checkstyle issues.
This closes #2206.
Signed-off-by: Andy LoPresto <alopresto@apache.org>
It is possible for null values to be stored in Kafka topics. Fixed handling of this scenario.
Note that without this fix, the consumer is unable to consume any more messages (at least
without removing the offending messages from the queue).
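A minimal sketch of the kind of guard involved (the actual fix lives in the consumer processors' record handling; the helper name below is hypothetical):

    import org.apache.kafka.clients.consumer.ConsumerRecord;

    class NullValueSketch {
        // A record with a null value (e.g. a log-compaction tombstone) must not break
        // message processing; treat it as an empty payload instead of dereferencing it.
        static byte[] valueOrEmpty(ConsumerRecord<byte[], byte[]> record) {
            final byte[] value = record.value();
            return value == null ? new byte[0] : value;
        }
    }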
- Removed FlowFile from RecordReaderFactory, RecordSetWriterFactory and SchemaAccessStrategy.
- Renamed variable 'allowableValue' to 'strategy' to better represent its meaning.
- Removed creation of a temporary FlowFile to resolve the Record Schema from ConsumerLease.
- Removed the unnecessary 'InputStream content' argument from the
RecordSetWriterFactory.getSchema method (see the sketch below).
This closes #1877.
Also, updated record writers to ensure that they write the schema as appropriate if not using a RecordSet. Updated ConsumeKafkaRecord to allow multiple schemas to be on the same topic and partition.
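A simplified, hypothetical mirror of the refactored contract (not the real NiFi interfaces); the point is that schema resolution now needs only FlowFile attributes (variables) and the reader's schema, not a FlowFile or its content stream:

    import java.io.IOException;
    import java.io.OutputStream;
    import java.util.Map;

    // Illustration only: the type parameters stand in for RecordSchema and RecordSetWriter.
    interface SchemaAwareWriterFactorySketch<SCHEMA, WRITER> {
        // Resolve the write schema from plain attributes plus the schema used for reading.
        SCHEMA getSchema(Map<String, String> variables, SCHEMA readSchema) throws IOException;

        // Create a writer for the resolved schema.
        WRITER createWriter(SCHEMA writeSchema, OutputStream out) throws IOException;
    }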
Signed-off-by: joewitt <joewitt@apache.org>
NIFI-3838: Updated version from 1.2.0-SNAPSHOT to 1.3.0-SNAPSHOT; removed unneeded value from AttributeExpression.ResultType enum
NIFI-3838: Addressed PR Review feedback
NIFI-3838: Allow for schemas to be merged together for a record; refactored RecordSetWriterFactory so that there is a method to obtain the schema and then the writer is created with that schema. Added additional unit tests
NIFI-3838: Addressed problems with documentation based on PR Review
NIFI-3838: Fixed checkstyle violation
NIFI-3838: Addressed issue of comparing different types of Number objects (see the sketch below)
Signed-off-by: Matt Burgess <mattyb149@apache.org>
This closes #1772.
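The Number-comparison fix noted above can be illustrated with a small, hypothetical helper: two Number instances of different concrete types (e.g. Integer vs. Long vs. Double) should be compared by numeric value rather than by class:

    import java.math.BigDecimal;
    import java.util.Comparator;

    class NumberCompareSketch {
        // Normalizing both operands to BigDecimal compares Integer, Long, Float and
        // Double values numerically. (Does not handle NaN/Infinity; illustrative only.)
        static final Comparator<Number> BY_VALUE =
                (a, b) -> new BigDecimal(a.toString()).compareTo(new BigDecimal(b.toString()));
    }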
Enabled the ability to specify wildcard topics as a regular expression
as supported in the Kafka client library.
Signed-off-by: joewitt <joewitt@apache.org>
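For reference, the underlying client call looks roughly like this (the processor's property wiring is omitted); KafkaConsumer accepts a java.util.regex.Pattern subscription:

    import java.util.Collection;
    import java.util.regex.Pattern;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    class WildcardSubscribeSketch {
        static void subscribe(KafkaConsumer<byte[], byte[]> consumer, String topicRegex) {
            // Pattern-based subscription matches topics such as "events-.*" instead of a fixed list.
            consumer.subscribe(Pattern.compile(topicRegex), new ConsumerRebalanceListener() {
                @Override
                public void onPartitionsRevoked(Collection<TopicPartition> partitions) { }

                @Override
                public void onPartitionsAssigned(Collection<TopicPartition> partitions) { }
            });
        }
    }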
Currently, the NiFi Kafka consumer processors have the following issue.
While downstream connections are full, ConsumeKafka is not scheduled to run onTrigger.
It stops executing poll, which is what tells the Kafka server that this client is alive.
Thus, after a while in that situation, the Kafka server rebalances the group and drops the client.
When the downstream connections return to normal, ConsumeKafka is scheduled again,
but the client is no longer part of a consumer group.
If this happens, the Kafka client still succeeds in polling messages when the ConsumeKafka processor resumes, but fails to commit offsets.
Received messages are already committed into the NiFi flow, but since the consumer offset is not updated, they will be consumed again, producing duplicates.
In order to address the above issue:
- For ConsumeKafka_0_10, use the latest client library.
The above issue has been addressed by KIP-62.
The latest Kafka consumer poll checks whether the client instance is still valid, and rejoins the group if not, before consuming messages.
- For ConsumeKafka (0.9), added manual retention logic using pause/resume (sketched below).
The 0.9 Kafka client does not have a background heartbeat thread, so a similar mechanism is added manually.
The Kafka pause/resume consumer API is used to tell the Kafka server that the client has stopped consuming messages but is still alive.
Another internal thread performs a paused poll periodically, based on the time elapsed since the last onTrigger (poll) was executed.
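A rough sketch of the retention idea (not the actual ConsumerLease/ConsumerPool code; the Collection-based pause/resume signatures shown here are from newer clients, while the 0.9 client uses varargs overloads):

    import java.util.Set;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    class KeepAliveSketch {
        // While downstream connections are full, keep the consumer in its group by
        // polling with every assigned partition paused, so no records are fetched.
        static void pausedPoll(KafkaConsumer<byte[], byte[]> consumer) {
            final Set<TopicPartition> assignment = consumer.assignment();
            consumer.pause(assignment);   // stop fetching records
            consumer.poll(0);             // acts as a keep-alive toward the broker
            consumer.resume(assignment);  // ready to fetch again once NiFi schedules onTrigger
        }
    }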
This closes #1527.
Signed-off-by: Bryan Bende <bbende@apache.org>
- Refactoring NarDetails to include all info from MANIFEST
- Adding the concept of a Bundle and refactoring NarClassLoaders to pass Bundles to ExtensionManager
- Adding logic to fail start-up when multiple NARs with same coordinates exist, moving Bundle classes to framework API
- Refactoring bundle API to classes and creating BundleCoordinate (sketched after this change list)
- Updating FlowController to use BundleCoordinate
- Updating the UI and DTO model to support showing the details of the bundle that loaded an extension type.
- Adding bundle details for processor canvas node, processor dialogs, controller service dialogs, and reporting task dialogs.
- Updating the formatting of the bundle coordinates.
- Addressing text overflow in the configuration/details dialog.
- Fixing self referencing functions.
- Updating extension UI mapping to incorporate bundle coordinates.
- Discovering custom UIs through the supplied bundles.
- Adding verification methods for creating extensions through the rest api.
- Only returning extensions that are common amongst all nodes.
- Rendering the ghost processors using a dotted border.
- Adding bundle details to the flow.xml.
- Loading NiFi build and version details from the framework NAR.
- Removing properties for build and version details.
- Wiring together front end and back end changes.
- Including bundle coordinates in the component data model.
- Wiring together component data model and flow.xml.
- Addressing issue when resolving unversioned dependent NARs.
Updating unit tests to pass based on framework changes
- Fixing logging of extension types during start up
- Allowing the application to start if there is a compatible bundle found.
- Reporting a missing bundle when a compatible bundle is not found.
- Fixing table height in new component dialogs.
Fixing checkstyle error and increasing test timeout for TestStandardControllerServiceProvider
- Adding ability to change processor type at runtime
- Adding backend code to change type for controller services
- Cleaning up instance classloaders for temp components.
- Creating a dialog for changing the version of a component.
- Updating the formatting of the component type and bundle throughout.
- Updating the new component dialogs to support selecting source group.
- Cleaning up new component dialogs.
- Cleaning up documentation in the cluster node endpoint.
Adding missing include in nifi-web-ui pom compressor plugin
- Refactoring so ConfigurableComponent provides getLogger() and so the nodes provide the ConfigurableComponent
- Creating LoggableComponent to pass around the component, logger, and coordinate within the framework
- Finishing clean up following rebase.
Calling lifecycle methods for add and remove when changing versions of a component
- Introducing verifyCanUpdateBundle(coordinate) to ConfiguredComponent, and adding unit tests
- Ensuring documentation is available for all components. Including those of the same type that are loaded from different bundles.
Adding lookup from ClassLoader to Bundle, adding fix for instance class loading to include all parent NARs, and adding additional unit tests for FlowController
- Adding validation to ensure referenced controller services implement the required API
- Fixing template instantiation to look up compatible bundle
- Requiring services/reporting tasks to be disabled/stopped.
- Only supporting a change version option when the item has multiple versions available.
- Limiting the possible new controller services to the applicable API version.
- Showing the implemented API versions for Controller Services.
- Updating the property descriptor tooltip to indicate the required service requirements.
- Introducing version based sorting in the new component dialog, change version dialog, and new controller service dialog.
- Addressing remainder of the issues from recent rebase.
Ensuring bundles have been added to the flow before proposing a flow, and incorporating bundle information into flow fingerprinting
- Refactoring the way missing bundles work to retain the desired bundle if available
- Fixing logger.isDebugEnabled to be logger.isTraceEnabled
- Auditing when a user changes the bundle.
- Ensuring bundle details are present in templates.
Moving standard prioritizers to framework NAR and refactoring ExtensionManager logic to handle cases where an extension is in a JAR directly in the lib directory
- Ensuring all nodes attempt to instantiate the same template instance when the available bundles may differ.
- Fixing the auditing of copy/paste and template instantiation.
- Running additional verification methods when running standalone.
Refactoring controller service invocation handler to allow updating the node used by the invocation handler
- Ensuring the bundles in a proposed flow are compatible with the current instance when the current instance has no flow and is going to accept the proposed flow
- Merging whether multiple versions of the component are available
- Setting NAR plugin back to current released version
- Cleaning up DocGenerator to not process multiple times
Addressing incorrect usage of nf.Common.
- Using formatType in the new component type dialogs.
Improving error messages when looking for bundles
Addressing comments from PR.
- Fixing references to global nf namespace.
- Fixing injection of nfProcessGroupConfiguration in nfComponentVersion.
- Fixing web api integration tests.
Not rendering unversioned in help documentation.
- Ensuring the isExtensionMissing flag is correct after changing the component type.
Adding synchronization in node classes to ensure changing component can't occur when component is running, introducing MissingBundleException for better reporting when a node can't join cluster due to a missing bundle, and bumping NAR plugin to released version 1.2.0
Adding concept of missing components to fingerprinting to ensure nodes agree on missing components when joining a cluster
NIFI-3380: NIFI-3520:
- Fixing hive nar dependency.
- Marking DBCPService as provided.
- Skipping services that require instance classloading and are cobundled with their service API.
- Skipping components that require instance classloading and reference service APIs that are cobundled.
- Addressing UI issues in the new component dialogs when re-opening with a filter applied.
Fixing checkstyle issue and adding back assume checks to distributed cache server test
Ensuring new component types are sorted correctly when shown initially.
This closes #1585.
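A minimal sketch of the coordinate concept referenced in the list above (an illustration, not the real org.apache.nifi.bundle.BundleCoordinate class): a NAR is identified by group, id and version, and two NARs with the same coordinate fail start-up:

    import java.util.Objects;

    final class CoordinateSketch {
        final String group;
        final String id;
        final String version;

        CoordinateSketch(String group, String id, String version) {
            this.group = group;
            this.id = id;
            this.version = version;
        }

        // e.g. "org.apache.nifi:nifi-kafka-0-10-nar:1.2.0"
        String getCoordinate() {
            return group + ":" + id + ":" + version;
        }

        // Equality by coordinate is what lets the framework detect duplicate NARs
        // and refuse to start, as described above.
        @Override
        public boolean equals(Object o) {
            if (!(o instanceof CoordinateSketch)) {
                return false;
            }
            final CoordinateSketch other = (CoordinateSketch) o;
            return group.equals(other.group) && id.equals(other.id) && version.equals(other.version);
        }

        @Override
        public int hashCode() {
            return Objects.hash(group, id, version);
        }
    }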
- Marked the PutKafka Partition Strategy property as deprecated; since the Kafka 0.8 client doesn't use 'partitioner.class' as a producer property, we don't have to specify it.
- Changed the Partition Strategy property from a required property to a dynamic property, so that existing processor configurations stay valid.
- Fixed the partition property so that it is actually applied when publishing.
- Route a FlowFile to failure if it could not be published due to an invalid partition (see the sketch below).
This closes #1425.
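The partition handling above boils down to the Kafka producer call below (a sketch; PutKafka's property evaluation and error routing are omitted):

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    class ExplicitPartitionSketch {
        // Sending to an explicit partition; an invalid partition number makes the send
        // fail, which is the case the processor now routes to failure.
        static void send(KafkaProducer<byte[], byte[]> producer, String topic,
                         int partition, byte[] key, byte[] value) {
            producer.send(new ProducerRecord<>(topic, partition, key, value));
        }
    }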