Added explicit reference to Sun Java Cryptographic Service Provider in PasswordBasedEncryptor.
Removed manual seeding of SecureRandom in PasswordBasedEncryptor.
This closes #138.
Signed-off-by: Aldrin Piri <aldrin@apache.org>
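A minimal sketch of the resulting pattern — explicitly requesting the Sun provider and letting SecureRandom self-seed rather than seeding it manually (the algorithm name here is illustrative; see PasswordBasedEncryptor for the actual code):

```java
import java.security.SecureRandom;

public class SecureRandomExample {
    public static void main(String[] args) throws Exception {
        // Explicitly reference the Sun Java Cryptographic Service Provider.
        final SecureRandom random = SecureRandom.getInstance("SHA1PRNG", "SUN");
        // No setSeed(...) call: the first nextBytes(...) self-seeds from the
        // provider's entropy source, which is safer than a manual seed.
        final byte[] salt = new byte[16];
        random.nextBytes(salt);
    }
}
```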
- Ensuring anonymous user label and login links are rendered when appropriate.
- Ensuring responses are accurate when making requests with a token when user login is not supported.
- Update admin guide with documentation for username/password authentication.
- Setting default anonymous roles to none.
- Making account status messages to users clearer.
- Deleting user keys when an admin revokes/deletes an account.
- Updating authentication filter to error back whenever authentication fails.
Because the current component uses artificial names for properties set via the UI and then maps those properties to the actual names used by Kafka, we cannot rely on the NiFi UI to display an error if a user attempts to set a dynamic property that will eventually map to the same Kafka property. So I’ve decided that any dynamic property will simply override an existing property, with a WARNING message displayed. This is actually consistent with how Kafka handles such overrides and displays them in its console. Updated the relevant annotation description.
It is also worth mentioning that the current code was using an old property from Kafka 0.7 (“zk.connectiontimeout.ms”) which is no longer present in Kafka 0.8 (WARN Timer-Driven Process Thread-7 utils.VerifiableProperties:83 - Property zk.connectiontimeout.ms is not valid). The add/override strategy provides more flexibility when dealing with Kafka’s volatile configuration until things settle down and we can get some sensible defaults in place.
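A rough sketch of that add/override behavior (buildStaticKafkaProperties is a hypothetical helper, not the actual PutKafka code):

```java
// Dynamic (user-added) properties override statically mapped Kafka
// properties, logging a WARN instead of failing validation.
final Properties kafkaProps = buildStaticKafkaProperties(context); // hypothetical helper
for (final PropertyDescriptor descriptor : context.getProperties().keySet()) {
    if (descriptor.isDynamic()) {
        final String name = descriptor.getName();
        final String value = context.getProperty(descriptor).getValue();
        if (kafkaProps.containsKey(name)) {
            getLogger().warn("Overriding Kafka property '" + name + "' with dynamically set value '" + value + "'");
        }
        kafkaProps.setProperty(name, value);
    }
}
```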
While doing so, I addressed the following issues that were discovered during modification and testing:
ISSUE: When GetKafka started and there were no messages in the Kafka topic, the onTrigger(..) method would block because Kafka’s ConsumerIterator.hasNext() blocks. When an attempt was made to stop it, GetKafka would stop successfully due to the interrupt; however, in the UI it would appear as an ERROR because the InterruptedException was not handled.
RESOLUTION: After discussing it with @markap14, the general desire is to let the task exit as quickly as possible; the whole thread-maintenance logic was there initially because there was no way to tell the Kafka consumer to return immediately if there are no events. In this patch we now use Kafka’s ‘consumer.timeout.ms’ property and set its value to 1 millisecond (the default is -1 - always block indefinitely). This ensures that tasks that attempt to read an empty topic exit immediately, only to be rescheduled by NiFi based on the user’s configuration.
ISSUE: GetKafka would not release a FlowFile with events if it did not have enough messages to complete the batch, since it would block waiting for more messages (per the blocking issue described above).
RESOLUTION: The invocation of hasNext() now results in Kafka’s ConsumerTimeoutException, which is handled in the catch block where the FlowFile with the partial batch is released to success. I am not sure we need a WARN message here; in fact, in my opinion we should not add one, as it may create unnecessary confusion.
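A condensed sketch of both resolutions above, against the Kafka 0.8 high-level consumer API (simplified relative to the actual GetKafka changes):

```java
// Configure the consumer to return promptly instead of blocking forever.
final Properties props = new Properties();
props.setProperty("consumer.timeout.ms", "1"); // default -1 blocks indefinitely

// ... later, in the consuming loop:
try {
    while (iterator.hasNext()) {               // kafka.consumer.ConsumerIterator
        messages.add(iterator.next().message());
        if (messages.size() >= batchSize) {
            break;
        }
    }
} catch (final ConsumerTimeoutException e) {
    // No message arrived within the timeout: fall through and release the
    // partial batch to 'success' rather than blocking the task.
}
```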
ISSUE: When configuring a consumer for a topic and specifying multiple concurrent consumers in ‘topicCountMap’ based on 'context.getMaxConcurrentTasks()’, each consumer binds to a topic partition. If you have fewer partitions than the value returned by 'context.getMaxConcurrentTasks()’, you essentially allocate Kafka resources that never get a chance to receive a single message (see more here https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example).
RESOLUTION: Logic was added to determine the number of partitions for a topic; in the event that the 'context.getMaxConcurrentTasks()’ value is greater than the number of partitions, the partition count is used when creating ‘topicCountMap’ and a WARNING message is displayed (see code). Unfortunately we can’t do anything about the extra tasks themselves, but in the current state of the code they will exit immediately, only to be rescheduled, and the process will repeat. NOTE: This is not ideal, as tasks that will never have a chance to do anything keep being rescheduled, but at least it can be addressed on the user side after reading the warning message.
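The partition-capping logic looks roughly like the following sketch (getPartitionCountForTopic is a hypothetical helper standing in for the actual partition lookup):

```java
// Cap the number of consumer streams at the topic's partition count so we
// do not allocate Kafka streams that can never receive a message.
final int partitionCount = getPartitionCountForTopic(topic); // hypothetical helper
int streamCount = context.getMaxConcurrentTasks();
if (streamCount > partitionCount) {
    getLogger().warn("Topic '" + topic + "' has only " + partitionCount
            + " partitions; reducing consumer streams from " + streamCount);
    streamCount = partitionCount;
}
final Map<String, Integer> topicCountMap = new HashMap<>();
topicCountMap.put(topic, streamCount);
```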
NIFI-1192 added dynamic properties support for PutKafka
NIFI-1192 polishing
NIFI-1192 polished and addressed PR comments
- Refactoring web security to use Spring Security Java Configuration.
- Introducing security in Web UI in order to get JWT.
NIFI-655:
- Setting up the resources (js/css) for the login page.
NIFI-655:
- Adding support for configuring anonymous roles.
- Addressing checkstyle violations.
NIFI-655:
- Moving to token api to web-api.
- Creating a LoginProvider API for user/pass based authentication.
- Creating a module for funneling access to the authorized users.
NIFI-655:
- Moving away from usage of DN to identity throughout the application (from the user db to the authorization provider).
- Updating the authorized users schema to support login users.
- Creating an extension point for authentication of users based on username/password.
NIFI-655:
- Creating an endpoint for returning the identity of the current user.
- Updating the LoginAuthenticationFilter.
NIFI-655:
- Moving NiFi registration to the login page.
- Running the authentication filters in a different order to ensure we can disambiguate each case.
- Starting to lay out each case... Forbidden, Login, Create User, Create NiFi Account.
NIFI-655:
- Addressing checkstyle issues.
NIFI-655:
- Making nf-storage available in the login page.
- Requiring use of local storage.
- Ignoring security for GET requests when obtaining the login configuration.
NIFI-655:
- Adding a new endpoint to obtain the status of a user registration.
- Updated the login page loading to ensure all possible states work.
NIFI-655:
- Ensuring we know the necessary state before we attempt to render the login page.
- Building the proxy chain in the JWT authentication filter.
- Only rendering the login when appropriate.
NIFI-655:
- Starting to style the login page.
- Added simple 'login' support by identifying username/password. Issuing JWT token coming...
- Added logout support
- Rendering the username when appropriate.
NIFI-655:
- Extracting certificate validation into a utility class.
- Fixing checkstyle issues.
- Cleaning up the web security context.
- Removing proxy chain checking where possible.
NIFI-655:
- Starting to add support for registration.
- Creating registration form.
NIFI-655:
- Starting to implement the JWT service.
- Parsing JWT on client side in order to render who the user currently is when logged in.
NIFI-655:
- Allowing the user to link back to the log in page from the new account page.
- Renaming DN to identity where possible.
NIFI-655:
- Fixing checkstyle issues.
NIFI-655:
- Adding more/better support for logging out.
NIFI-655:
- Fixing checkstyle issues.
NIFI-655:
- Adding a few new exceptions for the login identity provider.
NIFI-655:
- Disabling log in by default initially.
- Restoring authorization service unit test.
NIFI-655:
- Fixing checkstyle issues.
NIFI-655:
- Updating packages for log in filters.
- Handling new registration exceptions.
- Code clean up.
NIFI-655:
- Removing registration support.
- Removing file based implementation.
NIFI-655:
- Removing file based implementation.
NIFI-655:
- Removing unused spring configuration files.
NIFI-655:
- Making the auto wiring more explicit.
NIFI-655:
- Removing unused dependencies.
NIFI-655:
- Removing unused filter.
NIFI-655:
- Updating the login API authenticate method to use a richer set of exceptions.
- UI code clean up.
NIFI-655:
- Ensuring the login identity provider is able to switch context classloaders via the standard NAR mechanisms.
NIFI-655:
- Initial commit of the LDAP based identity providers.
- Fixed issue when attempting to log into a NiFi that does not support new account requests.
NIFI-655:
- Allowing the ldap provider to specify if client authentication is required/desired.
NIFI-655:
- Persisting keys to sign user tokens.
- Allowing the identity provider to specify the token expiration.
- Code clean up.
NIFI-655:
- Ensuring identities are unique in the key table.
NIFI-655:
- Adding support for specifying the user search base and user search filter in the active directory provider.
NIFI-655:
- Fixing checkstyle issues.
NIFI-655:
- Adding automatic client side token renewal.
NIFI-655:
- Ensuring the logout link is rendered when appropriate.
NIFI-655:
- Adding configuration options for referrals and connect/read timeouts
NIFI-655:
- Added an endpoint for access details including configuration, creating tokens, and checking status.
- Updated DTOs and client side to utilize new endpoints.
NIFI-655:
- Refactoring certificate extraction and validation.
- Refactoring how expiration is specified in the login identity providers.
- Adding unit tests for the access endpoints.
- Code clean up.
NIFI-655:
- Keeping token expiration between 1 minute and 12 hours.
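In effect, a clamp like this sketch (not the exact code):

```java
import java.util.concurrent.TimeUnit;

// Keep the requested token expiration within [1 minute, 12 hours].
final long min = TimeUnit.MINUTES.toMillis(1);
final long max = TimeUnit.HOURS.toMillis(12);
final long expiration = Math.max(min, Math.min(max, requestedExpiration));
```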
NIFI-655:
- Using the user identity provided by the login identity provider.
NIFI-655: - Fixed typo in error message for unrecognized authentication strategy.
Signed-off-by: Matt Gilman <matt.c.gilman@gmail.com>
NIFI-655:
- Added logback-test.xml configuration resource for nifi-web-security.
Signed-off-by: Matt Gilman <matt.c.gilman@gmail.com>
NIFI-655:
- Added issuer field to LoginAuthenticationToken.
- Updated AccessResource to pass the identity provider class name when creating LoginAuthenticationTokens.
- Began refactoring JWT logic out of the request parsing logic in JwtService.
- Added unit tests for JWT logic.
Signed-off-by: Matt Gilman <matt.c.gilman@gmail.com>
NIFI-655:
- Changed issuer field to use the FQ class name because some classes return an empty string for getSimpleName().
- Finished refactoring JWT logic out of the request parsing logic in JwtService.
- Updated AccessResource and JwtAuthenticationFilter to call the new JwtService methods decoupled from request header parsing.
- Added extensive unit tests for JWT logic.
Signed-off-by: Matt Gilman <matt.c.gilman@gmail.com>
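For context, a minimal jjwt-based sketch of issuing such a signed token — the method shape and parameter names are illustrative assumptions, not the actual JwtService API:

```java
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;
import java.util.Calendar;

public String generateSignedToken(final String identity, final String issuerClassName,
                                  final byte[] signingKey, final int expirationMillis) {
    final Calendar expiration = Calendar.getInstance();
    expiration.add(Calendar.MILLISECOND, expirationMillis);
    return Jwts.builder()
            .setSubject(identity)                           // who the token represents
            .setIssuer(issuerClassName)                     // FQ class name of the identity provider
            .setExpiration(expiration.getTime())
            .signWith(SignatureAlgorithm.HS256, signingKey) // key material from the key service
            .compact();
}
```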
NIFI-655:
- Refactoring key service to expose the key id.
- Handling client side expiration better.
- Removing specialized active directory provider and abstract ldap provider.
NIFI-655:
- Updated JwtService and JwtServiceTest to use a Key POJO instead of the raw String key from KeyService.
Signed-off-by: Matt Gilman <matt.c.gilman@gmail.com>
NIFI-655:
- Fixing typo when loading the ldap connect timeout.
- Providing a better experience for session expiration.
- Using ellipsis for lengthy user names.
- Adding an issuer to the authentication response so the LIP can specify the appropriate value.
NIFI-655:
- Showing a logging in notification during the log in process.
NIFI-655:
- Removing unnecessary class.
NIFI-655:
- Fixing checkstyle issues.
- Showing the progress spinner while submitting account justification.
NIFI-655:
- Removing deprecated authentication strategy.
- Renaming TLS to START_TLS.
- Allowing the protocol to be configured.
NIFI-655:
- Fixing issue detecting the presence of the DN column
NIFI-655:
- Pre-populating the login-identity-providers.xml file with necessary properties and documentation.
- Renaming the Authentication Duration property name.
NIFI-655:
- Updating documentation for the failure response codes.
NIFI-655:
- Ensuring the user identity is not too long.
NIFI-655:
- Updating default authentication expiration to 12 hours.
NIFI-655:
- Remaining on the login form when there is any unsuccessful login attempt.
- Fixing checkstyle issues.
NIFI-980 Add support for HTTP Digest authentication to InvokeHttp
NIFI-1080 Provide additional InvokeHttp unit tests
NIFI-1133 InvokeHTTP Processor does not save Location header for 3xx responses
NIFI-1009 InvokeHTTP should be able to be scheduled without any incoming connection for GET operations
NIFI-61 Multiple improvements for InvokeHTTP inclusive of providing unique tx.id across clusters, dynamic HTTP header properties
Signed-off-by: Aldrin Piri <aldrin@apache.org>
connections are scheduled to run, even if the connections have no FlowFiles;
expose these details to processor developers; update documentation
Signed-off-by: Aldrin Piri <aldrin@apache.org>
- made DocsReader package private
- polished logic in read(..) method to avoid escaping the loop
- added call to sorting logic in LuceneUtil.groupDocsByStorageFileName(..) to ensure that previous behavior and assumptions in the read(..) method are preserved
- other minor polishing
- Ensured that failures derived from correlating a Document to its actual provenance event do not fail the entire query, but instead produce partial results with warning messages
- Refactored DocsReader.read() operation.
- Added tests to validate the two conditions where such failures could occur
Reviewed by Bryan Bende (bbende@apache.org).
Committed with amendments (for whitespace, a couple misspellings and to change a test method name based on review) by Tony Kurc (tkurc@apache.org)
Fixed the order of service state check in PropertyDescriptor
Encapsulated the check into private method for readability
Modified and documented test to validate correct behavior.
For more details please see comment in https://issues.apache.org/jira/browse/NIFI-1143
The current implementation of DBCPConnectionPool attempted to test whether a connection could be obtained via dataSource.getConnection().
Such a call is naturally blocking, and the duration of the block depends on the driver implementation. Some drivers (e.g., Phoenix - https://phoenix.apache.org/installation.html)
attempt numerous retries before failing, creating a deadlock when an attempt was made to disable a DBCPConnectionPool that was still being enabled.
This fix removes the connection test from the DBCPConnectionPool.onConfigured() operation, which now returns successfully upon creation of the DataSource.
For more details see comments in https://issues.apache.org/jira/browse/NIFI-1061
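The shape of the fix, as a commons-dbcp sketch (variable names illustrative; see the JIRA for the actual change):

```java
import org.apache.commons.dbcp.BasicDataSource;

// onConfigured() now only creates the DataSource; it no longer calls
// dataSource.getConnection() to "test" connectivity, since that call can
// block for driver-dependent lengths of time (e.g., Phoenix's retry loop)
// and deadlock a disable request racing an in-progress enable.
final BasicDataSource dataSource = new BasicDataSource();
dataSource.setDriverClassName(driverClassName);
dataSource.setUrl(connectionUrl);
dataSource.setUsername(username);
dataSource.setPassword(password);
dataSource.setMaxWait(maxWaitMillis); // bounds how long a later getConnection() may block
// The first real getConnection() happens when a processor actually requests one.
```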
- Refactoring PutHBaseCell to batch Puts by table
- Adding optional Columns property to GetHBase to return only selected column families or columns
- Making GetHBase cluster friendly by storing state in the distributed cache and a local file
- Adding Initial Time Range property to GetHBase
- Adding Filter Expression property and custom validate to prevent using columns and a filter at the same time
- Creating an HBaseClientService controller service to isolate the HBase client and support multiple versions
- Creating appropriate LICENSE/NOTICE files
- Adding @InputRequirement to processors
- Addressing comments from review, moving hbase client services under standard services
- Making sure the result of session.penalize() is assigned to the FlowFile variable before transferring
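Batching Puts by table, roughly (older HBase client API; FlowFileMutation is a hypothetical holder type, not the actual PutHBaseCell code):

```java
// Group incoming mutations by table so each table receives one batched call.
final Map<String, List<Put>> putsByTable = new HashMap<>();
for (final FlowFileMutation mutation : mutations) {       // hypothetical holder type
    List<Put> puts = putsByTable.get(mutation.getTableName());
    if (puts == null) {
        puts = new ArrayList<>();
        putsByTable.put(mutation.getTableName(), puts);
    }
    puts.add(mutation.toPut());
}
for (final Map.Entry<String, List<Put>> entry : putsByTable.entrySet()) {
    final HTableInterface table = connection.getTable(entry.getKey());
    try {
        table.put(entry.getValue());                       // single batched call per table
    } finally {
        table.close();
    }
}
```

The penalize fix is the classic immutable-FlowFile pitfall; the corrected pattern:

```java
// Wrong: session.penalize() returns a new FlowFile revision that was discarded.
// session.penalize(flowFile);
// session.transfer(flowFile, REL_FAILURE);

// Right: keep the returned revision before transferring.
flowFile = session.penalize(flowFile);
session.transfer(flowFile, REL_FAILURE);
```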
- corrected a missed 'final' on org.apache.nifi.processors.aws.AbstractAWSProcessor.relationships
- added protected method org.apache.nifi.processors.aws.AbstractAWSProcessor.getRegion()
- added protected method org.apache.nifi.processors.aws.s3.AbstractS3Processor.getUrlForObject(String, String)
- explicitly set AWS client protocol to HTTPS, and created a static final field with comments if other protocols may be considered
- added a static final field for the UserAgent
Reviewed by Aldrin Piri <aldrin@apache.org>
- Refactored tests - created AbstractS3Test for common utility methods
- Corrected incorrect unit test in TestDeleteS3Object, and adjusted processor documentation to reflect behavior
- moved aws dependency management to root pom
This closes #107
Tested, Reviewed and Amended by Tony Kurc (<tkurc@apache.org>)
- Adding syslog.port to ListenSyslog attributes, logging at warn level when rejecting tcp connections
- Adding @InputRequirement to processors and adding appropriate send and receive provenance events
- Refactoring connection handling on the put side, removing the number of buffers from properties and basing it on the processor's concurrent tasks.
- Refactoring some of the TCP handling so it keeps reading from a connection until the client closes it
- Adding an error queue
- Adding a sender field on the syslog event to record the system that sent the message
Content of FlowFile. Added Include Core Attributes property to control
whether FlowFile core attributes are included in the JSON output.
Added a Null Value for Empty String flag to control whether empty values in
the JSON are empty strings or true NULL values. Added more tests and
minor text refactoring per GitHub comments
Signed-off-by: Bryan Bende <bbende@apache.org>
either all of the existing attributes or a user-defined list. The
existing Attributes are converted to JSON and placed in a new Attribute
on the existing FlowFile as Attribute “JSONAttributes”
Signed-off-by: Bryan Bende <bbende@apache.org>
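A sketch of the conversion with Jackson (the attribute name comes from the message above; selectAttributes is a hypothetical helper):

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.Map;

// Serialize the selected FlowFile attributes and store the result in a new
// attribute named "JSONAttributes" on the existing FlowFile.
final ObjectMapper mapper = new ObjectMapper();
final Map<String, String> selected = selectAttributes(flowFile, userDefinedList); // hypothetical helper
final String json = mapper.writeValueAsString(selected);
flowFile = session.putAttribute(flowFile, "JSONAttributes", json);
```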
The exception was caused by basic file permissions. This fix overrides
'visitFileFailed' method of SimpleFileVisitor to log WARN message and allow
FileSystemRepository to continue.
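The override in question, roughly (logger name illustrative):

```java
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;

// Sketch: log unreadable files at WARN and keep walking the repository
// instead of letting the whole walk abort with an exception.
Files.walkFileTree(repositoryRoot, new SimpleFileVisitor<Path>() {
    @Override
    public FileVisitResult visitFileFailed(final Path file, final IOException e) {
        logger.warn("Unable to visit {}; continuing. Cause: {}", file, e.toString());
        return FileVisitResult.CONTINUE;
    }
});
```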
- Fixing empty java docs and adding sort by id asc to the history query
- Changing userDn to userIdentity in Action and FlowChangeAction
- Modifying NiFiAuditor to always save events locally, and implementing getFlowChanges for ClusteredEventAccess
- Added SSL context to JMS producer and consumer processors
- Tony Kurc amended the patch to check whether SSL is needed based on the scheme, and for exception consistency
Reviewed by Tony Kurc (tkurc@apache.org)
- attempt a relogin based on an interval specified in the processor configuration
- use hadoop's UserGroupInformation.checkTGTAndReloginFromKeytab to determine if a relogin is necessary based on the ticket and do so if needed
- improve code readability with HdfsResources object in AbstractHadoopProcessor
Reviewed and Amended by Tony Kurc (tkurc@apache.org). This closes #97
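The relogin check, roughly (interval fields are illustrative; Hadoop's UserGroupInformation does the actual ticket-lifetime math):

```java
import org.apache.hadoop.security.UserGroupInformation;

// Sketch: before talking to HDFS, attempt a relogin once the configured
// interval has elapsed. checkTGTAndReloginFromKeytab() is effectively a
// no-op while the TGT is still fresh, so the call is cheap.
final long now = System.currentTimeMillis();
if (now - lastReloginAttempt > reloginIntervalMillis) {  // illustrative fields
    final UserGroupInformation ugi = UserGroupInformation.getLoginUser();
    ugi.checkTGTAndReloginFromKeytab();
    lastReloginAttempt = now;
}
```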
writing every Object to a new line, or as an array of Objects.
Let's assume you have Avro content as a stream of records (record1, record2, ...). If the JSON container is "none", the converter will expose the records as a sequence of
single JSON objects:
record1
record2
...
recordN
Please bear in mind that the final output is not valid JSON content. You can then forward this content, e.g., to Kafka, where every record will be a single Kafka message.
If the JSON container is "array", the output looks like this:
[record1,record2,...,recordN]
This is useful when you want to convert your Avro content to a valid JSON array.
This closes #88
Reviewed and Amended (amendments reviewed by original patch author on github) by Tony Kurc (tkurc@apache.org)
- Starting to add support for deleting flow files from a queue by creating endpoints and starting to wire everything together.
- Adding context menu item for initiating the request to drop flow files.
NIFI-828:
- Always selecting the first item in the new component table.
- Enabling adding the selected component by typing Enter.
- Removing the 'filter by' in the new component dialogs and instead just searching every field.
- Penalize or Yield based on ErrorHandlingStrategy.Penalty
- Add Retry relationship to PutCouchbaseKey
- Remove unnecessary try/catch and let the framework handle it
- Change CouchbaseException relation mapping for Fatal from Failure to Retry
Signed-off-by: Bryan Bende <bbende@apache.org>
The LogAttribute processor evaluates the log prefix EL using the current flow file.
The log prefix helps distinguish the log output of multiple LogAttribute processors and identify the right processor. The log prefix appears in the first and the last log line, followed by the original 50 dashes. If you configure the log prefix 'STEP 1: ', the log output looks like this:
STEP 1 : --------------------------------------------------
Standard FlowFile Attributes
Key: 'entryDate'
Value: 'Tue Sep 22 15:13:02 CEST 2015'
Key: 'lineageStartDate'
Value: 'Tue Sep 22 15:13:02 CEST 2015'
Key: 'fileSize'
Value: '9'
FlowFile Attribute Map Content
Key: 'customAttribute'
Value: 'custom value'
STEP 1 : --------------------------------------------------
flow file content...
Signed-off-by: Aldrin Piri <aldrin@apache.org>
- Adding documentation about bare record use, renaming Split Size to Output Size, and adding a test case with 0 records
- Removing validators on properties that have allowable values, using positive integer validator for Output Size, and fixing type on processor description
- Adding optional ability to extract record count
- Renaming record.count to item.count for clarity, and updating documentation
- Adding a test case with 0 records
- Removing validators from properties that use allowable values
- Ensuring that nodes are not disconnected when the user attempts to reset a counter that does not exist on that node. This can happen when/if counters are adjusted conditionally.
- Updating the default value for Regex so it matches once ((?s:^.*$)) instead of twice (.*). Matching on .* produces one match for the entire input and then a second, zero-length match at the end.
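The double match is easy to reproduce:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexMatchCount {
    public static void main(String[] args) {
        final Matcher greedy = Pattern.compile(".*").matcher("abc");
        while (greedy.find()) {
            System.out.println("match: '" + greedy.group() + "'");
        }
        // Prints two matches: 'abc' and then a zero-length '' at the end.
        // (?s:^.*$) anchors the expression so the input matches exactly once.
    }
}
```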
- Three main types of compression options:
NONE : no compression
AUTOMATIC : infers codec by extension
SPECIFIED : specified codec (e.g. snappy, gzip, bzip, or lz4)
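For the AUTOMATIC option, the idea is a lookup keyed on file extension; the mapping below is an illustrative sketch, not necessarily the processor's actual table:

```java
// Sketch of AUTOMATIC codec inference by extension.
static String inferCodec(final String filename) {
    if (filename.endsWith(".gz"))     return "gzip";
    if (filename.endsWith(".bz2"))    return "bzip2";
    if (filename.endsWith(".snappy")) return "snappy";
    if (filename.endsWith(".lz4"))    return "lz4";
    return "none"; // fall back to no compression
}
```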
- Unified the way ExecuteStreamCommand and ExecuteProcess handle arguments
- Argument delimiters can now be specified; the defaults remain what they were before (; and space)
- Add krb5.conf to nifi.properties
nifi.kerberos.krb5.file | path to krb5.conf
- Connections to secure Hadoop clusters will be determined by their config,
that is, hadoop.security.authentication should be set to kerberos.
- Added two optional arguments to AbstractHadoopProcessor (principal and keytab);
these are only required if the cluster you're connecting to is secured. Both of
these options require the krb5.conf to be present in nifi.properties.
Signed-off-by: Bryan Bende <bbende@apache.org>
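Putting those pieces together, roughly (variable names are illustrative):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

// Sketch: point the JVM at the krb5.conf from nifi.properties, then log in
// with the optional principal/keytab when the cluster requires Kerberos.
System.setProperty("java.security.krb5.conf", krb5ConfPath); // nifi.kerberos.krb5.file
final Configuration config = new Configuration();
if ("kerberos".equalsIgnoreCase(config.get("hadoop.security.authentication"))) {
    UserGroupInformation.setConfiguration(config);
    UserGroupInformation.loginUserFromKeytab(principal, keytabPath);
}
```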
Closes InputStreams created to read the public keys for PGP encryption and several other
streams involved in PGP encryption. This prevents NiFi from leaking file handles on
every validate call or encryption attempt in the EncryptContent processor.
Before this change, the host given out to clients to connect to a Remote
Process Group Input Port is the host where the NiFi instance runs.
However, sometimes the binding host is different from the host that
clients connect to. For example, when a NiFi instance runs inside a
Docker container, a client on a separate machine must connect to the
Docker host which forwards the connection to the container.
Add a configuration property to specify the host name to give out to
clients to connect to a Remote Process Group Input Port. If the property
is not configured, then give out the name of host running the NiFi
instance.
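Conceptually, the advertised host resolution becomes the following (the property key is an assumption for illustration):

```java
import java.net.InetAddress;

// Sketch: prefer the configured "given out" host; otherwise fall back to
// the host the NiFi instance is actually running on.
final String configured = properties.getProperty("nifi.remote.input.socket.host"); // assumed key
final String advertisedHost = (configured == null || configured.trim().isEmpty())
        ? InetAddress.getLocalHost().getHostName()
        : configured.trim();
```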
- Adding NOTICE for nifi-ambari-nar, and fixing formatting issues in nifi-ambari-reporting-task pom
- Addressing review feedback: updating assembly NOTICE, fixing unit test, and minor clean up
- Adding additionalDetails.html