NIFI-537 fixed identified licensing issue with several of the new nars

joewitt 2015-04-22 14:50:03 -04:00
parent 0d7838273c
commit 060a1e0d9c
23 changed files with 1256 additions and 958 deletions

View File

@@ -512,43 +512,30 @@ The following binary components are provided under the Apache Software License v
JOAuth
Copyright 2010-2013 Twitter, Inc
Licensed under the Apache License, Version 2.0: http://www.apache.org/licenses/LICENSE-2.0

(ASLv2) Hosebird Client
  The following NOTICE information applies:
    Hosebird Client (hbc)
    Copyright 2013 Twitter, Inc.
    Licensed under the Apache License, Version 2.0: http://www.apache.org/licenses/LICENSE-2.0

(ASLv2) GeoIP2 Java API
  The following NOTICE information applies:
    GeoIP2 Java API
    This software is Copyright (c) 2013 by MaxMind, Inc.
    This is free software, licensed under the Apache License, Version 2.0.

(ASLv2) Google HTTP Client Library for Java
  The following NOTICE information applies:
    Google HTTP Client Library for Java
    This is free software, licensed under the Apache License, Version 2.0.

(ASLv2) Amazon Web Services SDK
  The following NOTICE information applies:
    Copyright 2010-2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
    This product includes software developed by
    Amazon Technologies, Inc (http://www.amazon.com/).

    **********************
    THIRD PARTY COMPONENTS
    **********************
    This software includes third party software subject to the following copyrights:
    - XML parsing and utility functions from JetS3t - Copyright 2006-2009 James Murty.
    - JSON parsing and utility functions from JSON.org - Copyright 2002 JSON.org.
    - PKCS#1 PEM encoded private key parsing and utility functions from oauth.googlecode.com - Copyright 1998-2010 AOL Inc.

************************
Common Development and Distribution License 1.1

View File

@@ -1,484 +1,484 @@
<?xml version="1.0" encoding="UTF-8"?>
<!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor
    license agreements. See the NOTICE file distributed with this work for additional
    information regarding copyright ownership. The ASF licenses this file to
    You under the Apache License, Version 2.0 (the "License"); you may not use
    this file except in compliance with the License. You may obtain a copy of
    the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required
    by applicable law or agreed to in writing, software distributed under the
    License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS
    OF ANY KIND, either express or implied. See the License for the specific
    language governing permissions and limitations under the License. -->
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.apache.nifi</groupId>
        <artifactId>nifi</artifactId>
        <version>0.1.0-incubating-SNAPSHOT</version>
    </parent>
    <artifactId>nifi-assembly</artifactId>
    <packaging>pom</packaging>
    <description>This is the assembly Apache NiFi (incubating)</description>
    <build>
        <plugins>
            <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <configuration>
                    <finalName>nifi-${project.version}</finalName>
                    <attach>false</attach>
                </configuration>
                <executions>
                    <execution>
                        <id>make shared resource</id>
                        <goals>
                            <goal>single</goal>
                        </goals>
                        <phase>package</phase>
                        <configuration>
                            <descriptors>
                                <descriptor>src/main/assembly/dependencies.xml</descriptor>
                            </descriptors>
                            <tarLongFileMode>posix</tarLongFileMode>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
    <dependencies>
        <dependency>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-classic</artifactId>
            <scope>compile</scope>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>jcl-over-slf4j</artifactId>
            <scope>compile</scope>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>jul-to-slf4j</artifactId>
            <scope>compile</scope>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>log4j-over-slf4j</artifactId>
            <scope>compile</scope>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <scope>compile</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.nifi</groupId>
            <artifactId>nifi-api</artifactId>
        </dependency>
        <dependency>
            <groupId>org.apache.nifi</groupId>
            <artifactId>nifi-runtime</artifactId>
        </dependency>
        <dependency>
            <groupId>org.apache.nifi</groupId>
            <artifactId>nifi-bootstrap</artifactId>
        </dependency>
        <dependency>
            <groupId>org.apache.nifi</groupId>
            <artifactId>nifi-resources</artifactId>
            <classifier>resources</classifier>
            <scope>runtime</scope>
            <type>zip</type>
        </dependency>
        <dependency>
            <groupId>org.apache.nifi</groupId>
            <artifactId>nifi-docs</artifactId>
            <classifier>resources</classifier>
            <scope>runtime</scope>
            <type>zip</type>
        </dependency>
        <dependency>
            <groupId>org.apache.nifi</groupId>
            <artifactId>nifi-framework-nar</artifactId>
            <type>nar</type>
        </dependency>
        <dependency>
            <groupId>org.apache.nifi</groupId>
            <artifactId>nifi-provenance-repository-nar</artifactId>
            <type>nar</type>
        </dependency>
        <dependency>
            <groupId>org.apache.nifi</groupId>
            <artifactId>nifi-standard-services-api-nar</artifactId>
            <type>nar</type>
        </dependency>
        <dependency>
            <groupId>org.apache.nifi</groupId>
            <artifactId>nifi-ssl-context-service-nar</artifactId>
            <type>nar</type>
        </dependency>
        <dependency>
            <groupId>org.apache.nifi</groupId>
            <artifactId>nifi-distributed-cache-services-nar</artifactId>
            <type>nar</type>
        </dependency>
        <dependency>
            <groupId>org.apache.nifi</groupId>
            <artifactId>nifi-standard-nar</artifactId>
            <type>nar</type>
        </dependency>
        <dependency>
            <groupId>org.apache.nifi</groupId>
            <artifactId>nifi-jetty-bundle</artifactId>
            <type>nar</type>
        </dependency>
        <dependency>
            <groupId>org.apache.nifi</groupId>
            <artifactId>nifi-update-attribute-nar</artifactId>
            <type>nar</type>
        </dependency>
        <dependency>
            <groupId>org.apache.nifi</groupId>
            <artifactId>nifi-hadoop-libraries-nar</artifactId>
            <type>nar</type>
        </dependency>
        <dependency>
            <groupId>org.apache.nifi</groupId>
            <artifactId>nifi-hadoop-nar</artifactId>
            <type>nar</type>
        </dependency>
        <dependency>
            <groupId>org.apache.nifi</groupId>
            <artifactId>nifi-kafka-nar</artifactId>
            <type>nar</type>
        </dependency>
        <dependency>
            <groupId>org.apache.nifi</groupId>
            <artifactId>nifi-http-context-map-nar</artifactId>
            <type>nar</type>
        </dependency>
        <dependency>
            <groupId>org.apache.nifi</groupId>
            <artifactId>nifi-kite-nar</artifactId>
            <type>nar</type>
        </dependency>
        <dependency>
            <groupId>org.apache.nifi</groupId>
            <artifactId>nifi-social-media-nar</artifactId>
            <version>0.1.0-incubating-SNAPSHOT</version>
            <type>nar</type>
        </dependency>
        <dependency>
            <groupId>org.apache.nifi</groupId>
            <artifactId>nifi-hl7-nar</artifactId>
            <version>0.1.0-incubating-SNAPSHOT</version>
            <type>nar</type>
        </dependency>
        <dependency>
            <groupId>org.apache.nifi</groupId>
            <artifactId>nifi-language-translation-nar</artifactId>
            <version>0.1.0-incubating-SNAPSHOT</version>
            <type>nar</type>
        </dependency>
        <dependency>
            <groupId>org.apache.nifi</groupId>
            <artifactId>nifi-geo-nar</artifactId>
            <version>0.1.0-incubating-SNAPSHOT</version>
            <type>nar</type>
        </dependency>
    </dependencies>
    <properties>
        <!--Wrapper Properties -->
        <nifi.wrapper.jvm.heap.initial.mb>256</nifi.wrapper.jvm.heap.initial.mb>
        <nifi.wrapper.jvm.heap.max.mb>512</nifi.wrapper.jvm.heap.max.mb>
        <nifi.initial.permgen.size.mb>128</nifi.initial.permgen.size.mb>
        <nifi.max.permgen.size.mb>128</nifi.max.permgen.size.mb>
        <nifi.wrapper.logfile.maxsize>10m</nifi.wrapper.logfile.maxsize>
        <nifi.wrapper.logfile.maxfiles>10</nifi.wrapper.logfile.maxfiles>
        <!-- nifi.properties: core properties -->
        <nifi.version>${project.version}</nifi.version>
        <nifi.flowcontroller.autoResumeState>true</nifi.flowcontroller.autoResumeState>
        <nifi.flowcontroller.graceful.shutdown.period>10 sec</nifi.flowcontroller.graceful.shutdown.period>
        <nifi.flowservice.writedelay.interval>500 ms</nifi.flowservice.writedelay.interval>
        <nifi.administrative.yield.duration>30 sec</nifi.administrative.yield.duration>
        <nifi.bored.yield.duration>10 millis</nifi.bored.yield.duration>
        <nifi.flow.configuration.file>./conf/flow.xml.gz</nifi.flow.configuration.file>
        <nifi.flow.configuration.archive.dir>./conf/archive/</nifi.flow.configuration.archive.dir>
        <nifi.authority.provider.configuration.file>./conf/authority-providers.xml</nifi.authority.provider.configuration.file>
        <nifi.templates.directory>./conf/templates</nifi.templates.directory>
        <nifi.database.directory>./database_repository</nifi.database.directory>
        <nifi.flowfile.repository.implementation>org.apache.nifi.controller.repository.WriteAheadFlowFileRepository</nifi.flowfile.repository.implementation>
        <nifi.flowfile.repository.directory>./flowfile_repository</nifi.flowfile.repository.directory>
        <nifi.flowfile.repository.partitions>256</nifi.flowfile.repository.partitions>
        <nifi.flowfile.repository.checkpoint.interval>2 mins</nifi.flowfile.repository.checkpoint.interval>
        <nifi.flowfile.repository.always.sync>false</nifi.flowfile.repository.always.sync>
        <nifi.swap.manager.implementation>org.apache.nifi.controller.FileSystemSwapManager</nifi.swap.manager.implementation>
        <nifi.queue.swap.threshold>20000</nifi.queue.swap.threshold>
        <nifi.swap.in.period>5 sec</nifi.swap.in.period>
        <nifi.swap.in.threads>1</nifi.swap.in.threads>
        <nifi.swap.out.period>5 sec</nifi.swap.out.period>
        <nifi.swap.out.threads>4</nifi.swap.out.threads>
        <nifi.content.repository.implementation>org.apache.nifi.controller.repository.FileSystemRepository</nifi.content.repository.implementation>
        <nifi.content.claim.max.appendable.size>10 MB</nifi.content.claim.max.appendable.size>
        <nifi.content.claim.max.flow.files>100</nifi.content.claim.max.flow.files>
        <nifi.content.repository.directory.default>./content_repository</nifi.content.repository.directory.default>
        <nifi.content.repository.archive.max.retention.period />
        <nifi.content.repository.archive.max.usage.percentage />
        <nifi.content.repository.archive.enabled>false</nifi.content.repository.archive.enabled>
        <nifi.content.repository.always.sync>false</nifi.content.repository.always.sync>
        <nifi.content.viewer.url />
        <nifi.restore.directory />
        <nifi.ui.banner.text />
        <nifi.ui.autorefresh.interval>30 sec</nifi.ui.autorefresh.interval>
        <nifi.nar.library.directory>./lib</nifi.nar.library.directory>
        <nifi.nar.working.directory>./work/nar/</nifi.nar.working.directory>
        <nifi.documentation.working.directory>./work/docs/components</nifi.documentation.working.directory>
        <nifi.sensitive.props.algorithm>PBEWITHMD5AND256BITAES-CBC-OPENSSL</nifi.sensitive.props.algorithm>
        <nifi.sensitive.props.provider>BC</nifi.sensitive.props.provider>
        <nifi.h2.url.append>;LOCK_TIMEOUT=25000;WRITE_DELAY=0;AUTO_SERVER=FALSE</nifi.h2.url.append>
        <nifi.remote.input.socket.port>9990</nifi.remote.input.socket.port>
        <!-- persistent provenance repository properties -->
        <nifi.provenance.repository.implementation>org.apache.nifi.provenance.PersistentProvenanceRepository</nifi.provenance.repository.implementation>
        <nifi.provenance.repository.directory.default>./provenance_repository</nifi.provenance.repository.directory.default>
        <nifi.provenance.repository.max.storage.time>24 hours</nifi.provenance.repository.max.storage.time>
        <nifi.provenance.repository.max.storage.size>1 GB</nifi.provenance.repository.max.storage.size>
        <nifi.provenance.repository.rollover.time>5 mins</nifi.provenance.repository.rollover.time>
        <nifi.provenance.repository.rollover.size>100 MB</nifi.provenance.repository.rollover.size>
        <nifi.provenance.repository.query.threads>2</nifi.provenance.repository.query.threads>
        <nifi.provenance.repository.compress.on.rollover>true</nifi.provenance.repository.compress.on.rollover>
        <nifi.provenance.repository.indexed.fields>EventType, FlowFileUUID,
            Filename, ProcessorID</nifi.provenance.repository.indexed.fields>
        <nifi.provenance.repository.indexed.attributes />
        <nifi.provenance.repository.index.shard.size>500 MB</nifi.provenance.repository.index.shard.size>
        <nifi.provenance.repository.always.sync>false</nifi.provenance.repository.always.sync>
        <nifi.provenance.repository.journal.count>16</nifi.provenance.repository.journal.count>
        <!-- volatile provenance repository properties -->
        <nifi.provenance.repository.buffer.size>100000</nifi.provenance.repository.buffer.size>
        <!-- Component status repository properties -->
        <nifi.components.status.repository.implementation>org.apache.nifi.controller.status.history.VolatileComponentStatusRepository</nifi.components.status.repository.implementation>
        <nifi.components.status.repository.buffer.size>288</nifi.components.status.repository.buffer.size>
        <nifi.components.status.snapshot.frequency>5 mins</nifi.components.status.snapshot.frequency>
        <!-- nifi.properties: web properties -->
        <nifi.web.war.directory>./lib</nifi.web.war.directory>
        <nifi.web.http.host />
        <nifi.web.http.port>8080</nifi.web.http.port>
        <nifi.web.https.host />
        <nifi.web.https.port />
        <nifi.jetty.work.dir>./work/jetty</nifi.jetty.work.dir>
        <nifi.web.jetty.threads>200</nifi.web.jetty.threads>
        <!-- nifi.properties: security properties -->
        <nifi.security.keystore />
        <nifi.security.keystoreType />
        <nifi.security.keystorePasswd />
        <nifi.security.keyPasswd />
        <nifi.security.truststore />
        <nifi.security.truststoreType />
        <nifi.security.truststorePasswd />
        <nifi.security.needClientAuth />
        <nifi.security.authorizedUsers.file>./conf/authorized-users.xml</nifi.security.authorizedUsers.file>
        <nifi.security.user.credential.cache.duration>24 hours</nifi.security.user.credential.cache.duration>
        <nifi.security.user.authority.provider>file-provider</nifi.security.user.authority.provider>
        <nifi.security.x509.principal.extractor />
        <nifi.security.support.new.account.requests />
        <nifi.security.ocsp.responder.url />
        <nifi.security.ocsp.responder.certificate />
        <!-- nifi.properties: cluster common properties (cluster manager and nodes
            must have same values) -->
        <nifi.cluster.protocol.heartbeat.interval>5 sec</nifi.cluster.protocol.heartbeat.interval>
        <nifi.cluster.protocol.is.secure>false</nifi.cluster.protocol.is.secure>
        <nifi.cluster.protocol.socket.timeout>30 sec</nifi.cluster.protocol.socket.timeout>
        <nifi.cluster.protocol.connection.handshake.timeout>45 sec</nifi.cluster.protocol.connection.handshake.timeout>
        <nifi.cluster.protocol.use.multicast>false</nifi.cluster.protocol.use.multicast>
        <nifi.cluster.protocol.multicast.address />
        <nifi.cluster.protocol.multicast.port />
        <nifi.cluster.protocol.multicast.service.broadcast.delay>500 ms</nifi.cluster.protocol.multicast.service.broadcast.delay>
        <nifi.cluster.protocol.multicast.service.locator.attempts>3</nifi.cluster.protocol.multicast.service.locator.attempts>
        <nifi.cluster.protocol.multicast.service.locator.attempts.delay>1 sec</nifi.cluster.protocol.multicast.service.locator.attempts.delay>
        <!-- nifi.properties: cluster node properties (only configure for cluster
            nodes) -->
        <nifi.cluster.is.node>false</nifi.cluster.is.node>
        <nifi.cluster.node.address />
        <nifi.cluster.node.protocol.port />
        <nifi.cluster.node.protocol.threads>2</nifi.cluster.node.protocol.threads>
        <nifi.cluster.node.unicast.manager.address />
        <nifi.cluster.node.unicast.manager.protocol.port />
        <!-- nifi.properties: cluster manager properties (only configure for cluster
            manager) -->
        <nifi.cluster.is.manager>false</nifi.cluster.is.manager>
        <nifi.cluster.manager.address />
        <nifi.cluster.manager.protocol.port />
        <nifi.cluster.manager.node.firewall.file />
        <nifi.cluster.manager.node.event.history.size>10</nifi.cluster.manager.node.event.history.size>
        <nifi.cluster.manager.node.api.connection.timeout>30 sec</nifi.cluster.manager.node.api.connection.timeout>
        <nifi.cluster.manager.node.api.read.timeout>30 sec</nifi.cluster.manager.node.api.read.timeout>
        <nifi.cluster.manager.node.api.request.threads>10</nifi.cluster.manager.node.api.request.threads>
        <nifi.cluster.manager.flow.retrieval.delay>5 sec</nifi.cluster.manager.flow.retrieval.delay>
        <nifi.cluster.manager.protocol.threads>10</nifi.cluster.manager.protocol.threads>
        <nifi.cluster.manager.safemode.duration>0 sec</nifi.cluster.manager.safemode.duration>
    </properties>
    <profiles>
        <profile>
            <id>rpm</id>
            <activation>
                <activeByDefault>false</activeByDefault>
            </activation>
            <build>
                <plugins>
                    <plugin>
                        <artifactId>maven-dependency-plugin</artifactId>
                        <executions>
                            <execution>
                                <id>unpack-shared-resources</id>
                                <goals>
                                    <goal>unpack-dependencies</goal>
                                </goals>
                                <phase>generate-resources</phase>
                                <configuration>
                                    <outputDirectory>${project.build.directory}/generated-resources</outputDirectory>
                                    <includeArtifactIds>nifi-resources</includeArtifactIds>
                                    <includeGroupIds>org.apache.nifi</includeGroupIds>
                                    <excludeTransitive>false</excludeTransitive>
                                </configuration>
                            </execution>
                            <execution>
                                <id>unpack-docs</id>
                                <goals>
                                    <goal>unpack-dependencies</goal>
                                </goals>
                                <phase>generate-resources</phase>
                                <configuration>
                                    <outputDirectory>${project.build.directory}/generated-docs</outputDirectory>
                                    <includeArtifactIds>nifi-docs</includeArtifactIds>
                                    <includeGroupIds>org.apache.nifi</includeGroupIds>
                                    <excludeTransitive>false</excludeTransitive>
                                </configuration>
                            </execution>
                        </executions>
                    </plugin>
                    <plugin>
                        <groupId>org.codehaus.mojo</groupId>
                        <artifactId>rpm-maven-plugin</artifactId>
                        <configuration>
                            <summary>Apache NiFi (incubating)</summary>
                            <description>Apache Nifi (incubating) is dataflow system based on
                                the Flow-Based Programming concepts.</description>
                            <license>Apache License, Version 2.0 and others (see included
                                LICENSE file)</license>
                            <url>http://nifi.incubator.apache.org</url>
                            <group>Utilities</group>
                            <prefix>/opt/nifi</prefix>
                            <defineStatements>
                                <defineStatement>_use_internal_dependency_generator 0</defineStatement>
                            </defineStatements>
                            <defaultDirmode>750</defaultDirmode>
                            <defaultFilemode>640</defaultFilemode>
                            <defaultUsername>root</defaultUsername>
                            <defaultGroupname>root</defaultGroupname>
                        </configuration>
                        <executions>
                            <execution>
                                <id>build-bin-rpm</id>
                                <goals>
                                    <goal>attached-rpm</goal>
                                </goals>
                                <configuration>
                                    <classifier>bin</classifier>
                                    <provides>
                                        <provide>nifi</provide>
                                    </provides>
                                    <mappings>
                                        <mapping>
                                            <directory>/opt/nifi/nifi-${project.version}</directory>
                                        </mapping>
                                        <mapping>
                                            <directory>/opt/nifi/nifi-${project.version}</directory>
                                            <sources>
                                                <source>
                                                    <location>./LICENSE</location>
                                                </source>
                                                <source>
                                                    <location>./NOTICE</location>
                                                </source>
                                                <source>
                                                    <location>../DISCLAIMER</location>
                                                </source>
                                                <source>
                                                    <location>./README.md</location>
                                                    <destination>README</destination>
                                                </source>
                                            </sources>
                                        </mapping>
                                        <mapping>
                                            <directory>/opt/nifi/nifi-${project.version}/bin</directory>
                                            <filemode>750</filemode>
                                            <sources>
                                                <source>
                                                    <location>${project.build.directory}/generated-resources/bin/nifi.sh</location>
                                                    <destination>nifi.sh</destination>
                                                    <filter>true</filter>
                                                </source>
                                            </sources>
                                        </mapping>
                                        <mapping>
                                            <directory>/opt/nifi/nifi-${project.version}/conf</directory>
                                            <configuration>true</configuration>
                                            <sources>
                                                <source>
                                                    <location>${project.build.directory}/generated-resources/conf</location>
                                                    <filter>true</filter>
                                                </source>
                                            </sources>
                                        </mapping>
                                        <mapping>
                                            <directory>/opt/nifi/nifi-${project.version}/lib</directory>
                                            <dependency>
                                                <excludes>
                                                    <exclude>org.apache.nifi:nifi-bootstrap</exclude>
                                                    <exclude>org.apache.nifi:nifi-resources</exclude>
                                                    <exclude>org.apache.nifi:nifi-docs</exclude>
                                                </excludes>
                                            </dependency>
                                        </mapping>
                                        <mapping>
                                            <directory>/opt/nifi/nifi-${project.version}/lib/bootstrap</directory>
                                            <dependency>
                                                <includes>
                                                    <include>org.apache.nifi:nifi-bootstrap</include>
                                                </includes>
                                            </dependency>
                                        </mapping>
                                        <mapping>
                                            <directory>/opt/nifi/nifi-${project.version}/docs</directory>
                                            <sources>
                                                <source>
                                                    <location>${project.build.directory}/generated-docs</location>
                                                </source>
                                            </sources>
                                        </mapping>
                                    </mappings>
                                </configuration>
                            </execution>
                        </executions>
                    </plugin>
                </plugins>
            </build>
        </profile>
    </profiles>
</project>
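Note that the four nar dependencies added by this commit (nifi-social-media-nar, nifi-hl7-nar, nifi-language-translation-nar, nifi-geo-nar) pin an explicit <version> element, while the older entries inherit theirs. A minimal sketch of how those versions could instead be centralized, assuming the parent pom declares a dependencyManagement section (illustrative only, not part of this commit):

    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.apache.nifi</groupId>
                <artifactId>nifi-social-media-nar</artifactId>
                <version>0.1.0-incubating-SNAPSHOT</version>
                <type>nar</type>
            </dependency>
            <!-- ...one managed entry per nar... -->
        </dependencies>
    </dependencyManagement>

With that in place, the assembly pom could drop the per-dependency <version> elements shown above.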

View File

@@ -0,0 +1,74 @@
nifi-aws-nar
Copyright 2015 The Apache Software Foundation

This product includes software developed at
The Apache Software Foundation (http://www.apache.org/).

******************
Apache Software License v2
******************

The following binary components are provided under the Apache Software License v2

(ASLv2) Apache HttpComponents
  The following NOTICE information applies:
    Apache HttpClient
    Copyright 1999-2014 The Apache Software Foundation

    Apache HttpCore
    Copyright 2005-2014 The Apache Software Foundation

    This project contains annotations derived from JCIP-ANNOTATIONS
    Copyright (c) 2005 Brian Goetz and Tim Peierls. See http://www.jcip.net

(ASLv2) Joda Time
  The following NOTICE information applies:
    This product includes software developed by
    Joda.org (http://www.joda.org/).

(ASLv2) Apache Commons Codec
  The following NOTICE information applies:
    Apache Commons Codec
    Copyright 2002-2014 The Apache Software Foundation

    src/test/org/apache/commons/codec/language/DoubleMetaphoneTest.java
    contains test data from http://aspell.net/test/orig/batch0.tab.
    Copyright (C) 2002 Kevin Atkinson (kevina@gnu.org)

    ===============================================================================

    The content of package org.apache.commons.codec.language.bm has been translated
    from the original php source code available at http://stevemorse.org/phoneticinfo.htm
    with permission from the original authors.

    Original source copyright:
    Copyright (c) 2008 Alexander Beider & Stephen P. Morse.

(ASLv2) Apache Commons Logging
  The following NOTICE information applies:
    Apache Commons Logging
    Copyright 2003-2013 The Apache Software Foundation

(ASLv2) Apache Commons Lang
  The following NOTICE information applies:
    Apache Commons Lang
    Copyright 2001-2014 The Apache Software Foundation

    This product includes software from the Spring Framework,
    under the Apache License 2.0 (see: StringUtils.containsWhitespace())

(ASLv2) Amazon Web Services SDK
  The following NOTICE information applies:
    Copyright 2010-2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
    This product includes software developed by
    Amazon Technologies, Inc (http://www.amazon.com/).

    **********************
    THIRD PARTY COMPONENTS
    **********************
    This software includes third party software subject to the following copyrights:
    - XML parsing and utility functions from JetS3t - Copyright 2006-2009 James Murty.
    - JSON parsing and utility functions from JSON.org - Copyright 2002 JSON.org.
    - PKCS#1 PEM encoded private key parsing and utility functions from oauth.googlecode.com - Copyright 1998-2010 AOL Inc.

View File

@@ -35,9 +35,9 @@
            <artifactId>nifi-processor-utils</artifactId>
        </dependency>
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-java-sdk</artifactId>
        </dependency>
        <dependency>
            <groupId>org.apache.nifi</groupId>
@@ -55,4 +55,17 @@
            <scope>test</scope>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.rat</groupId>
                <artifactId>apache-rat-plugin</artifactId>
                <configuration>
                    <excludes>
                        <exclude>src/test/resources/hello.txt</exclude>
                    </excludes>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
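The apache-rat-plugin configuration added above excludes src/test/resources/hello.txt from the release audit, since a plain-text test fixture cannot carry an Apache license header. The exclusion can be verified locally with the plugin's standard check goal:

    mvn apache-rat:check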

View File

@@ -58,50 +58,48 @@ public abstract class AbstractAWSProcessor<ClientType extends AmazonWebServiceCl
            new HashSet<>(Arrays.asList(REL_SUCCESS, REL_FAILURE)));

    public static final PropertyDescriptor CREDENTAILS_FILE = new PropertyDescriptor.Builder()
            .name("Credentials File")
            .expressionLanguageSupported(false)
            .required(false)
            .addValidator(StandardValidators.FILE_EXISTS_VALIDATOR)
            .build();
    public static final PropertyDescriptor ACCESS_KEY = new PropertyDescriptor.Builder()
            .name("Access Key")
            .expressionLanguageSupported(false)
            .required(false)
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
            .sensitive(true)
            .build();
    public static final PropertyDescriptor SECRET_KEY = new PropertyDescriptor.Builder()
            .name("Secret Key")
            .expressionLanguageSupported(false)
            .required(false)
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
            .sensitive(true)
            .build();
    public static final PropertyDescriptor REGION = new PropertyDescriptor.Builder()
            .name("Region")
            .required(true)
            .allowableValues(getAvailableRegions())
            .defaultValue(createAllowableValue(Regions.DEFAULT_REGION).getValue())
            .build();
    public static final PropertyDescriptor TIMEOUT = new PropertyDescriptor.Builder()
            .name("Communications Timeout")
            .required(true)
            .addValidator(StandardValidators.TIME_PERIOD_VALIDATOR)
            .defaultValue("30 secs")
            .build();

    private volatile ClientType client;

    private static AllowableValue createAllowableValue(final Regions regions) {
        return new AllowableValue(regions.getName(), regions.getName(), regions.getName());
    }

    private static AllowableValue[] getAvailableRegions() {
        final List<AllowableValue> values = new ArrayList<>();
        for (final Regions regions : Regions.values()) {
            values.add(createAllowableValue(regions));
        }
@@ -119,19 +117,18 @@ public abstract class AbstractAWSProcessor<ClientType extends AmazonWebServiceCl
        final boolean accessKeySet = validationContext.getProperty(ACCESS_KEY).isSet();
        final boolean secretKeySet = validationContext.getProperty(SECRET_KEY).isSet();
        if ((accessKeySet && !secretKeySet) || (secretKeySet && !accessKeySet)) {
            problems.add(new ValidationResult.Builder().input("Access Key").valid(false).explanation("If setting Secret Key or Access Key, must set both").build());
        }

        final boolean credentialsFileSet = validationContext.getProperty(CREDENTAILS_FILE).isSet();
        if ((secretKeySet || accessKeySet) && credentialsFileSet) {
            problems.add(new ValidationResult.Builder().input("Access Key").valid(false).explanation("Cannot set both Credentials File and Secret Key/Access Key").build());
        }

        return problems;
    }

    protected ClientConfiguration createConfiguration(final ProcessContext context) {
        final ClientConfiguration config = new ClientConfiguration();
        config.setMaxConnections(context.getMaxConcurrentTasks());
@@ -145,16 +142,15 @@ public abstract class AbstractAWSProcessor<ClientType extends AmazonWebServiceCl
        return config;
    }

    @OnScheduled
    public void onScheduled(final ProcessContext context) {
        final ClientType awsClient = createClient(context, getCredentials(context), createConfiguration(context));
        this.client = awsClient;

        // if the processor supports REGION, get the configured region.
        if (getSupportedPropertyDescriptors().contains(REGION)) {
            final String region = context.getProperty(REGION).getValue();
            if (region != null) {
                client.setRegion(Region.getRegion(Regions.fromName(region)));
            }
        }
@@ -172,7 +168,7 @@ public abstract class AbstractAWSProcessor<ClientType extends AmazonWebServiceCl
        final String credentialsFile = context.getProperty(CREDENTAILS_FILE).getValue();
        if (credentialsFile != null) {
            try {
                return new PropertiesCredentials(new File(credentialsFile));
            } catch (final IOException ioe) {
@@ -180,14 +176,13 @@ public abstract class AbstractAWSProcessor<ClientType extends AmazonWebServiceCl
            }
        }

        if (accessKey != null && secretKey != null) {
            return new BasicAWSCredentials(accessKey, secretKey);
        }

        return new AnonymousAWSCredentials();
    }

    protected boolean isEmpty(final String value) {
        return value == null || value.trim().equals("");
    }
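For context, a minimal sketch of how a concrete processor plugs into AbstractAWSProcessor (a hypothetical subclass, not part of this commit; AmazonSNSClient stands in for any AWS SDK 1.x client type): the framework invokes onScheduled(), which resolves credentials via getCredentials() (credentials file, access/secret key pair, or anonymous) and passes them to createClient().

    import com.amazonaws.ClientConfiguration;
    import com.amazonaws.auth.AWSCredentials;
    import com.amazonaws.services.sns.AmazonSNSClient;
    import org.apache.nifi.processor.ProcessContext;
    import org.apache.nifi.processor.ProcessSession;
    import org.apache.nifi.processor.exception.ProcessException;

    // Hypothetical example subclass, for illustration only.
    public class ExampleSNSProcessor extends AbstractAWSProcessor<AmazonSNSClient> {

        @Override
        protected AmazonSNSClient createClient(final ProcessContext context,
                final AWSCredentials credentials, final ClientConfiguration config) {
            // The region (when REGION is among the supported properties) is
            // applied by onScheduled() after this client is returned.
            return new AmazonSNSClient(credentials, config);
        }

        @Override
        public void onTrigger(final ProcessContext context, final ProcessSession session) throws ProcessException {
            // Processor-specific work (publish, then transfer to REL_SUCCESS
            // or REL_FAILURE) would go here.
        }
    }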

View File

@ -39,80 +39,78 @@ import com.amazonaws.services.s3.model.Permission;
public abstract class AbstractS3Processor extends AbstractAWSProcessor<AmazonS3Client> { public abstract class AbstractS3Processor extends AbstractAWSProcessor<AmazonS3Client> {
public static final PropertyDescriptor FULL_CONTROL_USER_LIST = new PropertyDescriptor.Builder() public static final PropertyDescriptor FULL_CONTROL_USER_LIST = new PropertyDescriptor.Builder()
.name("FullControl User List") .name("FullControl User List")
.required(false) .required(false)
.expressionLanguageSupported(true) .expressionLanguageSupported(true)
.addValidator(StandardValidators.NON_EMPTY_VALIDATOR) .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
.description("A comma-separated list of Amazon User ID's or E-mail addresses that specifies who should have Full Control for an object") .description("A comma-separated list of Amazon User ID's or E-mail addresses that specifies who should have Full Control for an object")
.defaultValue("${s3.permissions.full.users}") .defaultValue("${s3.permissions.full.users}")
.build(); .build();
public static final PropertyDescriptor READ_USER_LIST = new PropertyDescriptor.Builder() public static final PropertyDescriptor READ_USER_LIST = new PropertyDescriptor.Builder()
.name("Read Permission User List") .name("Read Permission User List")
.required(false) .required(false)
.expressionLanguageSupported(true) .expressionLanguageSupported(true)
.addValidator(StandardValidators.NON_EMPTY_VALIDATOR) .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
.description("A comma-separated list of Amazon User ID's or E-mail addresses that specifies who should have Read Access for an object") .description("A comma-separated list of Amazon User ID's or E-mail addresses that specifies who should have Read Access for an object")
.defaultValue("${s3.permissions.read.users}") .defaultValue("${s3.permissions.read.users}")
.build(); .build();
public static final PropertyDescriptor WRITE_USER_LIST = new PropertyDescriptor.Builder() public static final PropertyDescriptor WRITE_USER_LIST = new PropertyDescriptor.Builder()
.name("Write Permission User List") .name("Write Permission User List")
.required(false) .required(false)
.expressionLanguageSupported(true) .expressionLanguageSupported(true)
.addValidator(StandardValidators.NON_EMPTY_VALIDATOR) .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
.description("A comma-separated list of Amazon User ID's or E-mail addresses that specifies who should have Write Access for an object") .description("A comma-separated list of Amazon User ID's or E-mail addresses that specifies who should have Write Access for an object")
.defaultValue("${s3.permissions.write.users}") .defaultValue("${s3.permissions.write.users}")
.build(); .build();
public static final PropertyDescriptor READ_ACL_LIST = new PropertyDescriptor.Builder() public static final PropertyDescriptor READ_ACL_LIST = new PropertyDescriptor.Builder()
.name("Read ACL User List") .name("Read ACL User List")
.required(false) .required(false)
.expressionLanguageSupported(true) .expressionLanguageSupported(true)
.addValidator(StandardValidators.NON_EMPTY_VALIDATOR) .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
.description("A comma-separated list of Amazon User ID's or E-mail addresses that specifies who should have permissions to read the Access Control List for an object") .description("A comma-separated list of Amazon User ID's or E-mail addresses that specifies who should have permissions to read the Access Control List for an object")
.defaultValue("${s3.permissions.readacl.users}") .defaultValue("${s3.permissions.readacl.users}")
.build(); .build();
public static final PropertyDescriptor WRITE_ACL_LIST = new PropertyDescriptor.Builder() public static final PropertyDescriptor WRITE_ACL_LIST = new PropertyDescriptor.Builder()
.name("Write ACL User List") .name("Write ACL User List")
.required(false) .required(false)
.expressionLanguageSupported(true) .expressionLanguageSupported(true)
.addValidator(StandardValidators.NON_EMPTY_VALIDATOR) .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
.description("A comma-separated list of Amazon User ID's or E-mail addresses that specifies who should have permissions to change the Access Control List for an object") .description("A comma-separated list of Amazon User ID's or E-mail addresses that specifies who should have permissions to change the Access Control List for an object")
.defaultValue("${s3.permissions.writeacl.users}") .defaultValue("${s3.permissions.writeacl.users}")
.build(); .build();
public static final PropertyDescriptor OWNER = new PropertyDescriptor.Builder() public static final PropertyDescriptor OWNER = new PropertyDescriptor.Builder()
.name("Owner") .name("Owner")
.required(false) .required(false)
.expressionLanguageSupported(true) .expressionLanguageSupported(true)
.addValidator(StandardValidators.NON_EMPTY_VALIDATOR) .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
.description("The Amazon ID to use for the object's owner") .description("The Amazon ID to use for the object's owner")
.defaultValue("${s3.owner}") .defaultValue("${s3.owner}")
.build(); .build();
public static final PropertyDescriptor BUCKET = new PropertyDescriptor.Builder() public static final PropertyDescriptor BUCKET = new PropertyDescriptor.Builder()
.name("Bucket") .name("Bucket")
.expressionLanguageSupported(true) .expressionLanguageSupported(true)
.required(true) .required(true)
.addValidator(StandardValidators.NON_EMPTY_VALIDATOR) .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
.build(); .build();
public static final PropertyDescriptor KEY = new PropertyDescriptor.Builder() public static final PropertyDescriptor KEY = new PropertyDescriptor.Builder()
.name("Object Key") .name("Object Key")
.required(true) .required(true)
.addValidator(StandardValidators.NON_EMPTY_VALIDATOR) .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
.expressionLanguageSupported(true) .expressionLanguageSupported(true)
.defaultValue("${filename}") .defaultValue("${filename}")
.build(); .build();
@Override @Override
protected AmazonS3Client createClient(final ProcessContext context, final AWSCredentials credentials, final ClientConfiguration config) { protected AmazonS3Client createClient(final ProcessContext context, final AWSCredentials credentials, final ClientConfiguration config) {
return new AmazonS3Client(credentials, config); return new AmazonS3Client(credentials, config);
} }
    protected Grantee createGrantee(final String value) {
        if (isEmpty(value)) {
            return null;
        }
        if (value.contains("@")) {
            return new EmailAddressGrantee(value);
        } else {
            return new CanonicalGrantee(value);
@@ -120,16 +118,16 @@ public abstract class AbstractS3Processor extends AbstractAWSProcessor<AmazonS3C
        }
    protected final List<Grantee> createGrantees(final String value) {
        if (isEmpty(value)) {
            return Collections.emptyList();
        }
        final List<Grantee> grantees = new ArrayList<>();
        final String[] vals = value.split(",");
        for (final String val : vals) {
            final String identifier = val.trim();
            final Grantee grantee = createGrantee(identifier);
            if (grantee != null) {
                grantees.add(grantee);
            }
        }
@@ -140,29 +138,29 @@ public abstract class AbstractS3Processor extends AbstractAWSProcessor<AmazonS3C
        final AccessControlList acl = new AccessControlList();
        final String ownerId = context.getProperty(OWNER).evaluateAttributeExpressions(flowFile).getValue();
        if (!isEmpty(ownerId)) {
            final Owner owner = new Owner();
            owner.setId(ownerId);
            acl.setOwner(owner);
        }
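        // Apply each configured grantee list with its matching S3 permission.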
        for (final Grantee grantee : createGrantees(context.getProperty(FULL_CONTROL_USER_LIST).evaluateAttributeExpressions(flowFile).getValue())) {
            acl.grantPermission(grantee, Permission.FullControl);
        }
        for (final Grantee grantee : createGrantees(context.getProperty(READ_USER_LIST).evaluateAttributeExpressions(flowFile).getValue())) {
            acl.grantPermission(grantee, Permission.Read);
        }
        for (final Grantee grantee : createGrantees(context.getProperty(WRITE_USER_LIST).evaluateAttributeExpressions(flowFile).getValue())) {
            acl.grantPermission(grantee, Permission.Write);
        }
        for (final Grantee grantee : createGrantees(context.getProperty(READ_ACL_LIST).evaluateAttributeExpressions(flowFile).getValue())) {
            acl.grantPermission(grantee, Permission.ReadAcp);
        }
        for (final Grantee grantee : createGrantees(context.getProperty(WRITE_ACL_LIST).evaluateAttributeExpressions(flowFile).getValue())) {
            acl.grantPermission(grantee, Permission.WriteAcp);
        }


@@ -43,36 +43,34 @@ import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.S3Object;
@SupportsBatching
@SeeAlso({PutS3Object.class})
@Tags({"Amazon", "S3", "AWS", "Get", "Fetch"})
@CapabilityDescription("Retrieves the contents of an S3 Object and writes it to the content of a FlowFile")
@WritesAttributes({
    @WritesAttribute(attribute = "s3.bucket", description = "The name of the S3 bucket"),
    @WritesAttribute(attribute = "path", description = "The path of the file"),
    @WritesAttribute(attribute = "absolute.path", description = "The path of the file"),
    @WritesAttribute(attribute = "filename", description = "The name of the file"),
    @WritesAttribute(attribute = "hash.value", description = "The MD5 sum of the file"),
    @WritesAttribute(attribute = "hash.algorithm", description = "MD5"),
    @WritesAttribute(attribute = "mime.type", description = "If S3 provides the content type/MIME type, this attribute will hold that value"),
    @WritesAttribute(attribute = "s3.etag", description = "The ETag that can be used to see if the file has changed"),
    @WritesAttribute(attribute = "s3.expirationTime", description = "If the file has an expiration date, this attribute will be set, containing the milliseconds since epoch in UTC time"),
    @WritesAttribute(attribute = "s3.expirationTimeRuleId", description = "The ID of the rule that dictates this object's expiration time"),
    @WritesAttribute(attribute = "s3.version", description = "The version of the S3 object"),})
public class FetchS3Object extends AbstractS3Processor {
    public static final PropertyDescriptor VERSION_ID = new PropertyDescriptor.Builder()
            .name("Version")
            .description("The Version of the Object to download")
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
            .expressionLanguageSupported(true)
            .required(false)
            .build();
    public static final List<PropertyDescriptor> properties = Collections.unmodifiableList(
            Arrays.asList(BUCKET, KEY, REGION, ACCESS_KEY, SECRET_KEY, CREDENTAILS_FILE, TIMEOUT, VERSION_ID));
    @Override
    protected List<PropertyDescriptor> getSupportedPropertyDescriptors() {
@@ -82,7 +80,7 @@ public class FetchS3Object {
    @Override
    public void onTrigger(final ProcessContext context, final ProcessSession session) {
        FlowFile flowFile = session.get();
        if (flowFile == null) {
            return;
        }
@@ -93,7 +91,7 @@ public class FetchS3Object {
        final AmazonS3 client = getClient();
        final GetObjectRequest request;
        if (versionId == null) {
            request = new GetObjectRequest(bucket, key);
        } else {
            request = new GetObjectRequest(bucket, key, versionId);
@@ -105,10 +103,10 @@ public class FetchS3Object {
            attributes.put("s3.bucket", s3Object.getBucketName());
            final ObjectMetadata metadata = s3Object.getObjectMetadata();
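            // When S3 returns a Content-Disposition, derive the FlowFile's path and
            // filename core attributes from it.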
            if (metadata.getContentDisposition() != null) {
                final String fullyQualified = metadata.getContentDisposition();
                final int lastSlash = fullyQualified.lastIndexOf("/");
                if (lastSlash > -1 && lastSlash < fullyQualified.length() - 1) {
                    attributes.put(CoreAttributes.PATH.key(), fullyQualified.substring(0, lastSlash));
                    attributes.put(CoreAttributes.ABSOLUTE_PATH.key(), fullyQualified);
                    attributes.put(CoreAttributes.FILENAME.key(), fullyQualified.substring(lastSlash + 1));
@@ -116,41 +114,41 @@ public class FetchS3Object {
                    attributes.put(CoreAttributes.FILENAME.key(), metadata.getContentDisposition());
                }
            }
            if (metadata.getContentMD5() != null) {
                attributes.put("hash.value", metadata.getContentMD5());
                attributes.put("hash.algorithm", "MD5");
            }
            if (metadata.getContentType() != null) {
                attributes.put(CoreAttributes.MIME_TYPE.key(), metadata.getContentType());
            }
            if (metadata.getETag() != null) {
                attributes.put("s3.etag", metadata.getETag());
            }
            if (metadata.getExpirationTime() != null) {
                attributes.put("s3.expirationTime", String.valueOf(metadata.getExpirationTime().getTime()));
            }
            if (metadata.getExpirationTimeRuleId() != null) {
                attributes.put("s3.expirationTimeRuleId", metadata.getExpirationTimeRuleId());
            }
            if (metadata.getUserMetadata() != null) {
                attributes.putAll(metadata.getUserMetadata());
            }
            if (metadata.getVersionId() != null) {
                attributes.put("s3.version", metadata.getVersionId());
            }
        } catch (final IOException | AmazonClientException ioe) {
            getLogger().error("Failed to retrieve S3 Object for {}; routing to failure", new Object[]{flowFile, ioe});
            session.transfer(flowFile, REL_FAILURE);
            return;
        }
        if (!attributes.isEmpty()) {
            flowFile = session.putAllAttributes(flowFile, attributes);
        }
        session.transfer(flowFile, REL_SUCCESS);
        final long transferMillis = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startNanos);
        getLogger().info("Successfully retrieved S3 Object for {} in {} millis; routing to success", new Object[]{flowFile, transferMillis});
        session.getProvenanceReporter().receive(flowFile, "http://" + bucket + ".amazonaws.com/" + key, transferMillis);
    }


@@ -56,35 +56,35 @@ import com.amazonaws.services.s3.model.StorageClass;
@SeeAlso({FetchS3Object.class})
@Tags({"Amazon", "S3", "AWS", "Archive", "Put"})
@CapabilityDescription("Puts FlowFiles to an Amazon S3 Bucket")
@DynamicProperty(name = "The name of a User-Defined Metadata field to add to the S3 Object",
        value = "The value of a User-Defined Metadata field to add to the S3 Object",
        description = "Allows user-defined metadata to be added to the S3 object as key/value pairs",
        supportsExpressionLanguage = true)
@ReadsAttribute(attribute = "filename", description = "Uses the FlowFile's filename as the filename for the S3 object")
@WritesAttributes({
    @WritesAttribute(attribute = "s3.version", description = "The version of the S3 Object that was put to S3"),
    @WritesAttribute(attribute = "s3.etag", description = "The ETag of the S3 Object"),
    @WritesAttribute(attribute = "s3.expiration", description = "A human-readable form of the expiration date of the S3 object, if one is set")
})
public class PutS3Object extends AbstractS3Processor {
    public static final PropertyDescriptor EXPIRATION_RULE_ID = new PropertyDescriptor.Builder()
            .name("Expiration Time Rule")
            .required(false)
            .expressionLanguageSupported(true)
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
            .build();
    public static final PropertyDescriptor STORAGE_CLASS = new PropertyDescriptor.Builder()
            .name("Storage Class")
            .required(true)
            .allowableValues(StorageClass.Standard.name(), StorageClass.ReducedRedundancy.name())
            .defaultValue(StorageClass.Standard.name())
            .build();
    public static final List<PropertyDescriptor> properties = Collections.unmodifiableList(
            Arrays.asList(KEY, BUCKET, ACCESS_KEY, SECRET_KEY, CREDENTAILS_FILE, STORAGE_CLASS, REGION, TIMEOUT, EXPIRATION_RULE_ID,
                    FULL_CONTROL_USER_LIST, READ_USER_LIST, WRITE_USER_LIST, READ_ACL_LIST, WRITE_ACL_LIST, OWNER));
    @Override
    protected List<PropertyDescriptor> getSupportedPropertyDescriptors() {
@@ -94,16 +94,16 @@ public class PutS3Object {
    @Override
    protected PropertyDescriptor getSupportedDynamicPropertyDescriptor(final String propertyDescriptorName) {
        return new PropertyDescriptor.Builder()
                .name(propertyDescriptorName)
                .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
                .expressionLanguageSupported(true)
                .dynamic(true)
                .build();
    }
    public void onTrigger(final ProcessContext context, final ProcessSession session) {
        FlowFile flowFile = session.get();
        if (flowFile == null) {
            return;
        }
@@ -125,45 +125,45 @@ public class PutS3Object {
                    objectMetadata.setContentLength(ff.getSize());
                    final String expirationRule = context.getProperty(EXPIRATION_RULE_ID).evaluateAttributeExpressions(ff).getValue();
                    if (expirationRule != null) {
                        objectMetadata.setExpirationTimeRuleId(expirationRule);
                    }
                    final Map<String, String> userMetadata = new HashMap<>();
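                    // Every dynamic (user-defined) property becomes a key/value pair of
                    // S3 user metadata, evaluated against the current FlowFile.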
                    for (final Map.Entry<PropertyDescriptor, String> entry : context.getProperties().entrySet()) {
                        if (entry.getKey().isDynamic()) {
                            final String value = context.getProperty(entry.getKey()).evaluateAttributeExpressions(ff).getValue();
                            userMetadata.put(entry.getKey().getName(), value);
                        }
                    }
                    if (!userMetadata.isEmpty()) {
                        objectMetadata.setUserMetadata(userMetadata);
                    }
                    final PutObjectRequest request = new PutObjectRequest(bucket, key, in, objectMetadata);
                    request.setStorageClass(StorageClass.valueOf(context.getProperty(STORAGE_CLASS).getValue()));
                    final AccessControlList acl = createACL(context, ff);
                    if (acl != null) {
                        request.setAccessControlList(acl);
                    }
                    final PutObjectResult result = s3.putObject(request);
                    if (result.getVersionId() != null) {
                        attributes.put("s3.version", result.getVersionId());
                    }
                    attributes.put("s3.etag", result.getETag());
                    final Date expiration = result.getExpirationTime();
                    if (expiration != null) {
                        attributes.put("s3.expiration", expiration.toString());
                    }
                }
            }
        });
        if (!attributes.isEmpty()) {
            flowFile = session.putAllAttributes(flowFile, attributes);
        }
        session.transfer(flowFile, REL_SUCCESS);
@@ -172,9 +172,9 @@ public class PutS3Object {
            final long millis = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startNanos);
            session.getProvenanceReporter().send(flowFile, url, millis);
            getLogger().info("Successfully put {} to Amazon S3 in {} milliseconds", new Object[]{ff, millis});
        } catch (final ProcessException | AmazonClientException pe) {
            getLogger().error("Failed to put {} to Amazon S3 due to {}", new Object[]{flowFile, pe});
            session.transfer(flowFile, REL_FAILURE);
        }
    }


@@ -28,29 +28,27 @@ import com.amazonaws.services.sns.AmazonSNSClient;
public abstract class AbstractSNSProcessor extends AbstractAWSProcessor<AmazonSNSClient> {
    protected static final AllowableValue ARN_TYPE_TOPIC
            = new AllowableValue("Topic ARN", "Topic ARN", "The ARN is the name of a topic");
    protected static final AllowableValue ARN_TYPE_TARGET
            = new AllowableValue("Target ARN", "Target ARN", "The ARN is the name of a particular Target, used to notify a specific subscriber");
    public static final PropertyDescriptor ARN = new PropertyDescriptor.Builder()
            .name("Amazon Resource Name (ARN)")
            .description("The name of the resource to which notifications should be published")
            .expressionLanguageSupported(true)
            .required(true)
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
            .build();
    public static final PropertyDescriptor ARN_TYPE = new PropertyDescriptor.Builder()
            .name("ARN Type")
            .description("The type of Amazon Resource Name that is being used.")
            .expressionLanguageSupported(false)
            .required(true)
            .allowableValues(ARN_TYPE_TOPIC, ARN_TYPE_TARGET)
            .defaultValue(ARN_TYPE_TOPIC.getValue())
            .build();
    @Override
    protected AmazonSNSClient createClient(final ProcessContext context, final AWSCredentials credentials, final ClientConfiguration config) {


@@ -46,31 +46,31 @@ import com.amazonaws.services.sns.model.PublishRequest;
public class PutSNS extends AbstractSNSProcessor {
    public static final PropertyDescriptor CHARACTER_ENCODING = new PropertyDescriptor.Builder()
            .name("Character Set")
            .description("The character set in which the FlowFile's content is encoded")
            .defaultValue("UTF-8")
            .expressionLanguageSupported(true)
            .addValidator(StandardValidators.CHARACTER_SET_VALIDATOR)
            .required(true)
            .build();
    public static final PropertyDescriptor USE_JSON_STRUCTURE = new PropertyDescriptor.Builder()
            .name("Use JSON Structure")
            .description("If true, the contents of the FlowFile must be JSON with a top-level element named 'default'. Additional elements can be used to send different messages to different protocols. See the Amazon SNS Documentation for more information.")
            .defaultValue("false")
            .allowableValues("true", "false")
            .required(true)
            .build();
    public static final PropertyDescriptor SUBJECT = new PropertyDescriptor.Builder()
            .name("E-mail Subject")
            .description("The optional subject to use for any subscribers that are subscribed via E-mail")
            .expressionLanguageSupported(true)
            .required(false)
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
            .build();
    public static final List<PropertyDescriptor> properties = Collections.unmodifiableList(
            Arrays.asList(ARN, ARN_TYPE, SUBJECT, REGION, ACCESS_KEY, SECRET_KEY, CREDENTAILS_FILE, TIMEOUT,
                    USE_JSON_STRUCTURE, CHARACTER_ENCODING));
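    // Amazon SNS caps message payloads at 256 KB; larger FlowFiles are routed to
    // failure before any request is sent.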
    public static final int MAX_SIZE = 256 * 1024;
@@ -82,24 +82,23 @@ public class PutSNS {
    @Override
    protected PropertyDescriptor getSupportedDynamicPropertyDescriptor(final String propertyDescriptorName) {
        return new PropertyDescriptor.Builder()
                .name(propertyDescriptorName)
                .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
                .expressionLanguageSupported(true)
                .required(false)
                .dynamic(true)
                .build();
    }
    @Override
    public void onTrigger(final ProcessContext context, final ProcessSession session) {
        FlowFile flowFile = session.get();
        if (flowFile == null) {
            return;
        }
        if (flowFile.getSize() > MAX_SIZE) {
            getLogger().error("Cannot publish {} to SNS because its size exceeds Amazon SNS's limit of 256KB; routing to failure", new Object[]{flowFile});
            session.transfer(flowFile, REL_FAILURE);
            return;
        }
@@ -114,25 +113,25 @@ public class PutSNS {
        final PublishRequest request = new PublishRequest();
        request.setMessage(message);
        if (context.getProperty(USE_JSON_STRUCTURE).asBoolean()) {
            request.setMessageStructure("json");
        }
        final String arn = context.getProperty(ARN).evaluateAttributeExpressions(flowFile).getValue();
        final String arnType = context.getProperty(ARN_TYPE).getValue();
        if (arnType.equalsIgnoreCase(ARN_TYPE_TOPIC.getValue())) {
            request.setTopicArn(arn);
        } else {
            request.setTargetArn(arn);
        }
        final String subject = context.getProperty(SUBJECT).evaluateAttributeExpressions(flowFile).getValue();
        if (subject != null) {
            request.setSubject(subject);
        }
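        // Each dynamic (user-defined) property is forwarded as an SNS message
        // attribute of type String.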
        for (final Map.Entry<PropertyDescriptor, String> entry : context.getProperties().entrySet()) {
            if (entry.getKey().isDynamic() && !isEmpty(entry.getValue())) {
                final MessageAttributeValue value = new MessageAttributeValue();
                value.setStringValue(context.getProperty(entry.getKey()).evaluateAttributeExpressions(flowFile).getValue());
                value.setDataType("String");
@@ -144,9 +143,9 @@ public class PutSNS {
            client.publish(request);
            session.transfer(flowFile, REL_SUCCESS);
            session.getProvenanceReporter().send(flowFile, arn);
            getLogger().info("Successfully published notification for {}", new Object[]{flowFile});
        } catch (final Exception e) {
            getLogger().error("Failed to publish Amazon SNS message for {} due to {}", new Object[]{flowFile, e});
            session.transfer(flowFile, REL_FAILURE);
            return;
        }


@@ -28,20 +28,20 @@ import com.amazonaws.services.sqs.AmazonSQSClient;
public abstract class AbstractSQSProcessor extends AbstractAWSProcessor<AmazonSQSClient> {
    public static final PropertyDescriptor BATCH_SIZE = new PropertyDescriptor.Builder()
            .name("Batch Size")
            .description("The maximum number of messages to send in a single network request")
            .required(true)
            .addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
            .defaultValue("25")
            .build();
    public static final PropertyDescriptor QUEUE_URL = new PropertyDescriptor.Builder()
            .name("Queue URL")
            .description("The URL of the queue to act upon")
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
            .expressionLanguageSupported(true)
            .required(true)
            .build();
    @Override
    protected AmazonSQSClient createClient(final ProcessContext context, final AWSCredentials credentials, final ClientConfiguration config) {


@@ -40,28 +40,28 @@ import com.amazonaws.services.sqs.model.DeleteMessageBatchRequestEntry;
@Tags({"Amazon", "AWS", "SQS", "Queue", "Delete"})
@CapabilityDescription("Deletes a message from an Amazon Simple Queuing Service Queue")
public class DeleteSQS extends AbstractSQSProcessor {
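    // The default receipt handle is the attribute written by GetSQS, so this
    // processor can be chained directly after it.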
    public static final PropertyDescriptor RECEIPT_HANDLE = new PropertyDescriptor.Builder()
            .name("Receipt Handle")
            .description("The identifier that specifies the receipt of the message")
            .expressionLanguageSupported(true)
            .required(true)
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
            .defaultValue("${sqs.receipt.handle}")
            .build();
    public static final List<PropertyDescriptor> properties = Collections.unmodifiableList(
            Arrays.asList(ACCESS_KEY, SECRET_KEY, REGION, QUEUE_URL, TIMEOUT));
    @Override
    protected List<PropertyDescriptor> getSupportedPropertyDescriptors() {
        return properties;
    }
    @Override
    public void onTrigger(final ProcessContext context, final ProcessSession session) {
        List<FlowFile> flowFiles = session.get(1);
        if (flowFiles.isEmpty()) {
            return;
        }
@@ -74,7 +74,7 @@ public class DeleteSQS {
        final List<DeleteMessageBatchRequestEntry> entries = new ArrayList<>(flowFiles.size());
        for (final FlowFile flowFile : flowFiles) {
            final DeleteMessageBatchRequestEntry entry = new DeleteMessageBatchRequestEntry();
            entry.setReceiptHandle(context.getProperty(RECEIPT_HANDLE).evaluateAttributeExpressions(flowFile).getValue());
            entries.add(entry);
@@ -84,10 +84,10 @@ public class DeleteSQS {
        try {
            client.deleteMessageBatch(request);
            getLogger().info("Successfully deleted {} objects from SQS", new Object[]{flowFiles.size()});
            session.transfer(flowFiles, REL_SUCCESS);
        } catch (final Exception e) {
            getLogger().error("Failed to delete {} objects from SQS due to {}", new Object[]{flowFiles.size(), e});
            session.transfer(flowFiles, REL_FAILURE);
        }
    }


@@ -51,57 +51,57 @@ import com.amazonaws.services.sqs.model.ReceiveMessageRequest;
import com.amazonaws.services.sqs.model.ReceiveMessageResult;
@SupportsBatching
@Tags({"Amazon", "AWS", "SQS", "Queue", "Get", "Fetch", "Poll"})
@SeeAlso({PutSQS.class, DeleteSQS.class})
@CapabilityDescription("Fetches messages from an Amazon Simple Queuing Service Queue")
@WritesAttributes({
    @WritesAttribute(attribute = "hash.value", description = "The MD5 sum of the message"),
    @WritesAttribute(attribute = "hash.algorithm", description = "MD5"),
    @WritesAttribute(attribute = "sqs.message.id", description = "The unique identifier of the SQS message"),
    @WritesAttribute(attribute = "sqs.receipt.handle", description = "The SQS Receipt Handle that is to be used to delete the message from the queue")
})
public class GetSQS extends AbstractSQSProcessor {
    public static final PropertyDescriptor CHARSET = new PropertyDescriptor.Builder()
            .name("Character Set")
            .description("The Character Set that should be used to encode the textual content of the SQS message")
            .required(true)
            .defaultValue("UTF-8")
            .allowableValues(Charset.availableCharsets().keySet().toArray(new String[0]))
            .build();
    public static final PropertyDescriptor AUTO_DELETE = new PropertyDescriptor.Builder()
            .name("Auto Delete Messages")
            .description("Specifies whether the messages should be automatically deleted by the processors once they have been received.")
            .required(true)
            .allowableValues("true", "false")
            .defaultValue("true")
            .build();
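    // Until a received message is deleted or its visibility timeout lapses,
    // SQS hides it from other consumers of the queue.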
    public static final PropertyDescriptor VISIBILITY_TIMEOUT = new PropertyDescriptor.Builder()
            .name("Visibility Timeout")
            .description("The amount of time after a message is received but not deleted that the message is hidden from other consumers")
            .expressionLanguageSupported(false)
            .required(true)
            .defaultValue("15 mins")
            .addValidator(StandardValidators.TIME_PERIOD_VALIDATOR)
            .build();
    public static final PropertyDescriptor BATCH_SIZE = new PropertyDescriptor.Builder()
            .name("Batch Size")
            .description("The maximum number of messages to fetch in a single network request")
            .required(true)
            .addValidator(StandardValidators.createLongValidator(1L, 10L, true))
            .defaultValue("10")
            .build();
    public static final PropertyDescriptor STATIC_QUEUE_URL = new PropertyDescriptor.Builder()
            .fromPropertyDescriptor(QUEUE_URL)
            .expressionLanguageSupported(false)
            .build();
    public static final List<PropertyDescriptor> properties = Collections.unmodifiableList(
            Arrays.asList(STATIC_QUEUE_URL, AUTO_DELETE, ACCESS_KEY, SECRET_KEY, CREDENTAILS_FILE, REGION, BATCH_SIZE, TIMEOUT, CHARSET, VISIBILITY_TIMEOUT));
    @Override
    protected List<PropertyDescriptor> getSupportedPropertyDescriptors() {
@@ -131,28 +131,28 @@ public class GetSQS {
        try {
            result = client.receiveMessage(request);
        } catch (final Exception e) {
            getLogger().error("Failed to receive messages from Amazon SQS due to {}", new Object[]{e});
            context.yield();
            return;
        }
        final List<Message> messages = result.getMessages();
        if (messages.isEmpty()) {
            context.yield();
            return;
        }
        final boolean autoDelete = context.getProperty(AUTO_DELETE).asBoolean();
        for (final Message message : messages) {
            FlowFile flowFile = session.create();
            final Map<String, String> attributes = new HashMap<>();
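            // Standard SQS attributes and user-supplied message attributes are both
            // exposed as FlowFile attributes under the "sqs." prefix.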
            for (final Map.Entry<String, String> entry : message.getAttributes().entrySet()) {
                attributes.put("sqs." + entry.getKey(), entry.getValue());
            }
            for (final Map.Entry<String, MessageAttributeValue> entry : message.getMessageAttributes().entrySet()) {
                attributes.put("sqs." + entry.getKey(), entry.getValue().getStringValue());
            }
@@ -172,10 +172,10 @@ public class GetSQS {
            session.transfer(flowFile, REL_SUCCESS);
            session.getProvenanceReporter().receive(flowFile, queueUrl);
            getLogger().info("Successfully received {} from Amazon SQS", new Object[]{flowFile});
        }
        if (autoDelete) {
            // If we want to auto-delete messages, we must first commit the session to ensure that the data
            // is persisted in NiFi's repositories.
            session.commit();
@@ -183,7 +183,7 @@ public class GetSQS {
            final DeleteMessageBatchRequest deleteRequest = new DeleteMessageBatchRequest();
            deleteRequest.setQueueUrl(queueUrl);
            final List<DeleteMessageBatchRequestEntry> deleteRequestEntries = new ArrayList<>();
            for (final Message message : messages) {
                final DeleteMessageBatchRequestEntry entry = new DeleteMessageBatchRequestEntry();
                entry.setId(message.getMessageId());
                entry.setReceiptHandle(message.getReceiptHandle());
@@ -195,7 +195,7 @@ public class GetSQS {
            try {
                client.deleteMessageBatch(deleteRequest);
            } catch (final Exception e) {
                getLogger().error("Received {} messages from Amazon SQS but failed to delete the messages; these messages may be duplicated. Reason for deletion failure: {}", new Object[]{messages.size(), e});
            }
        }


@@ -44,26 +44,25 @@ import com.amazonaws.services.sqs.model.MessageAttributeValue;
import com.amazonaws.services.sqs.model.SendMessageBatchRequest;
import com.amazonaws.services.sqs.model.SendMessageBatchRequestEntry;
@SupportsBatching
@Tags({"Amazon", "AWS", "SQS", "Queue", "Put", "Publish"})
@SeeAlso({GetSQS.class, DeleteSQS.class})
@CapabilityDescription("Publishes a message to an Amazon Simple Queuing Service Queue")
@DynamicProperty(name = "The name of a Message Attribute to add to the message", value = "The value of the Message Attribute",
        description = "Allows the user to add key/value pairs as Message Attributes by adding a property whose name will become the name of "
                + "the Message Attribute and value will become the value of the Message Attribute", supportsExpressionLanguage = true)
public class PutSQS extends AbstractSQSProcessor {
    public static final PropertyDescriptor DELAY = new PropertyDescriptor.Builder()
            .name("Delay")
            .description("The amount of time to delay the message before it becomes available to consumers")
            .required(true)
            .addValidator(StandardValidators.TIME_PERIOD_VALIDATOR)
            .defaultValue("0 secs")
            .build();
    public static final List<PropertyDescriptor> properties = Collections.unmodifiableList(
            Arrays.asList(QUEUE_URL, ACCESS_KEY, SECRET_KEY, CREDENTAILS_FILE, REGION, DELAY, TIMEOUT));
    private volatile List<PropertyDescriptor> userDefinedProperties = Collections.emptyList();
@@ -75,19 +74,19 @@ public class PutSQS {
    @Override
    protected PropertyDescriptor getSupportedDynamicPropertyDescriptor(final String propertyDescriptorName) {
        return new PropertyDescriptor.Builder()
                .name(propertyDescriptorName)
                .expressionLanguageSupported(true)
                .required(false)
                .dynamic(true)
                .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
                .build();
    }
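    // Dynamic descriptors are collected once per schedule so onTrigger does not
    // re-scan every property for each FlowFile.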
    @OnScheduled
    public void setup(final ProcessContext context) {
        userDefinedProperties = new ArrayList<>();
        for (final PropertyDescriptor descriptor : context.getProperties().keySet()) {
            if (descriptor.isDynamic()) {
                userDefinedProperties.add(descriptor);
            }
        }
@@ -96,7 +95,7 @@ public class PutSQS {
    @Override
    public void onTrigger(final ProcessContext context, final ProcessSession session) {
        FlowFile flowFile = session.get();
        if (flowFile == null) {
            return;
        }
@@ -117,7 +116,7 @@ public class PutSQS {
        final Map<String, MessageAttributeValue> messageAttributes = new HashMap<>();
        for (final PropertyDescriptor descriptor : userDefinedProperties) {
            final MessageAttributeValue mav = new MessageAttributeValue();
            mav.setDataType("String");
            mav.setStringValue(context.getProperty(descriptor).evaluateAttributeExpressions(flowFile).getValue());
@@ -133,12 +132,12 @@ public class PutSQS {
        try {
            client.sendMessageBatch(request);
        } catch (final Exception e) {
            getLogger().error("Failed to send messages to Amazon SQS due to {}; routing to failure", new Object[]{e});
            session.transfer(flowFile, REL_FAILURE);
            return;
        }
        getLogger().info("Successfully published message to Amazon SQS for {}", new Object[]{flowFile});
        session.transfer(flowFile, REL_SUCCESS);
        final long transmissionMillis = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startNanos);
        session.getProvenanceReporter().send(flowFile, queueUrl, transmissionMillis);


@@ -1,3 +1,19 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.nifi.processors.aws.s3;
import java.io.IOException;
@@ -15,6 +31,7 @@ import org.junit.Test;
@Ignore("For local testing only - interacts with S3 so the credentials file must be configured and all necessary buckets created")
public class TestFetchS3Object {
    private final String CREDENTIALS_FILE = System.getProperty("user.home") + "/aws-credentials.properties";
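    // This test hits live AWS endpoints, so it stays @Ignore'd unless the
    // credentials file above exists and the referenced resources are provisioned.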
    @Test
@@ -36,7 +53,7 @@ public class TestFetchS3Object {
        final byte[] expectedBytes = Files.readAllBytes(Paths.get("src/test/resources/hello.txt"));
        out.assertContentEquals(new String(expectedBytes));
        for (final Map.Entry<String, String> entry : out.getAttributes().entrySet()) {
            System.out.println(entry.getKey() + " : " + entry.getValue());
        }
    }


@@ -1,3 +1,19 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.nifi.processors.aws.s3;
import java.io.IOException;
@ -24,9 +40,9 @@ public class TestPutS3Object {
runner.setProperty(PutS3Object.CREDENTAILS_FILE, CREDENTIALS_FILE);
runner.setProperty(PutS3Object.BUCKET, "test-bucket-00000000-0000-0000-0000-123456789012");
runner.setProperty(PutS3Object.EXPIRATION_RULE_ID, "Expire Quickly");
Assert.assertTrue(runner.setProperty("x-custom-prop", "hello").isValid());
for (int i = 0; i < 3; i++) {
final Map<String, String> attrs = new HashMap<>();
attrs.put("filename", String.valueOf(i) + ".txt");
runner.enqueue(Paths.get("src/test/resources/hello.txt"), attrs);
@ -42,7 +58,7 @@ public class TestPutS3Object {
runner.setProperty(PutS3Object.BUCKET, "test-bucket-00000000-0000-0000-0000-123456789012");
runner.setProperty(PutS3Object.CREDENTAILS_FILE, CREDENTIALS_FILE);
runner.setProperty(PutS3Object.EXPIRATION_RULE_ID, "Expire Quickly");
Assert.assertTrue(runner.setProperty("x-custom-prop", "hello").isValid());
final Map<String, String> attrs = new HashMap<>();
attrs.put("filename", "folder/1.txt");
@ -52,14 +68,13 @@ public class TestPutS3Object {
runner.assertAllFlowFilesTransferred(PutS3Object.REL_SUCCESS, 1);
}
@Test
public void testStorageClass() throws IOException {
final TestRunner runner = TestRunners.newTestRunner(new PutS3Object());
runner.setProperty(PutS3Object.BUCKET, "test-bucket-00000000-0000-0000-0000-123456789012");
runner.setProperty(PutS3Object.CREDENTAILS_FILE, CREDENTIALS_FILE);
runner.setProperty(PutS3Object.STORAGE_CLASS, StorageClass.ReducedRedundancy.name());
Assert.assertTrue(runner.setProperty("x-custom-prop", "hello").isValid());
final Map<String, String> attrs = new HashMap<>();
attrs.put("filename", "folder/2.txt");

View File

@ -1,3 +1,19 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.nifi.processors.aws.sns;
import static org.junit.Assert.assertTrue;
@ -14,6 +30,7 @@ import org.junit.Test;
@Ignore("For local testing only - interacts with SNS so the credentials file must be configured and all necessary topics created")
public class TestPutSNS {
private final String CREDENTIALS_FILE = System.getProperty("user.home") + "/aws-credentials.properties";
@Test
@ -21,7 +38,7 @@ public class TestPutSNS {
final TestRunner runner = TestRunners.newTestRunner(new PutSNS());
runner.setProperty(PutSNS.CREDENTAILS_FILE, CREDENTIALS_FILE);
runner.setProperty(PutSNS.ARN, "arn:aws:sns:us-west-2:100515378163:test-topic-1");
assertTrue(runner.setProperty("DynamicProperty", "hello!").isValid());
final Map<String, String> attrs = new HashMap<>();
attrs.put("filename", "1.txt");
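A sketch of the complete PutSNS test body under the same assumptions, where the flowfile content is taken to be the published message body and the topic ARN is a placeholder (assertTrue is the static import from org.junit.Assert shown above):

@Test
public void testPublishSketch() {
    final TestRunner runner = TestRunners.newTestRunner(new PutSNS());
    runner.setProperty(PutSNS.CREDENTAILS_FILE, CREDENTIALS_FILE);
    runner.setProperty(PutSNS.ARN, "arn:aws:sns:us-west-2:000000000000:test-topic-1"); // placeholder ARN
    assertTrue(runner.setProperty("DynamicProperty", "hello!").isValid());

    final Map<String, String> attrs = new HashMap<>();
    attrs.put("filename", "1.txt");
    // Assumption: the flowfile content is published as the SNS message body.
    runner.enqueue("Hello, SNS!".getBytes(), attrs);
    runner.run(1);
    runner.assertAllFlowFilesTransferred(PutSNS.REL_SUCCESS, 1);
}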

View File

@ -1,3 +1,19 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.nifi.processors.aws.sqs;
import java.util.List;
@ -11,6 +27,7 @@ import org.junit.Test;
@Ignore("For local testing only - interacts with SQS so the credentials file must be configured and all necessary queues created")
public class TestGetSQS {
private final String CREDENTIALS_FILE = System.getProperty("user.home") + "/aws-credentials.properties";
@Test
@ -23,7 +40,7 @@ public class TestGetSQS {
runner.run(1);
final List<MockFlowFile> flowFiles = runner.getFlowFilesForRelationship(GetSQS.REL_SUCCESS);
for (final MockFlowFile mff : flowFiles) {
System.out.println(mff.getAttributes());
System.out.println(new String(mff.toByteArray()));
}
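GetSQS differs from the Put processors in that it is a source: nothing is enqueued before running. A sketch, assuming the QUEUE_URL and TIMEOUT constants are the same shared properties the PutSQS test sets below, the queue URL is a placeholder, and org.apache.nifi.util.MockFlowFile is imported:

@Test
public void testReceiveSketch() {
    final TestRunner runner = TestRunners.newTestRunner(new GetSQS());
    runner.setProperty(GetSQS.CREDENTAILS_FILE, CREDENTIALS_FILE);
    runner.setProperty(GetSQS.TIMEOUT, "30 secs");
    runner.setProperty(GetSQS.QUEUE_URL, "https://sqs.us-west-2.amazonaws.com/000000000000/test-queue-000000000");

    runner.run(1); // source processor: no input flowfile required

    for (final MockFlowFile mff : runner.getFlowFilesForRelationship(GetSQS.REL_SUCCESS)) {
        System.out.println(mff.getAttributes());
        System.out.println(new String(mff.toByteArray())); // the SQS message body
    }
}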

View File

@ -1,3 +1,19 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.nifi.processors.aws.sqs;
import java.io.IOException;
@ -14,6 +30,7 @@ import org.junit.Test;
@Ignore("For local testing only - interacts with SQS so the credentials file must be configured and all necessary queues created")
public class TestPutSQS {
private final String CREDENTIALS_FILE = System.getProperty("user.home") + "/aws-credentials.properties";
@Test
@ -22,7 +39,7 @@ public class TestPutSQS {
runner.setProperty(PutSNS.CREDENTAILS_FILE, CREDENTIALS_FILE);
runner.setProperty(PutSQS.TIMEOUT, "30 secs");
runner.setProperty(PutSQS.QUEUE_URL, "https://sqs.us-west-2.amazonaws.com/100515378163/test-queue-000000000");
Assert.assertTrue(runner.setProperty("x-custom-prop", "hello").isValid());
final Map<String, String> attrs = new HashMap<>();
attrs.put("filename", "1.txt");
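Note that the hunk above sets the credentials file through PutSNS.CREDENTAILS_FILE inside an SQS test; the constant comes from the shared abstract AWS processor, so it resolves to the same property either way, though referencing it through PutSQS reads more naturally. A sketch of the send path under the same assumptions (placeholder queue URL; the flowfile content is assumed to become the queued message body):

@Test
public void testSendSketch() {
    final TestRunner runner = TestRunners.newTestRunner(new PutSQS());
    runner.setProperty(PutSQS.CREDENTAILS_FILE, CREDENTIALS_FILE); // same inherited property as PutSNS.CREDENTAILS_FILE
    runner.setProperty(PutSQS.TIMEOUT, "30 secs");
    runner.setProperty(PutSQS.QUEUE_URL, "https://sqs.us-west-2.amazonaws.com/000000000000/test-queue-000000000");
    Assert.assertTrue(runner.setProperty("x-custom-prop", "hello").isValid());

    final Map<String, String> attrs = new HashMap<>();
    attrs.put("filename", "1.txt");
    runner.enqueue("queued message".getBytes(), attrs); // assumed to become the SQS message body
    runner.run(1);
    runner.assertAllFlowFilesTransferred(PutSQS.REL_SUCCESS, 1);
}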

View File

@ -30,14 +30,14 @@
<module>nifi-aws-nar</module>
</modules>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk</artifactId>
<version>1.9.24</version>
</dependency>
</dependencies>
</dependencyManagement>
</project>

View File

@ -0,0 +1,68 @@
nifi-geo-nar
Copyright 2015 The Apache Software Foundation
This product includes software developed at
The Apache Software Foundation (http://www.apache.org/).
******************
Apache Software License v2
******************
The following binary components are provided under the Apache Software License v2
(ASLv2) Apache Commons Lang
The following NOTICE information applies:
Apache Commons Lang
Copyright 2001-2014 The Apache Software Foundation
This product includes software from the Spring Framework,
under the Apache License 2.0 (see: StringUtils.containsWhitespace())
(ASLv2) Apache HttpComponents
The following NOTICE information applies:
Apache HttpClient
Copyright 1999-2014 The Apache Software Foundation
Apache HttpCore
Copyright 2005-2014 The Apache Software Foundation
This project contains annotations derived from JCIP-ANNOTATIONS
Copyright (c) 2005 Brian Goetz and Tim Peierls. See http://www.jcip.net
(ASLv2) Apache Commons Codec
The following NOTICE information applies:
Apache Commons Codec
Copyright 2002-2014 The Apache Software Foundation
src/test/org/apache/commons/codec/language/DoubleMetaphoneTest.java
contains test data from http://aspell.net/test/orig/batch0.tab.
Copyright (C) 2002 Kevin Atkinson (kevina@gnu.org)
===============================================================================
The content of package org.apache.commons.codec.language.bm has been translated
from the original php source code available at http://stevemorse.org/phoneticinfo.htm
with permission from the original authors.
Original source copyright:
Copyright (c) 2008 Alexander Beider & Stephen P. Morse.
(ASLv2) Apache Commons Logging
The following NOTICE information applies:
Apache Commons Logging
Copyright 2003-2013 The Apache Software Foundation
(ASLv2) GeoIP2 Java API
The following NOTICE information applies:
GeoIP2 Java API
This software is Copyright (c) 2013 by MaxMind, Inc.
************************
Creative Commons Attribution-ShareAlike 3.0
************************
The following binary components are provided under the Creative Commons Attribution-ShareAlike 3.0. See project link for details.
(CCAS 3.0) MaxMind DB (https://github.com/maxmind/MaxMind-DB)

View File

@ -0,0 +1,29 @@
nifi-hl7-nar
Copyright 2015 The Apache Software Foundation
This product includes software developed at
The Apache Software Foundation (http://www.apache.org/).
******************
Apache Software License v2
******************
The following binary components are provided under the Apache Software License v2
(ASLv2) Apache Commons Lang
The following NOTICE information applies:
Apache Commons Lang
Copyright 2001-2014 The Apache Software Foundation
This product includes software from the Spring Framework,
under the Apache License 2.0 (see: StringUtils.containsWhitespace())
*****************
Mozilla Public License v1.1
*****************
The following binary components are provided under the Mozilla Public License v1.1. See project link for details.
(MPL 1.1) HAPI Base (ca.uhn.hapi:hapi-base:2.2 - http://hl7api.sourceforge.net/)
(MPL 1.1) HAPI Structures (ca.uhn.hapi:hapi-structures-v*:2.2 - http://hl7api.sourceforge.net/)

View File

@ -0,0 +1,57 @@
nifi-social-media-nar
Copyright 2015 The Apache Software Foundation
This product includes software developed at
The Apache Software Foundation (http://www.apache.org/).
******************
Apache Software License v2
******************
The following binary components are provided under the Apache Software License v2
(ASLv2) Apache Commons Lang
The following NOTICE information applies:
Apache Commons Lang
Copyright 2001-2014 The Apache Software Foundation
This product includes software from the Spring Framework,
under the Apache License 2.0 (see: StringUtils.containsWhitespace())
(ASLv2) Apache Commons Codec
The following NOTICE information applies:
Apache Commons Codec
Copyright 2002-2014 The Apache Software Foundation
src/test/org/apache/commons/codec/language/DoubleMetaphoneTest.java
contains test data from http://aspell.net/test/orig/batch0.tab.
Copyright (C) 2002 Kevin Atkinson (kevina@gnu.org)
===============================================================================
The content of package org.apache.commons.codec.language.bm has been translated
from the original php source code available at http://stevemorse.org/phoneticinfo.htm
with permission from the original authors.
Original source copyright:
Copyright (c) 2008 Alexander Beider & Stephen P. Morse.
(ASLv2) Apache Commons Logging
The following NOTICE information applies:
Apache Commons Logging
Copyright 2003-2013 The Apache Software Foundation
(ASLv2) Twitter4J
The following NOTICE information applies:
Copyright 2007 Yusuke Yamamoto
Twitter4J includes software from JSON.org to parse JSON response from the Twitter API. You can see the license term at http://www.JSON.org/license.html
(ASLv2) JOAuth
The following NOTICE information applies:
JOAuth
Copyright 2010-2013 Twitter, Inc
(ASLv2) Hosebird Client
The following NOTICE information applies:
Hosebird Client (hbc)
Copyright 2013 Twitter, Inc.