Commit Graph

221 Commits

Author SHA1 Message Date
David Pilato d7eb375d24 Merge branch 'master' into pr/s3-path-style-access
# Conflicts:
#	plugins/repository-s3/src/main/java/org/elasticsearch/cloud/aws/AwsS3Service.java
#	plugins/repository-s3/src/main/java/org/elasticsearch/cloud/aws/InternalAwsS3Service.java
#	plugins/repository-s3/src/main/java/org/elasticsearch/repositories/s3/S3Repository.java
#	plugins/repository-s3/src/test/java/org/elasticsearch/cloud/aws/TestAwsS3Service.java
2016-04-29 15:21:16 +02:00
xuzha cd527c5b92 Add support for customizing the rule file in ICU tokenizer
Lucene allows creating an ICUTokenizer with a special config argument
that enables customization of the rule-based break iterator by providing
custom rule files.

This commit enables this feature: users can provide a list of RBBI rule
files to the ICU tokenizer.
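
As a rough illustration of what such RBBI rule files contain, here is a minimal ICU4J sketch (not the Elasticsearch plugin API; the rules and text are a hypothetical example) that compiles a tiny rule set and uses it to segment text:

```java
import com.ibm.icu.text.BreakIterator;
import com.ibm.icu.text.RuleBasedBreakIterator;

public class RbbiRuleSketch {
    public static void main(String[] args) {
        // A tiny RBBI rule set: runs of letters and runs of digits each form a token.
        String rules =
            "$Letters = [\\p{Letter}];\n" +
            "$Digits  = [\\p{Digit}];\n" +
            "$Letters+;\n" +
            "$Digits+;\n";

        RuleBasedBreakIterator breaker = new RuleBasedBreakIterator(rules);
        String text = "Elasticsearch 2016 rules";
        breaker.setText(text);

        // Walk the boundaries produced by the compiled rules and print the tokens.
        int start = breaker.first();
        for (int end = breaker.next(); end != BreakIterator.DONE; start = end, end = breaker.next()) {
            String token = text.substring(start, end).trim();
            if (!token.isEmpty()) {
                System.out.println(token);
            }
        }
    }
}
```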

closes #13146
2016-04-22 12:39:20 -07:00
Martijn van Groningen dd2184ab25 ingest: Streamline option naming for several processors:
* `rename` processor, renamed `to` to `target_field`
* `date` processor, renamed `match_field` to `field` and renamed `match_formats` to `formats`
* `geoip` processor, renamed `source_field` to `field` and renamed `fields` to `properties`
* `attachment` processor, renamed `source_field` to `field` and renamed `fields` to `properties`

Closes #17835
2016-04-21 13:40:43 +02:00
Clinton Gormley 098b2e03b5 Removed all references to site plugins from plugin docs 2016-04-12 19:28:09 +02:00
Alexander Reelsen da19ddf3e6 Ingest Attachment: Allow to prevent base64 conversions by using raw bytes (#16601)
CBOR is natively supported in Elasticsearch and allows for byte arrays.
This means that by using CBOR the user can avoid base64 conversions
for the data being sent back and forth.

This PR adds support for extracting data from a byte array in addition to
a string. This also required adding a ByteArrayValueSource class.
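
As a hedged sketch of the idea (assuming the Elasticsearch XContent API of that era; the field name, file path and index/type names are hypothetical), a document with a raw binary field can be built in CBOR so no base64 step is needed:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;

public class CborAttachmentSketch {
    public static void main(String[] args) throws IOException {
        // Read the attachment as raw bytes -- no base64 encoding step.
        byte[] fileBytes = Files.readAllBytes(Paths.get("document.pdf"));

        // CBOR writes byte[] fields natively, whereas JSON would need base64.
        XContentBuilder source = XContentFactory.cborBuilder()
            .startObject()
                .field("data", fileBytes)
            .endObject();

        // The builder can then back an index request, e.g.
        // client.prepareIndex("my-index", "my-type").setSource(source).get();
        System.out.println("CBOR payload built: " + source.bytes().length() + " bytes");
    }
}
```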
2016-04-11 14:14:56 +02:00
Clinton Gormley 88c5dfeca4 Docs: Removed references to deprecated functionality
* search_type=count
* DFS in term vectors
* Replaced string with text/keyword as appropriate
2016-04-07 13:33:35 +02:00
Clinton Gormley 7d4ed5b19e Changed JAVA_OPTS to ES_JAVA_OPTS in plugin docs 2016-04-03 16:52:37 +02:00
Clinton Gormley 6ff947427d Fixed plugin docs links to dir layouts 2016-04-03 16:50:28 +02:00
javanna 27d4994aff Merge branch 'master' into enhancement/remove_node_client_setting 2016-03-24 18:10:11 +01:00
javanna 030453d320 Merge branch 'master' into enhancement/remove_node_client_setting 2016-03-23 11:25:34 +01:00
David Pilato e907b7c11e Check that S3 setting `buffer_size` is always lower than `chunk_size`
We can be better at checking `buffer_size` and `chunk_size` for S3 repositories.
For example, we know that:

* `buffer_size` should be more than `5mb`
* `chunk_size` should be no more than `5tb`
* `buffer_size` should be lower than `chunk_size`

Otherwise, setting `buffer_size` is useless.

For the record:

`chunk_size` is a snapshot setting, whatever the repository implementation is.
`buffer_size` is an S3 implementation setting.

Let's say that you are snapshotting a 500mb file. If you set `chunk_size` to `200mb`, then the snapshot service will ask the S3 repository to store 3 files with the following sizes:

* `200mb`
* `200mb`
* `100mb`

If you set `buffer_size` to `100mb` (AWS maximum size recommendation), the first file of `200mb` will be uploaded to S3 using the multipart feature in 2 parts, and the workflow is basically the following (a small arithmetic sketch follows this list):

* create the multipart request and get back an `id` from AWS S3 platform
* upload part1: `100mb`
* upload part2: `100mb`
* "commit" the full upload using the `id`.
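
As a rough sketch of that arithmetic (plain Java, no AWS calls; the sizes are the ones from the example above):

```java
public class S3ChunkMath {
    // Split a total size into parts of at most partBytes each.
    static long[] split(long totalBytes, long partBytes) {
        int fullParts = (int) (totalBytes / partBytes);
        long remainder = totalBytes % partBytes;
        long[] parts = new long[fullParts + (remainder > 0 ? 1 : 0)];
        for (int i = 0; i < fullParts; i++) {
            parts[i] = partBytes;
        }
        if (remainder > 0) {
            parts[parts.length - 1] = remainder;
        }
        return parts;
    }

    public static void main(String[] args) {
        long mb = 1024 * 1024;
        // A 500mb snapshot file split by chunk_size = 200mb -> 200mb, 200mb, 100mb
        for (long chunk : split(500 * mb, 200 * mb)) {
            // Each chunk larger than buffer_size = 100mb is itself uploaded
            // as a multipart request made of 100mb parts.
            System.out.println(chunk / mb + "mb -> " + split(chunk, 100 * mb).length + " part(s)");
        }
    }
}
```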

Closes #17244.
2016-03-23 10:39:54 +01:00
Jun Ohtani a9a0f262af Analysis Kuromoji: Add nbest option and NumberFilter
Add nbest_cost and nbest_examples parameter to KuromojiTokenizerFactory
Add KuromojiNumberFilterFactory
2016-03-22 20:09:56 +09:00
javanna bf390a935e Merge branch 'master' into enhancement/remove_node_client_setting 2016-03-21 17:18:23 +01:00
Clinton Gormley d83e12094e Docs: Added redirect entries for multicast plugin and the cloud plugins 2016-03-16 12:31:00 +01:00
Jason Tedor 8a05c2a2be Bootstrap does not set system properties
Today, certain bootstrap properties are set and read via system
properties. This action-at-distance way of managing these properties is
rather confusing, and completely unnecessary. But another problem exists
with setting these as system properties. Namely, these system properties
are interpreted as Elasticsearch settings, not all of which are
registered. This leads to Elasticsearch failing to startup if any of
these special properties are set. Instead, these properties should be
kept as local as possible, and passed around as method parameters where
needed. This eliminates the action-at-distance way of handling these
properties, and eliminates the need to register these non-setting
properties. This commit does exactly that.

Additionally, today we use the "-D" command line flag to set the
properties, but this is confusing because "-D" is a special flag to the
JVM for setting system properties. This creates confusion because some
"-D" properties should be passed via arguments to the JVM (so via
ES_JAVA_OPTS), and some should be passed as arguments to
Elasticsearch. This commit changes the "-D" flag for Elasticsearch
settings to "-E".
2016-03-13 20:09:15 -04:00
Ryan Ernst 5f3d0067f8 Merge pull request #17024 from rjernst/cli-parsing
Cli: Switch to jopt-simple
2016-03-11 12:35:39 -08:00
Ryan Ernst 3f44e1d429 Remove old reference to site plugins example in docs 2016-03-11 11:53:20 -08:00
Ryan Ernst 591fb8f028 Merge branch 'master' into cli-parsing 2016-03-11 10:45:05 -08:00
Ed Winn c4934f5250 Current link returns 404. Updated 2016-03-10 16:52:30 -07:00
Ryan Ernst 3836f3a736 Remove reference to standalonerunner 2016-03-08 13:40:39 -08:00
javanna e5d9328a2d [DOCS] adapt docs to node.client setting removal 2016-03-05 10:55:19 +01:00
Martijn van Groningen 116acee1dd Merge pull request #16946 from dedemorton/ingest_doc_edit
Improve the ingest documentation.
2016-03-04 11:49:19 +01:00
DeDe Morton 4d0124e65c Edits to ingest plugin docs 2016-03-03 22:49:31 -08:00
Lee Hinman 6adbbff97c Fix organization rename in all files in project
Basically a query-replace of "https://github.com/elasticsearch/" with "https://github.com/elastic/"
2016-03-03 12:04:13 -07:00
Clinton Gormley 69b5b1920f Merge pull request #16907 from centic9/patch-2
Elasticsearch monitoring support for Dynatrace Application Monitoring
2016-03-02 15:25:54 +01:00
Clinton Gormley 5bb744bfde Changed v3.0.0 to v5.0.0 in plugin docs 2016-03-02 11:57:42 +01:00
David Pilato 7a42014909 Upgrade Azure Storage client to 4.0.0
We are using `2.0.0` today but the Azure team now recommends:

```xml
<dependency>
    <groupId>com.microsoft.azure</groupId>
    <artifactId>azure-storage</artifactId>
    <version>4.0.0</version>
</dependency>
```

This new version fixes the timeout issues we have seen with Azure storage, although #15080 adds timeout support.
Azure storage client 2.0.0 was not correctly passing this value when calling Azure services.

Note that the timeout is a server-side timeout and not a client-side timeout.
It means that it will only raise a timeout when:

* the upload of the blob is complete
* the Azure service is not able to process the blob (and store it) within the given time range

In that case it raises an exception which Elasticsearch can deal with:

```
java.io.IOException
    at __randomizedtesting.SeedInfo.seed([91BC11AEF16E073F:6886FA5308FCE4D8]:0)
    at com.microsoft.azure.storage.core.Utility.initIOException(Utility.java:643)
    at com.microsoft.azure.storage.blob.BlobOutputStream.writeBlock(BlobOutputStream.java:444)
    at com.microsoft.azure.storage.blob.BlobOutputStream.access$000(BlobOutputStream.java:53)
    at com.microsoft.azure.storage.blob.BlobOutputStream$1.call(BlobOutputStream.java:388)
    at com.microsoft.azure.storage.blob.BlobOutputStream$1.call(BlobOutputStream.java:385)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: com.microsoft.azure.storage.StorageException: Operation could not be completed within the specified time.
    at com.microsoft.azure.storage.StorageException.translateException(StorageException.java:89)
    at com.microsoft.azure.storage.core.StorageRequest.materializeException(StorageRequest.java:305)
    at com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:175)
    at com.microsoft.azure.storage.blob.CloudBlockBlob.uploadBlockInternal(CloudBlockBlob.java:1006)
    at com.microsoft.azure.storage.blob.CloudBlockBlob.uploadBlock(CloudBlockBlob.java:978)
    at com.microsoft.azure.storage.blob.BlobOutputStream.writeBlock(BlobOutputStream.java:438)
    ... 9 more
```

The following code was used to test this against Azure platform:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.net.URISyntaxException;
import java.security.InvalidKeyException;
import java.util.Random;

import com.microsoft.azure.storage.CloudStorageAccount;
import com.microsoft.azure.storage.StorageException;
import com.microsoft.azure.storage.blob.CloudBlobClient;
import com.microsoft.azure.storage.blob.CloudBlobContainer;
import com.microsoft.azure.storage.blob.CloudBlockBlob;

// `logger` is the logger of the enclosing test class.
public void testDumb() throws URISyntaxException, StorageException, IOException, InvalidKeyException {
    String connectionString = "MY-AZURE-STRING";

    CloudStorageAccount storageAccount = CloudStorageAccount.parse(connectionString);
    CloudBlobClient client = storageAccount.createCloudBlobClient();
    // 1 second server-side timeout so the upload is guaranteed to time out
    client.getDefaultRequestOptions().setTimeoutIntervalInMs(1000);
    CloudBlobContainer container = client.getContainerReference("dumb");
    container.createIfNotExists();
    CloudBlockBlob blob = container.getBlockBlobReference("blob");

    File sourceFile = File.createTempFile("sourceFile", ".tmp");

    try {
        int fileSize = 10000000;

        byte[] buffer = new byte[fileSize];
        Random random = new Random();
        random.nextBytes(buffer);

        logger.info("Generate local file");
        FileOutputStream fos = new FileOutputStream(sourceFile);
        fos.write(buffer);
        fos.close();
        logger.info("End generate local file");

        FileInputStream fis = new FileInputStream(sourceFile);

        logger.info("Start uploading");
        try {
            blob.upload(fis, fileSize);
        } finally {
            fis.close();
        }
        logger.info("End uploading");

    }
    finally {
        if (sourceFile.exists()) {
            sourceFile.delete();
        }
    }
}
```

With 2.0.0, the above code was not raising any exception. With 4.0.0, the exception is now thrown correctly.

The default timeout is 5 minutes. See https://github.com/Azure/azure-storage-java/blob/master/microsoft-azure-storage/src/com/microsoft/azure/storage/core/Utility.java#L352-L375

Closes #12567.

Release notes since 2.0.0:

 * Removed deprecated table AtomPub support.
 * Removed deprecated constructors which take service clients in favor of constructors which take credentials.
 * Added support for "Add" permissions on Blob SAS.
 * Added support for "Create" permissions on Blob and File SAS.
 * Added support for IP Restricted SAS and Protocol SAS.
 * Added support for Account SAS to all services.
 * Added support for Minute and Hour Metrics to FileServiceProperties and added support for File Metrics to CloudAnalyticsClient.
 * Removed deprecated startCopyFromBlob() on CloudBlob. Use startCopy() instead.
 * Removed deprecated Credentials and StorageKey classes. Please use the appropriate methods on StorageCredentialsAccountAndKey instead.

 * Fixed a bug in table where a select on a non-existent field resulted in a null reference exception if the corresponding field in the TableEntity was not nullable.
 * Fixed a bug in table where JsonParser was automatically closing the response stream before it was completely drained causing socket exhaustion.
 * Fixed a bug in StorageCredentialsAccountAndKey.updateKey(String) which prevented valid keys from being set.
 * Added CloudBlobContainer.listBlobs(final String, final boolean) method.
 * Fixed a bug in blob where using AccessConditions on block blob uploads larger than 64MB done with the upload* methods or block blob uploads done with openOutputStream would fail if the blob did not already exist.
 * Added support for setting a proxy per request. Proxy can be set on an OperationContext instance and will be used when that instance is passed to the request method.

 * Added support for SAS to the Azure File service.
 * Added support for Append Blob.
 * Added support for Access Control Lists (ACL) to File Shares.
 * Added support for getting and setting of CORS rules to File service.
 * Added support for ShareStats to File Shares.
 * Added support for copying an Azure File to another Azure File or a Block Blob asynchronously, and aborting Azure File copy operations asynchronously.
 * Added support for copying a Blob to an Azure File asynchronously.
 * Added support for setting a maximum quota property on a File Share.
 * Removed deprecated AuthenticationScheme and its getter and setter. In the future only SharedKey will be used.
 * Removed deprecated getter/setters for all request option properties on the service clients. Please use the default request options getter/setters instead.
 * Removed getSubDirectoryReference() for blob directories and file directories. Use getDirectoryReference() instead.
 * Removed getEntityClass() in TableQuery. Please use getClazzType() instead.
 * Added client-side verification for lease duration and break periods.
 * Deprecated the setters in table for timestamp as this property is only modifiable by the service.
 * Deprecated startCopyFromBlob() on CloudBlob. Use startCopy() instead.
 * Deprecated the Credentials and StorageKey classes. Please use the appropriate methods on StorageCredentialsAccountAndKey instead.
 * Deprecated constructors which take service clients in favor of constructors which take credentials.
 * Fixed a bug where the DateBackwardCompatibility flag was not applied if set on the CloudTableClient default request options.
 * Changed library behavior to retry all exceptions thrown when parsing a response object.
 * Changed behavior to stop removing query parameters passed in with the resource URI if that URI contains a SAS token. Some query parameters such as comp, restype, snapshot and api-version will still be removed.
 * Added support for logging StringToSign to SharedKey and SAS.
 * **Added a connect timeout to prevent hangs when establishing the network connection.**
 * **Made performance enhancements to the BlobOutputStream class.**

 * Fixed a bug where maximum execution time was ignored for file, queue, and table services.
 * **Changed the socket timeout to be set to the service side timeout plus 5 minutes when maximum execution time is not set.**
 * **Changed the socket timeout to default to 5 minutes rather than infinite when neither the service-side timeout nor the maximum execution time is set.**
 * Fixed a bug where MD5 was calculated for commitBlockList even though UseTransactionalMD5 was set to false.
 * Fixed a bug where selecting fields that did not exist returned an error rather than an EntityProperty with a null value.
 * Fixed a bug where table entities with a single quote in their partition or row key could be inserted but not operated on in any other way.

 * Fixed a bug for all listing APIs where next() would sometimes throw an exception if hasNext() had not been called even if there were more elements to iterate on.
 * Added sequence number to the blob properties. This is populated for page blobs.
 * Creating a page blob sets its length property.
 * Added support for page blob sequence numbers and sequence number access conditions.
 * Fixed a bug in abort copy where the lease access condition was not sent to the service.
 * Fixed an issue in startCopyFromBlob where if the URI of the source blob contained certain non-ASCII characters they would not be encoded appropriately. This would result in Authorization failures.
 * Fixed a small performance issue in XML serialization.
 * Fixed a bug in BlobOutputStream and FileOutputStream where flush added data to a request pool rather than immediately committing it to the Azure service.
 * Refactored to remove the blob, queue, and file package dependency on table in the error handling code.
 * Added additional client-side logging for REST requests, responses, and errors.

Closes #15976.
2016-02-29 15:00:34 +01:00
Clinton Gormley 8830817fa3 Merge pull request #16827 from ayushsangani/patch-3
Modify path of Servlet Transport
2016-02-29 00:59:44 +01:00
Clinton Gormley 2d56eed306 Merge pull request #16785 from dsem/patch-1
Fix python script filename extension
2016-02-28 23:04:41 +01:00
Itamar Syn-Hershko 8ea6264f55 Format settings in discovery-ec2 docs
Closes #16846
2016-02-28 11:15:22 -05:00
David Pilato 90fba97a30 Moves GCE settings to the new infra
Closes #16720.
2016-02-19 17:00:39 -08:00
David Pilato 55d9b6878b Deprecate Mapper Attachment Plugin
Now that we have the ingest-attachment plugin (https://github.com/elastic/elasticsearch/pull/16490), we should deprecate the mapper-attachment plugin.

Closes #16650.
2016-02-15 16:40:12 +01:00
DeDe Morton 461f329cd8 Add ingest plugins to Elasticsearch plugin docs 2016-02-12 07:37:50 -08:00
Jim Ferenczi b146f3ecb3 Pack all the plugin files into a single folder named elasticsearch at the root of the plugin zip. 2016-02-10 10:13:05 +01:00
Nik Everett 1c741f56b9 Merge pull request #16529 from dongjoon-hyun/fix_typos_in_docs
Fix typos in docs.
2016-02-09 14:00:06 -05:00
Alexander Reelsen 0d4711c2fc Ingest: Add attachment processor
This is a simple port of the mapper attachment plugin to the ingest
functionality, no new features. The only option is to limit
the number of chars to prevent indexing of huge documents.

Fields can be selected in the processor as well.

Close #16303
2016-02-09 17:03:30 +01:00
Dongjoon Hyun 21ea552070 Fix typos in docs. 2016-02-09 02:07:32 -08:00
Jim Ferenczi 7d0181b5d4 Rename bin/plugin in bin/elasticsearch-plugin 2016-02-05 10:09:14 +01:00
Ryan Ernst 3787f437ec Merge branch 'master' into remove_multicast 2016-02-01 07:25:45 -08:00
Ryan Ernst b8f08c35ec Plugin: Remove multicast plugin
closes #16310.
2016-01-29 18:41:31 -08:00
Ofir 22b91c2322 Update analysis.asciidoc 2016-01-28 13:02:30 +02:00
Ofir ac349373f0 Update analysis.asciidoc 2016-01-28 13:02:20 +02:00
Ofir 03e04daf29 Added the Network Addresses community plugin link
Updated analysis.asciidoc with a link to the Network Addresses Analysis Plugin.
2016-01-28 12:16:28 +02:00
Martijn van Groningen b784b81665 docs: Remove the fact that ingest was a plugin from the docs. 2016-01-26 15:49:47 +01:00
Tal Levy 84c2488074 Merge pull request #16216 from talevy/ingest-docs-migration
[Ingest] update ingest docs
2016-01-25 12:40:22 -08:00
Tal Levy 894efa3fb6 update ingest docs
- move ingest plugin docs to core reference docs
- move geoip processor docs to plugins/ingest-geoip.asciidoc
- add missing options tables for some processors
- add description of pipeline definition
- add description of processor definitions including common parameters
  like "tag" and "on_failure"
2016-01-25 12:08:17 -08:00
javanna 36d98478bf Merge branch 'master' into feature/ingest 2016-01-25 18:01:09 +01:00
David Pilato aff3c564b3 Add seoul endpoints for EC2 discovery and S3 snapshots
Add documentation for #16167
2016-01-25 08:46:51 +01:00
Tal Levy 9b5739c43d docs: add docs for on_failure support in ingest pipelines 2016-01-24 19:52:18 -08:00
Ryan Ernst 3b78267c71 Plugins: Remove site plugins
Site plugins used to be used for things like kibana and marvel, but
there is no longer a need since kibana (and marvel as a kibana plugin)
uses node.js. This change removes site plugins, as well as the flag for
jvm plugins. Now all plugins are jvm plugins.
2016-01-16 22:45:37 -08:00
Tal Levy 1754eece66 introduce DeDotProcessor
fixes #15944.
2016-01-15 11:35:18 -08:00
javanna 9c06736dbd Merge branch 'master' into feature/ingest 2016-01-15 10:11:56 +01:00
Ryan Schneider c42149ebe2 merge PR 14212 - add "Best Practices" section to the cloud-aws docs 2016-01-13 11:01:24 -08:00
Jason Bryan 45f192b7cc Minor documentation updates. 2016-01-12 15:52:32 -05:00
Martijn van Groningen 9ec2e140b8 Merge branch 'master' into feature/ingest 2016-01-07 10:44:21 +01:00
debadair 71d146b940 Docs: Removed NSFW link. 2016-01-06 11:01:01 -08:00
Tal Levy 11d4417251 [doc] update set and append processor doc examples 2016-01-05 11:45:52 -08:00
Tal Levy 15ecf76d54 [doc] update convert processor example format 2016-01-05 09:54:13 -08:00
Tal Levy 183386173c cleanup simulate test and add docs 2016-01-04 16:10:42 -08:00
Tal Levy 9dd54f2b4f Introduce a fail processor 2016-01-04 11:33:50 -08:00
Martijn van Groningen 79161d77df Merge remote-tracking branch 'es/master' into feature/ingest 2015-12-29 12:56:09 +01:00
David Pilato 96b3166c6d Add timeout settings (default to 5 minutes)
By default, Azure does not time out. This commit adds support for a timeout setting which defaults to 5 minutes.
It's a timeout **per request**, not a global timeout for a snapshot request.

It can be defined globally, per account or both. Defaults to `5m`.

```yml
cloud:
    azure:
        storage:
            timeout: 10s
            my_account1:
                account: your_azure_storage_account1
                key: your_azure_storage_key1
                default: true
            my_account2:
                account: your_azure_storage_account2
                key: your_azure_storage_key2
                timeout: 30s
```

In this example, timeout will be 10s for `my_account1` and 30s for `my_account2`.
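
A hedged sketch of the described resolution order (account-level timeout, then global timeout, then the `5m` default), using the Elasticsearch `Settings` API; the helper and setting keys mirror the example above and are illustrative only:

```java
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;

public class AzureTimeoutSketch {
    // Resolve the timeout for one storage account: the account-level setting wins,
    // then the global `cloud.azure.storage.timeout`, then the 5 minute default.
    static TimeValue timeoutFor(Settings settings, String account) {
        TimeValue globalTimeout = settings.getAsTime("cloud.azure.storage.timeout",
                TimeValue.timeValueMinutes(5));
        return settings.getAsTime("cloud.azure.storage." + account + ".timeout", globalTimeout);
    }
}
```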

Closes #14277.
2015-12-29 11:40:48 +01:00
David Pilato a49fe189b0 Support global `repositories.azure.` settings
All those repository settings can also be defined globally in the `elasticsearch.yml` file using the `repositories.azure.` prefix. For example:

```yml
repositories.azure:
    container: backup-container
    base_path: backups
    chunk_size: 32m
    compress: true
```

Closes #13776.
2015-12-29 10:43:01 +01:00
Martijn van Groningen 4a0ec0da26 Merge remote-tracking branch 'es/master' into feature/ingest 2015-12-24 15:34:20 +01:00
Martijn van Groningen dbbb296322 added a `node.ingest` setting that controls whether ingest is active or not. Defaults to `false`.
If `node.ingest` isn't active, then ingest-related API calls fail, and if the `pipeline_id` parameter is set, then index and bulk requests fail.
2015-12-22 22:38:49 +01:00
javanna 46f99a11a0 Add append processor
The append processor allows appending one or more values to an existing list; adding a new list with the provided values if the field doesn't exist yet; or converting an existing scalar into a list and adding the provided values to the newly created list.

This required adapting the behaviour of IngestDocument#appendFieldValue; support for templating was also added to it.
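
A minimal sketch of those append semantics on a plain map-backed document (hypothetical helper, not the actual IngestDocument API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class AppendSketch {
    @SuppressWarnings("unchecked")
    static void append(Map<String, Object> document, String field, List<Object> values) {
        Object existing = document.get(field);
        List<Object> list;
        if (existing == null) {
            // field does not exist yet: create a new list
            list = new ArrayList<>();
        } else if (existing instanceof List) {
            // field is already a list: append to it
            list = (List<Object>) existing;
        } else {
            // field is a scalar: convert it into a list first
            list = new ArrayList<>();
            list.add(existing);
        }
        list.addAll(values);
        document.put(field, list);
    }

    public static void main(String[] args) {
        Map<String, Object> doc = new HashMap<>();
        doc.put("tags", "production");
        append(doc, "tags", Arrays.asList("elasticsearch", "ingest"));
        System.out.println(doc); // {tags=[production, elasticsearch, ingest]}
    }
}
```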

Closes #14324
2015-12-22 16:11:43 +01:00
Costin Leau 3204e87220 Restrict usage to HDFS only 2015-12-20 15:53:18 +02:00
Costin Leau 323111b715 [DOC] simplify docs for repository-hdfs 2015-12-20 01:49:28 +02:00
Martijn van Groningen 6dfcee6937 Added an internal reload pipeline api that makes sure pipeline changes are visible on all ingest nodes after modifications have been made.
* Pipeline store can now only start when there is no .ingest index or all primary shards of .ingest have been started
* IngestPlugin sets the `node.ingest` setting to `true`. This is used to figure out which nodes to send the refresh request to. This setting isn't yet configurable; this will be done in a follow-up issue.
* Removed the background pipeline updater and added logic to deal with specific scenarios to reload all pipelines.
* Ingest services are no longer managed by Guice. Only the bootstrapper gets managed by Guice, and that constructs
all the services/components ingest will need.
2015-12-18 18:24:27 +01:00
Martijn van Groningen e8a8e22e09 Added template infrastructure, removed the meta processor and added template support to the set and remove processors.
Added ingest-wide template infrastructure to IngestDocument
Added a TemplateService interface that the ingest framework uses
Added a TemplateService implementation that the ingest plugin provides that delegates to the ES' script service
Cut SetProcessor over to use the template infrastructure for the `field` and `value` settings.
Removed the MetaDataProcessor
Removed dependency on mustache library
Added a qa ingest mustache rest test so that the ingest and mustache integration can be tested.
2015-12-18 17:35:53 +01:00
javanna 5f2df6b95a Merge branch 'master' into feature/ingest 2015-12-18 10:34:07 +01:00
Costin Leau 210657a453 [DOC] escape # in programlisting 2015-12-15 16:44:27 +02:00
Costin Leau 4426ed0a09 [DOCS] Link docs on repository-hdfs plugin
relates #15191
2015-12-15 14:52:36 +02:00
Clinton Gormley 82eb498b29 Docs: Updated plugin author help for Gradle
Relates to #15280
2015-12-15 12:27:47 +01:00
Costin Leau 7bca97bba6 HDFS Snapshot/Restore plugin
Migrated from ES-Hadoop. Contains several improvements regarding:

* Security
Takes advantage of the pluggable security in ES 2.2 and uses it in order
to grant the necessary permissions to the Hadoop libs. It relies on a
dedicated DomainCombiner to grant permissions only when needed, and only to
the libraries installed in the plugin folder.
Adds security checks for SpecialPermission/scripting and provides out-of-the-box
permissions for the latest Hadoop 1.x (1.2.1) and 2.x (2.7.1).

* Testing
Uses a customized local FS to perform actual integration testing of the
Hadoop stack (and thus to make sure the proper permissions and ACC blocks
are in place), however without requiring extra permissions for testing.
If needed, a MiniDFS cluster is provided (though it requires extra
permissions to bind ports).
Provides a RestIT test.

* Build system
Picks the build system used in ES (still Gradle)
2015-12-14 21:50:09 +02:00
Martijn van Groningen 503a166b71 Merge remote-tracking branch 'es/master' into feature/ingest 2015-12-11 14:32:16 +01:00
David Pilato c1f7171e61 Add support for path_style_access
From https://github.com/elastic/elasticsearch-cloud-aws/pull/159

Add a new option `path_style_access` for S3 buckets. It adds support for path style access for [virtual hosting of buckets](http://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html).
Defaults to `false`.
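
For illustration, a hedged sketch of what enabling path-style access means at the AWS SDK level (assuming the AWS SDK for Java v1 `S3ClientOptions` builder; this is not the plugin's exact wiring, and the bucket/key names below are hypothetical):

```java
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.S3ClientOptions;

public class PathStyleAccessSketch {
    public static void main(String[] args) {
        AmazonS3Client client = new AmazonS3Client();

        // path_style_access: true -> requests go to
        //   https://s3.amazonaws.com/my-bucket/my-key
        // instead of the virtual-hosted style
        //   https://my-bucket.s3.amazonaws.com/my-key
        client.setS3ClientOptions(S3ClientOptions.builder()
            .setPathStyleAccess(true)
            .build());
    }
}
```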

Closes https://github.com/elastic/elasticsearch-cloud-aws/issues/124.
2015-12-10 09:00:31 +01:00
javanna 283d9c1523 [TEST] adjust processor docs. remove throws exception when the field is not there. 2015-12-09 18:24:26 +01:00
David Pilato 7dcb40bcac Add support for proxy authentication for s3 and ec2
When using S3 or EC2, it was possible to use a proxy to access the EC2 or S3 API, but it was not possible to set a username and password for the proxy.

This commit adds support for this. Also, to make all that consistent, proxy settings for both plugins have been renamed:

* from `cloud.aws.proxy_host` to `cloud.aws.proxy.host`
* from `cloud.aws.ec2.proxy_host` to `cloud.aws.ec2.proxy.host`
* from `cloud.aws.s3.proxy_host` to `cloud.aws.s3.proxy.host`
* from `cloud.aws.proxy_port` to `cloud.aws.proxy.port`
* from `cloud.aws.ec2.proxy_port` to `cloud.aws.ec2.proxy.port`
* from `cloud.aws.s3.proxy_port` to `cloud.aws.s3.proxy.port`

New settings are `proxy.username` and `proxy.password`.

```yml
cloud:
    aws:
        protocol: https
        proxy:
            host: proxy1.company.com
            port: 8083
            username: myself
            password: theBestPasswordEver!
```

You can also set different proxies for `ec2` and `s3`:

```yml
cloud:
    aws:
        s3:
            proxy:
                host: proxy1.company.com
                port: 8083
                username: myself1
                password: theBestPasswordEver1!
        ec2:
            proxy:
                host: proxy2.company.com
                port: 8083
                username: myself2
                password: theBestPasswordEver2!
```

Note that `password` is filtered with `SettingsFilter`.

We also fix a potential issue in the S3 repository. We were supposed to accept the key/secret set under either `cloud.aws` or `cloud.aws.s3`, but the actual code never implemented that.

It was:

```java
account = settings.get("cloud.aws.access_key");
key = settings.get("cloud.aws.secret_key");
```

We replaced that by:

```java
String account = settings.get(CLOUD_S3.KEY, settings.get(CLOUD_AWS.KEY));
String key = settings.get(CLOUD_S3.SECRET, settings.get(CLOUD_AWS.SECRET));
```

Also, we extract all S3 settings in `AwsS3Service`, as is already the case for the `AwsEc2Service` class.

Closes #15268.
2015-12-07 23:10:54 +01:00
Tal Levy 45f48ac126 update all processors to only operate on one field at a time when possible 2015-12-07 08:30:00 -08:00
javanna 6c43137413 Merge branch 'master' into feature/ingest 2015-12-04 14:10:01 +01:00
Ryan 94027f1461 Made requested changes
Moved section to before testing section and made requested formatting changes.
2015-12-03 09:27:20 -08:00
Tal Levy 56da7b32ed add ability to define custom grok patterns within processor config 2015-12-03 08:24:07 -08:00
Ryan 5890c9a813 Update repository-s3.asciidoc
Documentation of AWS VPC public vs. private subnets and their effects on accessing S3.
2015-12-02 16:07:57 -08:00
Martijn van Groningen f427ad2094 docs: undo accidental rename added via: 5e07644788 2015-12-02 12:17:43 +01:00
Martijn van Groningen a9ecde041b Merge branch 'master' into feature/ingest 2015-12-02 11:21:15 +01:00
javanna 5e07644788 [DOCS] add missing comma 2015-12-01 20:07:17 +01:00
javanna 6c0510b01d Make rename processor less error prone
Rename processor now checks whether the field to rename exists and throws an exception if it doesn't. It also checks that the new field to rename to doesn't exist yet, and throws an exception otherwise. We also make sure that the rename operation is atomic, otherwise things may break between the remove and the set and we'd leave the document in an inconsistent state.

Note that the requirement for the new field name to not exist simplifies the use case for e.g. { "rename" : { "list.1": "list.2"} }, as such a rename wouldn't be accepted if list is actually a list, given that either list.2 already exists or the index is out of bounds for the existing list. If one really wants to replace an existing field, that field needs to be removed first through the remove processor and then rename can be used.
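
A minimal sketch of that check-then-atomic-rename behaviour on a plain map-backed document (hypothetical helper, not the actual processor code):

```java
import java.util.HashMap;
import java.util.Map;

public class RenameSketch {
    static void rename(Map<String, Object> document, String field, String newField) {
        if (!document.containsKey(field)) {
            throw new IllegalArgumentException("field [" + field + "] doesn't exist");
        }
        if (document.containsKey(newField)) {
            throw new IllegalArgumentException("field [" + newField + "] already exists");
        }
        Object value = document.remove(field);
        try {
            document.put(newField, value);
        } catch (RuntimeException e) {
            // restore the removed field so the document is not left in an inconsistent state
            document.put(field, value);
            throw e;
        }
    }

    public static void main(String[] args) {
        Map<String, Object> doc = new HashMap<>();
        doc.put("to", "user@example.com");
        rename(doc, "to", "target_field");
        System.out.println(doc); // {target_field=user@example.com}
    }
}
```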
2015-12-01 19:58:24 +01:00
David Pilato bed9bf19c6 S3 repository: fix spelling error
Reported at https://github.com/elastic/elasticsearch-cloud-aws/pull/221
2015-11-30 16:01:55 +01:00
Martijn van Groningen fdf4543b8e Renamed `add` processor to `set` processor.
This name makes more sense, because if a field already exists it overwrites it.
2015-11-30 15:03:20 +01:00
Martijn van Groningen 467a47670c Merge remote-tracking branch 'es/master' into feature/ingest 2015-11-30 10:24:27 +01:00
Clinton Gormley 2ab14cb21c Merge pull request #14900 from shikhar/patch-1
link to es-restlog plugin
2015-11-28 15:09:22 +01:00
Martijn van Groningen 9d1fa0d6da ingest: Add `meta` processor that allows modifying the metadata attributes of the document being processed 2015-11-26 15:46:32 +01:00
Martijn van Groningen a84d35ab3f Merge remote-tracking branch 'es/master' into feature/ingest 2015-11-25 18:45:05 +01:00
Jimmi Dyson c4ee350c5e Add Kubernetes discovery community plugin 2015-11-25 12:54:29 +00:00
javanna 8f1f5d4da0 Split mutate processor into one processor per function 2015-11-24 14:31:53 +01:00
javanna eeb51ce8d0 Merge branch 'master' into feature/ingest 2015-11-24 10:23:53 +01:00
David Pilato 28109a18a2 Fix example for s3 repository bucket name
Closes #13588.
2015-11-23 13:14:02 +01:00
David Pilato 5b0e2823b1 Merge branch 'docs/mapper-attachments' 2015-11-23 12:14:31 +01:00
javanna 36655b688c Merge branch 'master' into feature/ingest 2015-11-23 10:05:17 +01:00