OpenSearch/plugins/cloud-aws
Robert Muir 4040f194f5 Refactor pluginservice
Closes #12367

Squashed commit of the following:

commit 9453c411798121aa5439c52e95301f60a022ba5f
Merge: 3511a9c 828d8c7
Author: Robert Muir <rmuir@apache.org>
Date:   Wed Jul 22 08:22:41 2015 -0400

    Merge branch 'master' into refactor_pluginservice

commit 3511a9c616503c447de9f0df9b4e9db3e22abd58
Author: Ryan Ernst <ryan@iernst.net>
Date:   Tue Jul 21 21:50:15 2015 -0700

    Remove duplicated constant

commit 4a9b5b4621b0ef2e74c1e017d9c8cf624dd27713
Author: Ryan Ernst <ryan@iernst.net>
Date:   Tue Jul 21 21:01:57 2015 -0700

    Add check that plugin must specify at least site or jvm

commit 19aef2f0596153a549ef4b7f4483694de41e101b
Author: Ryan Ernst <ryan@iernst.net>
Date:   Tue Jul 21 20:52:58 2015 -0700

    Change plugin "plugin" property to "classname"

commit 07ae396f30ed592b7499a086adca72d3f327fe4c
Author: Robert Muir <rmuir@apache.org>
Date:   Tue Jul 21 23:36:05 2015 -0400

    remove test with no methods

commit 550e73bf3d0f94562f4dde95239409dc5a24ce25
Author: Robert Muir <rmuir@apache.org>
Date:   Tue Jul 21 23:31:58 2015 -0400

    fix loading to use classname

commit 04463aed12046da0da5cac2a24c3ace51a79f799
Author: Robert Muir <rmuir@apache.org>
Date:   Tue Jul 21 23:24:19 2015 -0400

    rename to classname

commit 9f3afadd1caf89448c2eb913757036da48758b2d
Author: Ryan Ernst <ryan@iernst.net>
Date:   Tue Jul 21 20:18:46 2015 -0700

    moved PluginInfo and refactored parsing from properties file

commit df63ccc1b8b7cc64d3e59d23f6c8e827825eba87
Author: Robert Muir <rmuir@apache.org>
Date:   Tue Jul 21 23:08:26 2015 -0400

    fix test

commit c7febd844be358707823186a8c7a2d21e37540c9
Author: Robert Muir <rmuir@apache.org>
Date:   Tue Jul 21 23:03:44 2015 -0400

    remove test

commit 017b3410cf9d2b7fca1b8653e6f1ebe2f2519257
Author: Robert Muir <rmuir@apache.org>
Date:   Tue Jul 21 22:58:31 2015 -0400

    fix test

commit c9922938df48041ad43bbb3ed6746f71bc846629
Merge: ad59af4 01ea89a
Author: Robert Muir <rmuir@apache.org>
Date:   Tue Jul 21 22:37:28 2015 -0400

    Merge branch 'master' into refactor_pluginservice

commit ad59af465e1f1ac58897e63e0c25fcce641148a7
Author: Areek Zillur <areek.zillur@elasticsearch.com>
Date:   Tue Jul 21 19:30:26 2015 -0400

    [TEST] Verify expected number of nodes in cluster before issuing shardStores request

commit f0f5a1e087255215b93656550fbc6bd89b8b3205
Author: Lee Hinman <lee@writequit.org>
Date:   Tue Jul 21 11:27:28 2015 -0600

    Ignore EngineClosedException during translog fsync

    When performing an operation on a primary, the state is captured and the
    operation is performed on the primary shard. The original request is
    then modified to increment the version of the operation as preparation
    for it to be sent to the replicas.

    If the request first fails on the primary during the translog sync
    (because the Engine is already closed due to shadow primaries closing
    the engine on relocation), then the operation is retried on the new primary
    after being modified for the replica shards. It will then fail due to the
    version being incorrect (the document does not yet exist but the request
    expects a version of "1").

    Order of operations:

    - Request is executed against primary
    - Request is modified (version incremented) so it can be sent to replicas
    - Engine's translog is fsync'd if necessary (failing, and throwing an exception)
    - Modified request is retried against new primary

    This change ignores the exception where the engine is already closed
    when syncing the translog (similar to how we ignore exceptions when
    refreshing the shard if the ?refresh=true flag is used).

commit 4ac68bb1658688550ced0c4f479dee6d8b617777
Author: Shay Banon <kimchy@gmail.com>
Date:   Tue Jul 21 22:37:29 2015 +0200

    Replica allocator unit tests
    First batch of unit tests to verify the behavior of replica allocator

commit 94609fc5943c8d85adc751b553847ab4cebe58a3
Author: Jason Tedor <jason@tedor.me>
Date:   Tue Jul 21 14:04:46 2015 -0400

    Correctly list blobs in Azure storage to prevent snapshot corruption and do not unnecessarily duplicate Lucene segments in Azure Storage

    This commit addresses an issue that was leading to snapshot corruption for snapshots stored as blobs in Azure Storage.

    The underlying issue is that in cases when multiple snapshots of an index were taken and persisted into Azure Storage, snapshots subsequent
    to the first would repeatedly overwrite the snapshot files. This issue renders all snapshots except the final one useless.

    The root cause of this is String concatenation involving null. In particular, to list all of the blobs in a snapshot directory in
    Azure the code would use the method listBlobsByPrefix where the prefix is null. In the listBlobsByPrefix method, the path keyPath + prefix
    is constructed. However, per 5.1.11, 5.4 and 15.18.1 of the Java Language Specification, the reference null is first converted to the string
    "null" before performing the concatenation. This leads to no blobs being returned and therefore the snapshot mechanism would operate as if
    it were writing the first snapshot of the index. The fix is simply to check if prefix is null and handle the concatenation accordingly.

    Upon fixing this issue so that subsequent snapshots would no longer overwrite earlier snapshots, it was discovered that the snapshot metadata
    returned by the listBlobsByPrefix method was not sufficient for the snapshot layer to detect whether or not the Lucene segments had already
    been copied to the Azure storage layer in an earlier snapshot. This led the snapshot layer to unnecessarily duplicate these Lucene segments
    in Azure Storage.

    The root cause of this is known behavior in the CloudBlobContainer.getBlockBlobReference method in the Azure API. Namely, this method
    does not fetch blob attributes from Azure. As such, the lengths of all the blobs appeared to the snapshot layer to be of length zero and
    therefore they would compare as not equal to any new blobs that the snapshot layer is going to persist. To remediate this, the method
    CloudBlockBlob.downloadAttributes must be invoked. This will fetch the attributes from Azure Storage so that a proper comparison of the
    blobs can be performed.

    Closes elastic/elasticsearch-cloud-azure#51, closes elastic/elasticsearch-cloud-azure#99

commit cf1d481ce5dda0a45805e42f3b2e0e1e5d028b9e
Author: Lee Hinman <lee@writequit.org>
Date:   Mon Jul 20 08:41:55 2015 -0600

    Unit tests for `nodesAndVersions` on shared filesystems

    With the `recover_on_any_node` setting, these unit tests check that the
    correct node list and versions are returned.

commit 3c27cc32395c3624f7c794904d9ea4faf2eccbfb
Author: Robert Muir <rmuir@apache.org>
Date:   Tue Jul 21 14:15:59 2015 -0400

    don't fail junit4 integration tests if there are no tests.

    instead fail the failsafe plugin, which means the external cluster will still get shut down

commit 95d2756c5a8c21a157fa844273fc83dfa3c00aea
Author: Alexander Reelsen <alexander@reelsen.net>
Date:   Tue Jul 21 17:16:53 2015 +0200

    Testing: Fix help displaying tests under windows

    The help files use a Unix-based file separator, whereas
    the test relies on the help being based on the file system separator.

    This commit fixes the test to remove all `\r` characters before
    comparing strings.

    The test has also been moved into its own CliToolTestCase, as it does
    not need to be an integration test.

commit 944f06ea36bd836f007f8eaade8f571d6140aad9
Author: Clinton Gormley <clint@traveljury.com>
Date:   Tue Jul 21 18:04:52 2015 +0200

    Refactored check_license_and_sha.pl to accept a license dir and package path

    In preparation for the move to building the core zip, tar.gz, rpm, and deb as separate modules, refactored check_license_and_sha.pl to:

    * accept a license dir and path to the package to check on the command line
    * to be able to extract zip, tar.gz, deb, and rpm
    * all packages except rpm will work on Windows

commit 2585431e8dfa5c82a2cc5b304cd03eee9bed7a4c
Author: Chris Earle <pickypg@users.noreply.github.com>
Date:   Tue Jul 21 08:35:28 2015 -0700

    Updating breaking changes

    - field names cannot be mapped with `.` in them
    - fixed asciidoc issue where the list was not recognized as a list

commit de299b9d3f4615b12e2226a1e2eff5a38ecaf15f
Author: Shay Banon <kimchy@gmail.com>
Date:   Tue Jul 21 13:27:52 2015 +0200

    Replace primaryPostAllocated flag and use UnassignedInfo
    There is no need to maintain additional state as to whether a primary was allocated post API creation on the index routing table; we already hold all this information in the UnassignedInfo class.
    closes #12374

commit 43080bff40f60bedce5bdbc92df302f73aeb9cae
Author: Alexander Reelsen <alexander@reelsen.net>
Date:   Tue Jul 21 15:45:05 2015 +0200

    PluginManager: Fix bin/plugin calls in scripts/bats test

    The release and smoke test Python scripts used to install
    plugins in the old, deprecated way.

    Also the BATS testing suite installed/removed plugins in that
    way. Here the marvel tests have been removed, as marvel currently
    does not work with the master branch.

    In addition documentation has been updated as well, where it was
    still missing.

commit b81ccba48993bc13c7678e6d979fd96998499233
Author: Boaz Leskes <b.leskes@gmail.com>
Date:   Tue Jul 21 11:37:50 2015 +0200

    Discovery: make sure NodeJoinController.ElectionCallback is always called from the update cluster state thread

    This is important for correct handling of the joining thread. This causes assertions to trip in our test runs. See http://build-us-00.elastic.co/job/es_g1gc_master_metal/11653/ as an example

    Closes #12372

commit 331853790bf29e34fb248ebc4c1ba585b44f5cab
Author: Boaz Leskes <b.leskes@gmail.com>
Date:   Tue Jul 21 15:54:36 2015 +0200

    Remove left over no commit from TransportReplicationAction

    It asks to double check thread pool rejection. I did and don't see problems with it.

commit e5724931bbc1603e37faa977af4235507f4811f5
Author: Alexander Reelsen <alexander@reelsen.net>
Date:   Tue Jul 21 15:31:57 2015 +0200

    CliTool: Various PluginManager fixes

    The new plugin manager parser was not called correctly in the scripts.
    In addition the plugin manager now creates a plugins/ directory in case
    it does not exist.

    Also the integration tests called the plugin manager in the deprecated way.

commit 7a815a370f83ff12ffb12717ac2fe62571311279
Author: Alexander Reelsen <alexander@reelsen.net>
Date:   Tue Jul 21 13:54:18 2015 +0200

    CLITool: Port PluginManager to use CLITool

    In order to unify the handling and reuse the CLITool infrastructure
    the plugin manager should make use of this as well.

    This obsoletes the -i and --install options but requires the user
    to use `install` as the first argument of the CLI.

    This is basically just a port of the existing functionality, which
    is also the reason why this is not a refactoring of the plugin manager,
    which will come in a separate commit.

commit 7f171eba7b71ac5682a355684b6da703ffbfccc7
Author: Martijn van Groningen <martijn.v.groningen@gmail.com>
Date:   Tue Jul 21 10:44:21 2015 +0200

    Remove custom execute local logic in TransportSingleShardAction and TransportInstanceSingleOperationAction and rely on transport service to execute locally. (forking thread etc.)

    Change TransportInstanceSingleOperationAction to have a shardActionHandler, so we can execute locally without endless spinning.

commit 0f38e3eca6b570f74b552e70b4673f47934442e1
Author: Ryan Ernst <ryan@iernst.net>
Date:   Tue Jul 21 17:36:12 2015 -0700

    More readMetadata tests and pickiness

commit 880b47281bd69bd37807e8252934321b089c9f8e
Author: Ryan Ernst <ryan@iernst.net>
Date:   Tue Jul 21 14:42:09 2015 -0700

    Started unit tests for plugin service

commit cd7c8ddd7b8c4f3457824b493bffb19c156c7899
Author: Robert Muir <rmuir@apache.org>
Date:   Tue Jul 21 07:21:07 2015 -0400

    fix tests

commit 673454f0b14f072f66ed70e32110fae4f7aad642
Author: Robert Muir <rmuir@apache.org>
Date:   Tue Jul 21 06:58:25 2015 -0400

    refactor pluginservice
2015-07-22 10:45:45 -04:00
licenses [build] cloud-aws doesn't register s3 repos anymore 2015-07-08 15:37:24 +02:00
rest-api-spec/test/cloud_aws Added rest tests for cloud plugins 2015-07-07 19:26:31 +02:00
src Refactor pluginservice 2015-07-22 10:45:45 -04:00
LICENSE.txt Added LICENSE and NOTICE files for all plugins 2015-06-23 12:50:31 +02:00
NOTICE.txt Added LICENSE and NOTICE files for all plugins 2015-06-23 12:50:31 +02:00
README.md CLITool: Port PluginManager to use CLITool 2015-07-21 14:15:39 +02:00
pom.xml Refactor pluginservice 2015-07-22 10:45:45 -04:00

README.md

AWS Cloud Plugin for Elasticsearch

The Amazon Web Services (AWS) Cloud plugin allows the use of the AWS API for the unicast discovery mechanism and adds support for S3 snapshot repositories.

In order to install the plugin, run:

bin/plugin install elasticsearch/elasticsearch-cloud-aws/2.5.1

You need to install a version matching your Elasticsearch version:

Elasticsearch    AWS Cloud Plugin     Docs
master           Build from source    See below
es-1.x           Build from source    2.6.0-SNAPSHOT
es-1.5           2.5.1                2.5.1
es-1.4           2.4.2                2.4.2
es-1.3           2.3.0                2.3.0
es-1.2           2.2.0                2.2.0
es-1.1           2.1.1                2.1.1
es-1.0           2.0.0                2.0.0
es-0.90          1.16.0               1.16.0

To build a SNAPSHOT version, you need to build it with Maven:

mvn clean install
plugin install cloud-aws \
       --url file:target/releases/elasticsearch-cloud-aws-X.X.X-SNAPSHOT.zip

Generic Configuration

The plugin will default to using IAM Role credentials for authentication. These can be overridden by, in increasing order of precedence, system properties aws.accessKeyId and aws.secretKey, environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_KEY, or the elasticsearch config using cloud.aws.access_key and cloud.aws.secret_key:

cloud:
    aws:
        access_key: AKVAIQBF2RECL7FJWGJQ
        secret_key: vExyMThREXeRMm/b/LRzEB8jWwvzQeXgjqMX+6br
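
For example, to supply the same credentials through environment variables instead of the config file (the values are the same placeholders as above):

export AWS_ACCESS_KEY_ID=AKVAIQBF2RECL7FJWGJQ
export AWS_SECRET_KEY=vExyMThREXeRMm/b/LRzEB8jWwvzQeXgjqMX+6br
bin/elasticsearch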

Transport security

By default this plugin uses HTTPS for all API calls to AWS endpoints. If you wish to use HTTP instead, set cloud.aws.protocol in the elasticsearch config. You can optionally override this setting per individual service via cloud.aws.ec2.protocol or cloud.aws.s3.protocol.

cloud:
    aws:
        protocol: https
        s3: 
            protocol: http
        ec2: 
            protocol: https

In addition, a proxy can be configured with the proxy_host and proxy_port settings (note that protocol can be http or https):

cloud:
    aws:
        protocol: https
        proxy_host: proxy1.company.com
        proxy_port: 8083

You can also set different proxies for ec2 and s3:

cloud:
    aws:
        s3:
            proxy_host: proxy1.company.com
            proxy_port: 8083
        ec2:
            proxy_host: proxy2.company.com
            proxy_port: 8083

Region

The cloud.aws.region setting can be set to a region, and the relevant settings for both ec2 and s3 will be used automatically. The available values are:

  • us-east (us-east-1)
  • us-west (us-west-1)
  • us-west-1
  • us-west-2
  • ap-southeast (ap-southeast-1)
  • ap-southeast-1
  • ap-southeast-2
  • ap-northeast (ap-northeast-1)
  • eu-west (eu-west-1)
  • eu-central (eu-central-1)
  • sa-east (sa-east-1)
  • cn-north (cn-north-1)
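
For example, to use the relevant ec2 and s3 settings for us-west-1 everywhere:

cloud:
    aws:
        region: us-west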

EC2/S3 Signer API

If you are using a compatible EC2 or S3 service, it might use an older API to sign requests. You can select the signer to use via cloud.aws.signer (or cloud.aws.ec2.signer and cloud.aws.s3.signer). Defaults to AWS4SignerType.
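
For example, a minimal sketch that points s3 at an older signing scheme (S3SignerType is a signer name from the AWS SDK for Java; the exact names accepted depend on the SDK version you run):

cloud:
    aws:
        s3:
            signer: S3SignerType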

EC2 Discovery

The ec2 discovery allows the use of the ec2 API to perform automatic discovery (similar to multicast in non-hostile multicast environments). Here is a simple sample configuration:

discovery:
    type: ec2

The ec2 discovery uses the same credentials as the rest of the AWS services provided by this plugin (such as the S3 repositories). See Generic Configuration for details.

The following settings (prefixed with discovery.ec2) can further control the discovery; a combined example follows the list:

  • groups: Either a comma separated list or an array of (security) groups. Only instances with the provided security groups will be used in the cluster discovery. (NOTE: You can provide either the group name or the group ID.)
  • host_type: The type of host to use to communicate with other instances. Can be one of private_ip, public_ip, private_dns, public_dns. Defaults to private_ip.
  • availability_zones: Either a comma separated list or an array of availability zones. Only instances within the provided availability zones will be used in the cluster discovery.
  • any_group: If set to false, all of the listed security groups must be present on an instance for it to be used in the discovery. Defaults to true.
  • ping_timeout: How long to wait for existing EC2 nodes to reply during discovery. Defaults to 3s. If no unit such as ms, s or m is specified, milliseconds are used.
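
For example, a sketch combining several of these settings (the group and zone values are placeholders):

discovery:
    type: ec2
    ec2:
        groups: "my-security-group"
        availability_zones: "us-east-1a,us-east-1b"
        host_type: private_ip
        ping_timeout: 30s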

EC2 discovery requires making a call to the EC2 service. You'll want to set up an IAM policy to allow this. You can create a custom policy via the IAM Management Console. It should look similar to this:

{
    "Statement": [
        {
            "Action": [
                "ec2:DescribeInstances"
            ],
            "Effect": "Allow",
            "Resource": [
                "*"
            ]
        }
    ],
    "Version": "2012-10-17"
}

Filtering by Tags

The ec2 discovery can also filter machines to include in the cluster based on tags (and not just groups). The settings use the discovery.ec2.tag. prefix. For example, setting discovery.ec2.tag.stage to dev will only include instances with a tag key of stage and a value of dev. If several tags are set, an instance must match all of them to be included.
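
For example, to include only instances whose stage tag is set to dev:

discovery:
    type: ec2
    ec2:
        tag:
            stage: dev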

One practical use for tag filtering is when an ec2 cluster contains many nodes that are not running elasticsearch. In this case (particularly with high ping_timeout values) there is a risk that a new node's discovery phase will end before it has found the cluster (which will result in it declaring itself master of a new cluster with the same name - highly undesirable). Tagging elasticsearch ec2 nodes and then filtering by that tag will resolve this issue.

Automatic Node Attributes

Even if ec2 is not used for discovery (the cloud-aws plugin must still be installed), the plugin can automatically add node attributes relating to ec2, for example the availability zone, which can be used with the allocation awareness feature. To enable this, set cloud.node.auto_attributes to true in the settings.
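
For example, to turn this on in the elasticsearch config:

cloud:
    node:
        auto_attributes: true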

Using other EC2 endpoint

If you are using any EC2 API-compatible service, you can set the endpoint to use by setting cloud.aws.ec2.endpoint to your provider's URL.
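
For example, a sketch with a placeholder endpoint:

cloud:
    aws:
        ec2:
            endpoint: ec2.example.com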

S3 Repository

The S3 repository uses S3 to store snapshots. It can be created using the following command:

$ curl -XPUT 'http://localhost:9200/_snapshot/my_s3_repository' -d '{
    "type": "s3",
    "settings": {
        "bucket": "my_bucket_name",
        "region": "us-west"
    }
}'

The following settings are supported:

  • bucket: The name of the bucket to be used for snapshots. (Mandatory)
  • region: The region where the bucket is located. Defaults to US Standard.
  • endpoint: The endpoint of the S3 API. Defaults to AWS's default S3 endpoint. Note that setting a region overrides the endpoint setting.
  • protocol: The protocol to use (http or https). Defaults to the value of cloud.aws.protocol or cloud.aws.s3.protocol.
  • base_path: Specifies the path within the bucket to the repository data. Defaults to the root directory.
  • access_key: The access key to use for authentication. Defaults to the value of cloud.aws.access_key.
  • secret_key: The secret key to use for authentication. Defaults to the value of cloud.aws.secret_key.
  • chunk_size: Big files can be broken down into chunks during snapshotting if needed. The chunk size can be specified in bytes or by using size value notation, i.e. 1g, 10m, 5k. Defaults to 100m.
  • compress: When set to true, metadata files are stored in compressed format. This setting doesn't affect index files, which are already compressed by default. Defaults to false.
  • server_side_encryption: When set to true, files are encrypted on the server side using the AES256 algorithm. Defaults to false.
  • buffer_size: Minimum threshold below which a chunk is uploaded in a single request. Beyond this threshold, the S3 repository will use the AWS Multipart Upload API to split the chunk into several parts, each of buffer_size length, and upload each part in its own request. Note that setting a buffer size lower than 5mb is not allowed, since it would prevent the use of the Multipart API and may result in upload errors. Defaults to 5mb.
  • max_retries: Number of retries in case of S3 errors. Defaults to 3.
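
For example, a sketch of a repository that combines several of these settings (the bucket name and base path are placeholders):

$ curl -XPUT 'http://localhost:9200/_snapshot/my_s3_repository' -d '{
    "type": "s3",
    "settings": {
        "bucket": "my_bucket_name",
        "region": "us-west",
        "base_path": "my_cluster/snapshots",
        "compress": true,
        "server_side_encryption": true
    }
}'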

S3 repositories use the same credentials as the rest of the AWS services provided by this plugin (such as ec2 discovery). See Generic Configuration for details.

Multiple S3 repositories can be created. If the buckets require different credentials, then define them as part of the repository settings.
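
For example, a second repository pointing at a bucket that needs its own credentials might look like this (all values are placeholders):

$ curl -XPUT 'http://localhost:9200/_snapshot/my_other_s3_repository' -d '{
    "type": "s3",
    "settings": {
        "bucket": "my_other_bucket_name",
        "region": "us-east",
        "access_key": "ANOTHERACCESSKEY",
        "secret_key": "anothersecretkey"
    }
}'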

In order to restrict the Elasticsearch snapshot process to the minimum required resources, we recommend using Amazon IAM in conjunction with pre-existing S3 buckets. Here is an example policy which will allow snapshot access to an S3 bucket named "snaps.example.com". This may be configured through the AWS IAM console by creating a Custom Policy and using a Policy Document similar to this (changing snaps.example.com to your bucket name).

{
    "Statement": [
        {
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation",
                "s3:ListBucketMultipartUploads",
                "s3:ListBucketVersions"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::snaps.example.com"
            ]
        },
        {
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject",
                "s3:AbortMultipartUpload",
                "s3:ListMultipartUploadParts"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::snaps.example.com/*"
            ]
        }
    ],
    "Version": "2012-10-17"
}

You may further restrict the permissions by specifying a prefix within the bucket, in this example named "foo".

{
    "Statement": [
        {
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation",
                "s3:ListBucketMultipartUploads",
                "s3:ListBucketVersions"
            ],
            "Condition": {
                "StringLike": {
                    "s3:prefix": [
                        "foo/*"
                    ]
                }
            },
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::snaps.example.com"
            ]
        },
        {
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject",
                "s3:AbortMultipartUpload",
                "s3:ListMultipartUploadParts"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::snaps.example.com/foo/*"
            ]
        }
    ],
    "Version": "2012-10-17"
}

The bucket needs to exist to register a repository for snapshots. If you did not create the bucket then the repository registration will fail. If you want elasticsearch to create the bucket instead, you can add the permission to create a specific bucket like this:

{
   "Action": [
      "s3:CreateBucket"
   ],
   "Effect": "Allow",
   "Resource": [
      "arn:aws:s3:::snaps.example.com"
   ]
}

Using other S3 endpoint

If you are using any S3 API-compatible service, you can set a global endpoint by setting cloud.aws.s3.endpoint to your provider's URL. Note that this setting will be used for all S3 repositories.
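
For example, a sketch pointing all S3 repositories at a placeholder endpoint:

cloud:
    aws:
        s3:
            endpoint: s3.example.com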

Different endpoint, region and protocol settings can be set on a per-repository basis (see the S3 Repository section for details).

Testing

Integration tests in this plugin require a working AWS configuration and are therefore disabled by default. Three buckets and two IAM users have to be created. The first IAM user needs access to two buckets in different regions, and the third bucket is exclusively for the second IAM user. To enable the tests, prepare a config file elasticsearch.yml with the following content:

cloud:
    aws:
        access_key: AKVAIQBF2RECL7FJWGJQ
        secret_key: vExyMThREXeRMm/b/LRzEB8jWwvzQeXgjqMX+6br

repositories:
    s3:
        bucket: "bucket_name"
        region: "us-west-2"
        private-bucket:
            bucket: <bucket not accessible by default key>
            access_key: <access key>
            secret_key: <secret key>
        remote-bucket:
            bucket: <bucket in other region>
            region: <region>
        external-bucket:
            bucket: <bucket>
            access_key: <access key>
            secret_key: <secret key>
            endpoint: <endpoint>
            protocol: <protocol>

Replace all occurrences of access_key, secret_key, endpoint, protocol, bucket and region with your settings. Please note that the tests will delete all snapshot/restore-related files in the specified buckets.

To run the tests:

mvn -Dtests.aws=true -Dtests.config=/path/to/config/file/elasticsearch.yml clean test

License

This software is licensed under the Apache 2 license, quoted below.

Copyright 2009-2014 Elasticsearch <http://www.elasticsearch.org>

Licensed under the Apache License, Version 2.0 (the "License"); you may not
use this file except in compliance with the License. You may obtain a copy of
the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations under
the License.