[[discovery-ec2]]
=== EC2 Discovery Plugin
The EC2 discovery plugin uses the https://github.com/aws/aws-sdk-java[AWS API] for unicast discovery.
*If you are looking for a hosted solution of Elasticsearch on AWS, please visit http://www.elastic.co/cloud.*
:plugin_name: discovery-ec2
include::install_remove.asciidoc[]
[[discovery-ec2-usage]]
==== Getting started with AWS
The plugin provides a hosts provider for zen discovery named `ec2`. This hosts provider
finds other Elasticsearch instances in EC2 through AWS metadata. Authentication is done using
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html[IAM Role]
credentials by default. The only necessary configuration change to enable the plugin
is setting the unicast host provider for zen discovery:
[source,yaml]
----
discovery.zen.hosts_provider: ec2
----
==== Settings
EC2 host discovery supports a number of settings.
Some settings are sensitive and must be stored in the {ref}/secure-settings.html[elasticsearch keystore].
For example, to use explicit AWS access keys:
[source,sh]
----
bin/elasticsearch-keystore add discovery.ec2.access_key
bin/elasticsearch-keystore add discovery.ec2.secret_key
----
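To verify that the secure settings were added, you can list the keystore entries (a minimal sketch; output varies by version):
[source,sh]
----
bin/elasticsearch-keystore list
----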
The following are the available discovery settings. All should be prefixed with `discovery.ec2.`.
Those that must be stored in the keystore are marked as `Secure`.
`access_key`::
An EC2 access key. The `secret_key` setting must also be specified. (Secure)
`secret_key`::
An EC2 secret key. The `access_key` setting must also be specified. (Secure)
`endpoint`::
The EC2 service endpoint to connect to. The EC2 client normally determines this
automatically from the instance's region, but it can be specified explicitly.
See http://docs.aws.amazon.com/general/latest/gr/rande.html#ec2_region.
`protocol`::
The protocol to use to connect to ec2. Valid values are either `http`
or `https`. Defaults to `https`.
`proxy.host`::
The host name of a proxy to connect to ec2 through.
`proxy.port`::
The port of a proxy to connect to ec2 through.
`proxy.username`::
The username to connect to the `proxy.host` with. (Secure)
`proxy.password`::
The password to connect to the `proxy.host` with. (Secure)
`read_timeout`::
The socket timeout for connecting to EC2. The value must include a time unit; for example,
a value of `5s` specifies a 5-second timeout. The default value is 50 seconds.
`groups`::
A list of security groups, specified either as a comma-separated string or as an array.
Only instances that belong to the provided security groups will be used in
cluster discovery. Groups may be specified by either name or ID.
`host_type`::
+
--
The type of host to use to communicate with other instances. Can be
one of `private_ip`, `public_ip`, `private_dns`, `public_dns` or `tag:TAGNAME`, where
`TAGNAME` refers to the name of a tag configured on all EC2 instances. Instances which don't
have this tag set will be ignored by the discovery process.
For example, if you defined a tag `my-elasticsearch-host` in EC2 and set it to `myhostname1.mydomain.com`, then
setting `host_type: tag:my-elasticsearch-host` will tell the discovery-ec2 plugin to read the host name from the
`my-elasticsearch-host` tag. In this case, it will be resolved to `myhostname1.mydomain.com`.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html[Read more about EC2 Tags].
Defaults to `private_ip`.
--
`availability_zones`::
A list of availability zones, specified either as a comma-separated string or as an array.
Only instances within the provided availability zones will be used in
cluster discovery.
`any_group`::
If set to `false`, an instance must belong to all of the configured security groups
to be used for discovery. Defaults to `true`.
`node_cache_time`::
How long the list of hosts is cached to prevent further requests to the AWS API.
Defaults to `10s`.
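For illustration, a non-secure `elasticsearch.yml` configuration combining several of the settings above might look like the following sketch; the endpoint, tag name, group identifiers, and zones are placeholder values, not defaults:
[source,yaml]
----
discovery.zen.hosts_provider: ec2
discovery.ec2.protocol: https
discovery.ec2.endpoint: ec2.us-east-1.amazonaws.com   # example regional endpoint
discovery.ec2.host_type: tag:my-elasticsearch-host    # read host names from this tag
discovery.ec2.groups: sg-12345678,my-es-group         # by ID or name
discovery.ec2.any_group: false                        # instances must be in all listed groups
discovery.ec2.availability_zones: us-east-1a,us-east-1b
discovery.ec2.node_cache_time: 30s
----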
[IMPORTANT]
.Binding the network host
==============================================
It's important to define `network.host`, as by default it is bound to `localhost`.
You can use the {ref}/modules-network.html[core network host settings] or the
<<discovery-ec2-network-host,EC2-specific host settings>>.
==============================================
[[discovery-ec2-network-host]]
===== EC2 Network Host
When the `discovery-ec2` plugin is installed, the following are also allowed
as valid network host settings:
[cols="<,<",options="header",]
|==================================================================
|EC2 Host Value |Description
|`_ec2:privateIpv4_` |The private IP address (ipv4) of the machine.
|`_ec2:privateDns_` |The private hostname of the machine.
|`_ec2:publicIpv4_` |The public IP address (ipv4) of the machine.
|`_ec2:publicDns_` |The public hostname of the machine.
|`_ec2:privateIp_` |Equivalent to `_ec2:privateIpv4_`.
|`_ec2:publicIp_` |Equivalent to `_ec2:publicIpv4_`.
|`_ec2_` |Equivalent to `_ec2:privateIpv4_`.
|==================================================================
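For example, to bind Elasticsearch to the instance's private IPv4 address (a minimal sketch using a value from the table above):
[source,yaml]
----
network.host: _ec2:privateIpv4_
----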
[[discovery-ec2-permissions]]
===== Recommended EC2 Permissions
EC2 discovery requires making a call to the EC2 service. You'll want to set up
an IAM policy to allow this. You can create a custom policy via the IAM
Management Console. It should look similar to this:
[source,js]
----
{
  "Statement": [
    {
      "Action": [
        "ec2:DescribeInstances"
      ],
      "Effect": "Allow",
      "Resource": [
        "*"
      ]
    }
  ],
  "Version": "2012-10-17"
}
----
// NOTCONSOLE
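If you prefer scripting this over the IAM Management Console, a policy like the one above can also be created with the AWS CLI; this is a sketch in which the policy name and the local file name are placeholders:
[source,sh]
----
# Assumes the JSON document above has been saved as es-discovery-policy.json
aws iam create-policy \
  --policy-name es-ec2-discovery \
  --policy-document file://es-discovery-policy.json
----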
[[discovery-ec2-filtering]]
===== Filtering by Tags
EC2 discovery can also filter the machines to include in the cluster based on tags (and not just security groups). The settings
to use are prefixed with `discovery.ec2.tag.`. For example, setting `discovery.ec2.tag.stage` to `dev` will only
include instances that have a tag with the key `stage` and the value `dev`. If several tags are set, all of them
must be present on an instance for it to be included.
One practical use for tag filtering is when an EC2 cluster contains many nodes that are not running Elasticsearch. In
this case (particularly with high `discovery.zen.ping_timeout` values) there is a risk that a new node's discovery phase
will end before it has found the cluster (which will result in it declaring itself master of a new cluster with the same
name - highly undesirable). Tagging Elasticsearch EC2 nodes and then filtering by that tag will resolve this issue.
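In `elasticsearch.yml`, the `stage`/`dev` example from earlier in this section, together with a second hypothetical tag, would look like this; an instance must carry both tags to be included:
[source,yaml]
----
discovery.ec2.tag.stage: dev
discovery.ec2.tag.es_cluster: my-cluster   # hypothetical second tag; all listed tags must match
----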
[[discovery-ec2-attributes]]
===== Automatic Node Attributes
Although it does not depend on actually using `ec2` for discovery (the `discovery-ec2` plugin must still be installed), the
plugin can automatically add node attributes relating to EC2. In the future this may support other attributes, but it
currently only adds an `aws_availability_zone` node attribute, which is the availability zone of the current node. Attributes
can be used to isolate primary and replica shards across availability zones by using the
{ref}/allocation-awareness.html[Allocation Awareness] feature.
In order to enable it, set `cloud.node.auto_attributes` to `true` in the settings. For example:
[source,yaml]
----
cloud.node.auto_attributes: true
cluster.routing.allocation.awareness.attributes: aws_availability_zone
----
[[cloud-aws-best-practices]]
==== Best Practices in AWS
A collection of best practices and other information about running Elasticsearch on AWS.
===== Instance/Disk
When selecting disk, please be aware of the following order of preference:
* https://aws.amazon.com/efs/[EFS] - Avoid. The trade-offs made to offer durability, shared storage, and the ability to grow and shrink come at a performance cost; such file systems have been known to cause corruption of indices, and because Elasticsearch is distributed and has built-in replication, the benefits that EFS offers are not needed.
* https://aws.amazon.com/ebs/[EBS] - Works well if running a small cluster (1-2 nodes) that cannot easily tolerate the loss of all storage backing a node, or if running indices with no replicas. If EBS is used, then leverage provisioned IOPS to ensure performance.
* http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html[Instance Store] - When running clusters of larger size and with replicas, the ephemeral nature of Instance Store is ideal since Elasticsearch can tolerate the loss of shards. With Instance Store one gets the performance benefit of having the disk physically attached to the host running the instance and also the cost benefit of avoiding paying extra for EBS.

Prefer https://aws.amazon.com/amazon-linux-ami/[Amazon Linux AMIs]: since Elasticsearch runs on the JVM, OS dependencies are minimal, and you can benefit from the lightweight nature, support, and EC2-specific performance tweaks that the Amazon Linux AMIs offer.
===== Networking
* Network throttling takes place on smaller instance types, in the form of both https://lab.getbase.com/how-we-discovered-limitations-on-the-aws-tcp-stack/[bandwidth and number of connections]. Therefore, if a large number of connections is needed and networking is becoming a bottleneck, avoid https://aws.amazon.com/ec2/instance-types/[instance types] with networking labeled as `Moderate` or `Low`.
* Multicast is not supported, even within a VPC; the EC2 discovery plugin instead finds other nodes by querying the EC2 API (optionally restricted by security group).
* When running in multiple http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html[availability zones], be sure to leverage {ref}/allocation-awareness.html[shard allocation awareness] so that not all copies of shard data reside in the same availability zone (see the sketch after this list).
* Do not span a cluster across regions. If necessary, use a tribe node.
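As a sketch of the shard allocation awareness point above, the zone attribute added by this plugin can be combined with forced awareness so that replicas are not piled into the same zone as their primaries when the other zone is unavailable; the zone values below are placeholders:
[source,yaml]
----
cloud.node.auto_attributes: true
cluster.routing.allocation.awareness.attributes: aws_availability_zone
cluster.routing.allocation.awareness.force.aws_availability_zone.values: us-east-1a,us-east-1b
----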
===== Misc
* If you have split your nodes into roles, consider https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html[tagging the EC2 instances] by role to make it easier to filter and view your EC2 instances in the AWS console.
* Consider https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/terminating-instances.html#Using_ChangingDisableAPITermination[enabling termination protection] for all of your instances to avoid accidentally terminating a node in the cluster and causing a potentially disruptive reallocation.