[[sniffer]]
== Sniffer

Minimal library that allows you to automatically discover nodes from a running
Elasticsearch cluster and set them on an existing `RestClient` instance.
By default it retrieves the nodes that belong to the cluster using the
Nodes Info api and uses Jackson to parse the obtained JSON response.

Compatible with Elasticsearch 2.x and onwards.

=== Maven Repository

The low-level REST client is subject to the same release cycle as
Elasticsearch. Replace the version with the desired sniffer version, first
released with `5.0.0-alpha4`. There is no relation between the sniffer version
and the Elasticsearch version that the client can communicate with. The sniffer
supports fetching the nodes list from Elasticsearch 2.x and onwards.

==== Maven configuration

Here is how you can configure the dependency using Maven as a dependency manager.
Add the following to your `pom.xml` file:

["source","xml",subs="attributes"]
--------------------------------------------------
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>sniffer</artifactId>
    <version>{version}</version>
</dependency>
--------------------------------------------------

==== Gradle configuration

Here is how you can configure the dependency using Gradle as a dependency manager.
Add the following to your `build.gradle` file:

["source","groovy",subs="attributes"]
--------------------------------------------------
dependencies {
    compile 'org.elasticsearch.client:sniffer:{version}'
}
--------------------------------------------------

=== Usage

Once a `RestClient` instance has been created, a `Sniffer` can be associated
with it. The `Sniffer` will make use of the provided `RestClient` to periodically
(every 5 minutes by default) fetch the list of current nodes from the cluster
and update them by calling `RestClient#setHosts`.

[source,java]
--------------------------------------------------
Sniffer sniffer = Sniffer.builder(restClient).build();
--------------------------------------------------

It is important to close the `Sniffer` so that its background thread gets
properly shut down and all of its resources are released. The `Sniffer`
object should have the same lifecycle as the `RestClient` and get closed
right before the client:

[source,java]
--------------------------------------------------
sniffer.close();
restClient.close();
--------------------------------------------------

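When the client and sniffer share a scope, one way to keep this ordering is a
`try`-with-resources statement, since resources are closed in reverse
declaration order. This is only a sketch: it assumes both classes implement
`Closeable`, as the `close()` calls above suggest, and that a node is
reachable at `localhost:9200`:

[source,java]
--------------------------------------------------
try (RestClient restClient = RestClient.builder(
            new HttpHost("localhost", 9200)).build();
     Sniffer sniffer = Sniffer.builder(restClient).build()) {
    // use restClient to send requests here; resources declared later are
    // closed first, so the Sniffer is closed right before the RestClient
}
--------------------------------------------------
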
The Elasticsearch Nodes Info api doesn't return the protocol to use when
connecting to the nodes but only their `host:port` pair, hence `http`
is used by default. In case `https` should be used instead, the
`ElasticsearchHostsSniffer` object has to be manually created and provided
as follows:

[source,java]
--------------------------------------------------
HostsSniffer hostsSniffer = new ElasticsearchHostsSniffer(restClient,
        ElasticsearchHostsSniffer.DEFAULT_SNIFF_REQUEST_TIMEOUT,
        ElasticsearchHostsSniffer.Scheme.HTTPS);
Sniffer sniffer = Sniffer.builder(restClient)
        .setHostsSniffer(hostsSniffer).build();
--------------------------------------------------

In the same way it is also possible to customize the `sniffRequestTimeout`,
which defaults to one second. That is the `timeout` parameter provided as a
querystring parameter when calling the Nodes Info api, so that when the
timeout expires on the server side a valid response is still returned,
although it may contain only a subset of the nodes that are part of the
cluster, namely the ones that have responded until then.

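As a sketch, the custom timeout can be passed to the same
`ElasticsearchHostsSniffer` constructor shown above in place of
`DEFAULT_SNIFF_REQUEST_TIMEOUT`. The 10-second value is purely illustrative
and assumed to be expressed in milliseconds, in line with the other `*Millis`
settings in this section; the `Scheme.HTTP` constant is likewise assumed to
exist alongside the `HTTPS` one shown earlier:

[source,java]
--------------------------------------------------
// illustrative 10-second sniff request timeout, keeping the default http scheme
HostsSniffer hostsSniffer = new ElasticsearchHostsSniffer(restClient,
        TimeUnit.SECONDS.toMillis(10),
        ElasticsearchHostsSniffer.Scheme.HTTP);
Sniffer sniffer = Sniffer.builder(restClient)
        .setHostsSniffer(hostsSniffer).build();
--------------------------------------------------
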
Also, a custom `HostsSniffer` implementation can be provided for advanced
use-cases that may require fetching the hosts from external sources.

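For illustration only, here is a minimal sketch of such an implementation that
returns a hard-coded list of hosts. It assumes `HostsSniffer` exposes a single
`sniffHosts()` method returning the `HttpHost` list to set on the client, and
the host names are made up:

[source,java]
--------------------------------------------------
// hypothetical implementation that reads hosts from a static list rather
// than from Elasticsearch; a real one might query an external registry
HostsSniffer staticHostsSniffer = new HostsSniffer() {
    @Override
    public List<HttpHost> sniffHosts() {
        return Arrays.asList(
                new HttpHost("node-1.example.com", 9200),
                new HttpHost("node-2.example.com", 9200));
    }
};
Sniffer sniffer = Sniffer.builder(restClient)
        .setHostsSniffer(staticHostsSniffer).build();
--------------------------------------------------
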
The `Sniffer` updates the nodes by default every 5 minutes. This interval can
be customized by providing it (in milliseconds) as follows:

[source,java]
--------------------------------------------------
Sniffer sniffer = Sniffer.builder(restClient)
        .setSniffIntervalMillis(60000).build();
--------------------------------------------------

It is also possible to enable sniffing on failure, meaning that after each
failure the nodes list gets updated straightaway rather than at the following
ordinary sniffing round. In this case a `SniffOnFailureListener` needs to be
created first and provided at `RestClient` creation. Once the `Sniffer` is
later created, it needs to be associated with that same
`SniffOnFailureListener` instance, which will be notified at each failure and
will use the `Sniffer` to perform the additional sniffing round as described.

[source,java]
--------------------------------------------------
SniffOnFailureListener sniffOnFailureListener = new SniffOnFailureListener();
RestClient restClient = RestClient.builder(new HttpHost("localhost", 9200))
        .setFailureListener(sniffOnFailureListener).build();
Sniffer sniffer = Sniffer.builder(restClient).build();
sniffOnFailureListener.setSniffer(sniffer);
--------------------------------------------------

When using sniffing on failure, not only do the nodes get updated after each
failure, but an additional sniffing round is also scheduled sooner than usual,
by default one minute after the failure, assuming that things will go back to
normal and we want to detect that as soon as possible. This interval can be
customized at `Sniffer` creation time as follows:

[source,java]
--------------------------------------------------
Sniffer sniffer = Sniffer.builder(restClient)
        .setSniffAfterFailureDelayMillis(30000).build();
--------------------------------------------------

Note that this last configuration parameter has no effect if sniffing on
failure is not enabled, as explained above.

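Putting the last two settings together, a sketch that enables sniffing on
failure and shortens the after-failure delay at the same time could look as
follows; it only combines the snippets shown above, and the 30-second delay is
just an example value:

[source,java]
--------------------------------------------------
SniffOnFailureListener sniffOnFailureListener = new SniffOnFailureListener();
RestClient restClient = RestClient.builder(new HttpHost("localhost", 9200))
        .setFailureListener(sniffOnFailureListener).build();
// the after-failure delay only takes effect because sniffing on failure
// is enabled through the listener above
Sniffer sniffer = Sniffer.builder(restClient)
        .setSniffAfterFailureDelayMillis(30000).build();
sniffOnFailureListener.setSniffer(sniffer);
--------------------------------------------------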