[[sniffer]]
== Sniffer

Minimal library that automatically discovers nodes from a running
Elasticsearch cluster and sets them on an existing `RestClient` instance.
By default it retrieves the nodes that belong to the cluster using the
Nodes Info API and uses Jackson to parse the obtained JSON response.

Compatible with Elasticsearch 2.x and onwards.

=== Maven Repository

The REST client sniffer is subject to the same release cycle as
Elasticsearch. Replace the version with the desired sniffer version, first
released with `5.0.0-alpha4`. There is no relation between the sniffer version
and the Elasticsearch version that the client can communicate with: the
sniffer supports fetching the nodes list from Elasticsearch 2.x and onwards.

==== Maven configuration

Here is how you can configure the dependency using Maven as a dependency manager.
Add the following to your `pom.xml` file:

["source","xml",subs="attributes"]
--------------------------------------------------
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>sniffer</artifactId>
    <version>{version}</version>
</dependency>
--------------------------------------------------

==== Gradle configuration

Here is how you can configure the dependency using Gradle as a dependency manager.
Add the following to your `build.gradle` file:

["source","groovy",subs="attributes"]
--------------------------------------------------
dependencies {
    compile 'org.elasticsearch.client:sniffer:{version}'
}
--------------------------------------------------

=== Usage

Once a `RestClient` instance has been created, a `Sniffer` can be associated
with it. The `Sniffer` will use the provided `RestClient` to periodically
(every 5 minutes by default) fetch the list of current nodes from the cluster
and update the client's hosts by calling `RestClient#setHosts`.

[source,java]
--------------------------------------------------
Sniffer sniffer = Sniffer.builder(restClient).build();
--------------------------------------------------

It is important to close the `Sniffer` so that its background thread gets
properly shut down and all of its resources are released. The `Sniffer`
object should have the same lifecycle as the `RestClient` and be closed
right before the client:

[source,java]
--------------------------------------------------
sniffer.close();
restClient.close();
--------------------------------------------------
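
Since the two objects are meant to be closed in that order, one possible pattern
is a try-with-resources block. This is a minimal sketch assuming that both
`RestClient` and `Sniffer` implement `Closeable`; resources are closed in
reverse declaration order, so the `Sniffer` is closed right before the client:

[source,java]
--------------------------------------------------
try (RestClient restClient = RestClient.builder(new HttpHost("localhost", 9200)).build();
     Sniffer sniffer = Sniffer.builder(restClient).build()) {
    // use restClient to send requests; the sniffer keeps its host list up to date
}
// at this point the Sniffer has been closed first, then the RestClient
--------------------------------------------------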

The Elasticsearch Nodes Info API doesn't return the protocol to use when
connecting to the nodes, only their `host:port` pair, hence `http` is used
by default. In case `https` should be used instead, an
`ElasticsearchHostsSniffer` instance has to be manually created and provided
as follows:

[source,java]
--------------------------------------------------
HostsSniffer hostsSniffer = new ElasticsearchHostsSniffer(restClient,
    ElasticsearchHostsSniffer.DEFAULT_SNIFF_REQUEST_TIMEOUT,
    ElasticsearchHostsSniffer.Scheme.HTTPS);
Sniffer sniffer = Sniffer.builder(restClient)
    .setHostsSniffer(hostsSniffer).build();
--------------------------------------------------

In the same way it is also possible to customize the `sniffRequestTimeout`,
which defaults to one second. That is the `timeout` parameter provided as a
query string parameter when calling the Nodes Info API, so that when the
timeout expires on the server side a valid response is still returned,
although it may contain only the subset of the cluster's nodes that have
responded by then.
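
For instance, here is a sketch with an illustrative five second timeout,
assuming the constructor shown above takes the timeout in milliseconds as its
second argument:

[source,java]
--------------------------------------------------
HostsSniffer hostsSniffer = new ElasticsearchHostsSniffer(restClient,
    5000, // illustrative 5 second sniff request timeout instead of the 1 second default
    ElasticsearchHostsSniffer.Scheme.HTTP);
Sniffer sniffer = Sniffer.builder(restClient)
    .setHostsSniffer(hostsSniffer).build();
--------------------------------------------------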

Also, a custom `HostsSniffer` implementation can be provided for advanced
use-cases that may require fetching the hosts from external sources.
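
As a minimal sketch, assuming the `HostsSniffer` interface exposes a single
`sniffHosts()` method that returns the `HttpHost` list to set on the client, an
illustrative implementation backed by an externally obtained list could look
like this:

[source,java]
--------------------------------------------------
public class StaticHostsSniffer implements HostsSniffer {
    private final List<HttpHost> hosts;

    // the hosts could be read from any external source,
    // e.g. a configuration file or a service registry
    public StaticHostsSniffer(List<HttpHost> hosts) {
        this.hosts = hosts;
    }

    @Override
    public List<HttpHost> sniffHosts() {
        return hosts;
    }
}
--------------------------------------------------

Such an implementation can then be plugged in through `setHostsSniffer`,
exactly as shown for the `ElasticsearchHostsSniffer` above.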

The `Sniffer` updates the nodes by default every 5 minutes. This interval can
be customized by providing it (in milliseconds) as follows:

[source,java]
--------------------------------------------------
Sniffer sniffer = Sniffer.builder(restClient)
    .setSniffIntervalMillis(60000).build();
--------------------------------------------------

It is also possible to enable sniffing on failure, meaning that after each
failure the nodes list gets updated straight away rather than at the following
ordinary sniffing round. In this case a `SniffOnFailureListener` needs to
be created first and provided at `RestClient` creation time. Once the
`Sniffer` is later created, it needs to be associated with that same
`SniffOnFailureListener` instance, which will be notified on each failure
and use the `Sniffer` to perform the additional sniffing round as described.

[source,java]
--------------------------------------------------
SniffOnFailureListener sniffOnFailureListener = new SniffOnFailureListener();
RestClient restClient = RestClient.builder(new HttpHost("localhost", 9200))
    .setFailureListener(sniffOnFailureListener).build();
Sniffer sniffer = Sniffer.builder(restClient).build();
sniffOnFailureListener.setSniffer(sniffer);
--------------------------------------------------

When using sniffing on failure, not only do the nodes get updated after each
failure, but an additional sniffing round is also scheduled sooner than usual,
by default one minute after the failure, assuming that things will go back to
normal and we want to detect that as soon as possible. Said interval can be
customized at `Sniffer` creation time as follows:

[source,java]
--------------------------------------------------
Sniffer sniffer = Sniffer.builder(restClient)
    .setSniffAfterFailureDelayMillis(30000).build();
--------------------------------------------------

Note that this last configuration parameter has no effect if sniffing
on failure is not enabled, as explained above.
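
Putting the pieces together, here is a sketch of a `RestClient` and `Sniffer`
configured with all of the options shown in this section (the host, scheme and
timings are purely illustrative):

[source,java]
--------------------------------------------------
SniffOnFailureListener sniffOnFailureListener = new SniffOnFailureListener();
RestClient restClient = RestClient.builder(new HttpHost("localhost", 9200))
    .setFailureListener(sniffOnFailureListener).build();
HostsSniffer hostsSniffer = new ElasticsearchHostsSniffer(restClient,
    ElasticsearchHostsSniffer.DEFAULT_SNIFF_REQUEST_TIMEOUT,
    ElasticsearchHostsSniffer.Scheme.HTTPS);
Sniffer sniffer = Sniffer.builder(restClient)
    .setHostsSniffer(hostsSniffer)
    .setSniffIntervalMillis(60000)
    .setSniffAfterFailureDelayMillis(30000)
    .build();
sniffOnFailureListener.setSniffer(sniffer);
--------------------------------------------------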

=== License

Copyright 2013-2016 Elasticsearch

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.