Plugin discovery documentation contained instructions for installing Elasticsearch 2.0 and an Oracle JDK, neither of which is valid any longer. While reviewing this, it became apparent that the instructions used cleartext HTTP to install packages, so this commit also replaces HTTP links with HTTPS where possible. In addition, a few community links have been removed, as they no longer appear to exist.

Co-authored-by: Alexander Reelsen <alexander@reelsen.net>
parent fb599dc343
commit 5a2c6f0d4f
@@ -16,8 +16,7 @@ include::{docs-root}/shared/versions/stack/{source_branch}.asciidoc[]
 Javadoc roots used to generate links from Painless's API reference
 ///////
 :java11-javadoc: https://docs.oracle.com/en/java/javase/11/docs/api
 :joda-time-javadoc: http://www.joda.org/joda-time/apidocs
-:lucene-core-javadoc: http://lucene.apache.org/core/{lucene_version_path}/core
+:lucene-core-javadoc: https://lucene.apache.org/core/{lucene_version_path}/core
 
 ifeval::["{release-state}"=="unreleased"]
 :elasticsearch-javadoc: https://snapshots.elastic.co/javadoc/org/elasticsearch/elasticsearch/{version}-SNAPSHOT

@@ -53,7 +53,7 @@ a number of clients that have been contributed by the community for various lang
 * https://github.com/mpenet/spandex[Spandex]:
   Clojure client, based on the new official low level rest-client.
 
-* http://github.com/clojurewerkz/elastisch[Elastisch]:
+* https://github.com/clojurewerkz/elastisch[Elastisch]:
   Clojure client.
 
 [[coldfusion]]

@@ -65,12 +65,12 @@ a number of clients that have been contributed by the community for various lang
 [[erlang]]
 == Erlang
 
-* http://github.com/tsloughter/erlastic_search[erlastic_search]:
+* https://github.com/tsloughter/erlastic_search[erlastic_search]:
   Erlang client using HTTP.
 
 * https://github.com/datahogs/tirexs[Tirexs]:
   An https://github.com/elixir-lang/elixir[Elixir] based API/DSL, inspired by
-  http://github.com/karmi/tire[Tire]. Ready to use in pure Erlang
+  https://github.com/karmi/tire[Tire]. Ready to use in pure Erlang
   environment.
 
 * https://github.com/sashman/elasticsearch_elixir_bulk_processor[Elixir Bulk Processor]:

@@ -145,10 +145,10 @@ Also see the {client}/perl-api/current/index.html[official Elasticsearch Perl cl
 
 Also see the {client}/php-api/current/index.html[official Elasticsearch PHP client].
 
-* http://github.com/ruflin/Elastica[Elastica]:
+* https://github.com/ruflin/Elastica[Elastica]:
   PHP client.
 
-* http://github.com/nervetattoo/elasticsearch[elasticsearch] PHP client.
+* https://github.com/nervetattoo/elasticsearch[elasticsearch] PHP client.
 
 * https://github.com/madewithlove/elasticsearcher[elasticsearcher] Agnostic lightweight package on top of the Elasticsearch PHP client. Its main goal is to allow for easier structuring of queries and indices in your application. It does not want to hide or replace functionality of the Elasticsearch PHP client.

@@ -216,9 +216,6 @@ Also see the {client}/ruby-api/current/index.html[official Elasticsearch Ruby cl
 * https://github.com/newapplesho/elasticsearch-smalltalk[elasticsearch-smalltalk] -
   Pharo Smalltalk client for Elasticsearch
 
-* http://ss3.gemstone.com/ss/Elasticsearch.html[Elasticsearch] -
-  Smalltalk client for Elasticsearch
-
 [[vertx]]
 == Vert.x

@@ -40,7 +40,7 @@ The javadoc for the REST high level client can be found at {rest-high-level-clie
 === Maven Repository
 
 The high-level Java REST client is hosted on
-http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.elasticsearch.client%22[Maven
+https://search.maven.org/search?q=g:org.elasticsearch.client[Maven
 Central]. The minimum Java version required is `1.8`.
 
 The High Level REST Client is subject to the same release cycle as
@@ -140,7 +140,7 @@ openssl pkcs12 -export -in client.crt -inkey private_key.pem \
           -name "client" -out client.p12
 ```
 
-If no explicit configuration is provided, the http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html#CustomizingStores[system default configuration]
+If no explicit configuration is provided, the https://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html#CustomizingStores[system default configuration]
 will be used.
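The JSSE defaults referenced here are driven by the standard `javax.net.ssl` system properties. As a minimal sketch (the path and password are placeholders tied to the `client.p12` file created above, not values the client mandates), the keystore could be wired in like this:

[source,java]
----
// Sketch: point the JVM-wide JSSE defaults at the client keystore created
// by the openssl command above. javax.net.ssl.* are standard JSSE system
// properties; the path and password here are placeholders.
public class JsseDefaultsSketch {
    public static void main(String[] args) {
        System.setProperty("javax.net.ssl.keyStore", "client.p12");
        System.setProperty("javax.net.ssl.keyStoreType", "PKCS12");
        System.setProperty("javax.net.ssl.keyStorePassword", "changeit");
        // The same can be passed on the command line, e.g.
        // -Djavax.net.ssl.keyStore=client.p12
    }
}
----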
 
 === Others
@@ -154,11 +154,11 @@ indefinitely and negative hostname resolutions for ten seconds. If the resolved
 addresses of the hosts to which you are connecting the client to vary with time
 then you might want to modify the default JVM behavior. These can be modified by
 adding
-http://docs.oracle.com/javase/8/docs/technotes/guides/net/properties.html[`networkaddress.cache.ttl=<timeout>`]
+https://docs.oracle.com/javase/8/docs/technotes/guides/net/properties.html[`networkaddress.cache.ttl=<timeout>`]
 and
-http://docs.oracle.com/javase/8/docs/technotes/guides/net/properties.html[`networkaddress.cache.negative.ttl=<timeout>`]
+https://docs.oracle.com/javase/8/docs/technotes/guides/net/properties.html[`networkaddress.cache.negative.ttl=<timeout>`]
 to your
-http://docs.oracle.com/javase/8/docs/technotes/guides/security/PolicyFiles.html[Java
+https://docs.oracle.com/javase/8/docs/technotes/guides/security/PolicyFiles.html[Java
 security policy].
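Both values are Java security properties, so as an alternative to editing the security policy file they can be set programmatically before the first lookup -- a minimal sketch (the 60 and 10 second timeouts are illustrative, not recommended values):

[source,java]
----
import java.security.Security;

// Sketch: shorten the JVM DNS caches programmatically. This must run before
// the JVM performs its first hostname resolution; values are illustrative.
public class DnsCacheSketch {
    public static void main(String[] args) {
        Security.setProperty("networkaddress.cache.ttl", "60");
        Security.setProperty("networkaddress.cache.negative.ttl", "10");
    }
}
----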
 
 === Node selector
@@ -13,7 +13,7 @@ The javadoc for the low level REST client can be found at {rest-client-javadoc}/
 === Maven Repository
 
 The low-level Java REST client is hosted on
-http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.elasticsearch.client%22[Maven
+https://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.elasticsearch.client%22[Maven
 Central]. The minimum Java version required is `1.8`.
 
 The low-level REST client is subject to the same release cycle as

@@ -57,7 +57,7 @@ dependencies {
 === Dependencies
 
 The low-level Java REST client internally uses the
-http://hc.apache.org/httpcomponents-asyncclient-dev/[Apache Http Async Client]
+https://hc.apache.org/httpcomponents-asyncclient-dev/[Apache Http Async Client]
 to send http requests. It depends on the following artifacts, namely the async
 http client and its own transitive dependencies:
@@ -212,7 +212,7 @@ include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-init-client
 --------------------------------------------------
 <1> Set a callback that allows to modify the http client configuration
 (e.g. encrypted communication over ssl, or anything that the
-http://hc.apache.org/httpcomponents-asyncclient-dev/httpasyncclient/apidocs/org/apache/http/impl/nio/client/HttpAsyncClientBuilder.html[`org.apache.http.impl.nio.client.HttpAsyncClientBuilder`]
+https://hc.apache.org/httpcomponents-asyncclient-dev/httpasyncclient/apidocs/org/apache/http/impl/nio/client/HttpAsyncClientBuilder.html[`org.apache.http.impl.nio.client.HttpAsyncClientBuilder`]
 allows to set)
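For orientation, such a callback looks roughly like the following sketch (the host, port, and I/O-thread tweak are placeholders, not recommendations):

[source,java]
----
import org.apache.http.HttpHost;
import org.apache.http.impl.nio.reactor.IOReactorConfig;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestClientBuilder;

// Sketch: customize the internal HttpAsyncClientBuilder via the callback.
public class CallbackSketch {
    public static void main(String[] args) throws Exception {
        RestClientBuilder builder =
            RestClient.builder(new HttpHost("localhost", 9200, "http"));
        builder.setHttpClientConfigCallback(httpClientBuilder ->
            httpClientBuilder.setDefaultIOReactorConfig(
                IOReactorConfig.custom().setIoThreadCount(1).build()));
        try (RestClient restClient = builder.build()) {
            // use the client ...
        }
    }
}
----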
@@ -401,7 +401,7 @@ https://hc.apache.org/httpcomponents-core-ga/httpcore/apidocs/org/apache/http/Ht
 `HttpEntity#getContent` method comes handy which returns an `InputStream`
 reading from the previously buffered response body. As an alternative, it is
 possible to provide a custom
-http://hc.apache.org/httpcomponents-core-ga/httpcore-nio/apidocs/org/apache/http/nio/protocol/HttpAsyncResponseConsumer.html[`org.apache.http.nio.protocol.HttpAsyncResponseConsumer`]
+https://hc.apache.org/httpcomponents-core-ga/httpcore-nio/apidocs/org/apache/http/nio/protocol/HttpAsyncResponseConsumer.html[`org.apache.http.nio.protocol.HttpAsyncResponseConsumer`]
 that controls how bytes are read and buffered.
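One way to do this, as a sketch, is through the low-level client's request options, which accept a consumer factory (the 200 MB limit below is an arbitrary example, not a recommendation):

[source,java]
----
import org.elasticsearch.client.HttpAsyncResponseConsumerFactory;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RequestOptions;

// Sketch: replace the default heap-buffering consumer with one that allows
// a larger buffer for a single request. 200 MB is an illustrative value.
public class ConsumerFactorySketch {
    public static void main(String[] args) {
        RequestOptions options = RequestOptions.DEFAULT.toBuilder()
            .setHttpAsyncResponseConsumerFactory(
                new HttpAsyncResponseConsumerFactory
                    .HeapBufferedResponseConsumerFactory(200 * 1024 * 1024))
            .build();
        Request request = new Request("GET", "/_search");
        request.setOptions(options);
    }
}
----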
 
 [[java-rest-low-usage-logging]]
@@ -219,7 +219,7 @@ Painless's native support for regular expressions has syntax constructs:
 
 * `/pattern/`: Pattern literals create patterns. This is the only way to create
   a pattern in painless. The pattern inside the ++/++'s are just
-  http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java regular expressions].
+  https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java regular expressions].
   See <<pattern-flags>> for more.
 * `=~`: The find operator return a `boolean`, `true` if a subsequence of the
   text matches, `false` otherwise.
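Because Painless patterns are plain `java.util.regex` patterns, the behaviour of the `=~` operator (and of `Matcher.replaceAll`, used in the next hunk) can be checked directly in Java -- a small sketch:

[source,java]
----
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: the Java behaviour that Painless's /pattern/ literals and the
// =~ find operator delegate to.
public class RegexSketch {
    public static void main(String[] args) {
        Pattern p = Pattern.compile("seat");
        // =~ corresponds to Matcher.find(): true if any subsequence matches.
        System.out.println(p.matcher("unseated").find());   // true
        // replaceAll supports $1-style back references to capture groups.
        Matcher m = Pattern.compile("(\\d+)").matcher("row 7");
        System.out.println(m.replaceAll("#$1"));            // row #7
    }
}
----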
@@ -281,7 +281,7 @@ POST hockey/_update_by_query
 ----------------------------------------------------------------
 
 `Matcher.replaceAll` is just a call to Java's `Matcher`'s
-http://docs.oracle.com/javase/8/docs/api/java/util/regex/Matcher.html#replaceAll-java.lang.String-[replaceAll]
+https://docs.oracle.com/javase/8/docs/api/java/util/regex/Matcher.html#replaceAll-java.lang.String-[replaceAll]
 method so it supports `$1` and `\1` for replacements:
 
 [source,console]

@@ -11,8 +11,8 @@ refer to the corresponding topics in the
 https://docs.oracle.com/javase/specs/jls/se8/html/index.html[Java Language
 Specification].
 
-Painless scripts are parsed and compiled using the http://www.antlr.org/[ANTLR4]
-and http://asm.ow2.org/[ASM] libraries. Scripts are compiled directly
+Painless scripts are parsed and compiled using the https://www.antlr.org/[ANTLR4]
+and https://asm.ow2.org/[ASM] libraries. Scripts are compiled directly
 into Java Virtual Machine (JVM) byte code and executed against a standard JVM.
 This specification uses ANTLR4 grammar notation to describe the allowed syntax.
 However, the actual Painless grammar is more compact than what is shown here.
@@ -57,7 +57,7 @@ convert `nfc` to `nfd` or `nfkc` to `nfkd` respectively:
 
 Which letters are normalized can be controlled by specifying the
 `unicode_set_filter` parameter, which accepts a
-http://icu-project.org/apiref/icu4j/com/ibm/icu/text/UnicodeSet.html[UnicodeSet].
+https://icu-project.org/apiref/icu4j/com/ibm/icu/text/UnicodeSet.html[UnicodeSet].
 
 Here are two examples, the default usage and a customised character filter:

@@ -103,7 +103,7 @@ PUT icu_sample
 ==== ICU Tokenizer
 
 Tokenizes text into words on word boundaries, as defined in
-http://www.unicode.org/reports/tr29/[UAX #29: Unicode Text Segmentation].
+https://www.unicode.org/reports/tr29/[UAX #29: Unicode Text Segmentation].
 It behaves much like the {ref}/analysis-standard-tokenizer.html[`standard` tokenizer],
 but adds better support for some Asian languages by using a dictionary-based
 approach to identify words in Thai, Lao, Chinese, Japanese, and Korean, and

@@ -137,7 +137,7 @@ for a more detailed explanation.
 
 To add icu tokenizer rules, set the `rule_files` settings, which should contain a comma-separated list of
 `code:rulefile` pairs in the following format:
-http://unicode.org/iso15924/iso15924-codes.html[four-letter ISO 15924 script code],
+https://unicode.org/iso15924/iso15924-codes.html[four-letter ISO 15924 script code],
 followed by a colon, then a rule file name. Rule files are placed `ES_HOME/config` directory.
 
 As a demonstration of how the rule files can be used, save the following user file to `$ES_HOME/config/KeywordTokenizer.rbbi`:

@@ -210,7 +210,7 @@ with the `name` parameter, which accepts `nfc`, `nfkc`, and `nfkc_cf`
 
 Which letters are normalized can be controlled by specifying the
 `unicode_set_filter` parameter, which accepts a
-http://icu-project.org/apiref/icu4j/com/ibm/icu/text/UnicodeSet.html[UnicodeSet].
+https://icu-project.org/apiref/icu4j/com/ibm/icu/text/UnicodeSet.html[UnicodeSet].
 
 You should probably prefer the <<analysis-icu-normalization-charfilter,Normalization character filter>>.

@@ -287,7 +287,7 @@ no need to use Normalize character or token filter as well.
 
 Which letters are folded can be controlled by specifying the
 `unicode_set_filter` parameter, which accepts a
-http://icu-project.org/apiref/icu4j/com/ibm/icu/text/UnicodeSet.html[UnicodeSet].
+https://icu-project.org/apiref/icu4j/com/ibm/icu/text/UnicodeSet.html[UnicodeSet].
 
 The following example exempts Swedish characters from folding. It is important
 to note that both upper and lowercase forms should be specified, and that

@@ -433,7 +433,7 @@ The following parameters are accepted by `icu_collation_keyword` fields:
 The strength property determines the minimum level of difference considered
 significant during comparison. Possible values are : `primary`, `secondary`,
 `tertiary`, `quaternary` or `identical`. See the
-http://icu-project.org/apiref/icu4j/com/ibm/icu/text/Collator.html[ICU Collation documentation]
+https://icu-project.org/apiref/icu4j/com/ibm/icu/text/Collator.html[ICU Collation documentation]
 for a more detailed explanation for each value. Defaults to `tertiary`
 unless otherwise specified in the collation.
@@ -4,9 +4,6 @@
 The Stempel Analysis plugin integrates Lucene's Stempel analysis
 module for Polish into elasticsearch.
 
-It provides high quality stemming for Polish, based on the
-http://www.egothor.org/[Egothor project].
-
 :plugin_name: analysis-stempel
 include::install_remove.asciidoc[]

@@ -3,7 +3,7 @@
 
 The Ukrainian Analysis plugin integrates Lucene's UkrainianMorfologikAnalyzer into elasticsearch.
 
-It provides stemming for Ukrainian using the http://github.com/morfologik/morfologik-stemming[Morfologik project].
+It provides stemming for Ukrainian using the https://github.com/morfologik/morfologik-stemming[Morfologik project].
 
 :plugin_name: analysis-ukrainian
 include::install_remove.asciidoc[]

@@ -18,7 +18,7 @@ transliteration.
 
 <<analysis-kuromoji,Kuromoji>>::
 
-Advanced analysis of Japanese using the http://www.atilika.org/[Kuromoji analyzer].
+Advanced analysis of Japanese using the https://www.atilika.org/[Kuromoji analyzer].
 
 <<analysis-nori,Nori>>::
@@ -9,7 +9,7 @@ API extension plugins add new functionality to Elasticsearch by adding new APIs
 A number of plugins have been contributed by our community:
 
 * https://github.com/carrot2/elasticsearch-carrot2[carrot2 Plugin]:
-  Results clustering with http://project.carrot2.org/[carrot2] (by Dawid Weiss)
+  Results clustering with https://github.com/carrot2/carrot2[carrot2] (by Dawid Weiss)
 
 * https://github.com/wikimedia/search-extra[Elasticsearch Trigram Accelerated Regular Expression Filter]:
   (by Wikimedia Foundation/Nik Everett)

@@ -18,7 +18,7 @@ A number of plugins have been contributed by our community:
 (by Wikimedia Foundation/Nik Everett)
 
 * https://github.com/YannBrrd/elasticsearch-entity-resolution[Entity Resolution Plugin]:
-  Uses http://github.com/larsga/Duke[Duke] for duplication detection (by Yann Barraud)
+  Uses https://github.com/larsga/Duke[Duke] for duplication detection (by Yann Barraud)
 
 * https://github.com/zentity-io/zentity[Entity Resolution Plugin] (https://zentity.io[zentity]):
   Real-time entity resolution with pure Elasticsearch (by Dave Moore)
@@ -116,5 +116,5 @@ AccessController.doPrivileged(
 );
 --------------------------------------------------
 
-See http://www.oracle.com/technetwork/java/seccodeguide-139067.html[Secure Coding Guidelines for Java SE]
+See https://www.oracle.com/technetwork/java/seccodeguide-139067.html[Secure Coding Guidelines for Java SE]
 for more information.
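For context, the pattern whose tail is shown above looks, as a minimal self-contained sketch, like this (the property read stands in for whatever privileged action a plugin performs):

[source,java]
----
import java.security.AccessController;
import java.security.PrivilegedAction;

// Sketch: wrap only the code that needs elevated permissions in
// doPrivileged, so the plugin's own policy grant applies to it.
public class PrivilegedSketch {
    public static void main(String[] args) {
        String home = AccessController.doPrivileged(
            (PrivilegedAction<String>) () -> System.getProperty("user.home"));
        System.out.println(home);
    }
}
----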
@@ -139,7 +139,7 @@ about your nodes.
 
 Before starting, you need to have:
 
-* A http://www.windowsazure.com/[Windows Azure account]
+* A https://azure.microsoft.com/en-us/[Windows Azure account]
 * OpenSSL that isn't from MacPorts, specifically `OpenSSL 1.0.1f 6 Jan
   2014` doesn't seem to create a valid keypair for ssh. FWIW,
   `OpenSSL 1.0.1c 10 May 2012` on Ubuntu 14.04 LTS is known to work.

@@ -331,27 +331,7 @@ scp /tmp/azurekeystore.pkcs12 azure-elasticsearch-cluster.cloudapp.net:/home/ela
 ssh azure-elasticsearch-cluster.cloudapp.net
 ----
 
-Once connected, install Elasticsearch:
-
-["source","sh",subs="attributes,callouts"]
-----
-# Install Latest Java version
-# Read http://www.webupd8.org/2012/09/install-oracle-java-8-in-ubuntu-via-ppa.html for details
-sudo add-apt-repository ppa:webupd8team/java
-sudo apt-get update
-sudo apt-get install oracle-java8-installer
-
-# If you want to install OpenJDK instead
-# sudo apt-get update
-# sudo apt-get install openjdk-8-jre-headless
-
-# Download Elasticsearch
-curl -s https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-{version}.deb -o elasticsearch-{version}.deb
-
-# Prepare Elasticsearch installation
-sudo dpkg -i elasticsearch-{version}.deb
-----
-// NOTCONSOLE
+Once connected, {stack-gs}/get-started-elastic-stack.html#install-elasticsearch[install {es}]:
 
 Check that Elasticsearch is running:
@@ -29,7 +29,7 @@ will work correctly even if it finds master-ineligible nodes, but master
 elections will be more efficient if this can be avoided.
 
 The interaction with the AWS API can be authenticated using the
-http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html[instance
+https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html[instance
 role], or else custom credentials can be supplied.
 
 ===== Enabling EC2 discovery

@@ -76,7 +76,7 @@ The available settings for the EC2 discovery plugin are as follows.
 `discovery.ec2.endpoint`::
 
   The EC2 service endpoint to which to connect. See
-  http://docs.aws.amazon.com/general/latest/gr/rande.html#ec2_region to find
+  https://docs.aws.amazon.com/general/latest/gr/rande.html#ec2_region to find
   the appropriate endpoint for the region. This setting defaults to
   `ec2.us-east-1.amazonaws.com` which is appropriate for clusters running in
   the `us-east-1` region.

@@ -152,7 +152,7 @@ For example if you tag some EC2 instances with a tag named
 `elasticsearch-host-name` and set `host_type: tag:elasticsearch-host-name` then
 the `discovery-ec2` plugin will read each instance's host name from the value
 of the `elasticsearch-host-name` tag.
-http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html[Read more
+https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html[Read more
 about EC2 Tags].
 
 --

@@ -293,7 +293,7 @@ available on AWS-based infrastructure from https://www.elastic.co/cloud.
 EC2 instances offer a number of different kinds of storage. Please be aware of
 the following when selecting the storage for your cluster:
 
-* http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html[Instance
+* https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html[Instance
 Store] is recommended for {es} clusters as it offers excellent performance and
 is cheaper than EBS-based storage. {es} is designed to work well with this kind
 of ephemeral storage because it replicates each shard across multiple nodes. If

@@ -327,7 +327,7 @@ https://aws.amazon.com/ec2/instance-types/[instance types] with networking
 labelled as `Moderate` or `Low`.
 
 * It is a good idea to distribute your nodes across multiple
-http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html[availability
+https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html[availability
 zones] and use {ref}/modules-cluster.html#shard-allocation-awareness[shard
 allocation awareness] to ensure that each shard has copies in more than one
 availability zone.
@@ -182,29 +182,7 @@ Failing to set this will result in unauthorized messages when starting Elasticse
 See <<discovery-gce-usage-tips-permissions>>.
 ==============================================
 
-
-Once connected, install Elasticsearch:
-
-[source,sh]
---------------------------------------------------
-sudo apt-get update
-
-# Download Elasticsearch
-wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-2.0.0.deb
-
-# Prepare Java installation (Oracle)
-sudo echo "deb http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main" | sudo tee /etc/apt/sources.list.d/webupd8team-java.list
-sudo echo "deb-src http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main" | sudo tee -a /etc/apt/sources.list.d/webupd8team-java.list
-sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys EEA14886
-sudo apt-get update
-sudo apt-get install oracle-java8-installer
-
-# Prepare Java installation (or OpenJDK)
-# sudo apt-get install java8-runtime-headless
-
-# Prepare Elasticsearch installation
-sudo dpkg -i elasticsearch-2.0.0.deb
---------------------------------------------------
+Once connected, {stack-gs}/get-started-elastic-stack.html#install-elasticsearch[install {es}]:
 
 [[discovery-gce-usage-long-install-plugin]]
 ===== Install Elasticsearch discovery gce plugin

@@ -30,7 +30,7 @@ addresses of seed hosts.
 
 The following discovery plugins have been contributed by our community:
 
-* https://github.com/fabric8io/elasticsearch-cloud-kubernetes[Kubernetes Discovery Plugin] (by Jimmi Dyson, http://fabric8.io[fabric8])
+* https://github.com/fabric8io/elasticsearch-cloud-kubernetes[Kubernetes Discovery Plugin] (by Jimmi Dyson, https://fabric8.io[fabric8])
 
 include::discovery-ec2.asciidoc[]
@@ -2,7 +2,7 @@
 === Ingest Attachment Processor Plugin
 
 The ingest attachment plugin lets Elasticsearch extract file attachments in common formats (such as PPT, XLS, and PDF) by
-using the Apache text extraction library http://lucene.apache.org/tika/[Tika].
+using the Apache text extraction library https://tika.apache.org/[Tika].
 
 You can use the ingest attachment plugin as a replacement for the mapper attachment plugin.

@@ -11,7 +11,7 @@ The core ingest plugins are:
 <<ingest-attachment>>::
 
 The ingest attachment plugin lets Elasticsearch extract file attachments in common formats (such as PPT, XLS, and PDF) by
-using the Apache text extraction library http://lucene.apache.org/tika/[Tika].
+using the Apache text extraction library https://tika.apache.org/[Tika].
 
 <<ingest-geoip>>::

@@ -11,7 +11,7 @@ Integrations are not plugins, but are external tools or modules that make it eas
 [discrete]
 ==== Supported by the community:
 
-* http://drupal.org/project/search_api_elasticsearch[Drupal]:
+* https://drupal.org/project/search_api_elasticsearch[Drupal]:
   Drupal Elasticsearch integration via Search API.
 
 * https://drupal.org/project/elasticsearch_connector[Drupal]:

@@ -28,7 +28,7 @@ Integrations are not plugins, but are external tools or modules that make it eas
 search (facets, etc), along with some Natural Language Processing features
 (ex.: More like this)
 
-* http://extensions.xwiki.org/xwiki/bin/view/Extension/Elastic+Search+Macro/[XWiki Next Generation Wiki]:
+* https://extensions.xwiki.org/xwiki/bin/view/Extension/Elastic+Search+Macro/[XWiki Next Generation Wiki]:
   XWiki has an Elasticsearch and Kibana macro allowing to run Elasticsearch queries and display the results in XWiki pages using XWiki's scripting language as well as include Kibana Widgets in XWiki pages
 
 [discrete]
@@ -101,13 +101,6 @@ releases 2.0 and later do not support rivers.
 [discrete]
 ==== Supported by the community:
 
-* http://www.searchtechnologies.com/aspire-for-elasticsearch[Aspire for Elasticsearch]:
-  Aspire, from Search Technologies, is a powerful connector and processing
-  framework designed for unstructured data. It has connectors to internal and
-  external repositories including SharePoint, Documentum, Jive, RDB, file
-  systems, websites and more, and can transform and normalize this data before
-  indexing in Elasticsearch.
-
 * https://camel.apache.org/elasticsearch.html[Apache Camel Integration]:
   An Apache camel component to integrate Elasticsearch

@@ -117,13 +110,13 @@ releases 2.0 and later do not support rivers.
 * https://github.com/FriendsOfSymfony/FOSElasticaBundle[FOSElasticaBundle]:
   Symfony2 Bundle wrapping Elastica.
 
-* http://grails.org/plugin/elasticsearch[Grails]:
+* https://plugins.grails.org/plugin/puneetbehl/elasticsearch[Grails]:
   Elasticsearch Grails plugin.
 
-* http://haystacksearch.org/[Haystack]:
+* https://haystacksearch.org/[Haystack]:
   Modular search for Django
 
-* http://hibernate.org/search/[Hibernate Search]
+* https://hibernate.org/search/[Hibernate Search]
   Integration with Hibernate ORM, from the Hibernate team. Automatic synchronization of write operations, yet exposes full Elasticsearch capabilities for queries. Can return either Elasticsearch native or re-map queries back into managed entities loaded within transaction from the reference database.
 
 * https://github.com/spring-projects/spring-data-elasticsearch[Spring Data Elasticsearch]:

@@ -185,7 +178,7 @@ releases 2.0 and later do not support rivers.
 * https://github.com/radu-gheorghe/check-es[check-es]:
   Nagios/Shinken plugins for checking on Elasticsearch
 
-* http://sematext.com/spm/index.html[SPM for Elasticsearch]:
+* https://sematext.com/spm/index.html[SPM for Elasticsearch]:
   Performance monitoring with live charts showing cluster and node stats, integrated
   alerts, email reports, etc.
 * https://www.zabbix.com/integrations/elasticsearch[Zabbix monitoring template]:
@@ -93,7 +93,7 @@ To install a plugin from an HTTP URL:
 +
 [source,shell]
 -----------------------------------
-sudo bin/elasticsearch-plugin install http://some.domain/path/to/plugin.zip
+sudo bin/elasticsearch-plugin install https://some.domain/path/to/plugin.zip
 -----------------------------------
 +
 The plugin script will refuse to talk to an HTTPS URL with an untrusted
@@ -139,7 +139,7 @@ stored in the keystore are marked as "secure"; the other settings belong in the
 The client side timeout for any single request to Azure. The value should
 specify the time unit. For example, a value of `5s` specifies a 5 second
 timeout. There is no default value, which means that {es} uses the
-http://azure.github.io/azure-storage-java/com/microsoft/azure/storage/RequestOptions.html#setTimeoutIntervalInMs(java.lang.Integer)[default value]
+https://azure.github.io/azure-storage-java/com/microsoft/azure/storage/RequestOptions.html#setTimeoutIntervalInMs(java.lang.Integer)[default value]
 set by the Azure client (known as 5 minutes). This setting can be defined
 globally, per account, or both.

@@ -241,8 +241,10 @@ client.admin().cluster().preparePutRepository("my_backup_java1")
 [[repository-azure-validation]]
 ==== Repository validation rules
 
-According to the http://msdn.microsoft.com/en-us/library/dd135715.aspx[containers naming guide], a container name must
-be a valid DNS name, conforming to the following naming rules:
+According to the
+https://docs.microsoft.com/en-us/rest/api/storageservices/Naming-and-Referencing-Containers--Blobs--and-Metadata[containers
+naming guide], a container name must be a valid DNS name, conforming to the
+following naming rules:
 
 * Container names must start with a letter or number, and can contain only letters, numbers, and the dash (-) character.
 * Every dash (-) character must be immediately preceded and followed by a letter or number; consecutive dashes are not
@@ -57,7 +57,7 @@ The following settings are supported:
 `conf.<key>`::
 
   Inlined configuration parameter to be added to Hadoop configuration. (Optional)
-  Only client oriented properties from the hadoop http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xml[core] and http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml[hdfs] configuration files will be recognized by the plugin.
+  Only client oriented properties from the hadoop https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xml[core] and https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml[hdfs] configuration files will be recognized by the plugin.
 
 `compress`::

@@ -17,7 +17,7 @@ The plugin provides a repository type named `s3` which may be used when creating
 a repository. The repository defaults to using
 https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html[ECS
 IAM Role] or
-http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html[EC2
+https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html[EC2
 IAM Role] credentials for authentication. The only mandatory setting is the
 bucket name:

@@ -117,7 +117,7 @@ settings belong in the `elasticsearch.yml` file.
 
   The S3 service endpoint to connect to. This defaults to `s3.amazonaws.com`
   but the
-  http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region[AWS
+  https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region[AWS
   documentation] lists alternative S3 endpoints. If you are using an
   <<repository-s3-compatible-services,S3-compatible service>> then you should
   set this to the service's endpoint.

@@ -278,7 +278,7 @@ include::repository-shared-settings.asciidoc[]
 
   Minimum threshold below which the chunk is uploaded using a single request.
   Beyond this threshold, the S3 repository will use the
-  http://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html[AWS
+  https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html[AWS
   Multipart Upload API] to split the chunk into several parts, each of
   `buffer_size` length, and to upload each part in its own request. Note that
   setting a buffer size lower than `5mb` is not allowed since it will prevent

@@ -290,7 +290,7 @@ include::repository-shared-settings.asciidoc[]
 `canned_acl`::
 
   The S3 repository supports all
-  http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl[S3
+  https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl[S3
   canned ACLs] : `private`, `public-read`, `public-read-write`,
   `authenticated-read`, `log-delivery-write`, `bucket-owner-read`,
   `bucket-owner-full-control`. Defaults to `private`. You could specify a

@@ -308,7 +308,7 @@ include::repository-shared-settings.asciidoc[]
   the storage class of existing objects. Due to the extra complexity with the
   Glacier class lifecycle, it is not currently supported by the plugin. For
   more information about the different classes, see
-  http://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html[AWS
+  https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html[AWS
   Storage Classes Guide]
 
 NOTE: The option of defining client settings in the repository settings as
@@ -5,22 +5,22 @@
 Official low-level client for Elasticsearch. Its goal is to provide common
 ground for all Elasticsearch-related code in Python; because of this it tries
 to be opinion-free and very extendable. The full documentation is available at
-http://elasticsearch-py.readthedocs.org/
+https://elasticsearch-py.readthedocs.org/
 
 .Elasticsearch DSL
 ************************************************************************************
 For a more high level client library with more limited scope, have a look at
-http://elasticsearch-dsl.readthedocs.org/[elasticsearch-dsl] - a more pythonic library
+https://elasticsearch-dsl.readthedocs.org/[elasticsearch-dsl] - a more pythonic library
 sitting on top of `elasticsearch-py`.
 
 It provides a more convenient and idiomatic way to write and manipulate
-http://elasticsearch-dsl.readthedocs.org/en/latest/search_dsl.html[queries]. It
+https://elasticsearch-dsl.readthedocs.org/en/latest/search_dsl.html[queries]. It
 stays close to the Elasticsearch JSON DSL, mirroring its terminology and
 structure while exposing the whole range of the DSL from Python either directly
 using defined classes or a queryset-like expressions.
 
 It also provides an optional
-http://elasticsearch-dsl.readthedocs.org/en/latest/persistence.html#doctype[persistence
+https://elasticsearch-dsl.readthedocs.org/en/latest/persistence.html#doctype[persistence
 layer] for working with documents as Python objects in an ORM-like fashion:
 defining mappings, retrieving and saving documents, wrapping the document data
 in user-defined classes.

@@ -114,7 +114,7 @@ The client's features include:
 * pluggable architecture
 
 The client also contains a convenient set of
-http://elasticsearch-py.readthedocs.org/en/master/helpers.html[helpers] for
+https://elasticsearch-py.readthedocs.org/en/master/helpers.html[helpers] for
 some of the more engaging tasks like bulk indexing and reindexing.

@@ -126,7 +126,7 @@ Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
 You may obtain a copy of the License at
 
-    http://www.apache.org/licenses/LICENSE-2.0
+    https://www.apache.org/licenses/LICENSE-2.0
 
 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
@@ -2,7 +2,7 @@
 === GeoHash grid Aggregation
 
 A multi-bucket aggregation that works on `geo_point` fields and groups points into buckets that represent cells in a grid.
-The resulting grid can be sparse and only contains cells that have matching data. Each cell is labeled using a http://en.wikipedia.org/wiki/Geohash[geohash] which is of user-definable precision.
+The resulting grid can be sparse and only contains cells that have matching data. Each cell is labeled using a {wikipedia}/Geohash[geohash] which is of user-definable precision.
 
 * High precision geohashes have a long string length and represent cells that cover only a small area.
 * Low precision geohashes have a short string length and represent cells that each cover a large area.
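To make the precision/length trade-off concrete, here is a minimal sketch of standard geohash encoding (an illustration, not the plugin's implementation): each character encodes five bisections that alternate between longitude and latitude, so every extra character shrinks the cell.

[source,java]
----
// Sketch: textbook geohash encoder. precision = number of output characters.
public class GeohashSketch {
    static final String BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz";

    static String geohash(double lat, double lon, int precision) {
        double minLat = -90, maxLat = 90, minLon = -180, maxLon = 180;
        StringBuilder hash = new StringBuilder();
        boolean evenBit = true; // even bits bisect longitude, odd bits latitude
        int bit = 0, ch = 0;
        while (hash.length() < precision) {
            if (evenBit) {
                double mid = (minLon + maxLon) / 2;
                if (lon >= mid) { ch = (ch << 1) | 1; minLon = mid; }
                else            { ch = ch << 1;       maxLon = mid; }
            } else {
                double mid = (minLat + maxLat) / 2;
                if (lat >= mid) { ch = (ch << 1) | 1; minLat = mid; }
                else            { ch = ch << 1;       maxLat = mid; }
            }
            evenBit = !evenBit;
            if (++bit == 5) { hash.append(BASE32.charAt(ch)); bit = 0; ch = 0; }
        }
        return hash.toString();
    }

    public static void main(String[] args) {
        System.out.println(geohash(57.64911, 10.40744, 11)); // u4pruydqqvj
    }
}
----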
@@ -370,7 +370,7 @@ Chi square behaves like mutual information and can be configured with the same p
 
 
 ===== Google normalized distance
-Google normalized distance as described in "The Google Similarity Distance", Cilibrasi and Vitanyi, 2007 (http://arxiv.org/pdf/cs/0412098v3.pdf) can be used as significance score by adding the parameter
+Google normalized distance as described in "The Google Similarity Distance", Cilibrasi and Vitanyi, 2007 (https://arxiv.org/pdf/cs/0412098v3.pdf) can be used as significance score by adding the parameter
 
 [source,js]
 --------------------------------------------------

@@ -101,7 +101,7 @@ Filtering near-duplicate text is a difficult task at index-time but we can clean
 `filter_duplicate_text` setting.
 
 
-First let's look at an unfiltered real-world example using the http://research.signalmedia.co/newsir16/signal-dataset.html[Signal media dataset] of
+First let's look at an unfiltered real-world example using the https://research.signalmedia.co/newsir16/signal-dataset.html[Signal media dataset] of
 a million news articles covering a wide variety of news. Here are the raw significant text results for a search for the articles
 mentioning "elasticsearch":

@@ -71,7 +71,7 @@ values as the required memory usage and the need to communicate those
 per-shard sets between nodes would utilize too many resources of the cluster.
 
 This `cardinality` aggregation is based on the
-http://static.googleusercontent.com/media/research.google.com/fr//pubs/archive/40671.pdf[HyperLogLog++]
+https://static.googleusercontent.com/media/research.google.com/fr//pubs/archive/40671.pdf[HyperLogLog++]
 algorithm, which counts based on the hashes of the values with some interesting
 properties:
@@ -13,12 +13,12 @@ themselves. The regular expression defaults to `\W+` (or all non-word characters
 ========================================
 
 The pattern analyzer uses
-http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java Regular Expressions].
+https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java Regular Expressions].
 
 A badly written regular expression could run very slowly or even throw a
 StackOverflowError and cause the node it is running on to exit suddenly.
 
-Read more about http://www.regular-expressions.info/catastrophic.html[pathological regular expressions and how to avoid them].
+Read more about https://www.regular-expressions.info/catastrophic.html[pathological regular expressions and how to avoid them].
 
 ========================================

@@ -146,11 +146,11 @@ The `pattern` analyzer accepts the following parameters:
 [horizontal]
 `pattern`::
 
-    A http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java regular expression], defaults to `\W+`.
+    A https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java regular expression], defaults to `\W+`.
 
 `flags`::
 
-    Java regular expression http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html#field.summary[flags].
+    Java regular expression https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html#field.summary[flags].
     Flags should be pipe-separated, eg `"CASE_INSENSITIVE|COMMENTS"`.
 
 `lowercase`::

@@ -7,7 +7,7 @@
 The `standard` analyzer is the default analyzer which is used if none is
 specified. It provides grammar based tokenization (based on the Unicode Text
 Segmentation algorithm, as specified in
-http://unicode.org/reports/tr29/[Unicode Standard Annex #29]) and works well
+https://unicode.org/reports/tr29/[Unicode Standard Annex #29]) and works well
 for most languages.
 
 [discrete]

@@ -13,12 +13,12 @@ The replacement string can refer to capture groups in the regular expression.
 ========================================
 
 The pattern replace character filter uses
-http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java Regular Expressions].
+https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java Regular Expressions].
 
 A badly written regular expression could run very slowly or even throw a
 StackOverflowError and cause the node it is running on to exit suddenly.
 
-Read more about http://www.regular-expressions.info/catastrophic.html[pathological regular expressions and how to avoid them].
+Read more about https://www.regular-expressions.info/catastrophic.html[pathological regular expressions and how to avoid them].
 
 ========================================
@@ -30,17 +30,17 @@ The `pattern_replace` character filter accepts the following parameters:
 [horizontal]
 `pattern`::
 
-    A http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java regular expression]. Required.
+    A https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java regular expression]. Required.
 
 `replacement`::
 
     The replacement string, which can reference capture groups using the
     `$1`..`$9` syntax, as explained
-    http://docs.oracle.com/javase/8/docs/api/java/util/regex/Matcher.html#appendReplacement-java.lang.StringBuffer-java.lang.String-[here].
+    https://docs.oracle.com/javase/8/docs/api/java/util/regex/Matcher.html#appendReplacement-java.lang.StringBuffer-java.lang.String-[here].
 
 `flags`::
 
-    Java regular expression http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html#field.summary[flags].
+    Java regular expression https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html#field.summary[flags].
     Flags should be pipe-separated, eg `"CASE_INSENSITIVE|COMMENTS"`.
 
 [discrete]

@@ -5,7 +5,7 @@
 ++++
 
 Provides <<dictionary-stemmers,dictionary stemming>> based on a provided
-http://en.wikipedia.org/wiki/Hunspell[Hunspell dictionary]. The `hunspell`
+{wikipedia}/Hunspell[Hunspell dictionary]. The `hunspell`
 filter requires
 <<analysis-hunspell-tokenfilter-dictionary-config,configuration>> of one or more
 language-specific Hunspell dictionaries.
@@ -332,7 +332,7 @@ You cannot specify this parameter and `keywords_pattern`.
 +
 --
 (Required*, string)
-http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java
+https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java
 regular expression] used to match tokens. Tokens that match this expression are
 marked as keywords and not stemmed.

@@ -4,7 +4,7 @@
 <titleabbrev>KStem</titleabbrev>
 ++++
 
-Provides http://ciir.cs.umass.edu/pubfiles/ir-35.pdf[KStem]-based stemming for
+Provides https://ciir.cs.umass.edu/pubfiles/ir-35.pdf[KStem]-based stemming for
 the English language. The `kstem` filter combines
 <<algorithmic-stemmers,algorithmic stemming>> with a built-in
 <<dictionary-stemmers,dictionary>>.

@@ -108,7 +108,7 @@ Language-specific lowercase token filter to use. Valid values include:
 {lucene-analysis-docs}/el/GreekLowerCaseFilter.html[GreekLowerCaseFilter]
 
 `irish`::: Uses Lucene's
-http://lucene.apache.org/core/{lucene_version_path}/analyzers-common/org/apache/lucene/analysis/ga/IrishLowerCaseFilter.html[IrishLowerCaseFilter]
+{lucene-analysis-docs}/ga/IrishLowerCaseFilter.html[IrishLowerCaseFilter]
 
 `turkish`::: Uses Lucene's
 {lucene-analysis-docs}/tr/TurkishLowerCaseFilter.html[TurkishLowerCaseFilter]
@@ -10,34 +10,34 @@ characters of a certain language.
 [horizontal]
 Arabic::
 
-http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/ar/ArabicNormalizer.html[`arabic_normalization`]
+{lucene-analysis-docs}/ar/ArabicNormalizer.html[`arabic_normalization`]
 
 German::
 
-http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/de/GermanNormalizationFilter.html[`german_normalization`]
+{lucene-analysis-docs}/de/GermanNormalizationFilter.html[`german_normalization`]
 
 Hindi::
 
-http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/hi/HindiNormalizer.html[`hindi_normalization`]
+{lucene-analysis-docs}/hi/HindiNormalizer.html[`hindi_normalization`]
 
 Indic::
 
-http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/in/IndicNormalizer.html[`indic_normalization`]
+{lucene-analysis-docs}/in/IndicNormalizer.html[`indic_normalization`]
 
 Kurdish (Sorani)::
 
-http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/ckb/SoraniNormalizer.html[`sorani_normalization`]
+{lucene-analysis-docs}/ckb/SoraniNormalizer.html[`sorani_normalization`]
 
 Persian::
 
-http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/fa/PersianNormalizer.html[`persian_normalization`]
+{lucene-analysis-docs}/fa/PersianNormalizer.html[`persian_normalization`]
 
 Scandinavian::
 
-http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/miscellaneous/ScandinavianNormalizationFilter.html[`scandinavian_normalization`],
-http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/miscellaneous/ScandinavianFoldingFilter.html[`scandinavian_folding`]
+{lucene-analysis-docs}/miscellaneous/ScandinavianNormalizationFilter.html[`scandinavian_normalization`],
+{lucene-analysis-docs}/miscellaneous/ScandinavianFoldingFilter.html[`scandinavian_folding`]
 
 Serbian::
 
-http://lucene.apache.org/core/7_1_0/analyzers-common/org/apache/lucene/analysis/sr/SerbianNormalizationFilter.html[`serbian_normalization`]
+{lucene-analysis-docs}/sr/SerbianNormalizationFilter.html[`serbian_normalization`]
@@ -15,12 +15,12 @@ overlap.
 ========================================
 
 The pattern capture token filter uses
-http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java Regular Expressions].
+https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java Regular Expressions].
 
 A badly written regular expression could run very slowly or even throw a
 StackOverflowError and cause the node it is running on to exit suddenly.
 
-Read more about http://www.regular-expressions.info/catastrophic.html[pathological regular expressions and how to avoid them].
+Read more about https://www.regular-expressions.info/catastrophic.html[pathological regular expressions and how to avoid them].
 
 ========================================

@@ -7,7 +7,7 @@
 Uses a regular expression to match and replace token substrings.
 
 The `pattern_replace` filter uses
-http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java's
+https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java's
 regular expression syntax]. By default, the filter replaces matching
 substrings with an empty substring (`""`).

@@ -22,7 +22,7 @@ A poorly-written regular expression may run slowly or return a
 StackOverflowError, causing the node running the expression to exit suddenly.
 
 Read more about
-http://www.regular-expressions.info/catastrophic.html[pathological regular
+https://www.regular-expressions.info/catastrophic.html[pathological regular
 expressions and how to avoid them].
 ====

@@ -108,7 +108,7 @@ in each token. Defaults to `true`.
 `pattern`::
 (Required, string)
 Regular expression, written in
-http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java's
+https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java's
 regular expression syntax]. The filter replaces token substrings matching this
 pattern with the substring in the `replacement` parameter.
|
@ -125,7 +125,7 @@ Basque::
|
|||
http://snowball.tartarus.org/algorithms/basque/stemmer.html[*`basque`*]
|
||||
|
||||
Bengali::
|
||||
http://www.tandfonline.com/doi/abs/10.1080/02564602.1993.11437284[*`bengali`*]
|
||||
https://www.tandfonline.com/doi/abs/10.1080/02564602.1993.11437284[*`bengali`*]
|
||||
|
||||
Brazilian Portuguese::
|
||||
{lucene-analysis-docs}/br/BrazilianStemmer.html[*`brazilian`*]
|
||||
|
@ -137,7 +137,7 @@ Catalan::
|
|||
http://snowball.tartarus.org/algorithms/catalan/stemmer.html[*`catalan`*]
|
||||
|
||||
Czech::
|
||||
http://portal.acm.org/citation.cfm?id=1598600[*`czech`*]
|
||||
https://dl.acm.org/doi/10.1016/j.ipm.2009.06.001[*`czech`*]
|
||||
|
||||
Danish::
|
||||
http://snowball.tartarus.org/algorithms/danish/stemmer.html[*`danish`*]
|
||||
|
@ -148,9 +148,9 @@ http://snowball.tartarus.org/algorithms/kraaij_pohlmann/stemmer.html[`dutch_kp`]
|
|||
|
||||
English::
|
||||
http://snowball.tartarus.org/algorithms/porter/stemmer.html[*`english`*],
|
||||
http://ciir.cs.umass.edu/pubfiles/ir-35.pdf[`light_english`],
|
||||
https://ciir.cs.umass.edu/pubfiles/ir-35.pdf[`light_english`],
|
||||
http://snowball.tartarus.org/algorithms/lovins/stemmer.html[`lovins`],
|
||||
http://www.researchgate.net/publication/220433848_How_effective_is_suffixing[`minimal_english`],
|
||||
https://www.researchgate.net/publication/220433848_How_effective_is_suffixing[`minimal_english`],
|
||||
http://snowball.tartarus.org/algorithms/english/stemmer.html[`porter2`],
|
||||
{lucene-analysis-docs}/en/EnglishPossessiveFilter.html[`possessive_english`]
|
||||
|
||||
|
@ -162,29 +162,29 @@ http://snowball.tartarus.org/algorithms/finnish/stemmer.html[*`finnish`*],
|
|||
http://clef.isti.cnr.it/2003/WN_web/22.pdf[`light_finnish`]
|
||||
|
||||
French::
|
||||
http://dl.acm.org/citation.cfm?id=1141523[*`light_french`*],
|
||||
https://dl.acm.org/citation.cfm?id=1141523[*`light_french`*],
|
||||
http://snowball.tartarus.org/algorithms/french/stemmer.html[`french`],
|
||||
http://dl.acm.org/citation.cfm?id=318984[`minimal_french`]
|
||||
https://dl.acm.org/citation.cfm?id=318984[`minimal_french`]
|
||||
|
||||
Galician::
|
||||
http://bvg.udc.es/recursos_lingua/stemming.jsp[*`galician`*],
|
||||
http://bvg.udc.es/recursos_lingua/stemming.jsp[`minimal_galician`] (Plural step only)
|
||||
|
||||
German::
|
||||
http://dl.acm.org/citation.cfm?id=1141523[*`light_german`*],
|
||||
https://dl.acm.org/citation.cfm?id=1141523[*`light_german`*],
|
||||
http://snowball.tartarus.org/algorithms/german/stemmer.html[`german`],
|
||||
http://snowball.tartarus.org/algorithms/german2/stemmer.html[`german2`],
|
||||
http://members.unine.ch/jacques.savoy/clef/morpho.pdf[`minimal_german`]
|
||||
|
||||
Greek::
|
||||
http://sais.se/mthprize/2007/ntais2007.pdf[*`greek`*]
|
||||
https://sais.se/mthprize/2007/ntais2007.pdf[*`greek`*]
|
||||
|
||||
Hindi::
|
||||
http://computing.open.ac.uk/Sites/EACLSouthAsia/Papers/p6-Ramanathan.pdf[*`hindi`*]
|
||||
|
||||
Hungarian::
|
||||
http://snowball.tartarus.org/algorithms/hungarian/stemmer.html[*`hungarian`*],
|
||||
http://dl.acm.org/citation.cfm?id=1141523&dl=ACM&coll=DL&CFID=179095584&CFTOKEN=80067181[`light_hungarian`]
|
||||
https://dl.acm.org/citation.cfm?id=1141523&dl=ACM&coll=DL&CFID=179095584&CFTOKEN=80067181[`light_hungarian`]
|
||||
|
||||
Indonesian::
|
||||
http://www.illc.uva.nl/Publications/ResearchReports/MoL-2003-02.text.pdf[*`indonesian`*]
|
||||
|
@ -193,7 +193,7 @@ Irish::
|
|||
http://snowball.tartarus.org/otherapps/oregan/intro.html[*`irish`*]
|
||||
|
||||
Italian::
|
||||
http://www.ercim.eu/publication/ws-proceedings/CLEF2/savoy.pdf[*`light_italian`*],
|
||||
https://www.ercim.eu/publication/ws-proceedings/CLEF2/savoy.pdf[*`light_italian`*],
|
||||
http://snowball.tartarus.org/algorithms/italian/stemmer.html[`italian`]
|
||||
|
||||
Kurdish (Sorani)::
|
||||
|
@ -203,7 +203,7 @@ Latvian::
|
|||
{lucene-analysis-docs}/lv/LatvianStemmer.html[*`latvian`*]
|
||||
|
||||
Lithuanian::
|
||||
http://svn.apache.org/viewvc/lucene/dev/branches/lucene_solr_5_3/lucene/analysis/common/src/java/org/apache/lucene/analysis/lt/stem_ISO_8859_1.sbl?view=markup[*`lithuanian`*]
|
||||
https://svn.apache.org/viewvc/lucene/dev/branches/lucene_solr_5_3/lucene/analysis/common/src/java/org/apache/lucene/analysis/lt/stem_ISO_8859_1.sbl?view=markup[*`lithuanian`*]
|
||||
|
||||
Norwegian (Bokmål)::
|
||||
http://snowball.tartarus.org/algorithms/norwegian/stemmer.html[*`norwegian`*],
|
||||
|
@ -215,20 +215,20 @@ Norwegian (Nynorsk)::
|
|||
{lucene-analysis-docs}/no/NorwegianMinimalStemmer.html[`minimal_nynorsk`]
|
||||
|
||||
Portuguese::
|
||||
http://dl.acm.org/citation.cfm?id=1141523&dl=ACM&coll=DL&CFID=179095584&CFTOKEN=80067181[*`light_portuguese`*],
|
||||
https://dl.acm.org/citation.cfm?id=1141523&dl=ACM&coll=DL&CFID=179095584&CFTOKEN=80067181[*`light_portuguese`*],
|
||||
pass:macros[http://www.inf.ufrgs.br/~buriol/papers/Orengo_CLEF07.pdf[`minimal_portuguese`\]],
|
||||
http://snowball.tartarus.org/algorithms/portuguese/stemmer.html[`portuguese`],
|
||||
http://www.inf.ufrgs.br/\~viviane/rslp/index.htm[`portuguese_rslp`]
|
||||
https://www.inf.ufrgs.br/\~viviane/rslp/index.htm[`portuguese_rslp`]
|
||||
|
||||
Romanian::
|
||||
http://snowball.tartarus.org/algorithms/romanian/stemmer.html[*`romanian`*]
|
||||
|
||||
Russian::
|
||||
http://snowball.tartarus.org/algorithms/russian/stemmer.html[*`russian`*],
|
||||
http://doc.rero.ch/lm.php?url=1000%2C43%2C4%2C20091209094227-CA%2FDolamic_Ljiljana_-_Indexing_and_Searching_Strategies_for_the_Russian_20091209.pdf[`light_russian`]
|
||||
https://doc.rero.ch/lm.php?url=1000%2C43%2C4%2C20091209094227-CA%2FDolamic_Ljiljana_-_Indexing_and_Searching_Strategies_for_the_Russian_20091209.pdf[`light_russian`]
|
||||
|
||||
Spanish::
|
||||
http://www.ercim.eu/publication/ws-proceedings/CLEF2/savoy.pdf[*`light_spanish`*],
|
||||
https://www.ercim.eu/publication/ws-proceedings/CLEF2/savoy.pdf[*`light_spanish`*],
|
||||
http://snowball.tartarus.org/algorithms/spanish/stemmer.html[`spanish`]
|
||||
|
||||
Swedish::
|
||||
|
|
|
@@ -145,7 +145,7 @@ However, it is recommended to define large synonyms set in a file using
 [discrete]
 ==== WordNet synonyms
 
-Synonyms based on http://wordnet.princeton.edu/[WordNet] format can be
+Synonyms based on https://wordnet.princeton.edu/[WordNet] format can be
 declared using `format`:
 
 [source,console]

@@ -136,7 +136,7 @@ However, it is recommended to define large synonyms set in a file using
 [discrete]
 ==== WordNet synonyms
 
-Synonyms based on http://wordnet.princeton.edu/[WordNet] format can be
+Synonyms based on https://wordnet.princeton.edu/[WordNet] format can be
 declared using `format`:
 
 [source,console]

@@ -371,7 +371,7 @@ $ => DIGIT
 
 # in some cases you might not want to split on ZWJ
 # this also tests the case where we need a bigger byte[]
-# see http://en.wikipedia.org/wiki/Zero-width_joiner
+# see https://en.wikipedia.org/wiki/Zero-width_joiner
 \\u200D => ALPHANUM
 ----

@@ -320,7 +320,7 @@ $ => DIGIT
 
 # in some cases you might not want to split on ZWJ
 # this also tests the case where we need a bigger byte[]
-# see http://en.wikipedia.org/wiki/Zero-width_joiner
+# see https://en.wikipedia.org/wiki/Zero-width_joiner
 \\u200D => ALPHANUM
 ----
@@ -16,12 +16,12 @@ non-word characters.
 ========================================

 The pattern tokenizer uses
-http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java Regular Expressions].
+https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java Regular Expressions].

 A badly written regular expression could run very slowly or even throw a
 StackOverflowError and cause the node it is running on to exit suddenly.

-Read more about http://www.regular-expressions.info/catastrophic.html[pathological regular expressions and how to avoid them].
+Read more about https://www.regular-expressions.info/catastrophic.html[pathological regular expressions and how to avoid them].

 ========================================
@@ -107,11 +107,11 @@ The `pattern` tokenizer accepts the following parameters:
 [horizontal]
 `pattern`::

-A http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java regular expression], defaults to `\W+`.
+A https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java regular expression], defaults to `\W+`.

 `flags`::

-Java regular expression http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html#field.summary[flags].
+Java regular expression https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html#field.summary[flags].
 Flags should be pipe-separated, eg `"CASE_INSENSITIVE|COMMENTS"`.

 `group`::
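For orientation, here is a minimal sketch of these parameters in use (the index and tokenizer names are illustrative, not part of the change):

[source,console]
----
PUT my-index-000001
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "pattern",
          "pattern": ","
        }
      }
    }
  }
}
----

Text run through `my_analyzer` would then be split on commas instead of the default `\W+` non-word characters.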
@@ -6,7 +6,7 @@

 The `standard` tokenizer provides grammar based tokenization (based on the
 Unicode Text Segmentation algorithm, as specified in
-http://unicode.org/reports/tr29/[Unicode Standard Annex #29]) and works well
+https://unicode.org/reports/tr29/[Unicode Standard Annex #29]) and works well
 for most languages.

 [discrete]
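To see the tokenizer in action, a quick `_analyze` sketch (the sample sentence is illustrative):

[source,console]
----
POST _analyze
{
  "tokenizer": "standard",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
----

This yields the terms `The`, `2`, `QUICK`, `Brown`, `Foxes`, and so on, split according to the Annex #29 word-boundary rules.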
@@ -542,7 +542,7 @@ Some queries and APIs support parameters to allow inexact _fuzzy_ matching,
 using the `fuzziness` parameter.

 When querying `text` or `keyword` fields, `fuzziness` is interpreted as a
-http://en.wikipedia.org/wiki/Levenshtein_distance[Levenshtein Edit Distance]
+{wikipedia}/Levenshtein_distance[Levenshtein Edit Distance]
 -- the number of one character changes that need to be made to one string to
 make it the same as another string.
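As a quick sketch of the parameter in use (the field name and query text are illustrative), `fuzziness` can be set on a `match` query:

[source,console]
----
GET /_search
{
  "query": {
    "match": {
      "message": {
        "query": "quikc brwn foks",
        "fuzziness": "AUTO"
      }
    }
  }
}
----

With `AUTO`, the allowed edit distance scales with term length, so each misspelled term above can still match its intended word.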
@@ -107,7 +107,7 @@ Perl::

 Python::

-See http://elasticsearch-py.readthedocs.org/en/master/helpers.html[elasticsearch.helpers.*]
+See https://elasticsearch-py.readthedocs.org/en/master/helpers.html[elasticsearch.helpers.*]

 JavaScript::
@@ -61,7 +61,7 @@ PUT /index/_mapping

 TF/IDF based similarity that has built-in tf normalization and
 is supposed to work better for short fields (like names). See
-http://en.wikipedia.org/wiki/Okapi_BM25[Okapi_BM25] for more details.
+{wikipedia}/Okapi_BM25[Okapi_BM25] for more details.
 This similarity has the following options:

 [horizontal]
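A minimal sketch of configuring such a similarity on an index (the index and similarity names are illustrative; `k1` and `b` are shown at common default values):

[source,console]
----
PUT /my-index
{
  "settings": {
    "index": {
      "similarity": {
        "my_similarity": {
          "type": "BM25",
          "k1": 1.2,
          "b": 0.75
        }
      }
    }
  }
}
----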
@@ -114,7 +114,7 @@ Type name: `DFR`
 [[dfi]]
 ==== DFI similarity

-Similarity that implements the http://trec.nist.gov/pubs/trec21/papers/irra.web.nb.pdf[divergence from independence]
+Similarity that implements the https://trec.nist.gov/pubs/trec21/papers/irra.web.nb.pdf[divergence from independence]
 model.
 This similarity has the following options:
@@ -10,7 +10,7 @@ that is generally written for humans and not computer consumption.
 This processor comes packaged with many
 https://github.com/elastic/elasticsearch/blob/{branch}/libs/grok/src/main/resources/patterns[reusable patterns].

-If you need help building patterns to match your logs, you will find the {kibana-ref}/xpack-grokdebugger.html[Grok Debugger] tool quite useful! The Grok Debugger is an {xpack} feature under the Basic License and is therefore *free to use*. The Grok Constructor at <http://grokconstructor.appspot.com/> is also a useful tool.
+If you need help building patterns to match your logs, you will find the {kibana-ref}/xpack-grokdebugger.html[Grok Debugger] tool quite useful! The Grok Debugger is an {xpack} feature under the Basic License and is therefore *free to use*. The https://grokconstructor.appspot.com[Grok Constructor] is also a useful tool.

 [[grok-basics]]
 ==== Grok Basics
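As a rough sketch of the processor at work (the pattern and log line are illustrative), a grok pipeline can be tried out with the simulate API:

[source,console]
----
POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      {
        "grok": {
          "field": "message",
          "patterns": ["%{IP:client} %{WORD:method} %{URIPATHPARAM:request}"]
        }
      }
    ]
  },
  "docs": [
    {
      "_source": {
        "message": "55.3.244.1 GET /index.html"
      }
    }
  ]
}
----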
@@ -85,7 +85,7 @@ GET my-index-000001/_search
 <2> Geo-point expressed as a string with the format: `"lat,lon"`.
 <3> Geo-point expressed as a geohash.
 <4> Geo-point expressed as an array with the format: [ `lon`, `lat`]
-<5> Geo-point expressed as a http://docs.opengeospatial.org/is/12-063r5/12-063r5.html[Well-Known Text]
+<5> Geo-point expressed as a https://docs.opengeospatial.org/is/12-063r5/12-063r5.html[Well-Known Text]
 POINT with the format: `"POINT(lon lat)"`
 <6> A geo-bounding box query which finds all geo-points that fall inside the box.
|
|||
==================================================
|
||||
|
||||
[NOTE]
|
||||
A point can be expressed as a http://en.wikipedia.org/wiki/Geohash[geohash].
|
||||
A point can be expressed as a {wikipedia}/Geohash[geohash].
|
||||
Geohashes are https://en.wikipedia.org/wiki/Base32[base32] encoded strings of
|
||||
the bits of the latitude and longitude interleaved. Each character in a geohash
|
||||
adds additional 5 bits to the precision. So the longer the hash, the more
|
||||
|
|
|
@@ -156,7 +156,7 @@ triangular mesh (see <<geoshape-indexing-approach>>).
 Multiple PrefixTree implementations are provided:

 * GeohashPrefixTree - Uses
-http://en.wikipedia.org/wiki/Geohash[geohashes] for grid squares.
+{wikipedia}/Geohash[geohashes] for grid squares.
 Geohashes are base32 encoded strings of the bits of the latitude and
 longitude interleaved. So the longer the hash, the more precise it is.
 Each character added to the geohash represents another tree level and
@@ -164,7 +164,7 @@ adds 5 bits of precision to the geohash. A geohash represents a
 rectangular area and has 32 sub rectangles. The maximum number of levels
 in Elasticsearch is 24; the default is 9.
 * QuadPrefixTree - Uses a
-http://en.wikipedia.org/wiki/Quadtree[quadtree] for grid squares.
+{wikipedia}/Quadtree[quadtree] for grid squares.
 Similar to geohash, quad trees interleave the bits of the latitude and
 longitude the resulting hash is a bit set. A tree level in a quad tree
 represents 2 bits in this bit set, one for each coordinate. The maximum
@@ -254,8 +254,8 @@ Geo-shape queries on geo-shapes implemented with PrefixTrees will not be execute
 [discrete]
 ==== Input Structure

-Shapes can be represented using either the http://www.geojson.org[GeoJSON]
-or http://docs.opengeospatial.org/is/12-063r5/12-063r5.html[Well-Known Text]
+Shapes can be represented using either the http://geojson.org[GeoJSON]
+or https://docs.opengeospatial.org/is/12-063r5/12-063r5.html[Well-Known Text]
 (WKT) format. The following table provides a mapping of GeoJSON and WKT
 to Elasticsearch types:
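For orientation, a GeoJSON point indexed into such a field might look like this (the index and field names are illustrative):

[source,console]
----
POST /example/_doc
{
  "location" : {
    "type" : "point",
    "coordinates" : [ -77.03653, 38.897676 ]
  }
}
----

The equivalent WKT form would be `"location" : "POINT (-77.03653 38.897676)"`.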
@@ -356,7 +356,7 @@ House to the US Capitol Building.

 [discrete]
 [[geo-polygon]]
-===== http://www.geojson.org/geojson-spec.html#id4[Polygon]
+===== http://geojson.org/geojson-spec.html#id4[Polygon]

 A polygon is defined by a list of a list of points. The first and last
 points in each (outer) list must be the same (the polygon must be
|
|||
https://tools.ietf.org/html/rfc7946#section-3.1.6[GeoJSON] mandates that the
|
||||
outer polygon must be counterclockwise and interior shapes must be clockwise,
|
||||
which agrees with the Open Geospatial Consortium (OGC)
|
||||
http://www.opengeospatial.org/standards/sfa[Simple Feature Access]
|
||||
https://www.opengeospatial.org/standards/sfa[Simple Feature Access]
|
||||
specification for vertex ordering.
|
||||
|
||||
Elasticsearch accepts both clockwise and counterclockwise polygons if they
|
||||
|
@@ -467,7 +467,7 @@ POST /example/_doc

 [discrete]
 [[geo-multipoint]]
-===== http://www.geojson.org/geojson-spec.html#id5[MultiPoint]
+===== http://geojson.org/geojson-spec.html#id5[MultiPoint]

 The following is an example of a list of geojson points:
@@ -496,7 +496,7 @@ POST /example/_doc

 [discrete]
 [[geo-multilinestring]]
-===== http://www.geojson.org/geojson-spec.html#id6[MultiLineString]
+===== http://geojson.org/geojson-spec.html#id6[MultiLineString]

 The following is an example of a list of geojson linestrings:
@@ -527,7 +527,7 @@ POST /example/_doc

 [discrete]
 [[geo-multipolygon]]
-===== http://www.geojson.org/geojson-spec.html#id7[MultiPolygon]
+===== http://geojson.org/geojson-spec.html#id7[MultiPolygon]

 The following is an example of a list of geojson polygons (second polygon contains a hole):
@@ -61,7 +61,7 @@ PUT my-index-000001/_doc/5
 <1> Point expressed as an object, with `x` and `y` keys.
 <2> Point expressed as a string with the format: `"x,y"`.
 <3> Point expressed as an array with the format: [ `x`, `y`]
-<4> Point expressed as a http://docs.opengeospatial.org/is/12-063r5/12-063r5.html[Well-Known Text]
+<4> Point expressed as a https://docs.opengeospatial.org/is/12-063r5/12-063r5.html[Well-Known Text]
 POINT with the format: `"POINT(x y)"`

 The coordinates provided to the indexer are single precision floating point values so
@@ -19,7 +19,7 @@ You can query documents using this type using
 ==== Mapping Options

 Like the <<geo-shape, geo_shape>> field type, the `shape` field mapping maps
-http://www.geojson.org[GeoJSON] or http://docs.opengeospatial.org/is/12-063r5/12-063r5.html[Well-Known Text]
+http://geojson.org[GeoJSON] or https://docs.opengeospatial.org/is/12-063r5/12-063r5.html[Well-Known Text]
 (WKT) geometry objects to the shape type. To enable it, users must explicitly map
 fields to the shape type.
@@ -96,8 +96,8 @@ precision floats for the vertex values so accuracy is guaranteed to the same pre
 [discrete]
 ==== Input Structure

-Shapes can be represented using either the http://www.geojson.org[GeoJSON]
-or http://docs.opengeospatial.org/is/12-063r5/12-063r5.html[Well-Known Text]
+Shapes can be represented using either the http://geojson.org[GeoJSON]
+or https://docs.opengeospatial.org/is/12-063r5/12-063r5.html[Well-Known Text]
 (WKT) format. The following table provides a mapping of GeoJSON and WKT
 to Elasticsearch types:
@@ -190,7 +190,7 @@ POST /example/_doc

 [discrete]
 [[polygon]]
-===== http://www.geojson.org/geojson-spec.html#id4[Polygon]
+===== http://geojson.org/geojson-spec.html#id4[Polygon]

 A polygon is defined by a list of a list of points. The first and last
 points in each (outer) list must be the same (the polygon must be
@@ -251,7 +251,7 @@ POST /example/_doc
 https://tools.ietf.org/html/rfc7946#section-3.1.6[GeoJSON] mandates that the
 outer polygon must be counterclockwise and interior shapes must be clockwise,
 which agrees with the Open Geospatial Consortium (OGC)
-http://www.opengeospatial.org/standards/sfa[Simple Feature Access]
+https://www.opengeospatial.org/standards/sfa[Simple Feature Access]
 specification for vertex ordering.

 By default Elasticsearch expects vertices in counterclockwise (right hand rule)
@@ -277,7 +277,7 @@ POST /example/_doc

 [discrete]
 [[multipoint]]
-===== http://www.geojson.org/geojson-spec.html#id5[MultiPoint]
+===== http://geojson.org/geojson-spec.html#id5[MultiPoint]

 The following is an example of a list of geojson points:
@@ -306,7 +306,7 @@ POST /example/_doc

 [discrete]
 [[multilinestring]]
-===== http://www.geojson.org/geojson-spec.html#id6[MultiLineString]
+===== http://geojson.org/geojson-spec.html#id6[MultiLineString]

 The following is an example of a list of geojson linestrings:
@@ -337,7 +337,7 @@ POST /example/_doc

 [discrete]
 [[multipolygon]]
-===== http://www.geojson.org/geojson-spec.html#id7[MultiPolygon]
+===== http://geojson.org/geojson-spec.html#id7[MultiPolygon]

 The following is an example of a list of geojson polygons (second polygon contains a hole):
@@ -7,13 +7,13 @@ The HTTP layer exposes {es}'s REST APIs over HTTP.
 The HTTP mechanism is completely asynchronous in nature, meaning that
 there is no blocking thread waiting for a response. The benefit of using
 asynchronous communication for HTTP is solving the
-http://en.wikipedia.org/wiki/C10k_problem[C10k problem].
+{wikipedia}/C10k_problem[C10k problem].

 When possible, consider using
-http://en.wikipedia.org/wiki/Keepalive#HTTP_Keepalive[HTTP keep alive]
+{wikipedia}/Keepalive#HTTP_Keepalive[HTTP keep alive]
 when connecting for better performance and try to get your favorite
 client not to do
-http://en.wikipedia.org/wiki/Chunked_transfer_encoding[HTTP chunking].
+{wikipedia}/Chunked_transfer_encoding[HTTP chunking].
 // end::modules-http-description-tag[]

 [http-settings]
@@ -5,7 +5,7 @@
 ++++

 Returns documents that contain terms similar to the search term, as measured by
-a http://en.wikipedia.org/wiki/Levenshtein_distance[Levenshtein edit distance].
+a https://en.wikipedia.org/wiki/Levenshtein_distance[Levenshtein edit distance].

 An edit distance is the number of one-character changes needed to turn one term
 into another. These changes can include:
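For reference, a minimal sketch of the query itself (the field name and value are illustrative):

[source,console]
----
GET /_search
{
  "query": {
    "fuzzy": {
      "user.id": {
        "value": "ki",
        "fuzziness": "AUTO"
      }
    }
  }
}
----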
@@ -23,7 +23,7 @@ examples.
 ==== Inline Shape Definition

 Similar to the `geo_shape` type, the `geo_shape` query uses
-http://www.geojson.org[GeoJSON] to represent shapes.
+http://geojson.org[GeoJSON] to represent shapes.

 Given the following index with locations as `geo_shape` fields:
@@ -123,7 +123,7 @@ operator:
 quikc~ brwn~ foks~

 This uses the
-http://en.wikipedia.org/wiki/Damerau-Levenshtein_distance[Damerau-Levenshtein distance]
+{wikipedia}/Damerau-Levenshtein_distance[Damerau-Levenshtein distance]
 to find all terms with a maximum of
 two changes, where a change is the insertion, deletion
 or substitution of a single character, or transposition of two adjacent
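Put together as a full request, the fuzzy terms above might be sent like this (the field name is illustrative):

[source,console]
----
GET /_search
{
  "query": {
    "query_string": {
      "query": "quikc~ brwn~ foks~",
      "default_field": "message"
    }
  }
}
----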
@@ -82,7 +82,7 @@ Index several documents to the `test` index.
 ----
 PUT /test/_doc/1?refresh
 {
-  "url": "http://en.wikipedia.org/wiki/2016_Summer_Olympics",
+  "url": "https://en.wikipedia.org/wiki/2016_Summer_Olympics",
   "content": "Rio 2016",
   "pagerank": 50.3,
   "url_length": 42,
@@ -94,7 +94,7 @@ PUT /test/_doc/1?refresh

 PUT /test/_doc/2?refresh
 {
-  "url": "http://en.wikipedia.org/wiki/2016_Brazilian_Grand_Prix",
+  "url": "https://en.wikipedia.org/wiki/2016_Brazilian_Grand_Prix",
   "content": "Formula One motor race held on 13 November 2016",
   "pagerank": 50.3,
   "url_length": 47,
|
|||
|
||||
PUT /test/_doc/3?refresh
|
||||
{
|
||||
"url": "http://en.wikipedia.org/wiki/Deadpool_(film)",
|
||||
"url": "https://en.wikipedia.org/wiki/Deadpool_(film)",
|
||||
"content": "Deadpool is a 2016 American superhero film",
|
||||
"pagerank": 50.3,
|
||||
"url_length": 37,
|
||||
|
|
|
@@ -18,7 +18,7 @@ examples.
 ==== Inline Shape Definition

 Similar to the `geo_shape` query, the `shape` query uses
-http://www.geojson.org[GeoJSON] or
+http://geojson.org[GeoJSON] or
 https://en.wikipedia.org/wiki/Well-known_text_representation_of_geometry[Well Known Text]
 (WKT) to represent shapes.
@@ -26,7 +26,7 @@ Returns documents that contain any indexed value for a field.
 <<query-dsl-fuzzy-query,`fuzzy` query>>::
 Returns documents that contain terms similar to the search term. {es} measures
 similarity, or fuzziness, using a
-http://en.wikipedia.org/wiki/Levenshtein_distance[Levenshtein edit distance].
+{wikipedia}/Levenshtein_distance[Levenshtein edit distance].

 <<query-dsl-ids-query,`ids` query>>::
 Returns documents based on their <<mapping-id-field, document IDs>>.
@@ -8,9 +8,9 @@ A cron expression is a string of the following form:
 <seconds> <minutes> <hours> <day_of_month> <month> <day_of_week> [year]
 ------------------------------

-{es} uses the cron parser from the http://www.quartz-scheduler.org[Quartz Job Scheduler].
+{es} uses the cron parser from the https://quartz-scheduler.org[Quartz Job Scheduler].
 For more information about writing Quartz cron expressions, see the
-http://www.quartz-scheduler.org/documentation/quartz-2.2.x/tutorials/tutorial-lesson-06.html[Quartz CronTrigger Tutorial].
+http://www.quartz-scheduler.org/documentation/quartz-2.2.2/tutorials/crontrigger.html[Quartz CronTrigger Tutorial].

 All schedule times are in coordinated universal time (UTC); other timezones are not supported.
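To make the format concrete, an illustrative expression (not part of the change) that fires at 9:05:00 UTC every day:

------------------------------
0 5 9 * * ?
------------------------------

Here `0`, `5`, and `9` are the seconds, minutes, and hours fields, `*` matches every day of the month and every month, and the Quartz `?` marks the day-of-week field as unspecified.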
@@ -19,7 +19,7 @@ This allows for very fast execution, even faster than if you had written a `native`

 Expressions support a subset of javascript syntax: a single expression.

-See the link:http://lucene.apache.org/core/6_0_0/expressions/index.html?org/apache/lucene/expressions/js/package-summary.html[expressions module documentation]
+See the https://lucene.apache.org/core/{lucene_version_path}/expressions/index.html?org/apache/lucene/expressions/js/package-summary.html[expressions module documentation]
 for details on what operators and functions are available.

 Variables in `expression` scripts are available to access:
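A rough sketch of an expression script in a search request (the field name is illustrative and assumes a numeric field):

[source,console]
----
GET /_search
{
  "script_fields": {
    "doubled": {
      "script": {
        "lang": "expression",
        "source": "doc['my_field'].value * 2"
      }
    }
  }
}
----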
@@ -53,7 +53,7 @@ Bad:
 [[modules-scripting-other-layers]]
 === Other security layers
 In addition to user privileges and script sandboxing Elasticsearch uses the
-http://www.oracle.com/technetwork/java/seccodeguide-139067.html[Java Security Manager]
+https://www.oracle.com/java/technologies/javase/seccodeguide.html[Java Security Manager]
 and native security tools as additional layers of security.

 As part of its startup sequence Elasticsearch enables the Java Security Manager
@@ -234,7 +234,7 @@ entire query. Just provide the stored template's ID and the template parameters.
 This is useful when you want to run a commonly used query quickly and without
 mistakes.

-Search templates use the http://mustache.github.io/mustache.5.html[mustache
+Search templates use the https://mustache.github.io/mustache.5.html[mustache
 templating language]. See <<search-template>> for more information and examples.

 [discrete]
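As a sketch of the stored-template flow described above (the template ID, index, and field names are illustrative):

[source,console]
----
PUT _scripts/my-search-template
{
  "script": {
    "lang": "mustache",
    "source": {
      "query": {
        "match": {
          "message": "{{query_string}}"
        }
      }
    }
  }
}

GET my-index/_search/template
{
  "id": "my-search-template",
  "params": {
    "query_string": "hello world"
  }
}
----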
@@ -405,7 +405,7 @@ in the query. Defaults to 10.
 Expected Reciprocal Rank (ERR) is an extension of the classical reciprocal rank
 for the graded relevance case (Olivier Chapelle, Donald Metzler, Ya Zhang, and
 Pierre Grinspan. 2009.
-http://olivier.chapelle.cc/pub/err.pdf[Expected reciprocal rank for graded relevance].)
+https://olivier.chapelle.cc/pub/err.pdf[Expected reciprocal rank for graded relevance].)

 It is based on the assumption of a cascade model of search, in which a user
 scans through ranked search results in order and stops at the first document
@@ -25,7 +25,7 @@ Perl::

 Python::

-See http://elasticsearch-py.readthedocs.org/en/master/helpers.html[elasticsearch.helpers.*]
+See https://elasticsearch-py.readthedocs.org/en/master/helpers.html[elasticsearch.helpers.*]

 JavaScript::
@@ -34,7 +34,7 @@ render search requests, before they are executed and fill existing templates
 with template parameters.

 For more information on how Mustache templating and what kind of templating you
-can do with it check out the http://mustache.github.io/mustache.5.html[online
+can do with it check out the https://mustache.github.io/mustache.5.html[online
 documentation of the mustache project].

 NOTE: The mustache language is implemented in {es} as a sandboxed scripting
@@ -604,7 +604,7 @@ query as a string instead:

 The `{{#url}}value{{/url}}` function can be used to encode a string value
 in a HTML encoding form as defined in by the
-http://www.w3.org/TR/html4/[HTML specification].
+https://www.w3.org/TR/html4/[HTML specification].

 As an example, it is useful to encode a URL:
@@ -24,14 +24,14 @@ platforms, but it is possible that it will work on other platforms too.
 == Java (JVM) Version

 Elasticsearch is built using Java, and includes a bundled version of
-http://openjdk.java.net[OpenJDK] from the JDK maintainers (GPLv2+CE)
+https://openjdk.java.net[OpenJDK] from the JDK maintainers (GPLv2+CE)
 within each distribution. The bundled JVM is the recommended JVM and
 is located within the `jdk` directory of the Elasticsearch home directory.

 To use your own version of Java, set the `JAVA_HOME` environment variable.
 If you must use a version of Java that is different from the bundled JVM,
 we recommend using a link:/support/matrix[supported]
-http://www.oracle.com/technetwork/java/eol-135779.html[LTS version of Java].
+https://www.oracle.com/technetwork/java/eol-135779.html[LTS version of Java].
 Elasticsearch will refuse to start if a known-bad version of Java is used.
 The bundled JVM directory may be removed when using your own JVM.
@@ -48,7 +48,7 @@ change the config directory location.
 [discrete]
 === Config file format

-The configuration format is http://www.yaml.org/[YAML]. Here is an
+The configuration format is https://yaml.org/[YAML]. Here is an
 example of changing the path of the data and logs directories:

 [source,yaml]
@@ -11,7 +11,7 @@ The latest stable version of Elasticsearch can be found on the
 link:/downloads/elasticsearch[Download Elasticsearch] page. Other versions can
 be found on the link:/downloads/past-releases[Past Releases page].

-NOTE: Elasticsearch includes a bundled version of http://openjdk.java.net[OpenJDK]
+NOTE: Elasticsearch includes a bundled version of https://openjdk.java.net[OpenJDK]
 from the JDK maintainers (GPLv2+CE). To use your own version of Java,
 see the <<jvm-version, JVM version requirements>>
@@ -15,7 +15,7 @@ The latest stable version of Elasticsearch can be found on the
 link:/downloads/elasticsearch[Download Elasticsearch] page. Other versions can
 be found on the link:/downloads/past-releases[Past Releases page].

-NOTE: Elasticsearch includes a bundled version of http://openjdk.java.net[OpenJDK]
+NOTE: Elasticsearch includes a bundled version of https://openjdk.java.net[OpenJDK]
 from the JDK maintainers (GPLv2+CE). To use your own version of Java,
 see the <<jvm-version, JVM version requirements>>
@@ -10,7 +10,7 @@ link:/downloads/elasticsearch[Download Elasticsearch] page.
 Other versions can be found on the
 link:/downloads/past-releases[Past Releases page].

-NOTE: Elasticsearch includes a bundled version of http://openjdk.java.net[OpenJDK]
+NOTE: Elasticsearch includes a bundled version of https://openjdk.java.net[OpenJDK]
 from the JDK maintainers (GPLv2+CE). To use your own version of Java,
 see the <<jvm-version, JVM version requirements>>
@@ -25,7 +25,7 @@ link:/downloads/elasticsearch[Download Elasticsearch] page.
 Other versions can be found on the
 link:/downloads/past-releases[Past Releases page].

-NOTE: Elasticsearch includes a bundled version of http://openjdk.java.net[OpenJDK]
+NOTE: Elasticsearch includes a bundled version of https://openjdk.java.net[OpenJDK]
 from the JDK maintainers (GPLv2+CE). To use your own version of Java,
 see the <<jvm-version, JVM version requirements>>
@@ -24,7 +24,7 @@ link:/downloads/elasticsearch[Download Elasticsearch] page.
 Other versions can be found on the
 link:/downloads/past-releases[Past Releases page].

-NOTE: Elasticsearch includes a bundled version of http://openjdk.java.net[OpenJDK]
+NOTE: Elasticsearch includes a bundled version of https://openjdk.java.net[OpenJDK]
 from the JDK maintainers (GPLv2+CE). To use your own version of Java,
 see the <<jvm-version, JVM version requirements>>
@@ -200,7 +200,7 @@ The Elasticsearch service can be configured prior to installation by setting the

 The timeout in seconds that procrun waits for service to exit gracefully. Defaults to `0`.

-NOTE: At its core, `elasticsearch-service.bat` relies on http://commons.apache.org/proper/commons-daemon/[Apache Commons Daemon] project
+NOTE: At its core, `elasticsearch-service.bat` relies on https://commons.apache.org/proper/commons-daemon/[Apache Commons Daemon] project
 to install the service. Environment variables set prior to the service installation are copied and will be used during the service lifecycle. This means any changes made to them after the installation will not be picked up unless the service is reinstalled.

 NOTE: On Windows, the <<heap-size,heap size>> can be configured as for
@@ -117,7 +117,7 @@ loggers. The logger section contains the java packages and their corresponding
 log level. The appender section contains the destinations for the logs.
 Extensive information on how to customize logging and all the supported
 appenders can be found on the
-http://logging.apache.org/log4j/2.x/manual/configuration.html[Log4j
+https://logging.apache.org/log4j/2.x/manual/configuration.html[Log4j
 documentation].

 [discrete]
@@ -10,10 +10,10 @@ seconds. These values should be suitable for most environments, including
 environments where DNS resolutions vary with time. If not, you can edit the
 values `es.networkaddress.cache.ttl` and `es.networkaddress.cache.negative.ttl`
 in the <<jvm-options,JVM options>>. Note that the values
-http://docs.oracle.com/javase/8/docs/technotes/guides/net/properties.html[`networkaddress.cache.ttl=<timeout>`]
+https://docs.oracle.com/javase/8/docs/technotes/guides/net/properties.html[`networkaddress.cache.ttl=<timeout>`]
 and
-http://docs.oracle.com/javase/8/docs/technotes/guides/net/properties.html[`networkaddress.cache.negative.ttl=<timeout>`]
+https://docs.oracle.com/javase/8/docs/technotes/guides/net/properties.html[`networkaddress.cache.negative.ttl=<timeout>`]
 in the
-http://docs.oracle.com/javase/8/docs/technotes/guides/security/PolicyFiles.html[Java
+https://docs.oracle.com/javase/8/docs/technotes/guides/security/PolicyFiles.html[Java
 security policy] are ignored by Elasticsearch unless you remove the settings for
 `es.networkaddress.cache.ttl` and `es.networkaddress.cache.negative.ttl`.
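A sketch of what editing these values in the JVM options might look like (the timeout values shown are illustrative):

----
-Des.networkaddress.cache.ttl=60
-Des.networkaddress.cache.negative.ttl=10
----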
@@ -16,7 +16,7 @@ The JDBC driver can be obtained from:
 Dedicated page::
 https://www.elastic.co/downloads/jdbc-client[elastic.co] provides links, typically for manual downloads.
 Maven dependency::
-http://maven.apache.org/[Maven]-compatible tools can retrieve it automatically as a dependency:
+https://maven.apache.org/[Maven]-compatible tools can retrieve it automatically as a dependency:

 ["source","xml",subs="attributes"]
 ----
@@ -98,7 +98,7 @@ s|Description

 |cbor
 |application/cbor
-|http://cbor.io/[Concise Binary Object Representation]
+|https://cbor.io/[Concise Binary Object Representation]

 |smile
 |application/smile
@@ -627,7 +627,7 @@ When using multiple data paths, an index could be falsely reported as corrupted.
 [discrete]
 === Randomized Testing (STATUS: DONE, v1.0.0)

-In order to best validate for resiliency in Elasticsearch, we rewrote the Elasticsearch test infrastructure to introduce the concept of http://berlinbuzzwords.de/sites/berlinbuzzwords.de/files/media/documents/dawidweiss-randomizedtesting-pub.pdf[randomized testing]. Randomized testing allows us to easily enhance the Elasticsearch testing infrastructure with predictably irrational conditions, making the resulting code base more resilient.
+In order to best validate for resiliency in Elasticsearch, we rewrote the Elasticsearch test infrastructure to introduce the concept of https://github.com/randomizedtesting/randomizedtesting[randomized testing]. Randomized testing allows us to easily enhance the Elasticsearch testing infrastructure with predictably irrational conditions, making the resulting code base more resilient.

 Each of our integration tests runs against a cluster with a random number of nodes, and indices have a random number of shards and replicas. Merge settings change for every run, indexing is done in serial or async fashion or even wrapped in a bulk operation and thread pool sizes vary to ensure that we don’t produce a deadlock no matter what happens. The list of places we use this randomization infrastructure is long, and growing every day, and has saved us headaches several times before we shipped a particular feature.