Docs: Updated elasticsearch.org links to elastic.co

Clinton Gormley 2015-05-01 20:37:26 +02:00
parent b2e022bd94
commit c28bf3bb3f
12 changed files with 26 additions and 164 deletions

View File

@@ -50,13 +50,13 @@ See the {client}/ruby-api/current/index.html[official Elasticsearch Ruby client]
* https://github.com/ddnexus/flex[Flex]:
Ruby Client.
* https://github.com/printercu/elastics-rb[elastics]:
Tiny client with built-in zero-downtime migrations and ActiveRecord integration.
* https://github.com/toptal/chewy[chewy]:
Chewy is an ODM and wrapper for the official Elasticsearch client
* https://github.com/ankane/searchkick[Searchkick]:
Intelligent search made easy
@@ -82,7 +82,7 @@ See the {client}/php-api/current/index.html[official Elasticsearch PHP client].
* https://github.com/searchbox-io/Jest[Jest]:
Java Rest client.
* There is of course the http://www.elasticsearch.org/guide/en/elasticsearch/client/java-api/current/index.html[native ES Java client]
* There is of course the {client}/java-api/current/index.html[native ES Java client]
[[community-javascript]]
=== JavaScript

View File

@@ -1,6 +1,6 @@
= Community Supported Clients
:client: http://www.elasticsearch.org/guide/en/elasticsearch/client
:client: http://www.elastic.co/guide/en/elasticsearch/client
include::clients.asciidoc[]

View File

@@ -1,6 +1,6 @@
= Groovy API
:ref: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current
:java: http://www.elasticsearch.org/guide/en/elasticsearch/client/java-api/current
:ref: http://www.elastic.co/guide/en/elasticsearch/reference/current
:java: http://www.elastic.co/guide/en/elasticsearch/client/java-api/current
[preface]
== Preface

View File

@@ -1,6 +1,6 @@
[[java-api]]
= Java API
:ref: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current
:ref: http://www.elastic.co/guide/en/elasticsearch/reference/current
[preface]
== Preface

View File

@@ -1,138 +0,0 @@
= elasticsearch-js
== Overview
Official low-level client for Elasticsearch. Its goal is to provide common
ground for all Elasticsearch-related code in JavaScript; because of this it
tries to be opinion-free and highly extensible.
The full documentation is available at http://elasticsearch.github.io/elasticsearch-js
=== Getting the Node.js module
To install the module into an existing Node.js project use npm:
[source,sh]
------------------------------------
npm install elasticsearch
------------------------------------
=== Getting the browser client
For browser-based projects, builds for modern browsers are available http://elasticsearch.github.io/elasticsearch-js#browser-builds[here]. Download one of the archives and extract it; inside you'll find three files. Pick the one that best matches your environment:
* elasticsearch.jquery.js - for projects that already use jQuery
* elasticsearch.angular.js - for Angular projects
* elasticsearch.js - generic build for all other projects
Each of the library-specific builds ties into the AJAX and Promise creation facilities provided by its respective library. This is an example of how Elasticsearch.js can be extended to provide a more opinionated approach when appropriate.
=== Setting up the client
Now you are ready to get busy! The first thing you'll need to do is create an instance of `elasticsearch.Client`. Here are several examples of configuration parameters you can use when creating that instance. For a full list of configuration options see http://elasticsearch.github.io/elasticsearch-js/index.html#configuration[the configuration docs].
[source,javascript]
------------------------------------
var elasticsearch = require('elasticsearch');
// Connect to localhost:9200 and use the default settings
var client = new elasticsearch.Client();
// Connect the client to two nodes, requests will be
// load-balanced between them using round-robin
var client = new elasticsearch.Client({
hosts: [
'elasticsearch1:9200',
'elasticsearch2:9200'
]
});
// Connect to this host's cluster, sniff
// for the rest of the cluster right away, and
// again every 5 minutes
var client = new elasticsearch.Client({
host: 'elasticsearch1:9200',
sniffOnStart: true,
sniffInterval: 300000
});
// Connect to this host using https, basic auth,
// a path prefix, and static query string values
var client = new elasticsearch.Client({
host: 'https://user:password@elasticsearch1/search?app=blog'
});
------------------------------------
=== Setting up the client in the browser
The params accepted by the `Client` constructor are the same in the browser versions of the client, but how you access the `Client` constructor differs based on the build you are using. Below is an example of instantiating a client in each build.
[source,javascript]
------------------------------------
// elasticsearch.js adds the elasticsearch namespace to the window
var client = elasticsearch.Client({ ... });
// elasticsearch.jquery.js adds the es namespace to the jQuery object
var client = jQuery.es.Client({ ... });
// elasticsearch.angular.js creates an elasticsearch
// module, which provides an esFactory
var app = angular.module('app', ['elasticsearch']);
app.service('es', function (esFactory) {
return esFactory({ ... });
});
------------------------------------
=== Using the client instance to make API calls
Once you create the client, making API calls is simple.
[source,javascript]
------------------------------------
// get the current status of the entire cluster.
// Note: params are always optional, you can just send a callback
client.cluster.health(function (err, resp) {
if (err) {
console.error(err.message);
} else {
console.dir(resp);
}
});
// index a document
client.index({
index: 'blog',
type: 'post',
id: 1,
body: {
title: 'JavaScript Everywhere!',
content: 'It all started when...',
date: '2013-12-17'
}
}, function (err, resp) {
// ...
});
// search for documents (and also promises!!)
client.search({
index: 'users',
size: 50,
body: {
query: {
match: {
profile: 'elasticsearch'
}
}
}
}).then(function (resp) {
var hits = resp.hits.hits; // the promise resolves with the response body
});
------------------------------------
== Copyright and License
This software is Copyright (c) 2013-2015 by Elasticsearch BV.
This is free software, licensed under The Apache License Version 2.0.

View File

@@ -89,7 +89,7 @@ The number of shards and replicas can be defined per index at the time the index
By default, each index in Elasticsearch is allocated 5 primary shards and 1 replica, which means that if you have at least two nodes in your cluster, your index will have 5 primary shards and another 5 replica shards (1 complete replica), for a total of 10 shards per index.
NOTE: Each Elasticsearch shard is a Lucene index. There is a maximum number of documents you can have in a single Lucene index. As of https://issues.apache.org/jira/browse/LUCENE-5843[`LUCENE-5843`], the limit is `2,147,483,519` (= Integer.MAX_VALUE - 128) documents.
You can monitor shard sizes using the <<cat-shards,`_cat/shards`>> api.
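To make these defaults concrete, here is a minimal sketch of overriding them at index-creation time and then listing the resulting shards; the index name `blog` is illustrative:

[source,sh]
--------------------------------------------------
# Create an index with explicit shard and replica counts
curl -XPUT 'http://localhost:9200/blog' -d '{
  "settings": {
    "number_of_shards": 5,
    "number_of_replicas": 1
  }
}'

# List the resulting shards: 5 primaries plus 5 replicas
curl -XGET 'http://localhost:9200/_cat/shards/blog?v'
--------------------------------------------------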
With that out of the way, let's get started with the fun part...
@@ -104,13 +104,13 @@ java -version
echo $JAVA_HOME
--------------------------------------------------
Once we have Java set up, we can then download and run Elasticsearch. The binaries are available from http://www.elasticsearch.org/download[`www.elasticsearch.org/download`] along with all the releases that have been made in the past. For each release, you have a choice among a `zip` or `tar` archive, or a `DEB` or `RPM` package. For simplicity, let's use the tar file.
Once we have Java set up, we can then download and run Elasticsearch. The binaries are available from http://www.elastic.co/downloads[`www.elastic.co/downloads`] along with all the releases that have been made in the past. For each release, you have a choice among a `zip` or `tar` archive, or a `DEB` or `RPM` package. For simplicity, let's use the tar file.
Let's download the Elasticsearch {version} tar as follows (Windows users should download the zip package):
["source","sh",subs="attributes,callouts"]
--------------------------------------------------
curl -L -O https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-{version}.tar.gz
curl -L -O https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-{version}.tar.gz
--------------------------------------------------
Then extract it as follows (Windows users should unzip the zip package):
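For example (a sketch; the file name assumes the tar download above):

["source","sh",subs="attributes"]
--------------------------------------------------
tar -xvf elasticsearch-{version}.tar.gz
--------------------------------------------------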
@@ -868,7 +868,7 @@ In the previous section, we skipped over a little detail called the document sco
All queries in Elasticsearch trigger computation of the relevance scores. In cases where we do not need the relevance scores, Elasticsearch provides another query capability in the form of <<query-dsl-filters,filters>>. Filters are similar in concept to queries except that they are optimized for much faster execution speeds for two primary reasons:
* Filters do not score so they are faster to execute than queries
* Filters can be http://www.elasticsearch.org/blog/all-about-elasticsearch-filter-bitsets/[cached in memory] allowing repeated search executions to be significantly faster than queries
* Filters can be http://www.elastic.co/blog/all-about-elasticsearch-filter-bitsets/[cached in memory] allowing repeated search executions to be significantly faster than queries
To understand filters, let's first introduce the <<query-dsl-filtered-query,`filtered` query>>, which allows you to combine a query (like `match_all`, `match`, `bool`, etc.) together with a filter. As an example, let's introduce the <<query-dsl-range-filter,`range` filter>>, which allows us to filter documents by a range of values. This is generally used for numeric or date filtering.
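A sketch of the combination described above, using the tutorial's `bank` index of account documents (adjust names to your own data):

[source,sh]
--------------------------------------------------
# match_all query combined with a range filter on the balance field
curl -XPOST 'http://localhost:9200/bank/_search?pretty' -d '{
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": {
        "range": {
          "balance": {
            "gte": 20000,
            "lte": 30000
          }
        }
      }
    }
  }
}'
--------------------------------------------------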

View File

@@ -362,7 +362,7 @@ in the query string.
=== Percolator
The percolator has been redesigned and because of this the dedicated `_percolator` index is no longer used by the percolator,
but instead the percolator works with a dedicated `.percolator` type. Read the http://www.elasticsearch.org/blog/percolator-redesign-blog-post/[redesigned percolator]
but instead the percolator works with a dedicated `.percolator` type. Read the http://www.elastic.co/blog/percolator-redesign-blog-post[redesigned percolator]
blog post for the reasons why the percolator has been redesigned.
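For example, registering a query under the dedicated `.percolator` type looks like this (a sketch; the index name and query are illustrative):

[source,sh]
--------------------------------------------------
curl -XPUT 'http://localhost:9200/my-index/.percolator/1' -d '{
  "query": {
    "match": {
      "message": "bonsai tree"
    }
  }
}'
--------------------------------------------------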
Elasticsearch will *not* delete the `_percolator` index when upgrading, only the percolate api will not use the queries

View File

@@ -26,7 +26,7 @@ plugin --install <org>/<user/component>/<version>
-----------------------------------
The plugins will be
automatically downloaded in this case from `download.elasticsearch.org`,
automatically downloaded in this case from `download.elastic.co`,
and in case they don't exist there, from maven (central and sonatype).
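For instance (a hypothetical invocation; plugin name and version are illustrative):

[source,sh]
--------------------------------------------------
plugin --install elasticsearch/elasticsearch-analysis-icu/2.5.0
--------------------------------------------------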
Note that when the plugin is located in maven central or sonatype

View File

@@ -4,7 +4,7 @@
[partintro]
--
This section includes information on how to set up *elasticsearch* and
get it running. If you haven't already, http://www.elasticsearch.org/download[download] it, and
get it running. If you haven't already, http://www.elastic.co/downloads[download] it, and
then check the <<setup-installation,installation>> docs.
NOTE: Elasticsearch can also be installed from our repositories using `apt` or `yum`.

View File

@@ -22,14 +22,14 @@ Download and install the Public Signing Key:
[source,sh]
--------------------------------------------------
wget -qO - https://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add -
wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
--------------------------------------------------
Add the repository definition to your `/etc/apt/sources.list` file:
["source","sh",subs="attributes,callouts"]
--------------------------------------------------
echo "deb http://packages.elasticsearch.org/elasticsearch/{branch}/debian stable main" | sudo tee -a /etc/apt/sources.list
echo "deb http://packages.elastic.co/elasticsearch/{branch}/debian stable main" | sudo tee -a /etc/apt/sources.list
--------------------------------------------------
[WARNING]
@@ -65,7 +65,7 @@ Download and install the public signing key:
[source,sh]
--------------------------------------------------
rpm --import https://packages.elasticsearch.org/GPG-KEY-elasticsearch
rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
--------------------------------------------------
Add the following in your `/etc/yum.repos.d/` directory
@@ -75,9 +75,9 @@ in a file with a `.repo` suffix, for example `elasticsearch.repo`
--------------------------------------------------
[elasticsearch-{branch}]
name=Elasticsearch repository for {branch}.x packages
baseurl=http://packages.elasticsearch.org/elasticsearch/{branch}/centos
baseurl=http://packages.elastic.co/elasticsearch/{branch}/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
--------------------------------------------------
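With the repository file in place, installing is then typically a single step (a sketch of the usual next command):

[source,sh]
--------------------------------------------------
sudo yum install elasticsearch
--------------------------------------------------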

View File

@@ -69,7 +69,7 @@ $ curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
[float]
==== 1.0 and later
To back up a running 1.0 or later system, it is simplest to use the snapshot feature. Complete instructions for backup and restore with snapshots are available http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-snapshots.html[here].
To back up a running 1.0 or later system, it is simplest to use the snapshot feature. See the complete instructions for <<modules-snapshots,backup and restore with snapshots>>.
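For example, registering a shared-filesystem repository and taking a snapshot looks roughly like this (a sketch; repository name, location, and snapshot name are illustrative):

[source,sh]
--------------------------------------------------
# Register a shared filesystem repository
curl -XPUT 'http://localhost:9200/_snapshot/my_backup' -d '{
  "type": "fs",
  "settings": {
    "location": "/mount/backups/my_backup"
  }
}'

# Take a snapshot of the whole cluster
curl -XPUT 'http://localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true'
--------------------------------------------------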
[float]
[[rolling-upgrades]]
@@ -96,7 +96,7 @@ This syntax applies to Elasticsearch 1.0 and later:
* Confirm that all shards are correctly reallocated to the remaining running nodes.
* Upgrade the stopped node. To upgrade using a zip or compressed tarball from elasticsearch.org:
* Upgrade the stopped node. To upgrade using a zip or compressed tarball from elastic.co:
** Extract the zip or tarball to a new directory, usually in the same volume as the current Elasticsearch installation. Do not overwrite the existing installation, as the downloaded archive will contain a default elasticsearch.yml file and will overwrite your existing configuration.
** Copy the configuration files from the old Elasticsearch installation's config directory to the new Elasticsearch installation's config directory. Move data files from the old Elasticsearch installation's data directory if necessary. If data files are not located within the tarball's extraction directory, they will not have to be moved.
** The simplest solution for moving from one version to another is to have a symbolic link for 'elasticsearch' that points to the currently running version. This link can be easily updated and will provide a stable access point to the most recent version. Update this symbolic link if it is being used.
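A sketch of the symbolic-link approach described above (version numbers are illustrative):

[source,sh]
--------------------------------------------------
# Repoint the stable 'elasticsearch' link at the newly extracted version
ln -sfn elasticsearch-1.5.2 elasticsearch
--------------------------------------------------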

View File

@@ -22,10 +22,10 @@ improvements throughout this page to provide the full context.
If you're interested in more on how we approach ensuring resiliency in
Elasticsearch, see Igor Motov's recent talk
http://www.elasticsearch.org/videos/improving-elasticsearch-resiliency/[Improving Elasticsearch Resiliency].
http://www.elastic.co/videos/improving-elasticsearch-resiliency[Improving Elasticsearch Resiliency].
You may also be interested in our blog post
http://www.elasticsearch.org/blog/resiliency-elasticsearch/[Resiliency in Elasticsearch],
http://www.elastic.co/blog/resiliency-elasticsearch[Resiliency in Elasticsearch],
which details our thought processes when addressing resiliency in both
Elasticsearch and the work our developers do upstream in Apache Lucene.
@@ -416,7 +416,7 @@ The Snapshot/Restore API supports a number of different repository types for sto
[float]
=== Circuit Breaker: Fielddata (STATUS: DONE, v1.0.0)
Currently, the http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules-fielddata.html[circuit breaker] protects against loading too much field data by estimating how much memory the field data will take to load, then aborting the request if the memory requirements are too high. This feature was added in Elasticsearch version 1.0.0.
Currently, the https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-fielddata.html[circuit breaker] protects against loading too much field data by estimating how much memory the field data will take to load, then aborting the request if the memory requirements are too high. This feature was added in Elasticsearch version 1.0.0.
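As an illustration, the breaker's threshold can be adjusted as a dynamic cluster setting (a sketch; the setting name assumes a 1.4+ cluster, and the 60% value is an assumption, not a recommendation):

[source,sh]
--------------------------------------------------
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "persistent": {
    "indices.breaker.fielddata.limit": "60%"
  }
}'
--------------------------------------------------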
[float]
=== Use of Paginated Data Structures to Ease Garbage Collection (STATUS: DONE, v1.0.0 & v1.2.0)