Commit Graph

108 Commits

Author SHA1 Message Date
olcbean b98514c6d9 Add Close Index API to the high level REST client (#27734)
Add support for _close endpoint to the high level REST client

Relates to #27205
2018-01-17 11:47:08 +01:00
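For illustration, a minimal Java sketch of how closing an index might look through the high level REST client; method names and request-options parameters varied across 6.x releases, and `my-index` is a hypothetical index name.

````
import java.io.IOException;

import org.elasticsearch.action.admin.indices.close.CloseIndexRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;

class CloseIndexExample {
    // Closes an index synchronously; an async variant taking an ActionListener is also exposed
    static boolean closeIndex(RestHighLevelClient client, String index) throws IOException {
        CloseIndexRequest request = new CloseIndexRequest(index);
        return client.indices().close(request, RequestOptions.DEFAULT).isAcknowledged();
    }
}
````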
Martijn van Groningen 853f7e8780
Added multi get api to the high level rest client.
Relates to #27205
2018-01-16 17:27:02 +01:00
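A short sketch of the multi get call, assuming it is exposed as multiGet(...) on the client (it was renamed in later releases); the index, type and ids are made up.

````
import java.io.IOException;

import org.elasticsearch.action.get.MultiGetItemResponse;
import org.elasticsearch.action.get.MultiGetRequest;
import org.elasticsearch.action.get.MultiGetResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;

class MultiGetExample {
    static void printSources(RestHighLevelClient client) throws IOException {
        MultiGetRequest request = new MultiGetRequest()
                .add("my-index", "doc", "1")   // index, type, id
                .add("my-index", "doc", "2");
        MultiGetResponse response = client.multiGet(request, RequestOptions.DEFAULT);
        for (MultiGetItemResponse item : response.getResponses()) {
            if (!item.isFailed()) {
                System.out.println(item.getResponse().getSourceAsString());
            }
        }
    }
}
````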
olcbean fd45a46ce8 Deprecate `isShardsAcked()` in favour of `isShardsAcknowledged()` (#27819)
Several responses include the shards_acknowledged flag (indicating whether the
requisite number of shard copies started before the completion of the operation)
and there are two different getters used: isShardsAcknowledged() and isShardsAcked().

This PR deprecates isShardsAcked() in favour of isShardsAcknowledged() in
CreateIndexResponse, RolloverResponse and CreateIndexClusterStateUpdateResponse.

Closes #27784
2018-01-08 10:57:45 +01:00
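A quick sketch of the renamed getter on a create-index response; the index name is hypothetical and the exact client method signatures varied across 6.x releases.

````
import java.io.IOException;

import org.elasticsearch.action.admin.indices.create.CreateIndexRequest;
import org.elasticsearch.action.admin.indices.create.CreateIndexResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;

class ShardsAcknowledgedExample {
    static void createAndCheck(RestHighLevelClient client) throws IOException {
        CreateIndexResponse response =
                client.indices().create(new CreateIndexRequest("my-index"), RequestOptions.DEFAULT);
        boolean acknowledged = response.isAcknowledged();
        // isShardsAcknowledged() replaces the now-deprecated isShardsAcked()
        boolean shardsAcknowledged = response.isShardsAcknowledged();
        System.out.println(acknowledged + " / " + shardsAcknowledged);
    }
}
````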
Tanguy Leroux 0f80e7c5f6
[Test] Fix IndicesClientDocumentationIT (#27899)
The last operation executed in IndicesClientDocumentationIT.testCreate()
 is an asynchronous index creation. Because nothing waits for its
 completion, on slow machines the index can sometimes be created after
 the testCreate() test is finished, and it can fail the following test.

 Closes #27754
2017-12-20 09:31:10 +01:00
David Turner b1039164a1 Mute MigrationDocumentationIT#testClusterHealth()
Relates #27754
2017-12-19 12:16:49 +00:00
Christoph Büscher bb14b8f7c5 Merge branch 'rankeval'
This commit adds a new module that provides an endpoint that can be used to
evaluate search ranking results.

Closes #19195
2017-12-14 16:45:03 +01:00
Christoph Büscher 5406a9f30d Add rank-eval module to transport client and HL client dependencies 2017-12-13 18:05:43 +01:00
Tanguy Leroux 28f6512319
[Test] Fix MigrationDocumentationIT.testClusterHealth (#27774)
Closes #27754
2017-12-13 16:47:01 +01:00
javanna e01643126b [TEST] extend wait_for_active_shards randomization to include 'all' value
This was already changed in 6.x as part of the backport of the recently added open and create index API. wait_for_active_shards can be a number but also "all"; with this commit we verify that providing "all" works too.
2017-12-11 14:27:12 +01:00
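A small sketch of the two flavours being randomized over, a numeric count and `all`, using ActiveShardCount on a create-index request (index name and count are illustrative).

````
import org.elasticsearch.action.admin.indices.create.CreateIndexRequest;
import org.elasticsearch.action.support.ActiveShardCount;

class WaitForActiveShardsExample {
    static void buildRequests() {
        // wait_for_active_shards as a number of shard copies
        CreateIndexRequest byNumber = new CreateIndexRequest("my-index")
                .waitForActiveShards(ActiveShardCount.from(2));
        // wait_for_active_shards as "all", the value the randomization now also covers
        CreateIndexRequest all = new CreateIndexRequest("my-index")
                .waitForActiveShards(ActiveShardCount.ALL);
    }
}
````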
javanna 27e157f67c [TEST] remove code duplications in RequestTests 2017-12-08 10:51:27 +01:00
olcbean bcc33f391f Add Open Index API to the high level REST client (#27574)
Add _open to the high level REST client

Relates to #27205
2017-12-07 18:16:03 +01:00
Catalin Ursachi f823cea79c Added Create Index support to high-level REST client (#27351)
Relates to #27205
2017-12-07 11:39:59 +01:00
Martijn van Groningen 4d78e1a9ad
Added msearch api to high level client 2017-12-05 10:17:47 +01:00
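A sketch of a multi search through the high level client, assuming the call is exposed as multiSearch(...) (it was later renamed msearch); index and field names are made up.

````
import java.io.IOException;

import org.elasticsearch.action.search.MultiSearchRequest;
import org.elasticsearch.action.search.MultiSearchResponse;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;

class MultiSearchExample {
    static MultiSearchResponse search(RestHighLevelClient client) throws IOException {
        MultiSearchRequest request = new MultiSearchRequest();
        request.add(new SearchRequest("index-1")
                .source(new SearchSourceBuilder().query(QueryBuilders.matchQuery("title", "elasticsearch"))));
        request.add(new SearchRequest("index-2")
                .source(new SearchSourceBuilder().query(QueryBuilders.matchAllQuery())));
        return client.multiSearch(request, RequestOptions.DEFAULT);
    }
}
````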
Simon Willnauer fadbe0de08
Automatically prepare indices for splitting (#27451)
Today we require users to prepare their indices for split operations.
Yet, we can do this automatically when an index is created which would
make the split feature a much more appealing option since it doesn't have
any 3rd party prerequisites anymore.

This change automatically sets the number of routing shards such that
an index is guaranteed to be able to split once into twice as many shards.
The number of routing shards is scaled towards the default shard limit per index
such that indices with a smaller number of shards can be split more often than
larger ones. For instance an index with 1 or 2 shards can be split 10x
(until it approaches 1024 shards) while an index created with 128 shards can only
be split 3x by a factor of 2. Please note this is just a default value and users
can still prepare their indices with `index.number_of_routing_shards` for custom
splitting.

NOTE: this change has an impact on the document distribution since we are changing
the hash space. Documents are still uniformly distributed across all shards but since
we are artificially changing the number of buckets in the consistent hashing space,
documents might be hashed into different shards compared to previous versions.

This is a 7.0 only change.
2017-11-23 09:48:54 +01:00
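A sketch of the custom-splitting escape hatch mentioned at the end of the message: explicitly setting `index.number_of_routing_shards` at index creation (index name and values are illustrative only).

````
import org.elasticsearch.action.admin.indices.create.CreateIndexRequest;
import org.elasticsearch.common.settings.Settings;

class RoutingShardsExample {
    static CreateIndexRequest prepareForSplitting() {
        return new CreateIndexRequest("my-index")
                .settings(Settings.builder()
                        .put("index.number_of_shards", 2)
                        // allow later splits by factors of 2 up to 32 primaries: 2 -> 4 -> 8 -> 16 -> 32
                        .put("index.number_of_routing_shards", 32));
    }
}
````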
Jim Ferenczi 6319424e4a
Move composite aggregation to core (#27474)
This change removes the module named aggs-composite and adds the `composite` agg
as a core aggregation. This allows other plugins to use this new aggregation
and simplifies the integration in the HL rest client.
2017-11-21 13:31:01 +01:00
Luca Cavanna 29450de7b5
Cross Cluster Search: make remote clusters optional (#27182)
Today Cross Cluster Search requires at least one node in each remote cluster to be up once the cross cluster search is run. Otherwise the whole search request fails even though some of the data (local and/or remote) is available. This happens when performing the _search/shards calls to find out which remote shards the query has to be executed on. This scenario is different from shard failures that may happen later on when the query is actually executed, e.g. when remote shards are missing, which does not fail the whole request but rather yields partial results, and the _shards section in the response will indicate that.

This commit introduces a boolean setting per cluster called search.remote.$cluster_alias.skip_if_disconnected, set to false by default, which allows skipping certain clusters if they are down when trying to reach them through a cross cluster search request. By default all clusters are mandatory.

Scroll requests support this setting too when they are first initiated (first search request with scroll parameter), but subsequent scroll rounds (_search/scroll endpoint) will fail if some of the remote clusters went down in the meantime.

The search API response now contains a new _clusters section, similar to the _shards section, that gets returned whenever one or more clusters were disconnected and got skipped:

"_clusters" : {
    "total" : 3,
    "successful" : 2,
    "skipped" : 1
}
This section won't be part of the response if no clusters have been skipped.

The per cluster skip_unavailable setting value has also been added to the output of the remote/info API.
2017-11-21 11:41:47 +01:00
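A sketch of flipping the per-cluster flag through a cluster settings update, assuming the cluster settings API is available on the high level client; the alias `cluster_one` is hypothetical, and the setting is shown under the `skip_unavailable` name used at the end of the message.

````
import java.io.IOException;

import org.elasticsearch.action.admin.cluster.settings.ClusterUpdateSettingsRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.settings.Settings;

class SkipUnavailableExample {
    static void markOptional(RestHighLevelClient client) throws IOException {
        ClusterUpdateSettingsRequest request = new ClusterUpdateSettingsRequest()
                .persistentSettings(Settings.builder()
                        // the remote cluster "cluster_one" may be skipped when it is unreachable
                        .put("search.remote.cluster_one.skip_unavailable", true));
        client.cluster().putSettings(request, RequestOptions.DEFAULT);
    }
}
````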
Mayya Sharipova 858b2c7cb8
Standardize underscore requirements in parameters (#27414)
Standardize underscore requirements in parameters across different types of
requests:
_index, _type, _source, _id keep their underscores
params like version and retry_on_conflict will be without underscores
Throw an error if older versions of parameters are used

BulkRequest, MultiGetRequest, TermVectorsRequest, MoreLikeThisQuery
were changed

Closes #26886
2017-11-17 15:31:52 -05:00
Jim Ferenczi 623367d793
Add composite aggregator (#26800)
* This change adds a module called `aggs-composite` that defines a new aggregation named `composite`.
The `composite` aggregation is a multi-bucket aggregation that creates composite buckets made of multiple sources.
The sources for each bucket can be defined as:
  * A `terms` source, values are extracted from a field or a script.
  * A `date_histogram` source, values are extracted from a date field and rounded to the provided interval.
This aggregation can be used to retrieve all buckets of a deeply nested aggregation by flattening the nested aggregation in composite buckets.
A composite bucket is composed of one value per source and is built for each document from the combination of values in the provided sources.
For instance the following aggregation:

````
"test_agg": {
  "terms": {
    "field": "field1"
  },
  "aggs": {
    "nested_test_agg":
      "terms": {
        "field": "field2"
      }
  }
}
````
... which retrieves the top N terms for `field1` and for each top term in `field1` the top N terms for `field2`, can be replaced by a `composite` aggregation in order to retrieve **all** the combinations of `field1`, `field2` in the matching documents:

````
"composite_agg": {
  "composite": {
    "sources": [
      {
	"field1": {
          "terms": {
              "field": "field1"
            }
        }
      },
      {
	"field2": {
          "terms": {
            "field": "field2"
          }
        }
      },
    }
  }
````

The response of the aggregation looks like this:

````
"aggregations": {
  "composite_agg": {
    "buckets": [
      {
        "key": {
          "field1": "alabama",
          "field2": "almanach"
        },
        "doc_count": 100
      },
      {
        "key": {
          "field1": "alabama",
          "field2": "calendar"
        },
        "doc_count": 1
      },
      {
        "key": {
          "field1": "arizona",
          "field2": "calendar"
        },
        "doc_count": 1
      }
    ]
  }
}
````

By default this aggregation returns 10 buckets sorted in ascending order of the composite key.
Pagination can be achieved by providing `after` values, the values of the composite key to aggregate after.
For instance the following aggregation will aggregate all composite keys that sort after `alabama, calendar`:

````
"composite_agg": {
  "composite": {
    "after": {"field1": "alabama", "field2": "calendar"},
    "size": 100,
    "sources": [
      {
	"field1": {
          "terms": {
            "field": "field1"
          }
        }
      },
      {
	"field2": {
          "terms": {
            "field": "field2"
          }
	}
      }
    }
  }
````

This aggregation is optimized for indices that set an index sorting that matches the composite source definition.
For instance the aggregation above could run faster on indices that define an index sorting like this:

````
"settings": {
  "index.sort.field": ["field1", "field2"]
}
````

In this case the `composite` aggregation can early terminate on each segment.
This aggregation also accepts multi-valued fields but disables early termination for these fields even if index sorting matches the sources definition.
This is mandatory because index sorting picks only one value per document to perform the sort.
2017-11-16 15:13:36 +01:00
Nicholas Knize 8904fc8210 [Geo] Decouple geojson parse logic from ShapeBuilders
This is the first step to supporting WKT (and other future) format(s). The ShapeBuilders are quite messy and can be simplified by decoupling the parse logic from the build logic. This commit refactors the parsing logic into its own package separate from the Shape builders. It also decouples the GeoShapeType into a standalone enumerator that is responsible for validating the parsed data and providing the appropriate builder. This future-proofs the code making it easier to maintain and add new shape types.
2017-11-10 14:37:58 -06:00
Luca Cavanna 5d7d01ba75
Adjust RestHighLevelClient method modifiers (#27238)
RestHighLevelClient can be subclassed to add support for additional methods, but its public and protected methods should be final.
2017-11-06 10:05:40 +01:00
Catalin Ursachi 8bf33241ed Add Delete Index API support to high-level REST client (#27019)
Relates to #25847
2017-10-26 09:52:46 +02:00
Luca Cavanna 8caf7d4ff8 Decouple BulkProcessor from ThreadPool (#26727)
Introduce minimal thread scheduler as a base class for `ThreadPool`. Such a class can be used from the `BulkProcessor` to schedule retries and the flush task. This allows removing the `ThreadPool` dependency from `BulkProcessor`, which required providing settings that contain `node.name` and also needed log4j for logging. Instead, it now needs a `Scheduler` that is much lighter and gets automatically created and shut down on close.

Closes #26028
2017-10-25 10:30:23 +02:00
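A rough sketch of what building a BulkProcessor on top of the high level client looks like once the ThreadPool dependency is gone; the exact bulkAsync signature varied between client versions.

````
import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;

class BulkProcessorExample {
    static BulkProcessor build(RestHighLevelClient client) {
        BulkProcessor.Listener listener = new BulkProcessor.Listener() {
            @Override public void beforeBulk(long executionId, BulkRequest request) { }
            @Override public void afterBulk(long executionId, BulkRequest request, BulkResponse response) { }
            @Override public void afterBulk(long executionId, BulkRequest request, Throwable failure) { }
        };
        // No ThreadPool needed: retries and the flush task run on an internally managed scheduler
        return BulkProcessor.builder(
                (request, bulkListener) -> client.bulkAsync(request, RequestOptions.DEFAULT, bulkListener),
                listener)
            .build();
    }
}
````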
Tanguy Leroux b3819e7f30 Make RestHighLevelClient's Request class public (#26627)
Request class is currently package protected, making it difficult for
users to extend the RestHighLevelClient and to use its protected
methods to execute requests. This commit makes the Request class public
and changes a few methods of RestHighLevelClient to be protected.
2017-09-20 11:36:10 +02:00
Tanguy Leroux ad355f3e0b Forbid direct usage of ContentType.create() methods (#26457)
It's easy to create a wrong Content-Type header when converting an
XContentType to an Apache HTTP ContentType instance.

This commit forbids the direct usage of ContentType.create() methods in favor of a Request.createContentType(XContentType) method that does the right thing.

Closes #26438
2017-09-06 09:58:46 +02:00
Michael Basnight cfd14cd2b8 Revert shading for the low level rest client (#26367)
Currently, we do not feel there is enough of a reason to shade the low
level rest client. It caused problems with commons logging and IDEs
during the brief time it was used. We did not know exactly how many
users would need this, and decided that leaving shading out until we
gather more information is best. Users can still shade the jar
themselves. For information and feedback, see issue #26366.

Closes #26328

This reverts commit 3a20922046.
This reverts commit 2c271f0f22.
This reverts commit 9d10dbea39.
This reverts commit e816ef89a2.
2017-08-25 14:13:12 -05:00
Christoph Büscher ad8f359deb Register ip_range aggregation with the high level client (#26383)
The parser for the `ip_range` aggregation response is currently missing from the
NamedXContentRegistry in the high level rest client. Also changes the testing
around the expected number of parsers so we at least check that we register all
the parsers that we also test in InternalAggregationTestCase.
2017-08-25 13:10:03 +02:00
Marco Monaco f6bbc91c0d Register ParsedTopHits aggregation with the rest high level client (#26370) 2017-08-25 11:18:16 +02:00
Luca Cavanna 6d8e2c6d4c Make RestHighLevelClient Closeable and simplify its creation (#26180)
By making RestHighLevelClient Closeable, its close method will close the internal low-level REST client instance by default, which simplifies the way most users interact with the high-level client.

Its constructor now accepts a RestClientBuilder, which clarifies that the low-level REST client is internally created and managed.

It is still possible to provide an already built `RestClient` instance, but that can only be done by subclassing `RestHighLevelClient` and calling the protected constructor that accepts a `RestClient`. In that case a consumer has also to be provided, which controls what has to be done when the high-level client gets closed.

Closes #26086
2017-08-24 09:39:41 +02:00
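A minimal sketch of the simplified lifecycle described above: pass a RestClientBuilder in, and close the high level client to dispose of the internally managed low-level client (host and port are illustrative).

````
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

class ClientLifecycleExample {
    public static void main(String[] args) throws Exception {
        // The low-level RestClient is created and managed internally from the builder
        try (RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("localhost", 9200, "http")))) {
            // ... use the client ...
        } // close() also closes the internal low-level client
    }
}
````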
desmorto 292dd8f992 (refactor) some opportunities to use diamond operator (#25585)
* (refactor) some opportunities to use diamond operator

* Update ExceptionRetryIT.java

update typo
2017-08-15 16:36:42 -06:00
Tanguy Leroux 69f8641568 [Docs] Add documentation for search queries in high-level rest client (#25984) 2017-08-02 09:57:47 +02:00
Tanguy Leroux 9c8d3d3569 [Docs] Add migration notes for the high-level rest client (#25911) 2017-08-01 10:38:56 +02:00
Tanguy Leroux 90ebaaa9a8 [Docs] Add profile section to the Search API documentation (#25880) 2017-07-26 10:31:46 +02:00
Michael Basnight e816ef89a2 Shade external dependencies in the rest client jar
This commit removes all external dependencies from the rest client jar
and shades them in an 'org.elasticsearch.client' package within the jar
using shadowJar gradle plugin. All projects that depended on the
existing jar have been converted to using the 'org.elasticsearch.client'
package prefixes to interact with the rest client.

Closes #25208
2017-07-24 12:55:43 -05:00
javanna c4c1e909a3 [TEST] SearchDocumentationIT#testSearch to sort on _uid instead of _id 2017-07-21 11:15:21 +02:00
Adrien Grand f1ff7f2454 Require a field when a `seed` is provided to the `random_score` function. (#25594)
We currently use fielddata on the `_id` field which is trappy, especially as we
do it implicitly. This changes the `random_score` function to use doc ids when
no seed is provided and to suggest a field when a seed is provided.

For now the change only emits a deprecation warning when no field is supplied
but this should be replaced by a strict check in 7.0.

Closes #25240
2017-07-19 14:11:15 +02:00
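A hedged Java sketch of the two modes, no seed (doc-id based) versus seed plus an explicit field; the setter names and the `_seq_no` field choice are assumptions here, not taken from the commit.

````
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.index.query.functionscore.FunctionScoreQueryBuilder;
import org.elasticsearch.index.query.functionscore.RandomScoreFunctionBuilder;
import org.elasticsearch.index.query.functionscore.ScoreFunctionBuilders;

class RandomScoreExample {
    static FunctionScoreQueryBuilder unseeded() {
        // No seed: random scores are derived from doc ids, no fielddata involved
        return QueryBuilders.functionScoreQuery(ScoreFunctionBuilders.randomFunction());
    }

    static FunctionScoreQueryBuilder reproducible() {
        RandomScoreFunctionBuilder random = ScoreFunctionBuilders.randomFunction();
        random.seed(42);            // reproducible ordering requires a seed...
        random.setField("_seq_no"); // ...and, with this change, an explicit field (assumed name)
        return QueryBuilders.functionScoreQuery(random);
    }
}
````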
Christoph Büscher 43bfe06759 [Docs] Add sorting and source filtering section to client docs (#25767) 2017-07-18 16:58:46 +02:00
Christoph Büscher 56b1250a34 [Docs] Adding highlighting section to high level client docs (#25751)
Adding a section about how to use highlighting in the SearchSourceBuilder and
how to retrieve highlighted fragments from the SearchResponse.
2017-07-17 19:30:58 +02:00
Christoph Büscher 5387ed00d2 [Docs] Adding suggestion sections to high level client docs (#25724)
This adds a section about how to add suggestions to the SearchSourceBuilder and
how to retrieve them from a SearchResponse.
2017-07-14 18:33:28 +02:00
Christoph Büscher f809a12493 [Docs] Adding aggregation sections to high level client docs (#25707)
This adds a section about how to add aggregations to the SearchSourceBuilder and how
to retrieve them from a SearchResponse to the documentation for the high level rest client.
2017-07-14 12:47:47 +02:00
Martijn van Groningen 02fad9ac8c
docs: updated the java client api docs to take into account that the p/c queries are in the parent-join module
Closes #25624
2017-07-13 11:24:22 +02:00
Luca Cavanna ec66d655b5 Rename client artifacts (#25693)
It was brought up that our current client artifacts have generic names like 'rest' that may cause conflicts with other artifacts.

This commit renames:

- rest -> elasticsearch-rest-client
- sniffer -> elasticsearch-rest-client-sniffer
- rest-high-level -> elasticsearch-rest-high-level-client

A couple of small changes are also preparing the high level client for its first release.

Closes #20248
2017-07-13 09:44:25 +02:00
Simon Willnauer e81804cfa4 Add a shard filter search phase to pre-filter shards based on query rewriting (#25658)
Today if we search across a large number of shards we hit every shard. Yet, it's quite
common to search across an index pattern for time based indices but filtering will exclude
all results outside a certain time range i.e. `now-3d`. While the search can potentially hit
hundreds of shards the majority of the shards might yield 0 results since there is no document
that is within this date range. Kibana for instance does this regularly but used `_field_stats`
to optimize the indices it needs to query. Now with the deprecation of `_field_stats` and its upcoming removal a single dashboard in kibana can potentially turn into searches hitting hundreds or thousands of shards and that can easily cause search rejections even though most of the requests are very likely super cheap and only need a query rewrite to early terminate with 0 results.

This change adds a pre-filter phase for searches that can, if the number of shards is higher than the `pre_filter_shard_size` threshold (defaults to 128 shards), fan out to the shards
and check if the query can potentially match any documents at all. While false positives are possible, a negative response means that no matches are possible. These requests are not subject to rejection and can greatly reduce the number of shards a request needs to hit. The approach here is preferable to the kibana approach with field stats since it correctly handles aliases and uses the correct threadpools to execute these requests. Further it's completely transparent to the user and improves scalability of elasticsearch in general on large clusters.
2017-07-12 22:19:20 +02:00
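A small sketch of tuning the threshold on a search request; the index pattern, field and the value of 64 are illustrative.

````
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;

class PreFilterExample {
    static SearchRequest timeFilteredSearch() {
        SearchRequest request = new SearchRequest("logs-*")
                .source(new SearchSourceBuilder()
                        .query(QueryBuilders.rangeQuery("@timestamp").gte("now-3d")));
        // Run the pre-filter (can-match) round-trip once more than 64 shards are targeted;
        // the default threshold is 128
        request.setPreFilterShardSize(64);
        return request;
    }
}
````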
Christoph Büscher f3e7a1c4a4 Adding basic search request documentation for high level client (#25651) 2017-07-12 17:06:46 +02:00
Jack Conradson d2b4f7ac5a Disallow lang to be used with Stored Scripts (#25610)
Requests that execute a stored script will no longer be allowed to specify the lang of the script. This information is stored in the cluster state, so only an id is necessary to execute against it. Putting a stored script will still require a lang.
2017-07-12 07:55:57 -07:00
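A brief sketch of referencing a stored script after this change: only the id is given and lang is left null (the script id and empty params are made up).

````
import java.util.Collections;

import org.elasticsearch.script.Script;
import org.elasticsearch.script.ScriptType;

class StoredScriptExample {
    static Script byIdOnly() {
        // lang must be null for stored scripts; the id alone identifies the script in the cluster state
        return new Script(ScriptType.STORED, null, "my-stored-script", Collections.emptyMap());
    }
}
````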
Colin Goodheart-Smithe 3a5a54e83e Collapses package structure for some bucket aggs (#25579)
This change collapses some of the packages for the bucket aggregations into their parent packages. This was done for the following aggregations:
* The variants of the range aggregation (geo_distance, date and ip) were moved into the `o.e.s.a.bucket.range` package
* The `o.e.s.a.bucket.terms.support` package was removed and the classes were moved to `o.e.s.a.bucket.terms`
* The filter aggregation was moved to `o.e.s.a.bucket.filter`

Since this PR is already relatively large with only the above changes, subsequent PRs will do similar operations on relevant metric and pipeline aggregations

Relates to #22868
2017-07-10 15:08:15 +01:00
Luca Cavanna 4f4f9e0af1 [DOCS] revise high level client Search Scroll API docs (#25599)
Moved the full example at the end of the page, reduced the number of bullet points for it, and added smaller examples at the beginning of the page.
2017-07-07 17:48:58 +02:00
Tanguy Leroux b06a744b05 [Docs] Document Scroll API for Java High Level REST Client (#25554)
This commit adds documentation for _search/scroll and clear scroll methods of the high level Java REST client
2017-07-07 12:19:33 +02:00
Tanguy Leroux d9bc0f48b4 [Docs] Document Bulk Processor for Java High Level REST Client (#25572) 2017-07-06 17:05:10 +02:00
Tanguy Leroux fefcae3d45 [Docs] Document Update API for Java High Level REST Client (#25536)
This commit adds documentation for Java High Level REST Client's Update API.
2017-07-05 12:16:42 +02:00
Luca Cavanna 6f8c0453bc [DOCS] add docs for high level client get method (#25538)
Document high level client get method
2017-07-05 11:57:57 +02:00