Commit Graph

5126 Commits

Author SHA1 Message Date
Yannick Welsch c96f2d7bf7
Document woes between auto-expand-replicas and allocation filtering (#30531)
Relates to #2869
2018-05-14 12:14:37 +02:00
Jason Tedor 901436148b Adjust copy settings versions
This commit adjusts the versions on the copy settings behavior now
that the default behavior is configured in 7.0.0.
2018-05-13 22:23:13 -04:00
Jason Tedor 593fdd40ed
Deprecate not copy settings and explicitly disallow (#30404)
We want copying settings to be the default behavior. This commit
deprecates not copying settings, and disallows explicitly not copying
settings. This gives users a transition path to the future default
behavior.
2018-05-13 10:30:05 -04:00
Costin Leau 52580b5ca8
SQL: Fix parsing of dates with milliseconds (#30419)
Dates internally contain milliseconds (which appear when converting them
to Strings); however, parsing does not accept them (and is strict).
The parser has been changed so that the date is mandatory but the time
(including its fractions, such as millis) is optional.

Fix #30002
2018-05-10 20:14:54 +03:00
Nik Everett b4502dbf74
LLClient: Add setJsonEntity (#30447)
Adds `Request#setJsonEntity(String)`, which short-circuits the very common
process of sending a JSON string.
2018-05-09 18:33:03 -04:00
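
For illustration, a minimal sketch of the new method with the low-level REST client; the host, index, and document values are hypothetical:

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class SetJsonEntityExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical local node; adjust host/port for a real cluster.
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Request request = new Request("PUT", "/my-index/_doc/1");
            // setJsonEntity attaches the string as an application/json entity,
            // replacing the old boilerplate of building an entity by hand.
            request.setJsonEntity("{\"user\":\"kimchy\",\"message\":\"trying out Elasticsearch\"}");
            Response response = client.performRequest(request);
            System.out.println(response.getStatusLine());
        }
    }
}
```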
Mueed Chaudhry bf141a3fd1 [docs] add warning for read-write indices in force merge documentation (#28869) 2018-05-09 18:53:55 +02:00
Nik Everett f9dc86836d
Docs: Test examples that recreate lang analyzers (#29535)
We have a pile of documentation describing how to rebuild the built-in
language analyzers and, previously, our documentation testing framework
made sure that the examples successfully built *an* analyzer but they
didn't assert that the analyzer built by the documentation matches the
built-in analyzer. Unsurprisingly, some of the examples aren't quite
right.

This adds a mechanism that tests the analyzers built by the docs.
The mechanism is fairly simple and brutal but it seems to be working:
build a hundred random unicode sequences and send them through the
`_analyze` API with the rebuilt analyzer and then again through the
built-in analyzer. Then make sure both calls return the same results.
Each of these calls to `_analyze` takes about 20ms on my laptop, which
seems fine.
2018-05-09 09:23:10 -04:00
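
As a rough sketch of the comparison idea (not the actual test framework code): send the same text through `_analyze` once with the built-in analyzer and once with a rebuilt analyzer defined on an index. The `english_example` index, the `rebuilt_english` analyzer name, and the fixed sample strings (standing in for the hundred random unicode sequences) are assumptions.

```java
import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class CompareRebuiltAnalyzer {
    public static void main(String[] args) throws Exception {
        // Fixed sample strings stand in for the random unicode sequences.
        String[] samples = {"The Quick Brown Foxes", "running runs ran", "état d'âme"};
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            for (String sample : samples) {
                // Built-in analyzer, addressed by name on the _analyze API.
                String builtIn = analyze(client, "/_analyze",
                        "{\"analyzer\":\"english\",\"text\":\"" + sample + "\"}");
                // Rebuilt analyzer, assumed to be defined as "rebuilt_english" on "english_example".
                String rebuilt = analyze(client, "/english_example/_analyze",
                        "{\"analyzer\":\"rebuilt_english\",\"text\":\"" + sample + "\"}");
                if (!builtIn.equals(rebuilt)) {
                    throw new AssertionError("analyzers disagree on: " + sample);
                }
            }
        }
    }

    private static String analyze(RestClient client, String endpoint, String body) throws Exception {
        Request request = new Request("GET", endpoint);
        request.setJsonEntity(body);
        Response response = client.performRequest(request);
        return EntityUtils.toString(response.getEntity());
    }
}
```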
Michael Basnight 3b9c8204a6
Add GET Repository High Level REST API (#30362)
This commit adds the Snapshot Client with a first API call within it,
the get repositories call in snapshot/restore module. This also creates
a snapshot namespace for the docs, as well as get repositories docs.

Relates #27205
2018-05-09 07:25:23 -05:00
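
The new high-level call wraps the get repositories REST endpoint; a low-level sketch of the same request (host is hypothetical):

```java
import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class GetRepositoriesExample {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            // List every registered snapshot repository; a repository name (or a
            // comma-separated list of names) could be used instead of _all.
            Response response = client.performRequest(new Request("GET", "/_snapshot/_all"));
            System.out.println(EntityUtils.toString(response.getEntity()));
        }
    }
}
```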
Yu 106bed90c7 Add `coordinating_only` node selector (#30313)
Today we can execute cluster API actions on only master, data or ingest nodes
using the `master:true`, `data:true` and `ingest:true` filters, but it is not
so easy to select coordinating-only nodes (i.e. those nodes that are neither
master nor data nor ingest nodes). This change fixes this by adding support for
a `coordinating_only` filter such that `coordinating_only:true` adds all
coordinating-only nodes to the set of selected nodes, and 
`coordinating_only:false` deletes them.

Resolves #28831.
2018-05-09 12:14:07 +01:00
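
A sketch of using the new filter against a cluster API such as the nodes info API, via the low-level client (host is hypothetical):

```java
import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class CoordinatingOnlyNodesExample {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            // Select only the coordinating-only nodes, the same way master:true or
            // data:true selects master-eligible or data nodes.
            Response response = client.performRequest(new Request("GET", "/_nodes/coordinating_only:true"));
            System.out.println(EntityUtils.toString(response.getEntity()));
        }
    }
}
```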
Ke Li 0c6789bc72 Use date format in `date_range` mapping before fallback to default (#29310)
If the date format is not forced in the query, use the format from the mapping
before falling back to the default format.

Closes #29282
2018-05-09 09:41:44 +02:00
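
As a sketch of the behavior, assuming a hypothetical index whose `date_range` field declares a non-default `format`; after this change, a range query that does not force its own format is parsed with the mapping's format before the default one:

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RestClient;

public class DateRangeFormatExample {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            // Hypothetical index with a date_range field using a day-only format.
            Request create = new Request("PUT", "/bookings");
            create.setJsonEntity("{\"mappings\":{\"_doc\":{\"properties\":{"
                    + "\"period\":{\"type\":\"date_range\",\"format\":\"yyyy-MM-dd\"}}}}}");
            client.performRequest(create);

            // The range query gives dates in the mapping's format and sets no "format"
            // of its own; they are now parsed with yyyy-MM-dd before the default format.
            Request search = new Request("GET", "/bookings/_search");
            search.setJsonEntity("{\"query\":{\"range\":{\"period\":{"
                    + "\"gte\":\"2018-05-01\",\"lte\":\"2018-05-09\"}}}}");
            client.performRequest(search);
        }
    }
}
```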
Alexander Reelsen f00890ee38
Watcher: Increase HttpClient parallel sent requests (#30130)
The HTTPClient used in watcher is based on the apache http client. The
current client is using a lot of defaults - which are not always
optimal. Two of those defaults are the maximum number of total
connections and the maximum number of connections to a single route.

If one of those limits is reached, the HTTPClient waits for a connection
to finish, thus acting in a blocking fashion. In order to prevent
this when many requests are being executed, we increase the limit of
total connections as well as the connections per route (a route is
basically an endpoint, which also contains proxy information; it does not
contain a URL, just hosts).

On top of that, an additional option has been set to evict
long-running connections, which can potentially be reused after some
time. As this requires an additional background thread, this required
some changes to ensure that the httpclient is closed properly. Also, the
timeout for this can be configured.
2018-05-09 09:37:47 +02:00
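
Not the Watcher code itself, but a sketch of the generic Apache HttpClient knobs being described: raising the total and per-route connection limits and evicting idle connections in a background thread. The numbers are placeholders, not the values chosen in the change.

```java
import java.util.concurrent.TimeUnit;

import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClientBuilder;

public class PooledHttpClientExample {
    public static void main(String[] args) throws Exception {
        CloseableHttpClient client = HttpClientBuilder.create()
                // Raise the defaults so many concurrent requests do not block
                // waiting for a free connection.
                .setMaxConnTotal(100)
                .setMaxConnPerRoute(20)
                // Evict long-idle connections via a background thread; closing the
                // client must also stop that thread.
                .evictIdleConnections(60, TimeUnit.SECONDS)
                .build();
        client.close();
    }
}
```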
Nik Everett b062ce5634
Client: Deprecate many argument performRequest (#30315)
Deprecate the many arguments versions of `performRequest` and
`performRequestAsync` in favor of the `Request` object flavored variants
introduced in #29623. We'll be dropping the many arguments variants in
7.0 because they make it difficult to add new features in a backwards
compatible way and they create a *ton* of intellisense noise.
2018-05-08 14:38:55 -04:00
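
A sketch contrasting one of the deprecated variants with the `Request`-object style (endpoint and parameter are hypothetical):

```java
import java.util.Collections;

import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class RequestObjectStyleExample {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            // Deprecated: method, endpoint and parameters passed as separate arguments.
            Response old = client.performRequest("GET", "/_cat/indices", Collections.singletonMap("v", "true"));

            // Preferred: build a Request object and let it carry parameters, entity, etc.
            Request request = new Request("GET", "/_cat/indices");
            request.addParameter("v", "true");
            Response response = client.performRequest(request);
            System.out.println(old.getStatusLine() + " / " + response.getStatusLine());
        }
    }
}
```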
Nik Everett d20e8e2bb4
Docs: Use task_id in examples of tasks (#30436)
We had been using `task_id:1` or `taskId:1` because it parses as a
valid task identifier, but the `:1` part is confusing. This replaces
those examples with `task_id`, which matches the response from the list
tasks API.

Closes #28314
2018-05-08 14:23:32 -04:00
Karim Frenn 3acca0b35c [Docs] Fix typo in cardinality-aggregation.asciidoc (#30434) 2018-05-08 16:12:36 +02:00
aditya-agrawal 27ddb4ffea Avoid NPE in `more_like_this` when field has zero tokens (#30365)
Fixes an edge case when using `more_like_this` where TermVectorsWriter
could throw an NPE when a field produced zero tokens after analysis. This
changes the implementation to use an empty list of tokens in this case.

Closes #30148
2018-05-08 15:13:07 +02:00
Yannick Welsch 82b251adcf
Auto-expand replicas when adding or removing nodes (#30423)
Auto-expands replicas in the same cluster state update (instead of a follow-up reroute) where nodes are added or removed.

Closes #1873, fixing an issue where nodes drop their copy of auto-expanded data when coming up, only to sync it again later.
2018-05-07 22:26:31 +02:00
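
For context, `index.auto_expand_replicas` is the setting whose expansion now happens in the same cluster state update; a minimal sketch of enabling it on a hypothetical index:

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RestClient;

public class AutoExpandReplicasExample {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Request create = new Request("PUT", "/my-index");
            // Replicas expand automatically between 0 and all data nodes as nodes join or leave.
            create.setJsonEntity("{\"settings\":{\"index\":{\"auto_expand_replicas\":\"0-all\"}}}");
            client.performRequest(create);
        }
    }
}
```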
Igor Motov 44c6dcf5be Docs: fix changelog merge 2018-05-07 14:25:03 -04:00
Igor Motov 6fb189ce47
Add stricter geohash parsing (#30376)
Adds verification that geohashes are not empty and contain only
valid characters. It fixes the issue where an empty geohash is
treated as [-180, -90] and geohashes with non-geohash characters
get resolved into invalid coordinates.

Closes #23579
2018-05-07 13:56:39 -04:00
javanna c9f5a7893b [DOCS] convert forcemerge snippet
Relates to #30113
2018-05-07 16:09:03 +02:00
Matija Bruncic e5653e635d Update forcemerge.asciidoc (#30113) 2018-05-07 14:56:12 +02:00
Dave Moore 391bcbcbe1 Added zentity to the list of API extension plugins (#29143) 2018-05-07 14:46:47 +02:00
Ke Li d373e1b49c Fix the search request default operation behavior doc (#29302) (#29405) 2018-05-07 14:43:45 +02:00
Tanguy Leroux 1987d6261f
Do not fail snapshot when deleting a missing snapshotted file (#30332)
When deleting or creating a snapshot for a given shard, elasticsearch 
usually starts by listing all the existing snapshotted files in the repository. 
Then it computes a diff and deletes the snapshotted files that are not 
needed anymore. During this deletion, an exception is thrown if the file 
to be deleted does not exist anymore.

This behavior is challenging with cloud based repository implementations 
like S3, where a file that has been deleted can still appear in the bucket for
a few seconds/minutes (because the deletion can take some time to be fully
replicated on S3). If the deleted file appears in the listing of files, then the 
following deletion will fail with a NoSuchFileException and the snapshot 
will be partially created/deleted.

This pull request makes the deletion of these files a bit less strict, i.e. not
failing if the file we want to delete does not exist anymore. It introduces a 
new BlobContainer.deleteIgnoringIfNotExists() method that can be used 
at some specific places where not failing when deleting a file is 
considered harmless.

Closes #28322
2018-05-07 09:35:55 +02:00
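
An illustrative sketch of the idea behind such a method, not the actual BlobContainer code; the interface, method names, and signatures here are assumptions based on the description above:

```java
import java.io.IOException;
import java.nio.file.NoSuchFileException;

/** Minimal stand-in for a blob container, for illustration only. */
interface LenientBlobContainer {

    /** Deletes the blob, throwing NoSuchFileException if it is absent. */
    void deleteBlob(String blobName) throws IOException;

    /** Deletes the blob but treats "already gone" as success. */
    default void deleteIgnoringIfNotExists(String blobName) throws IOException {
        try {
            deleteBlob(blobName);
        } catch (NoSuchFileException e) {
            // The file listed by the repository has already disappeared (for example
            // because of eventual consistency on S3); the desired end state is reached.
        }
    }
}
```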
Nhat Nguyen 3e58463256 DOCS: Correct mapping tags in put-template api
The mapping tags were not named consistently and not linked correctly.

Relates #30400
2018-05-06 15:56:49 -04:00
Nhat Nguyen eed8a3b585
Add put index template api to high level rest client (#30400)
Relates #27205
2018-05-06 09:47:36 -04:00
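
The new high-level API wraps the put index template endpoint; a low-level sketch of the underlying call (template name and body are hypothetical):

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RestClient;

public class PutIndexTemplateExample {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Request request = new Request("PUT", "/_template/logs_template");
            // Apply one shard and one replica to every index whose name starts with logs-.
            request.setJsonEntity("{\"index_patterns\":[\"logs-*\"],"
                    + "\"settings\":{\"number_of_shards\":1,\"number_of_replicas\":1}}");
            client.performRequest(request);
        }
    }
}
```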
Jim Ferenczi d3ee35ef18 [Docs] Add snippets for POS stop tags default value
relates #30397
2018-05-05 07:53:50 +02:00
Jason Tedor 10fcf30ce1 Move respect accept header on no handler to 6.3.1
This commit moves the changelog entry for the change to respect the
accept header on no handler from the 7.0.0 section of the docs to 6.3.1.
2018-05-04 20:50:30 -04:00
Jason Tedor beee5fe004
Respect accept header on no handler (#30383)
Today when processing a request for a URL path for which we can not find
a handler we send back a plain-text response. Yet, we have the accept
header in our hand and can respect the accepted media type of the
request. This commit addresses this.
2018-05-04 18:13:50 -04:00
Jim Ferenczi ec187ed3be [Docs] Fix bad link
relates #30397
2018-05-04 22:07:12 +02:00
Jim Ferenczi d7c2a99347 [Docs] Fix end of section in the korean plugin docs
relates #30397
2018-05-04 21:41:50 +02:00
Jim Ferenczi 891d3bd9c3
Expose the Lucene Korean analyzer module in a plugin (#30397)
This change adds a new plugin called `analysis-nori` that exposes
Korean text analysis in Elasticsearch using the new Lucene Korean analyzer module, named `nori`.
The plugin adds:
* a Korean analyzer: `nori`
* a Korean tokenizer: `nori_tokenizer`
* a part of speech stop filter: `nori_part_of_speech`
* a filter that can replace Hanja characters with their Hangul transcription: `nori_readingform`
2018-05-04 20:46:13 +02:00
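
Assuming the `analysis-nori` plugin is installed, a sketch of an index that wires the new tokenizer and filters into a custom analyzer (index and analyzer names are hypothetical):

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RestClient;

public class NoriAnalyzerExample {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Request create = new Request("PUT", "/korean-docs");
            // A custom analyzer built from the plugin's tokenizer and token filters;
            // the built-in "nori" analyzer could be used directly instead.
            create.setJsonEntity("{\"settings\":{\"analysis\":{\"analyzer\":{"
                    + "\"my_korean\":{\"type\":\"custom\",\"tokenizer\":\"nori_tokenizer\","
                    + "\"filter\":[\"nori_part_of_speech\",\"nori_readingform\"]}}}}}");
            client.performRequest(create);

            Request analyze = new Request("GET", "/korean-docs/_analyze");
            analyze.setJsonEntity("{\"analyzer\":\"my_korean\",\"text\":\"안녕하세요\"}");
            client.performRequest(analyze);
        }
    }
}
```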
Zachary Tong 1c0d339904
[Rollup] Validate timezone in range queries (#30338)
When validating the search request, we make sure any date_histogram
aggregations have timezones that match the jobs.  But we didn't
do any such validation on range queries.

While it wouldn't produce incorrect results, it would be confusing
to the user as no documents would match the aggregation (because we
add a filter clause on the timezone for the agg).

Now the user gets an exception up front, and some helpful text about
why the range query didn't match and which timezones are acceptable.
2018-05-04 10:45:16 -07:00
tomcallahan 0a93956194
Add Get Settings API support to java high-level rest client (#29229)
This PR adds support for the Get Settings API to the java high-level rest client.
Furthermore, logic related to the retrieval of default settings has been moved from the rest layer into the transport layer, and default settings may now be retrieved consistently via both the rest API and the transport API.
2018-05-04 11:14:28 -04:00
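
The new high-level support corresponds to the get settings endpoint; a low-level sketch including the `include_defaults` flag related to the default-settings behavior mentioned above (index name is hypothetical):

```java
import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class GetSettingsExample {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Request request = new Request("GET", "/my-index/_settings");
            // Also return default settings, not just the explicitly set ones.
            request.addParameter("include_defaults", "true");
            Response response = client.performRequest(request);
            System.out.println(EntityUtils.toString(response.getEntity()));
        }
    }
}
```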
Jim Ferenczi dbd857341f
Upgrade to 7.4.0-snapshot-1ed95c097b (#30357)
Upgrade to lucene-7.4.0-snapshot-1ed95c097b

This version contains:
* An Analyzer for Korean
* An IntervalQuery and IntervalsSource that retrieve minimum intervals of positional queries.
* A new API to retrieve matches (offsets and positions) of a query for a single document.
* Support for soft deletes in the index writer.
* A fixed shingle filter that handles index time synonyms.
* Support for emoji sequence in ICUTokenizer (with an upgrade to icu 61.1)
2018-05-04 11:44:22 +02:00
lcawley 137ce702a4 [DOCS] Added coming qualifiers in changelog 2018-05-03 19:38:19 -07:00
debadair 19624466e8
[DOCS] Commented out empty sections in the changelog to fix the doc build. (#30372) 2018-05-03 16:31:32 -07:00
Jay Modi aa0d7c73f8
Security: reduce garbage during index resolution (#30180)
The IndexAndAliasesResolver resolves the indices and aliases for each
request and also handles local and remote indices. The current
implementation uses the ResolvedIndices class to hold the resolved
indices and aliases. While evaluating the indices and aliases against
the user's permissions, the final value for ResolvedIndices is
constructed. Prior to this change, this was done by creating a
ResolvedIndices for the first set of indices and, for each subsequent
addition, creating a new ResolvedIndices object and merging it with
the existing one. With a small number of indices and aliases this does
not pose a large problem; however, as the number of indices/aliases
grows, more list allocations and array copies are needed, resulting in a
large amount of garbage and severely degraded performance.

This change introduces a builder for ResolvedIndices that appends to
mutable lists until the final value has been constructed, which will
ultimately reduce the amount of garbage generated by this code.
2018-05-03 12:48:23 -06:00
Sue Gallagher 09a6ba4fea
Change quad tree max levels to 29. Closes #21191 (#29663)
* [DOCS] Changed quad tree max levels to 29. Clears 21191

* Changed QuadPrefixTree max levels to 29 and added defaults. Closes #21191
2018-05-03 09:48:21 -07:00
Dimitris Athanasiou 3b260dcfc1
[ML] Account for gaps in data counts after job is reopened (#30294)
This commit fixes an issue with the data diagnostics where
empty buckets are not reported even though they should be. Once
a job is reopened, the diagnostics do not get initialized from
the current data counts (especially the latest record timestamp).
The result is that if the data that is sent has a time gap compared
to the previous data, that gap is not accounted for in the empty bucket
count.

This commit fixes that by initializing the diagnostics with the current
data counts.

Closes #30080
2018-05-03 15:08:24 +01:00
wmellouli c8d8407012 [Docs] Add term query with normalizer example 2018-05-03 10:23:14 +02:00
Alexander Reelsen 2c38d12e23
Watcher: Make start/stop cycle more predictable and synchronous (#30118)
The current implementation starts/stops watcher using an executor. This
can result in out-of-order operations.

This commit reduces those executor calls to an absolute minimum in order
to be able to do state changes within the cluster state listener method,
which runs in sequence.

When a state change occurs that forces the watcher service to pause
(like no watcher index, no master node, no local shards), the service is
now in a paused state.

Pausing is a super lightweight operation, which marks the
ExecutionService as paused and waits for the currently executing watches
to finish in the background via an executor. The same applies to
stopping: the potentially long-running operation is outsourced to an
executor, as waiting for executed watches is decoupled from the current
state.

The only other long running operation is starting, where watches need to
be loaded. This is also done via an executor, but has an additional
protection by checking the cluster state version it was started with. If
another cluster state version was trying to load the watches, then this
loading will not take effect.

This PR also cleans up some unused states, like a simple boolean in
the HistoryStore/TriggeredWatchStore marking it as started or stopped,
as this can now be caught in the execution service.

Another advantage of this approach is that now only triggered
watches are not executed, while watches that are run via the
Execute Watch API will still be executed regardless of whether watcher is
stopped or not.

Lastly the TickerScheduleTriggerEngine thread now only starts on data nodes.
2018-05-03 09:47:12 +02:00
Zachary Tong 3c2d2a7d4a
Fix NPE when CumulativeSum agg encounters null/empty bucket (#29641)
If the cusum agg encounters a null value, it's because the value is
missing (like the first value from a derivative agg), the path is
not valid, or the bucket in the path was empty.

Previously cusum would just explode on the null, but this changes it
so we only increment the sum if the value is non-null and finite.
This is safe because even if the cusum encounters all null or empty
buckets, the cumulative sum is still zero (like how the sum agg returns
zero even if all the docs were missing values).

I went ahead and tweaked AggregatorTestCase to allow testing pipelines,
so that I could delete the IT test and reimplement it as AggTests.

Closes #27544
2018-05-02 12:22:55 -07:00
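
A rough sketch of the guard described above, not the actual aggregator code: only add to the running sum when the bucket value is present and finite.

```java
import java.util.Arrays;
import java.util.List;

public class CumulativeSumSketch {
    public static void main(String[] args) {
        // Null stands in for a missing/empty bucket value (e.g. the first derivative bucket).
        List<Double> bucketValues = Arrays.asList(null, 2.0, Double.NaN, 3.0);
        double sum = 0;
        for (Double value : bucketValues) {
            // Skip null and non-finite values instead of throwing an NPE;
            // the cumulative sum simply carries forward unchanged.
            if (value != null && Double.isFinite(value)) {
                sum += value;
            }
            System.out.println("cumulative_sum = " + sum);
        }
    }
}
```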
Ryan Ernst fb0aa562a5
Network: Remove http.enabled setting (#29601)
This commit removes the http.enabled setting. While all real nodes (started with bin/elasticsearch) will always have an http binding, there are many tests that rely on the quickness of not actually needing to bind to 2 ports. For this case, the MockHttpTransport.TestPlugin provides a dummy http transport implementation which is used by default in ESIntegTestCase.

closes #12792
2018-05-02 11:42:05 -07:00
Lisa Cawley 0d7ac9a74c
[DOCS] Enables edit links for X-Pack pages (#30278) 2018-05-02 10:13:42 -07:00
Ryan Ernst fba2f00a73
Packaging: Unmark systemd service file as a config file (#29004)
Systemd overrides should happen through /etc/systemd/system, not
directly editing the service file. This commit removes marking the
service file as configuration for rpm and deb packages.
2018-05-02 09:48:49 -07:00
Ryan Ernst 62f2918abc
Added changelog entry for deb prerelease version change (#30184)
This commit adds a changelog entry for the change in #29000.
2018-05-02 09:00:35 -07:00
Conor Landry 5deda6929a [Docs] Clarify `fuzzy_like_this` redirect (#30183) 2018-05-02 11:45:37 +02:00
Adrien Grand 5991ede9ef Fix docs of the `_ignored` meta field.
Relates #29658
2018-05-02 11:43:50 +02:00
Adrien Grand 7358946bda
Add a new `_ignored` meta field. (#29658)
This adds a new `_ignored` meta field which indexes and stores fields that have
been ignored at index time because of the `ignore_malformed` option. It makes
malformed documents easier to identify by using `exists` or `term(s)` queries
on the `_ignored` field.

Closes #29494
2018-05-02 10:47:02 +02:00
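
A sketch of identifying malformed documents with the new field, via the low-level client (index name is hypothetical):

```java
import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class IgnoredFieldExample {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Request search = new Request("GET", "/my-index/_search");
            // Match every document in which at least one field was ignored
            // because of ignore_malformed.
            search.setJsonEntity("{\"query\":{\"exists\":{\"field\":\"_ignored\"}}}");
            Response response = client.performRequest(search);
            System.out.println(EntityUtils.toString(response.getEntity()));
        }
    }
}
```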
Lisa Cawley fd20370145
[DOCS] Adds new installation package details (#29590) 2018-05-01 17:04:16 -07:00