Adds a setting to disable detailed error messages and full exception stack traces
in HTTP responses. When the setting is false, the error_trace request parameter will result
in an HTTP 400 response. When the error_trace parameter is not present, the message
of the first ElasticsearchException will be output and no nested exception messages
will be output.
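A hedged sketch of the behaviour described above; the setting name `http.detailed_errors.enabled` is an assumption, not confirmed by this note:
```sh
# Assumed node setting (hypothetical name): http.detailed_errors.enabled: false
# With detailed errors disabled, asking for a stack trace is rejected:
curl -i 'localhost:9200/_search?error_trace=true'
# HTTP/1.1 400 Bad Request
```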
This commit adds the current total number of translog operations to the recovery reporting API. We also expose the recovered / total percentage:
```
"translog": {
"recovered": 536,
"total": 986,
"percent": "54.3%",
"total_time": "2ms",
"total_time_in_millis": 2
},
```
Closes #9368, Closes #10042
This improves the behaviour when someone has multiple levels of nested object fields defined in the mapping and would like to define a single inner_hits definition that is two or more levels deep.
If someone wants inner hits on a nested field that is two levels deep, the following would need to be defined:
```
{
    ...
    "inner_hits" : {
        "path" : {
            "level1" : {
                "inner_hits" : {
                    "path" : {
                        "level2" : {
                            "query" : { .... }
                        }
                    }
                }
            }
        }
    }
}
```
With this change the above can be defined as:
```
{
    ...
    "inner_hits" : {
        "path" : {
            "level1.level2" : {
                "query" : { .... }
            }
        }
    }
}
```
Closes #9251
The analysis chain should be used instead of relying on this, as it is
confusing when dealing with different per-field analysers.
The `locale` option was only used for `lowercase_expanded_terms`; once that was
removed, `locale` was no longer needed, so it was removed as well.
Fixes #9978
Relates to #9973
This commit adds scripting capability to significant_terms.
Custom heuristics can be implemented with a script that has access to the
parameters subset_freq, superset_freq, subset_size, and superset_size.
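A hedged sketch of what such a request might look like; the `script_heuristic` wrapper key is an assumption based on this description, and the variable names are the ones listed above:
```js
{
    "aggs": {
        "significant_words": {
            "significant_terms": {
                "field": "content",
                "script_heuristic": {
                    "script": "subset_freq / (superset_freq - subset_freq + 1)"
                }
            }
        }
    }
}
```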
closes #7850
Changed search_type docs to reflect that the `(dfs_)query_and_fetch` modes are an internal optimization and should not be specified explicitly by the user.
Relates to #9606
As explained in elasticsearch/elasticsearch-mapper-attachments#101, we should have consistent documentation.
The best option is to link the documentation in elasticsearch guide to the most recent README in the plugin repo.
Closes #9756
The request tracer logs at TRACE level under the `transport.tracer` log and is dynamically configurable with include and exclude arrays to filter out unneeded info. By default all requests are logged with the exception of fault detection pings (fired every second).
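A hedged sketch of turning the tracer on at runtime; the exact setting names (`transport.tracer.include`, `transport.tracer.exclude`) and the `logger.` prefix are assumptions based on the description above:
```sh
# Hypothetical settings, sketched from the description above
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
    "transient": {
        "logger.transport.tracer": "TRACE",
        "transport.tracer.include": ["indices:data/*"],
        "transport.tracer.exclude": ["internal:discovery/zen/fd*"]
    }
}'
```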
Also adds the notion of tracers to the MockTransportService for testing purposes.
Closes #9286
To support the `_recovery` API, the recovery process keeps track of current progress in a class called RecoveryState. This class currently has some issues, mostly around concurrency (see #6644). This PR cleans it up as well as other issues around it:
- Make the Index subsection API cleaner:
- remove redundant information - all calculation is done based on the underlying file map
- clearer definition of what is what: total files vs. reused files (local files that match the source) vs. recovered files (copied over). %-based progress is reported based on recovered files only.
- cleaned up the JSON response to match other APIs (sadly this breaks the structure). We now properly report human-readable values for dates and other units.
- Add more robust unit testing
- The detail flag was passed along as state (it's now a ToXContent param)
- State lookup during reporting is now always done via the IndexShard; no more fallbacks to many other classes.
- Cleanup APIs around time and move the little computations to the state class as opposed to doing them out of the API
I also improved error messages out of the REST testing infra for things I ran into.
Closes #6644, Closes #9811
Together with #8782 it should help in situations similar to #8887 by adding the ability to get information about a currently running snapshot without accessing the repository itself.
Closes #8887
The number of current pending tasks is useful to detect an overloaded master. This commit adds it to the cluster health API. The complete list can be retrieved from the dedicated pending tasks API.
It also adds REST tests for the cluster health variants.
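A hedged, abridged sketch of what the health response might look like with the new field; the field name `number_of_pending_tasks` is assumed from this description:
```js
{
    "cluster_name" : "elasticsearch",
    "status" : "green",
    "number_of_nodes" : 1,
    "number_of_pending_tasks" : 0
}
```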
Closes #9877
There are two implications to this change.
First, percolator now uses _uid internally, extracting the id portion
when needed. Second, sorting on _id is no longer possible, since you
can no longer index _id. However, _uid can still be used to sort, and
is better anyway, as indexing _id just to make it available to
fielddata for sorting is wasteful.
See #8143; closes #9842
Double negatives are confusing, but a triple negative (1 no, 2 non, 3 null)? It takes five minutes to understand this little sentence. Cleaned that up a bit.
Closes #9789
This change removes the deprecated script parameter names ('file', 'id', and 'scriptField').
It also removes the ability to load file scripts using the 'script' parameter. File scripts should be loaded using the 'script_file' parameter only.
This was previously attempted in #8854. I revived that branch and did
some performance testing as was suggested in the comments there.
I fixed all the errors, mostly just the rest tests, which
needed to have http enabled on the node settings (the global cluster
previously had this always enabled). I also addressed the comments from
that issue.
My performance tests involved running the entire test suite on my
desktop which has 6 cores, 16GB of ram, and nothing else was being
run on the box at the time. I ran each set of settings 3 times and
took the average time.
| mode | master | patch | diff |
| ------- | ------ | ----- | ---- |
| local | 409s | 417s | +2% |
| network | 368s | 380s | +3% |
This increase in average time is clearly worthwhile to pay to achieve
isolation of tests. One caveat is the way I fixed the rest tests
is still to have one cluster for the entire suite, so all the rest
tests can still potentially affect each other, but this is an
issue for another day.
There were some oddities that I noticed while running these tests
that I would like to point out, as they probably deserve some
investigation (but orthogonal to this PR):
* The total test run times are highly variable (more than a minute between the min and max)
* Running in network mode is on average actually *faster* than local mode. How is this possible!?
This commit makes the `postings_format` and `doc_values_format` options of
mappings illegal on 2.0 and ignored on 1.x (meaning that the default postings
and doc values formats from the codec will be used in such a case).
This removes a fair amount of code.
Close #8746, #9741
Squashed commit of the following:
commit 20835037c98e7d2fac4206c372717a05a27c4790
Author: Lee Hinman <lee@writequit.org>
Date: Wed Feb 18 15:27:17 2015 -0700
Use Enum for "_primary" preference
commit 325acbe4585179190a959ba3101ee63b99f1931a
Author: Lee Hinman <lee@writequit.org>
Date: Wed Feb 18 14:32:41 2015 -0700
Use ?preference=_primary automatically for realtime GET operations
commit edd49434af5de7e55928f27a1c9ed0fddb1fb133
Author: Lee Hinman <lee@writequit.org>
Date: Wed Feb 18 14:32:06 2015 -0700
Move engine creation into protected createNewEngine method
commit 67a797a9235d4aa376ff4af16f3944d907df4577
Author: Lee Hinman <lee@writequit.org>
Date: Wed Feb 18 13:14:01 2015 -0700
Factor out AssertingSearcher so it can be used by mock Engines
commit 62b0c28df8c23cc0b8205b33f7595c68ff940e2b
Author: Lee Hinman <lee@writequit.org>
Date: Wed Feb 18 11:43:17 2015 -0700
Use IndexMetaData.isIndexUsingShadowReplicas helper
commit 1a0d45629457578a60ae5bccbeba05acf5d79ddd
Author: Lee Hinman <lee@writequit.org>
Date: Wed Feb 18 09:59:31 2015 -0700
Rename usesSharedFilesystem -> isOnSharedFilesystem
commit 73c62df4fc7da8a5ed557620a83910d89b313aa1
Author: Lee Hinman <lee@writequit.org>
Date: Wed Feb 18 09:58:02 2015 -0700
Add MockShadowEngine and hook it up to be used
commit c8e8db473830fce1bdca3c4df80a685e782383bc
Author: Lee Hinman <lee@writequit.org>
Date: Wed Feb 18 09:45:50 2015 -0700
Clarify comment about pre-defined mappings
commit 60a4d5374af5262bd415f4ef40f635278ed12a03
Author: Lee Hinman <lee@writequit.org>
Date: Wed Feb 18 09:18:22 2015 -0700
Add a test for shadow replicas that uses field data
commit 7346f9f382f83a21cd2445b3386fe67472bc3184
Author: Lee Hinman <lee@writequit.org>
Date: Wed Feb 18 08:37:14 2015 -0700
Revert changes to RecoveryTarget.java
commit d90d6980c9b737bd8c0f4339613a5373b1645e95
Author: Lee Hinman <lee@writequit.org>
Date: Wed Feb 18 08:35:44 2015 -0700
Rename `ownsShard` to `canDeleteShardContent`
commit 23001af834d66278ac84d9a72c37b5d1f3a10a7b
Author: Lee Hinman <lee@writequit.org>
Date: Wed Feb 18 08:35:25 2015 -0700
Remove ShadowEngineFactory, add .newReadOnlyEngine method in EngineFactory
commit b64fef1d2c5e167713e869b22d388ff479252173
Author: Lee Hinman <lee@writequit.org>
Date: Wed Feb 18 08:25:19 2015 -0700
Add warning that predefined mappings should be used
commit a1b8b8cf0db49d1bd1aeb84e51491f7f0de43b59
Author: Lee Hinman <lee@writequit.org>
Date: Tue Feb 17 14:31:50 2015 -0700
Remove unused import and fix index creation example in docs
commit 0b1b852365ceafc0df86866ac3a4ffb6988b08e4
Merge: b9d1fed a22bd49
Author: Lee Hinman <lee@writequit.org>
Date: Tue Feb 17 10:56:02 2015 -0700
Merge remote-tracking branch 'refs/remotes/origin/master' into shadow-replicas
commit b9d1fed25ae472a9dce1904eb806702fba4d9786
Merge: 4473e63 41fd4d8
Author: Lee Hinman <lee@writequit.org>
Date: Tue Feb 17 09:02:27 2015 -0700
Merge remote-tracking branch 'refs/remotes/origin/master' into shadow-replicas
commit 4473e630460e2f0ca2a2e2478f3712f39a64c919
Author: Lee Hinman <lee@writequit.org>
Date: Tue Feb 17 09:00:39 2015 -0700
Add asciidoc documentation for shadow replicas
commit eb699c19f04965952ae45e2caf107124837c4654
Author: Simon Willnauer <simonw@apache.org>
Date: Tue Feb 17 16:15:39 2015 +0100
remove last nocommit
commit c5ece6d16d423fbdd36f5d789bd8daa5724d77b0
Author: Simon Willnauer <simonw@apache.org>
Date: Tue Feb 17 16:13:12 2015 +0100
simplify shadow engine
commit 45cd34a12a442080477da3ef14ab2fe7947ea97e
Author: Simon Willnauer <simonw@apache.org>
Date: Tue Feb 17 11:32:57 2015 +0100
fix tests
commit 744f228c192602a6737051571e040731d413ba8b
Author: Simon Willnauer <simonw@apache.org>
Date: Tue Feb 17 11:28:12 2015 +0100
revert changes to IndexShardGateway - these are leftovers from previous iterations
commit 11886b7653dabc23655ec76d112f291301f98f4a
Author: Simon Willnauer <simonw@apache.org>
Date: Tue Feb 17 11:26:48 2015 +0100
Back out non-shared FS code. this will go in in a second iteration
commit 77fba571f150a0ca7fb340603669522c3ed65363
Merge: e8ad614 2e3c6a9
Author: Simon Willnauer <simonw@apache.org>
Date: Tue Feb 17 11:16:46 2015 +0100
Merge branch 'master' into shadow-replicas
Conflicts:
src/main/java/org/elasticsearch/index/engine/Engine.java
commit e8ad61467304e6d175257e389b8406d2a6cf8dba
Merge: 48a700d 1b8d8da
Author: Simon Willnauer <simonw@apache.org>
Date: Tue Feb 17 10:54:20 2015 +0100
Merge branch 'master' into shadow-replicas
commit 48a700d23cff117b8e4851d4008364f92b8272a0
Author: Simon Willnauer <simonw@apache.org>
Date: Tue Feb 17 10:50:59 2015 +0100
add test for failing shadow engine / remove nocommit
commit d77414c5e7b2cde830a8e3f70fe463ccc904d4d0
Author: Simon Willnauer <simonw@apache.org>
Date: Tue Feb 17 10:27:56 2015 +0100
remove nocommits in IndexMetaData
commit abb696563a9e418d3f842a790fcb832f91150be2
Author: Simon Willnauer <simonw@apache.org>
Date: Mon Feb 16 17:05:02 2015 +0100
remove nocommit and simplify delete logic
commit 82b9f0449108cd4741568d9b4495bf6c10a5b019
Author: Simon Willnauer <simonw@apache.org>
Date: Mon Feb 16 16:45:27 2015 +0100
reduce the changes compared to master
commit 28f069b6d99a65e285ac8c821e6a332a1d8eb315
Author: Simon Willnauer <simonw@apache.org>
Date: Mon Feb 16 16:43:46 2015 +0100
fix primary relocation
commit c4c999dd61a44a7a0db9798275a622f2b85b1039
Merge: 2ae80f9 455a85d
Author: Simon Willnauer <simonw@apache.org>
Date: Mon Feb 16 15:04:26 2015 +0100
Merge branch 'master' into shadow-replicas
commit 2ae80f9689346f8fd346a0d3775a6341874d8bef
Author: Lee Hinman <lee@writequit.org>
Date: Fri Feb 13 16:25:34 2015 -0700
throw UnsupportedOperationException on write operations in ShadowEngine
commit 740c28dd9ef987bf56b670fa1a8bcc6de2845819
Merge: e5bc047 305ba33
Author: Lee Hinman <lee@writequit.org>
Date: Fri Feb 13 15:38:39 2015 -0700
Merge branch 'master' into shadow-replicas
commit e5bc047d7c872ae960d397b1ae7b4b78d6a1ea10
Author: Lee Hinman <lee@writequit.org>
Date: Fri Feb 13 11:38:09 2015 -0700
Don't replicate document request when using shadow replicas
commit 213292e0679d8ae1492ea11861178236f4abd8ea
Author: Simon Willnauer <simonw@apache.org>
Date: Fri Feb 13 13:58:05 2015 +0100
add one more nocommit
commit 83d171cf632f9b77cca9de58505f7db8fcda5599
Merge: aea9692 09eb8d1
Author: Simon Willnauer <simonw@apache.org>
Date: Fri Feb 13 13:52:29 2015 +0100
Merge branch 'master' into shadow-replicas
commit aea96920d995dacef294e48e719ba18f1ecf5860
Author: Simon Willnauer <simonw@apache.org>
Date: Fri Feb 13 09:56:41 2015 +0100
revert unneeded changes on Store
commit ea4e3e58dc6959a92c06d5990276268d586735f3
Author: Lee Hinman <lee@writequit.org>
Date: Thu Feb 12 14:26:30 2015 -0700
Add documentation to ShadowIndexShard, remove nocommit
commit 4f71c8d9f706a0c1c39aa3a370efb1604559d928
Author: Lee Hinman <lee@writequit.org>
Date: Thu Feb 12 14:17:22 2015 -0700
Add documentation to ShadowEngine
commit 28a9d1842722acba7ea69e0fa65200444532a30c
Author: Lee Hinman <lee@writequit.org>
Date: Thu Feb 12 14:08:25 2015 -0700
Remove nocommit, document canDeleteIndexContents
commit d8d59dbf6d0525cd823d97268d035820e5727ac9
Author: Lee Hinman <lee@writequit.org>
Date: Thu Feb 12 10:34:32 2015 -0700
Refactor more shared methods into the abstract Engine
commit a7eb53c1e8b8fbfd9281b43ae39eacbe3cd1a0a6
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Feb 12 17:38:59 2015 +0100
Simplify shared filesystem recovery by using a dedicated recovery handler that skip
most phases and enforces shard closing on the soruce before the target opens it's engine
commit a62b9a70adad87d7492c526f4daf868cb05018d9
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Feb 12 15:59:54 2015 +0100
fix compile error after upstream changes
commit abda7807bc3328a89fd783ca7ad8c6deac35f16f
Merge: f229719 35f6496
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Feb 12 15:57:28 2015 +0100
Merge branch 'master' into shadow-replicas
Conflicts:
src/main/java/org/elasticsearch/index/engine/Engine.java
commit f2297199b7dd5d3f9f1f109d0ddf3dd83390b0d1
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Feb 12 12:41:32 2015 +0100
first cut at catchup from primary
make flush to a refresh
factor our ShadowIndexShard to have IndexShard be idential to the master and least intrusive
cleanup abstractions
commit 4a367c07505b84b452807a58890f1cbe21711f27
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Feb 12 09:50:36 2015 +0100
fix primary promotion
commit cf2fb807e7e243f1ad603a79bc9d5f31a499b769
Author: Lee Hinman <lee@writequit.org>
Date: Wed Feb 11 16:45:41 2015 -0700
Make assertPathHasBeenCleared recursive
commit 5689b7d2f84ca1c41e4459030af56cb9c0151eff
Author: Lee Hinman <lee@writequit.org>
Date: Wed Feb 11 15:58:19 2015 -0700
Add testShadowReplicaNaturalRelocation
commit fdbe4133537eaeb768747c2200cfc91878afeb97
Author: Lee Hinman <lee@writequit.org>
Date: Wed Feb 11 15:28:57 2015 -0700
Use check for shared filesystem in primary -> primary relocation
Also adds a nocommit
commit 06e2eb4496762130af87ce68a47d360962091697
Author: Lee Hinman <lee@writequit.org>
Date: Wed Feb 11 15:21:32 2015 -0700
Add a test checking that indices with shadow replicas clean up after themselves
commit e4dbfb09a689b449f0edf6ee24222d7eaba2a215
Author: Lee Hinman <lee@writequit.org>
Date: Wed Feb 11 15:08:18 2015 -0700
Fix segment info for ShadowEngine, remove test nocommit
commit 80cf0e884c66eda7d59ac5d59235e1ce215af8f5
Author: Lee Hinman <lee@writequit.org>
Date: Wed Feb 11 14:30:13 2015 -0700
Remove nocommit in ShadowEngineTests#testFailStart()
commit 5e33eeaca971807b342f9be51a6a566eee005251
Author: Lee Hinman <lee@writequit.org>
Date: Wed Feb 11 14:22:59 2015 -0700
Remove overly-complex test
commit 2378fbb917b467e79c0262d7a41c23321bbeb147
Author: Lee Hinman <lee@writequit.org>
Date: Wed Feb 11 13:45:44 2015 -0700
Fix missing import
commit 52e9cd1b8334a5dd228d5d68bd03fd0040e9c8e9
Author: Lee Hinman <lee@writequit.org>
Date: Wed Feb 11 13:45:05 2015 -0700
Add a test for replica -> primary promotion
commit a95adbeded426d7f69f6ddc4cbd6712b6f6380b4
Author: Lee Hinman <lee@writequit.org>
Date: Wed Feb 11 12:54:14 2015 -0700
Remove tests that don't apply to ShadowEngine
commit 1896feda9de69e4f9cf774ef6748a5c50e953946
Author: Lee Hinman <lee@writequit.org>
Date: Wed Feb 11 10:29:12 2015 -0700
Add testShadowEngineIgnoresWriteOperations and testSearchResultRelease
commit 67d7df41eac5e10a1dd63ddb31de74e326e9d38b
Author: Lee Hinman <lee@writequit.org>
Date: Wed Feb 11 10:06:05 2015 -0700
Add start of ShadowEngine unit tests
commit ca9beb2d93d9b5af9aa6c75dbc0ead4ef57e220d
Merge: 2d42736 57a4646
Author: Simon Willnauer <simonw@apache.org>
Date: Wed Feb 11 18:03:53 2015 +0100
Merge branch 'master' into shadow-replicas
commit 2d42736fed3ed8afda7e4aff10b65d292e1c6f92
Author: Simon Willnauer <simonw@apache.org>
Date: Wed Feb 11 17:51:22 2015 +0100
shortcut recovery if we are on a shared FS - no need to compare files etc.
commit 24d36c92dd82adce650e7ac8e9f0b43c83b2dc53
Author: Simon Willnauer <simonw@apache.org>
Date: Wed Feb 11 17:08:08 2015 +0100
utilize the new delete code
commit 2a2eed10f58825aae29ffe4cf01aefa5743a97c7
Merge: 343dc0b 173cfc1
Author: Simon Willnauer <simonw@apache.org>
Date: Wed Feb 11 16:07:41 2015 +0100
Merge branch 'master' into shadow-replicas
Conflicts:
src/main/java/org/elasticsearch/gateway/GatewayMetaState.java
commit 343dc0b527a7052acdc783ac5abcaad1ef78dbda
Author: Simon Willnauer <simonw@apache.org>
Date: Wed Feb 11 16:05:28 2015 +0100
long adder is not available in java7
commit be02cabfeebaea74b51b212957a2a466cfbfb716
Author: Lee Hinman <lee@writequit.org>
Date: Tue Feb 10 22:04:24 2015 -0700
Add test that restarts nodes to ensure shadow replicas recover
commit 7fcb373f0617050ca1a5a577b8cf32e32dc612b0
Author: Simon Willnauer <simonw@apache.org>
Date: Tue Feb 10 23:19:21 2015 +0100
make test more evil
commit 38135af0c1991b88f168ece0efb72ffe9498ff59
Author: Simon Willnauer <simonw@apache.org>
Date: Tue Feb 10 22:25:11 2015 +0100
make tests pass
commit 05975af69e6db63cb95f3e40d25bfa7174e006ea
Author: Lee Hinman <lee@writequit.org>
Date: Mon Jan 12 18:44:29 2015 +0100
Add ShadowEngine
Removed the existing `pre_zone` and `post_zone` options in `date_histogram` in favor of
the simpler `time_zone` option. Previously, specifying different values for these could
lead to confusing scenarios where ES would return bucket keys that are not UTC.
Now `time_zone` is the only option: the calculation of date buckets takes place in the
preferred time zone, but the bucket key values are converted back to UTC after rounding.
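A short sketch of the simplified option (field and aggregation names are illustrative):
```js
{
    "aggs": {
        "articles_per_day": {
            "date_histogram": {
                "field": "published_at",
                "interval": "day",
                "time_zone": "+01:00"
            }
        }
    }
}
```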
Closes #9062, Closes #9637
When multiple fields under object fields share the same name, accessing
by short name is ambiguous. This removes support for short names,
always requiring the full name when used in queries.
closes #8872
Enabling GC logging works now by setting the environment variable ES_GC_LOG_FILE
to the full path to the GC log file. Missing directories will be created as needed.
The ES_USE_GC_LOGGING environment variable is no longer used.
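For example (the log path is illustrative):
```sh
ES_GC_LOG_FILE=/var/log/elasticsearch/gc.log ./bin/elasticsearch
```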
Closes #8471, Closes #8479
This has been very trappy. Rather than continue to allow buggy behavior
of having upgrade/optimize requests sidestep the single shard per node
limits optimize is supposed to be subject to, this removes
the ability to run the upgrade/optimize async.
closes #9638
_id and _routing now no longer support the 'path' setting on indexes
created with 2.0. Indexes created before 2.0 still support this
setting for backcompat.
closes #6730
Add an offset option to 'date_histogram', replacing and simplifying the previous 'pre_offset' and 'post_offset' options.
This change is part of a larger clean up task for `date_histogram` from issue #9062.
Issue #9566 raises the point that setting the number of shards on a closed index can lead to the index not being able to open again. This change in documentation is meant to warn the user about this issue.
We now have a very useful annotation to mark features or parameters as
experimental. Let's use it! This commit replaces some custom text warnings with
this annotation and adds this annotation to some existing features/parameters:
- inner_hits (unreleased yet)
- terminate_after (released in 1.4)
- per-bucket doc count errors in the terms agg (released in 1.4)
I also tagged with this annotation settings which should either not be needed
(like the ability to evict entries from the filter cache based on time) or that
go too deep into the way that Elasticsearch works, like the Directory
implementation or merge settings.
Close #9563
These aggregations are not experimental anymore but some of their parameters
still are:
- `precision_threshold` and `rehash` on `cardinality`
- `compression` on percentiles(-ranks)
Close #9560
The `full` option and `FlushType.NEW_WRITER` only exists to allow
realtime changes to two settings (`index.codec` and `index.concurrency`).
Those settings are very expert and don't really need to be updateable
in realtime.
The current "checkindex" on startup is very very expensive. This is
like running one of the old school hard drive diagnostic checkers and
usually not a good idea.
But we can do a CRC32 verification of files. We don't even need to
open an indexreader to do this; it's much more lightweight.
This option (as well as the existing true/false values) is randomized in
tests to find problems.
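A hedged sketch of opting in at index-creation time; the setting name `index.shard.check_on_startup` and the `checksum` value are assumptions, not confirmed by this note:
```sh
# Hypothetical setting name and value, sketched from the description above
curl -XPUT 'localhost:9200/my_index' -d '{
    "settings": {
        "index.shard.check_on_startup": "checksum"
    }
}'
```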
Also fix a bug where use of the current option would always leak
an indexwriter lock.
Closes #9183
Histogram aggregation supports an 'offset' option to move bucket boundaries.
In a histogram with buckets of size X, an offset value of Y moves the bucket
boundaries from 0, X, 2X, 3X,... to Y, X+Y, 2X+Y, 3X+Y...
The previous 'pre_offset' and 'post_offset' options are removed in favour of
the simplified 'offset' option.
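For example, buckets of size 10 shifted by 5 (field and aggregation names are illustrative):
```js
{
    "aggs": {
        "prices": {
            "histogram": {
                "field": "price",
                "interval": 10,
                "offset": 5
            }
        }
    }
}
```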
Closes #9417, Closes #9505
The `analyzer` setting is now the base setting, and `search_analyzer`
is simply an override of the search time analyzer. When setting
`search_analyzer`, `analyzer` must be set.
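A minimal mapping sketch (field name and analyzers are illustrative):
```js
{
    "properties": {
        "title": {
            "type": "string",
            "analyzer": "standard",
            "search_analyzer": "whitespace"
        }
    }
}
```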
closes #9371
Extended_stats now displays the upper and lower bounds on standard deviations (e.g. avg +/- std).
The default is to show 2 standard deviations above/below, but this can be changed
using the `sigma` parameter, which accepts non-negative doubles.
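For example, requesting bounds at three standard deviations (field and aggregation names are illustrative):
```js
{
    "aggs": {
        "grade_stats": {
            "extended_stats": {
                "field": "grade",
                "sigma": 3
            }
        }
    }
}
```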
Closes #9356
This change makes the response API object for Histogram Aggregations the same for all types of Histogram, and does the same for all types of Ranges.
The change removes getBucketByKey() from all aggregations except filters and terms. It also reduces the methods on the Bucket class to just getKey() and getKeyAsString().
The getKey() method returns Object and the actual type it returns will be appropriate for the type of aggregation being run, e.g. date_histogram will return a DateTime for this method and histogram will return a Number.
Related to #9049.
By default, the value for `_timestamp` is `now`, which means the date the document was processed by the indexing chain.
You can now reject documents which do not provide a `_timestamp` value by setting `ignore_missing` to false (defaults to `true`):
```js
{
    "tweet" : {
        "_timestamp" : {
            "enabled" : true,
            "ignore_missing" : false
        }
    }
}
```
When you upgrade the cluster to 1.5 or master, an index created with 1.4 is automatically migrated to the 1.5 syntax.
Let's say you have defined this in elasticsearch 1.4.x:
```js
DELETE test

PUT test
{
    "settings": {
        "number_of_shards": 1,
        "number_of_replicas": 0
    }
}

PUT test/type/_mapping
{
    "type" : {
        "_timestamp" : {
            "enabled" : true,
            "default" : null
        }
    }
}
```
After migration, the mapping becomes:
```js
{
    "test": {
        "mappings": {
            "type": {
                "_timestamp": {
                    "enabled": true,
                    "store": false,
                    "ignore_missing": false
                },
                "properties": {}
            }
        }
    }
}
```
Closes #8882.
This adds a new boolean (index.merge.scheduler.auto_throttle) dynamic
setting, default true (matching Lucene), to adaptively set the IO rate
limit for merges over time.
This is more flexible than the previous fixed rate throttling because
it responds depending on the incoming merge rate, so search-heavy
applications that are not doing much indexing will see merges heavily
throttled while indexing-heavy cases will lighten the throttle so
merges can keep up with incoming indexing.
The fixed rate throttling is still available as a fallback if things
go horribly wrong.
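A sketch of disabling the adaptive throttle dynamically, using the setting named above (the index name is illustrative):
```sh
curl -XPUT 'localhost:9200/my_index/_settings' -d '{
    "index.merge.scheduler.auto_throttle": false
}'
```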
Closes #9243, Closes #9133
Today we give the HTTP status back within the HTTP response itself and within the JSON response as well:
```sh
curl localhost:9200/
```
```js
{
    "status" : 200,
    "name" : "Red Wolf",
    "version" : {
        "number" : "2.0.0",
        "build_hash" : "6837a61d8a646a2ac7dc8da1ab3c4ab85d60882d",
        "build_timestamp" : "2014-08-19T13:55:56Z",
        "build_snapshot" : true,
        "lucene_version" : "4.9"
    },
    "tagline" : "You Know, for Search"
}
```
Before Elasticsearch 1.0, the type was allowed to be passed as the root
element when uploading a document. However, this was ambiguous if the
mappings also contained a field with the same name as the type. The
behavior was changed in 1.0 to not allow this, but a setting was added
for backwards compatibility. This change removes the setting for 2.0.
The header indicates how many shard copies (primary and replica shards) a write was supposed to go to, on how many
shard copies the write succeeded, and potentially captures shard failures if writing to a replica shard fails.
For async writes it also includes the number of shards on which the write is still pending.
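A hedged sketch of the header as it might appear in an index response; the exact field names are assumptions based on this description:
```js
{
    "_index": "twitter",
    "_type": "tweet",
    "_id": "1",
    "_version": 1,
    "_shards": {
        "total": 2,
        "successful": 1,
        "failed": 0
    }
}
```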
Closes #7994
This commit adds a verbose flag to the _segments API. Currently the
only additional information returned when set to true is the full
ram tree from Lucene, exposed as an additional element per segment.
This is our only cache which is not 'exact' and might allow for stale results.
Additionally, a similar cache that needs to perform lookups in other
indices in order to run queries is the script index, and for that index we rely
on the filesystem cache, so we should probably do the same with terms filter
lookups.
Close #9056
The `cluster.routing.allocation.balance.primary` setting has caused
a lot of confusion in the past while it has very little benefit from a
shard allocation point of view. Users tend to modify this value to
evenly distribute primaries across the nodes, which is dangerous since
a primary flag on its own can trigger relocations. The primary flag for a shard
should not have any impact on cluster performance unless the high-level feature
suffering from primary hotspots is buggy. Yet, this setting was intended to be a
tie-breaker, which is not necessary anymore since the algorithm is deterministic.
This commit removes this setting entirely.
In some situations the shard balancing weight delta becomes negative. Yet,
a negative delta is always treated as `well balanced`, which is wrong. I wasn't
able to reproduce the issue in any way other than using the real-world data
from issue #9023. This commit adds a fix for absolute deltas as well as a base
test class that allows building tests or simulations from the cat API output.
Closes #9023
This feature adds an optional orientation parameter to the GeoJSON document and geo_shape mapping enabling users to explicitly define how they want Elasticsearch to interpret vertex ordering. The default uses the right-hand rule (counterclockwise for outer ring, clockwise for inner ring) complying with OGC Simple Feature Access standards. The parameter can be explicitly specified for an entire index using the geo_shape mapping by adding "orientation":{"left"|"right"|"cw"|"ccw"|"clockwise"|"counterclockwise"} and/or overridden on each insert by adding the same parameter to the GeoJSON document.
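A mapping sketch using one of the accepted values (the field name is illustrative):
```js
{
    "properties": {
        "location": {
            "type": "geo_shape",
            "orientation": "ccw"
        }
    }
}
```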
closes #8764
Up to now, all filters could be cached using the `_cache` flag that could be
set to `true` or `false` and the default was set depending on the type of the
`filter`. For instance, `script` filters are not cached by default while
`terms` are. For some filters, the default is more complicated and eg. date
range filters are cached unless they use `now` in a non-rounded fashion.
This commit adds a 3rd option called `auto`, which becomes the default for
all filters. So for all filters a cache wrapper will be returned, and the
decision will be made at caching time, per-segment. Here is the default logic:
- if there is already a cache entry for this filter in the current segment,
then return the cache entry.
- else if the doc id set cannot iterate (eg. script filter) then do not cache.
- else if the doc id set is already cacheable and it has been used twice or
more in the last 1000 filters then cache it.
- else if the filter is costly (eg. multi-term) and has been used twice or more
in the last 1000 filters then cache it.
- else if the doc id set is not cacheable and it has been used 5 times or more
in the last 1000 filters, then load it into a cacheable set and cache it.
- else return the uncached set.
So for instance geo-distance filters and script filters are going to use this
new default and are not going to be cached because of their iterators.
Similarly, date range filters are going to use this default all the time, but
it is very unlikely that those that use `now` in a not rounded fashion will get
reused so in practice they won't be cached.
`terms`, `range`, ... filters produce cacheable doc id sets with good iterators
so they will be cached as soon as they have been used twice.
Filters that don't produce cacheable doc id sets such as the `term` filter will
need to be used 5 times before being cached. This ensures that we don't spend
CPU iterating over all documents matching such filters unless we have good
evidence of reuse.
One last interesting point about this change is that it also applies to compound
filters. So if you keep on repeating the same `bool` filter with the same
underlying clauses, it will be cached on its own while up to now it used to
never be cached by default.
`_cache: true` has been changed to only cache on large segments, in order to not
pollute the cache since small segments should not be the bottleneck anyway.
However `_cache: false` still has the same semantics.
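A hedged sketch of requesting the new behaviour explicitly; whether `auto` can be spelled out like this (rather than just being the implicit default) is an assumption based on this description:
```js
{
    "query": {
        "filtered": {
            "query": { "match_all": {} },
            "filter": {
                "terms": {
                    "user": ["kimchy", "elasticsearch"],
                    "_cache": "auto"
                }
            }
        }
    }
}
```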
Close #8449
Add a new ignore_idle_threads boolean option (default true) to
/_nodes/hot_threads, to filter out threads in known idle places like
waiting on a socket select or on pulling the next task from an empty
queue.
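For example, to include the idle threads that are now filtered out by default:
```sh
curl 'localhost:9200/_nodes/hot_threads?ignore_idle_threads=false'
```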
Closes #8985, Closes #8908
The setting `mapping.date.round_ceil` (and the undocumented setting
`index.mapping.date.parse_upper_inclusive`) affect how date ranges using
`lte` are parsed. In #8556 the semantics of date rounding were
solidified, eliminating the need to have different parsing functions
depending on whether the date is inclusive or exclusive.
This change removes these legacy settings and improves the tests
for the date math parser (now at 100% coverage!). It also removes the
unnecessary function `DateMathParser.parseTimeZone` for which
the existing `DateTimeZone.forID` handles all use cases.
Any user previously using these settings can refer to the changed
semantics and change their query accordingly. This is a breaking change
because even dates without datemath previously used the different
parsing functions depending on context.
closes #8598, closes #8889
I replaced "high frequent terms" with "high frequency terms" and "low frequent terms" with "low frequency terms".
Alternatively, we could write, "highly frequent terms" and "minimally frequent terms" (or just "rare terms").
Closes #8962
We only have a single gateway since ES 1.3. There is no need to keep all
these abstractions and nested packages. We can fold most of it into simpler
structures.
If you use the java-package tool to create Java packages, those
paths also should be added to the Debian init script.
Also updated the docs to say that it is OK to install Java 8.
Closes #7383
This change adds a 'http.publish_port' setting to the HTTP module to configure
the port which HTTP clients should use when communicating with the node. This
is useful when running on a bridged network interface or when running behind
a proxy or firewall.
Closes #8807, Closes #8137
Upgrades lucene to latest, and supports the BEST_COMPRESSION parameter
now supported (with backwards compatibility, etc) in Lucene.
This option uses deflate, tuned for highly compressible data.
index.codec::
The default value compresses stored data with LZ4 compression, but
this can be set to best_compression for a higher compression ratio,
at the expense of slower stored fields performance.
IMO it's safest to implement as a named codec here, because ES already
has logic to handle this correctly, and because it's unrealistic to have
a plethora of options to Lucene's default codec... we are practically
limited in Lucene to what we can support with back compat, so I don't
think we should overengineer this and add additional unnecessary plumbing.
See also:
https://issues.apache.org/jira/browse/LUCENE-5914
https://issues.apache.org/jira/browse/LUCENE-6089
https://issues.apache.org/jira/browse/LUCENE-6090
https://issues.apache.org/jira/browse/LUCENE-6100
Closes #8863
1. Enable the repository using "add-apt-repository" to avoid the error "No command 'deb' found".
2. Add "sudo" to the update and install commands.
Closes #8691
Related to #8667:
Some QueryBuilders have been deprecated in 1.x branches. We removed them in 2.0.
Removed
-------
* `textPhrase(...)`
* `textPhrasePrefix(...)`
* `textPhrasePrefixQuery(...)`
* `filtered(...)`
* `inQuery(...)`
* `commonTerms(...)`
* `queryString(...)`
* `simpleQueryString(...)`
Closes #8721.
I'm not sure if the `distance-units` section is totally clear: when using the 'Geohash Cell Filter' and omitting a unit, the default is to interpret the integer as the 'length of the geohash prefix', not to default it to 'meter'. Maybe I'm being pedantic.
Closes #8744
Inner hits allows embedding the nested inner objects, child documents, or the parent document that contributed to the matching of the returned search hit as inner hits, which would otherwise be hidden.
Closes #8153, Closes #3022, Closes #3152
Adds an `ignore_like` parameter to the MLT Query, which simply tells the
algorithm to skip all the terms from the given documents. This could be useful
in order to better guide nearest-neighbor search by telling the algorithm to
never explore the space spanned by the given `ignore_like` docs. In essence we
are interested in the characteristics of a given item, but not in the ones
provided by `ignore_like`, thereby forcing the algorithm to go deeper in its
selection of terms. Note that this is different from simply performing a
must-not boolean query on the unliked items. The syntax is exactly the same as the
`like` parameter.
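A hedged sketch; the document-specification form of `like` (and therefore of `ignore_like`) is assumed here, and the index, type, ids and fields are illustrative:
```js
{
    "query": {
        "more_like_this": {
            "fields": ["title", "description"],
            "like": [
                { "_index": "items", "_type": "item", "_id": "1" }
            ],
            "ignore_like": [
                { "_index": "items", "_type": "item", "_id": "2" }
            ],
            "min_term_freq": 1
        }
    }
}
```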
Closes #8674
function_score matched each document regardless of the computed score.
This commit adds a query parameter `min_score` (default -Float.MAX_VALUE).
Documents that have a score lower than this threshold will not be matched.
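A minimal sketch (the function, field and threshold are illustrative):
```js
{
    "query": {
        "function_score": {
            "query": { "match_all": {} },
            "functions": [
                { "filter": { "term": { "featured": true } }, "weight": 2 }
            ],
            "min_score": 0.5
        }
    }
}
```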
closes #6952
Today, Elasticsearch has a separate merge thread pool checking once
per second (by default) if any merges are necessary, but this is no
longer necessary since we can and do now tell Lucene's
ConcurrentMergeScheduler never to "hard pause" threads when merges
fall behind, since we do our own index throttling.
This change goes back to letting Lucene launch merges as needed, and
removes these two expert settings:
index.merge.force_async_merge
index.merge.async_interval
Now merges kick off immediately instead of waiting up to 1 second
before running.
Closes #8643
Always use the LocalGateway* equivalents
We already check in the LocalGateway whether a node is a client node, or
is not master-eligible, and skip writing the state there. This allows us
to remove this code that was previously used only for tribe nodes (which
are not master eligible anyway and wouldn't write state) and in
tests (which can shake more bugs out)
Elasticsearch no longer unlocks the Lucene index on startup (this was
dangerous, and could possibly lead to corruption).
Added the new serbian_normalization TokenFilter from Lucene.
NoLockFactory is no longer supported (index.store.fs.fs_lock = none),
and if you have a typo in your fs_lock you'll now hit a StoreException
instead of silently using NoLockFactory.
Closes #8588
We speak of the term vectors of a document, where each field has an associated
stored term vector. Since by default we are requesting all the term vectors of
a document, the HTTP request endpoint should rather be called `_termvectors`
instead of `_termvector`. The usage of `_termvector` is now deprecated, as
well as the transport client call to termVector and prepareTermVector.
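For example (index, type and id are illustrative):
```sh
# preferred, plural form
curl -XGET 'localhost:9200/twitter/tweet/1/_termvectors'
# still works, but deprecated
curl -XGET 'localhost:9200/twitter/tweet/1/_termvector'
```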
Closes #8484
make the "es090" postings format read-only, just to support old segments. There is a test version that subclasses it with write-capability for testing.
Closes #8571
In addition to `_source`, the following variables are available through
the `ctx` map: `_index`, `_type`, `_id`, `_version`, `_routing`,
`_parent`, `_timestamp`, `_ttl`.
Some of these fields are more useful still within the context of an
Update By Query, see #1607, #2230, #2231.
Prior to this change, all features (_alias, _mapping, _settings, _warmer) were run regardless of which features were actually requested. This change fixes the request object to resolve this bug.
Updated log4j link so it doesn't point to log4j 2.0 but version 1.2. Clarified which formats are supported and briefly explained what loggers and appenders are, plus added a link to the log4j docs.
Closes #5305, Closes #8455
This prevents too-difficult regular expressions from consuming
excessive RAM/CPU; the default max_determinized_states is 10,000 (same
as Lucene) but query_string and the regexp query/filter can override it
per-request.
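A sketch of the per-request override on the regexp query (field and value are illustrative):
```js
{
    "query": {
        "regexp": {
            "user": {
                "value": "a.*b.*c.*",
                "max_determinized_states": 20000
            }
        }
    }
}
```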
This also upgrades to a new Lucene 5.0.0 snapshot.
Closes #8386, Closes #8357
This has a lot of improvements in lucene, particularly around memory usage, merging, safety, compressed bitsets, etc.
On the elasticsearch side, summary of the larger changes:
* API changes: postings API became a "pull" rather than "push", collector API became per-segment, etc.
* packaging changes: add lucene-backwards-codecs.jar as a dependency.
* improvements to boolean filtering: especially ensuring it will not be slow for SparseBitSet.
* use generic BitSet api in plumbing so that concrete bitset type is an implementation detail.
* use generic BitDocIdSetFilter api for dedicated bitset cache, so there is type safety.
* changes to support atomic commits
* implement Accountable.getChildResources (detailed memory usage API) for fielddata, etc
* change handling of IndexFormatTooOld/New, since they no longer extend CorruptIndexException
Closes #8347.
Squashed commit of the following:
commit d90d53f5f21b876efc1e09cbd6d63c538a16cd89
Author: Simon Willnauer <simonw@apache.org>
Date: Wed Nov 5 21:35:28 2014 +0100
Make default codec/postings/docvalues format constants
commit cb66c22c71cd304a36e7371b199a8c279908ae37
Merge: d4e2f6d ad4ff43
Author: Robert Muir <rmuir@apache.org>
Date: Wed Nov 5 11:41:13 2014 -0500
Merge branch 'master' into enhancement/lucene_5_0_upgrade
commit d4e2f6dfe767a5128c9b9ae9e75036378de08f47
Merge: 4e5445c 4111d93
Author: Robert Muir <rmuir@apache.org>
Date: Wed Nov 5 06:26:32 2014 -0500
Merge branch 'master' into enhancement/lucene_5_0_upgrade
commit 4e5445c775f580730eb01360244e9330c0dc3958
Author: Robert Muir <rmuir@apache.org>
Date: Tue Nov 4 16:19:19 2014 -0500
FixedBitSet -> BitSet
commit 9887ea73e8b857eeda7f851ef3722ef580c92acf
Merge: 1bf8894 fc84666
Author: Robert Muir <rmuir@apache.org>
Date: Tue Nov 4 15:26:25 2014 -0500
Merge branch 'master' into enhancement/lucene_5_0_upgrade
commit 1bf8894430de3e566d0dc5623b0cc28b0d674ebb
Author: Robert Muir <rmuir@apache.org>
Date: Tue Nov 4 15:22:51 2014 -0500
remove nocommit
commit a9c2a2259ff79c69bae7806b64e92d5f472c18c8
Author: Robert Muir <rmuir@apache.org>
Date: Tue Nov 4 13:48:43 2014 -0500
turn jenkins red again
commit 067baaaa4d52fce772c81654dcdb5051ea79139f
Author: Robert Muir <rmuir@apache.org>
Date: Tue Nov 4 13:18:21 2014 -0500
unzip from stream
commit 82b6fba33d362aca2313cc0ca495f28f5ebb9260
Merge: b2214bb 6523cd9
Author: Robert Muir <rmuir@apache.org>
Date: Tue Nov 4 13:10:59 2014 -0500
Merge branch 'master' into enhancement/lucene_5_0_upgrade
commit b2214bb093ec2f759003c488c3c403c8931db914
Author: Robert Muir <rmuir@apache.org>
Date: Tue Nov 4 13:09:53 2014 -0500
go back to my URL until we can figure out what is up with jenkins
commit e7d614172240175a51f580aeaefb6460d21cede9
Author: Robert Muir <rmuir@apache.org>
Date: Tue Nov 4 10:52:54 2014 -0500
try this jenkins
commit 337a3c7704efa7c9809bf373152d711ee55f876c
Author: Simon Willnauer <simonw@apache.org>
Date: Tue Nov 4 16:17:49 2014 +0100
Rename temp-files under lock to prevent metadata reads while renaming
commit 77d5ba80d0a76efa549dd753b9f114b2f2d2d29c
Author: Robert Muir <rmuir@apache.org>
Date: Tue Nov 4 10:07:11 2014 -0500
continue to treat too-old/too-new as corruption for now
commit 98d0fd2f4851bc50e505a94ca592a694d502c51c
Author: Robert Muir <rmuir@apache.org>
Date: Tue Nov 4 09:24:21 2014 -0500
fix last nocommit
commit 643fceed66c8caf22b97fc489d67b4a2a90a1a1c
Author: Simon Willnauer <simonw@apache.org>
Date: Tue Nov 4 14:46:17 2014 +0100
remove NoSuchDirectoryException
commit 2e43c4feba05cfaf451df70f946c0930cbcc4557
Merge: 93826e4 8163107
Author: Simon Willnauer <simonw@apache.org>
Date: Tue Nov 4 14:38:00 2014 +0100
Merge branch 'master' into enhancement/lucene_5_0_upgrade
commit 93826e4d56a6a97c2074669014af77ff519bde63
Merge: 7f10129 44e24d3
Author: Simon Willnauer <simonw@apache.org>
Date: Tue Nov 4 12:54:27 2014 +0100
Merge branch 'master' into enhancement/lucene_5_0_upgrade
Conflicts:
src/main/java/org/elasticsearch/index/store/DistributorDirectory.java
src/main/java/org/elasticsearch/index/store/Store.java
src/main/java/org/elasticsearch/indices/recovery/RecoveryStatus.java
src/test/java/org/elasticsearch/index/store/DistributorDirectoryTest.java
src/test/java/org/elasticsearch/index/store/StoreTest.java
src/test/java/org/elasticsearch/indices/recovery/RecoveryStatusTests.java
commit 7f10129364623620575c109df725cf54488b3abb
Author: Adrien Grand <jpountz@gmail.com>
Date: Tue Nov 4 11:32:24 2014 +0100
Fix TopHitsAggregator to not ignore the top-level/leaf collector split.
commit 042fadc8603b997bdfdc45ca44fec70dc86774a6
Author: Adrien Grand <jpountz@gmail.com>
Date: Tue Nov 4 11:31:20 2014 +0100
Remove MatchDocIdSet in favor of DocValuesDocIdSet.
commit 7d877581ff5db585a674c95ac391ac78a0282826
Author: Adrien Grand <jpountz@gmail.com>
Date: Tue Nov 4 11:10:08 2014 +0100
Make the and filter use the cost API.
Lucene 5 ensured that cost() can safely be used, and this will have the benefit
that the order in which filters are specified is not important anymore (only
for slow random-access filters in practice).
commit 78f1718aa2cd82184db7c3a8393e6215f43eb4a8
Author: Robert Muir <rmuir@apache.org>
Date: Mon Nov 3 23:55:17 2014 -0500
fix previous eclipse import braindamage
commit 186c40e9258ce32f22a9a714ab442a310b6376e0
Author: Robert Muir <rmuir@apache.org>
Date: Mon Nov 3 22:32:34 2014 -0500
allow child queries to exhaust iterators again
commit b0b1271305e1b6d0c4c4da51a3c54df1aa5c0605
Author: Ryan Ernst <ryan@iernst.net>
Date: Mon Nov 3 14:50:44 2014 -0800
Fix nocommit for mapping output. index_options will not be printed if
the field is not indexed.
commit ba223eb85e399c9620a347a983e29bf703953e7a
Author: Ryan Ernst <ryan@iernst.net>
Date: Mon Nov 3 14:07:26 2014 -0800
Remove no commit for chinese analyzer provider. We should have a
separate issue to address not using this provider on new indexes.
commit ca554b03c4471797682b2fb724f25205cf040c4a
Author: Ryan Ernst <ryan@iernst.net>
Date: Mon Nov 3 13:41:59 2014 -0800
Fix stop tests
commit de67c4653ec47dee9c671390536110749d2bb05f
Author: Ryan Ernst <ryan@iernst.net>
Date: Mon Nov 3 12:51:17 2014 -0800
Remove analysis nocommits, switching over to Lucene43*Filters for
backcompat
commit 50cae9bec72c25c33a1ab8a8931bccb3355171e2
Author: Robert Muir <rmuir@apache.org>
Date: Mon Nov 3 15:32:25 2014 -0500
add ram accounting and TODO lazy-loading (its no worse than master, can be a followup improvement) for suggesters
commit 7a7f0122f138684b312d0f0b03dc2a9c16c15f9c
Author: Robert Muir <rmuir@apache.org>
Date: Mon Nov 3 15:11:26 2014 -0500
bump lucene version
commit cd0cae5c35e7a9e049f49ae45431f658fb86676b
Merge: 446bc09 3c72073
Author: Robert Muir <rmuir@apache.org>
Date: Mon Nov 3 14:49:05 2014 -0500
Merge branch 'master' into enhancement/lucene_5_0_upgrade
commit 446bc09b4e8bf4602d3c252b53ddaa0da65cce2f
Author: Robert Muir <rmuir@apache.org>
Date: Mon Nov 3 14:46:30 2014 -0500
remove hack
commit a19d85a968d82e6d00292b49630ef6ff2dbf2f32
Author: Robert Muir <rmuir@apache.org>
Date: Mon Nov 3 12:53:11 2014 -0500
dont create exceptions with circular references on corruption (will open a PR for this)
commit 0beefb9e821d97c37e90ec556d81ac7b00369b8a
Author: Robert Muir <rmuir@apache.org>
Date: Mon Nov 3 11:47:14 2014 -0500
temporarily add craptastic detector for this horrible bug
commit e9f2d298bff75f3d1591f8622441e459c3ce7ac3
Author: Robert Muir <rmuir@apache.org>
Date: Mon Nov 3 10:56:01 2014 -0500
add nocommit
commit e97f1d50a91a7129650b8effc7a9ecf74ca0569a
Merge: c57a3c8 f1f50ac
Author: Robert Muir <rmuir@apache.org>
Date: Mon Nov 3 10:12:12 2014 -0500
Merge branch 'master' into enhancement/lucene_5_0_upgrade
commit c57a3c8341ed61dca62eaf77fad6b8b48aeb6940
Author: Robert Muir <rmuir@apache.org>
Date: Mon Nov 3 10:11:46 2014 -0500
fix nocommit
commit dd0e77e4ec07c7011ab5f6b60b2ead33dc2333d2
Author: Robert Muir <rmuir@apache.org>
Date: Mon Nov 3 09:54:09 2014 -0500
nocommit -> TODO, this is in much more places in the codebase, bigger issue
commit 3cc3bf56d72d642059f8fe220d6f2fed608363e9
Author: Ryan Ernst <ryan@iernst.net>
Date: Sat Nov 1 23:59:17 2014 -0700
Remove nocommit and awaitsfix for edge ngram filter test.
commit 89f115245155511c0fbc0d5ee62e63141c3700c1
Author: Ryan Ernst <ryan@iernst.net>
Date: Sat Nov 1 23:57:44 2014 -0700
Fix EdgeNGramTokenFilter logic for version <= 4.3, and fixed instanceof
checks in corresponding tests to correctly check for reverse filter when
applicable.
commit 112df869cd199e36aab0e1a7a288bb1fdb2ebf1c
Author: Robert Muir <rmuir@apache.org>
Date: Sun Nov 2 00:08:30 2014 -0400
execute geo disjoint query/filter as intersects
commit e5061273cc685f1252e9a3a9ae4877ec9bce7752
Author: Robert Muir <rmuir@apache.org>
Date: Sat Nov 1 22:58:59 2014 -0400
remove chinese analyzer from docs
commit ea1af11b8978fcc551f198e24fe21d52806993ef
Author: Robert Muir <rmuir@apache.org>
Date: Sat Nov 1 22:29:00 2014 -0400
fix ram accounting bug
commit 53c0a42c6aa81aa6bf81d3aa77b95efd513e0f81
Merge: e3bcd3c 6011a18
Author: Robert Muir <rmuir@apache.org>
Date: Sat Nov 1 22:16:29 2014 -0400
Merge branch 'master' into enhancement/lucene_5_0_upgrade
commit e3bcd3cc07a4957e12c7b3affc462c31290a9186
Author: Robert Muir <rmuir@apache.org>
Date: Sat Nov 1 22:15:01 2014 -0400
fix url-email back compat (thanks ryan)
commit 91d6b096a96c357755abee167098607223be1aad
Author: Robert Muir <rmuir@apache.org>
Date: Sat Nov 1 22:11:26 2014 -0400
bump lucene version
commit d2bb9568df72b37ec7050d25940160b8517394bc
Author: Robert Muir <rmuir@apache.org>
Date: Sat Nov 1 20:33:07 2014 -0400
remove nocommit
commit 1d049c471e19e5c457262c7399c5bad9e023b2e3
Author: Robert Muir <rmuir@apache.org>
Date: Sat Nov 1 20:28:58 2014 -0400
fix eclipse to group org/com imports together: without this, its madness
commit 09d8c1585ee99b6e63be032732c04ef6fed84ed2
Author: Robert Muir <rmuir@apache.org>
Date: Sat Nov 1 14:27:41 2014 -0400
remove nocommit, if you dont liek it, print assembly and tell me how it can be better
commit 8a6a294313fdf33b50c7126ec20c07867ecd637c
Author: Adrien Grand <jpountz@gmail.com>
Date: Fri Oct 31 20:01:55 2014 +0100
Remove deprecated usage of DocIdSets.newDocIDSet.
commit 601bee60543610558403298124a84b1b3bbd1045
Author: Robert Muir <rmuir@apache.org>
Date: Fri Oct 31 14:13:18 2014 -0400
maybe one of these zillions of annotations will stop thread leaks
commit 9d3f69abc7267c5e455aefa26db95cb554b02d62
Author: Robert Muir <rmuir@apache.org>
Date: Fri Oct 31 14:05:39 2014 -0400
fix some analysis nocommits
commit 312e3a29c77214b8142d21c33a6b2c2b151acf9a
Author: Adrien Grand <jpountz@gmail.com>
Date: Fri Oct 31 18:28:45 2014 +0100
Remove XConstantScoreQuery/XFilteredQuery/ApplyAcceptedDocsFilter.
commit 5a0cb9f8e167215df7f1b1fad11eec6e6c74940f
Author: Adrien Grand <jpountz@gmail.com>
Date: Fri Oct 31 17:06:45 2014 +0100
Fix misleading documentation of DocIdSets.toCacheable.
commit 8b4ef2b5b476fff4c79c0c2a0e4769ead26cf82b
Author: Adrien Grand <jpountz@gmail.com>
Date: Fri Oct 31 17:05:59 2014 +0100
Fix CustomRandomAccessFilterStrategy to override the right method.
commit d7a9a407a615987cfffc651f724fbd8795c9c671
Author: Adrien Grand <jpountz@gmail.com>
Date: Fri Oct 31 16:21:35 2014 +0100
Better handle the special case when there is a single SHOULD clause.
commit 648ad389f07e92dfc451f345549c9841ba5e4c9a
Author: Adrien Grand <jpountz@gmail.com>
Date: Fri Oct 31 15:53:38 2014 +0100
Cut over XBooleanFilter to BitDocIdSet.Builder.
The idea is similar to what happened to Lucene's BooleanFilter.
Yet XBooleanFilter is a bit more sophisticated and I had to slightly
change the way it is implemented in order to make it work. The main difference
with before is that slow filters are now applied lazily, so eg. if you have 3
MUST clauses, two with a fast iterator and the third with a slow iterator, the
previous implementation used to apply the fast iterators first and then only
check the slow filter for bits which were set in the bit set. Now we are
computing a bit set based on the fast must clauses and then basically returning
a BitsFilteredDocIdSet.wrap(bitset, slowClause).
Other than that, BooleanFilter still uses the bitset optimizations when or-ing
and and-ind filters.
Another improvement is that BooleanFilter is now aware of the cost API.
commit b2dad312b4bc9f931dc3a25415dd81c0d9deee08
Author: Robert Muir <rmuir@apache.org>
Date: Fri Oct 31 10:18:53 2014 -0400
clear nocommit
commit 4851d2091e744294336dfade33906c75fbe695cd
Author: Simon Willnauer <simonw@apache.org>
Date: Fri Oct 31 15:15:16 2014 +0100
cut over to RoaringDocIdSet
commit ca6aec24a901073e65ce4dd6b70964fd3612409e
Author: Simon Willnauer <simonw@apache.org>
Date: Fri Oct 31 14:57:30 2014 +0100
make nocommit more explicit
commit d0742ee2cb7a6c48b0bbb31580b7fbcebdb6ec40
Author: Robert Muir <rmuir@apache.org>
Date: Fri Oct 31 09:55:24 2014 -0400
fix standardtokenizer nocommit
commit 7d6faccafff22a86af62af0384838391d46695ca
Author: Simon Willnauer <simonw@apache.org>
Date: Fri Oct 31 14:54:08 2014 +0100
fix compilation
commit a038a405c1ff6458ad294e6b5bc469e622f699d0
Author: Simon Willnauer <simonw@apache.org>
Date: Fri Oct 31 14:53:43 2014 +0100
fix compilation
commit 30c9e307b1f5d80e2deca3392c0298682241207f
Author: Simon Willnauer <simonw@apache.org>
Date: Fri Oct 31 14:52:35 2014 +0100
fix compilation
commit e5139bc5a0a9abd2bdc6ba0dfbcb7e3c2e7b8481
Author: Robert Muir <rmuir@apache.org>
Date: Fri Oct 31 09:52:16 2014 -0400
clear nocommit here
commit 85dd2cedf7a7994bed871ac421cfda06aaf5c0a5
Author: Simon Willnauer <simonw@apache.org>
Date: Fri Oct 31 14:46:17 2014 +0100
fix CompletionPostingsFormatTest
commit c0f3781f616c9b0ee3b5c4d0998810f595868649
Author: Robert Muir <rmuir@apache.org>
Date: Fri Oct 31 09:38:00 2014 -0400
add tests for these analyzers
commit 51f9999b4ad079c283ae762c862fd0e22d00445f
Author: Simon Willnauer <simonw@apache.org>
Date: Fri Oct 31 14:10:26 2014 +0100
remove nocommit - this is not an issue
commit fd1388fa03e622b0738601c8aeb2dbf7949a6dd2
Author: Martijn van Groningen <martijn.v.groningen@gmail.com>
Date: Fri Oct 31 14:07:01 2014 +0100
Remove redundant null check
commit 3d6dd51b0927337ba941a235446b22e8cd500dc3
Author: Martijn van Groningen <martijn.v.groningen@gmail.com>
Date: Fri Oct 31 14:01:37 2014 +0100
Removed the work around to prevent p/c error when invoking #iterator() twice, because the custom query filter wrapper now doesn't transform the result to a cache doc id set any more.
I think the transforming to a cachable doc id set in CustomQueryWrappingFilter isn't needed at all, because we use the DocIdSet only once and because of that is just slowed things down.
commit 821832a537e00cd1216064b379df3e01d2911d3a
Author: Simon Willnauer <simonw@apache.org>
Date: Fri Oct 31 13:54:33 2014 +0100
one more nocommit
commit 77eb9ea4c4ea50afb2680c29682ddcb3851a9d4f
Author: Martijn van Groningen <martijn.v.groningen@gmail.com>
Date: Fri Oct 31 13:52:29 2014 +0100
Remove cast
commit a400573c034ed602221f801b20a58a9186a06eae
Author: Simon Willnauer <simonw@apache.org>
Date: Fri Oct 31 13:49:24 2014 +0100
fix stop filter
commit 51746087cf8ec34c4d20aa05ba8dbff7b3b43eec
Author: Simon Willnauer <simonw@apache.org>
Date: Fri Oct 31 13:21:36 2014 +0100
fix changed semantics of FBS.nextSetBit to check for NO_MORE_DOCS
commit 8d0a4e2511310f1293860823fe3ba80ac771bbe3
Author: Robert Muir <rmuir@apache.org>
Date: Fri Oct 31 08:13:44 2014 -0400
do the bogus cast differently
commit 46a5cc5732dea096c0c80ae5ce42911c9c51e44e
Author: Simon Willnauer <simonw@apache.org>
Date: Fri Oct 31 13:00:16 2014 +0100
I hate it but P/C now passes
commit 580c0c2f82bbeacf217e594f22312b11d1bdb839
Merge: a9d3c00 1645434
Author: Robert Muir <rmuir@apache.org>
Date: Fri Oct 31 06:54:31 2014 -0400
fix nocommit/classcast
commit a9d3c004d62fe04989f49a897e6ff84973c06eb9
Author: Adrien Grand <jpountz@gmail.com>
Date: Fri Oct 31 08:49:31 2014 +0100
Update TODO.
commit aa75af0b407792aeef32017f03a6f442ed970baa
Author: Robert Muir <rmuir@apache.org>
Date: Thu Oct 30 19:18:25 2014 -0400
clear obselete nocommits from lucene bump
commit d438534cf41fcbe2d88070e2f27c994625e082c2
Author: Robert Muir <rmuir@apache.org>
Date: Thu Oct 30 18:53:20 2014 -0400
throw classcastexception when ES abuses regular filtercache for nested docs
commit 2c751f3a8feda43ec127c34769b069de21f3d16f
Author: Robert Muir <rmuir@apache.org>
Date: Thu Oct 30 18:31:34 2014 -0400
bump lucene revision, fix tests
commit d6ef7f6304ae262bf6228a7d661b2a452df332be
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Oct 30 22:37:58 2014 +0100
fix merge problems
commit de9d361f88a9ce6bb3fba85285de41f223c95767
Merge: 41f6aab f6b37a3
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Oct 30 22:28:59 2014 +0100
Merge branch 'master' into enhancement/lucene_5_0_upgrade
Conflicts:
pom.xml
src/main/java/org/elasticsearch/Version.java
src/main/java/org/elasticsearch/gateway/local/state/meta/MetaDataStateFormat.java
commit 41f6aab388aa80c40b08a2facab2617576203a0d
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Oct 30 17:48:46 2014 +0100
fix potiential NPE
commit c4428b12e1ae838b91e847df8b4a8be7f49e10f4
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Oct 30 17:38:46 2014 +0100
don't advance iterator in a match(doc) method
commit 28ab948e99e3ea4497c9b1e468384806ba7e1790
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Oct 30 17:34:58 2014 +0100
don't advance iterator in a match(doc) method
commit eb0f33f6634fadfcf4b2bf7327400e568f0427bb
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Oct 30 16:55:54 2014 +0100
fix GeoUtilsTest
commit 7f711fe3eaf73b6c2268cf42d5a41132a61ad831
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Oct 30 16:43:16 2014 +0100
Use a dedicated default index option if field type is not indexed by default
commit 78e3f37ab779e3e1b25b45a742cc86ab5f975149
Author: Robert Muir <rmuir@apache.org>
Date: Thu Oct 30 10:56:14 2014 -0400
disable this test with AwaitsFix to reduce noise
commit 9a590f563c8e03a99ecf0505c92d12d7ab20d11d
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Oct 30 09:38:49 2014 +0100
fix lucene version
commit abe3ca1d8bb6b5101b545198f59aec44bacfa741
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Oct 30 09:35:05 2014 +0100
fix AnalyzingCompletionLookupProvider to wrok with new codec API
commit 464293b245852d60bde050c6d3feb5907dcfbf5f
Author: Robert Muir <rmuir@apache.org>
Date: Thu Oct 30 00:26:00 2014 -0400
don't try to write stuff to tests class directory
commit 031cc6c19f4fe4423a034b515f77e5a0e282a124
Author: Robert Muir <rmuir@apache.org>
Date: Thu Oct 30 00:12:36 2014 -0400
AwaitsFix these known issues to reduce noise
commit 4600d51891e35847f2d344247d6f915a0605c0d1
Author: Robert Muir <rmuir@apache.org>
Date: Thu Oct 30 00:06:53 2014 -0400
openbitset lives on
commit 8492bae056249e2555d24acd55f1046b66a667c4
Author: Robert Muir <rmuir@apache.org>
Date: Wed Oct 29 23:42:54 2014 -0400
fixes for filter tests
commit 31f24ce4efeda31f97eafdb122346c7047a53bf2
Author: Robert Muir <rmuir@apache.org>
Date: Wed Oct 29 23:12:38 2014 -0400
don't use fieldcache
commit 8480789942fdff14a6d2b2cd8134502fe62f20c8
Author: Robert Muir <rmuir@apache.org>
Date: Wed Oct 29 23:04:29 2014 -0400
ancient index no longer supported
commit 02e78dc7ebdd827533009f542582e8db44309c57
Author: Simon Willnauer <simonw@apache.org>
Date: Wed Oct 29 23:37:02 2014 +0100
fix more tests
commit ff746c6df23c50b3f3ec24922413b962c8983080
Author: Simon Willnauer <simonw@apache.org>
Date: Wed Oct 29 23:08:19 2014 +0100
fix all mapper
commit e4fb84b517107b25cb064c66f83c9aa814a311b2
Author: Simon Willnauer <simonw@apache.org>
Date: Wed Oct 29 22:55:54 2014 +0100
fix distributor tests and cut over to FileStore API
commit 20c850e2cfe3210cd1fb9e232afed8d4ac045857
Author: Simon Willnauer <simonw@apache.org>
Date: Wed Oct 29 22:42:18 2014 +0100
use DOCS_ONLY if index=true and current options == null
commit 44169c108418413cfe51f5ce23ab82047463e4c2
Author: Simon Willnauer <simonw@apache.org>
Date: Wed Oct 29 22:33:36 2014 +0100
Fix index=yes|no settings in mappers
commit a3c5f77987461a18121156ed345d42ded301c566
Author: Simon Willnauer <simonw@apache.org>
Date: Wed Oct 29 21:51:41 2014 +0100
fix several field mappers conversion from setIndexed to indexOptions
commit df84d736908e88a031d710f98e222be68ae96af1
Author: Simon Willnauer <simonw@apache.org>
Date: Wed Oct 29 21:33:35 2014 +0100
fix SourceFieldMapper to be not indexed
commit b2bf01d12a8271a31fb2df601162d0e89924c8f5
Author: Simon Willnauer <simonw@apache.org>
Date: Wed Oct 29 21:23:08 2014 +0100
Cut over to .liv files in store and corruption tests
commit 619004df436f9ef05d24bef1b6a7f084c6b0ad75
Author: Simon Willnauer <simonw@apache.org>
Date: Wed Oct 29 17:05:52 2014 +0100
fix more tests
commit b7ed653a8b464de446e00456bce0a89e47627c38
Author: Simon Willnauer <simonw@apache.org>
Date: Wed Oct 29 16:19:08 2014 +0100
[STORE] Add dedicated method to write temporary files
Recovery writes temporary files which might not end up in the
right distributor directories today. This commit adds a dedicated
API that allows specifying the target file name in order to create the
temporary file in the correct directory.
commit 7d574659f6ae04adc2b857146ad0d8d56ca66f12
Author: Robert Muir <rmuir@apache.org>
Date: Wed Oct 29 10:28:49 2014 -0400
add some leniency to temporary bogus method
commit f97022ea7c2259f7a5cf97d924c59ed75ab65b32
Author: Robert Muir <rmuir@apache.org>
Date: Wed Oct 29 10:24:17 2014 -0400
fix MultiCollector bug
commit b760533128c2b4eb10ad76e9689ef714293dd819
Author: Simon Willnauer <simonw@apache.org>
Date: Wed Oct 29 14:56:08 2014 +0100
CheckIndex is now closeable we need to close it
commit 9dae9fb6d63546a6c2427be2a2d5c8358f5b1934
Author: Simon Willnauer <simonw@apache.org>
Date: Wed Oct 29 14:45:11 2014 +0100
s/Lucene51/Lucene50
commit 7aea9b86856a8c1b06a08e7c312ede1168af1287
Author: Simon Willnauer <simonw@apache.org>
Date: Wed Oct 29 14:42:30 2014 +0100
fix BloomFilterPostingsFormat
commit 16fea6fe842e88665d59cc091e8224e8dc6ce08c
Author: Simon Willnauer <simonw@apache.org>
Date: Wed Oct 29 14:41:16 2014 +0100
fix some codec format issues
commit 3d77aa97dd2c4012b63befef3f2ba2525965e8a6
Author: Simon Willnauer <simonw@apache.org>
Date: Wed Oct 29 14:30:43 2014 +0100
fix CodecTests
commit 6ef823b1fde25657438ace1aabd9d552d6ae215e
Author: Simon Willnauer <simonw@apache.org>
Date: Wed Oct 29 14:26:47 2014 +0100
make it compile
commit 9991eee1fe99435118d4dd42b297ffc83fce5ec5
Author: Robert Muir <rmuir@apache.org>
Date: Wed Oct 29 09:12:43 2014 -0400
add an ugly hack for TopHitsAggregator for now
commit 03e768a01fcae6b1f4cb50bcceec7d42977ac3e6
Author: Simon Willnauer <simonw@apache.org>
Date: Wed Oct 29 14:01:02 2014 +0100
cut over ES090PostingsFormat
commit 463d281faadb794fdde3b469326bdaada25af048
Merge: 0f8740a 8eac79c
Author: Robert Muir <rmuir@apache.org>
Date: Wed Oct 29 08:30:36 2014 -0400
Merge branch 'master' into enhancement/lucene_5_0_upgrade
commit 0f8740a782455a63524a5a82169f6bbbfc613518
Author: Robert Muir <rmuir@apache.org>
Date: Wed Oct 29 01:00:15 2014 -0400
fix/hack remaining filter and analysis issues
commit df534488569da13b31d66e581456dfd4b55156b9
Author: Robert Muir <rmuir@apache.org>
Date: Tue Oct 28 23:11:47 2014 -0400
fix ngrams / openbitset usage
commit 11f5dc3b9887f4da80a0fa1818e1350b30599329
Author: Robert Muir <rmuir@apache.org>
Date: Tue Oct 28 22:42:44 2014 -0400
hack over sort comparators
commit 4ebdc754350f512596f6a02770d223e9f5f7975a
Author: Robert Muir <rmuir@apache.org>
Date: Tue Oct 28 21:27:07 2014 -0400
compiler errors < 100
commit 2d60c9e29de48ccb0347dd87f7201f47b67b83a0
Author: Robert Muir <rmuir@apache.org>
Date: Tue Oct 28 03:13:08 2014 -0400
clear some nocommits around ram usage
commit aaf47fe6c0aabcfb2581dd456fc50edf871da758
Author: Robert Muir <rmuir@apache.org>
Date: Mon Oct 27 12:27:34 2014 -0400
migrate fieldinfo handling
commit ef6ed6d15d8def71cd880d97249678136cd29fe3
Author: Robert Muir <rmuir@apache.org>
Date: Mon Oct 27 12:07:13 2014 -0400
more simple fixes
commit f475e1048ae697dd9da5bd9da445102b0b7bc5b3
Author: Robert Muir <rmuir@apache.org>
Date: Mon Oct 27 11:58:21 2014 -0400
more fielddata ram accounting fixes
commit 16b4239eaa9b4262df258257df4f31d39f28a3a2
Author: Simon Willnauer <simonw@apache.org>
Date: Mon Oct 27 16:47:32 2014 +0100
add missing file
commit 5b542fa2a6da81e36a0c35b8e891a1d8bc58f663
Author: Simon Willnauer <simonw@apache.org>
Date: Mon Oct 27 16:43:29 2014 +0100
cut over completion posting formats - still some nocommits
commit ecdea49404c4ec4e1b78fb54575825f21b4e096e
Author: Robert Muir <rmuir@apache.org>
Date: Mon Oct 27 11:21:09 2014 -0400
fielddata accountable fixes
commit d43da265718917e20c8264abd43342069198fe9c
Author: Simon Willnauer <simonw@apache.org>
Date: Mon Oct 27 16:19:53 2014 +0100
cut over BloomFilterPostings to new API
commit 29b192ba621c14820175775d01242162b88bd364
Author: Robert Muir <rmuir@apache.org>
Date: Mon Oct 27 10:22:51 2014 -0400
fix more analyzers
commit 74b4a0c5283e323a7d02490df469497c722780d2
Author: Robert Muir <rmuir@apache.org>
Date: Mon Oct 27 09:54:25 2014 -0400
fix tests
commit 554084ccb4779dd6b1c65fa7212ad1f64f3a6968
Author: Simon Willnauer <simonw@apache.org>
Date: Mon Oct 27 14:51:48 2014 +0100
maintain suppressed exceptions on CorruptIndexException
commit cf882d9112c5e8ef1e9f2b0f800f7aa59001a4f2
Author: Simon Willnauer <simonw@apache.org>
Date: Mon Oct 27 14:47:17 2014 +0100
commitOnClose=false
commit ebb2a9189ab2f459b7c6c9985be610fd90dfe410
Author: Simon Willnauer <simonw@apache.org>
Date: Mon Oct 27 14:46:06 2014 +0100
cut over indexwriter closing in InternalEngine
commit cd21b3d4706f0b562bd37792d077d60832aff65f
Author: Simon Willnauer <simonw@apache.org>
Date: Mon Oct 27 14:38:10 2014 +0100
fix constant
commit f93f900c4a1c90af3a21a4af5735a7536423fe28
Author: Robert Muir <rmuir@apache.org>
Date: Mon Oct 27 09:50:49 2014 -0400
fix test
commit a9a752940b1ab4699a6a08ba8b34afca82b843fe
Author: Martijn van Groningen <martijn.v.groningen@gmail.com>
Date: Mon Oct 27 09:26:18 2014 +0100
Be explicit about the index options
commit d9ee815babd030fa2ceaec9f467c105ee755bf6b
Author: Simon Willnauer <simonw@apache.org>
Date: Sun Oct 26 20:03:44 2014 +0100
cut over store and directory
commit b3f5c8e39039dd8f5caac0c4dd1fc3b1116e64ca
Author: Robert Muir <rmuir@apache.org>
Date: Sun Oct 26 13:08:39 2014 -0400
more test fixes
commit 8842f2684e3606aae0860c27f7a4c53e273d47fb
Author: Robert Muir <rmuir@apache.org>
Date: Sun Oct 26 12:14:52 2014 -0400
tests manual labor
commit c43de5aec337919a3fdc3638406dff17fc80bc98
Author: Robert Muir <rmuir@apache.org>
Date: Sun Oct 26 11:04:13 2014 -0400
BytesRef -> BytesRefBuilder
commit 020c0d087a2f37566a1db390b0e044ebab030138
Author: Martijn van Groningen <martijn.v.groningen@gmail.com>
Date: Sun Oct 26 15:53:37 2014 +0100
Moved over to BitSetFilter
commit 48dd1b909e6c52cef733961c9ecebfe4f67109fe
Author: Martijn van Groningen <martijn.v.groningen@gmail.com>
Date: Sun Oct 26 15:53:11 2014 +0100
Left over Collector api change in ScanContext
commit 6ec248ef63f262bcda400181b838fd9244752625
Author: Martijn van Groningen <martijn.v.groningen@gmail.com>
Date: Sun Oct 26 15:47:40 2014 +0100
Moved indexed() over to indexOptions != null or indexOptions == null
commit 9937aebfd8546ae4bb652cd976b3b43ac5ab7a63
Author: Martijn van Groningen <martijn.v.groningen@gmail.com>
Date: Sun Oct 26 13:26:31 2014 +0100
Fixed many compile errors. Mainly around the breaking Collector api change in 5.0.
commit fec32c4abc0e3309cf34260c8816305a6f820c9e
Author: Robert Muir <rmuir@apache.org>
Date: Sat Oct 25 11:22:17 2014 -0400
more easy fixes
commit dab22531d801800d17a65dc7c9464148ce8ebffd
Author: Robert Muir <rmuir@apache.org>
Date: Sat Oct 25 09:33:41 2014 -0400
more progress
commit 414767e9a955010076b0497cc4f6d0c1850b48d3
Author: Robert Muir <rmuir@apache.org>
Date: Sat Oct 25 06:33:17 2014 -0400
more progress
commit ad9d969fddf139a8830254d3eb36a908ba87cc12
Author: Robert Muir <rmuir@apache.org>
Date: Fri Oct 24 14:28:01 2014 -0400
current state of fun
commit 464475eecb0be15d7d084135ed16051f76a7e521
Author: Robert Muir <rmuir@apache.org>
Date: Fri Oct 24 11:42:41 2014 -0400
bump to 5.0 snapshot
We currently use the djb2 hash function in order to compute the shard a
document should go to. Unfortunately this hash function is not very
sophisticated and you can sometimes hit adversarial cases, such as numeric ids
on 33 shards.
Murmur3 generates hashes with a better distribution, which should avoid the
adversarial cases.
Here are some examples of how 100000 incremental ids are distributed to shards
using either djb2 or murmur3.
```
5 shards:
Murmur3: [19933, 19964, 19940, 20030, 20133]
DJB:     [20000, 20000, 20000, 20000, 20000]

3 shards:
Murmur3: [33185, 33347, 33468]
DJB:     [30100, 30000, 39900]

33 shards:
Murmur3: [2999, 3096, 2930, 2986, 3070, 3093, 3023, 3052, 3112, 2940, 3036, 2985, 3031, 3048, 3127, 2961, 2901, 3105, 3041, 3130, 3013, 3035, 3031, 3019, 3008, 3022, 3111, 3086, 3016, 2996, 3075, 2945, 2977]
DJB:     [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 900, 900, 900, 900, 1000, 1000, 10000, 10000, 10000, 10000, 9100, 9100, 9100, 9100, 9000, 9000, 0, 0, 0, 0, 0, 0]
```
Even if djb2 looks ideal in some cases (5 shards), the fact that the
distribution of its hashes has some patterns can raise issues with some shard
counts (eg. 3, or even worse 33).
Some tests have been modified because they relied on implementation details of
the routing hash function.
Close#7954
This commit adds the ability to associate a bit of state with each
individual aggregation.
The aggregation response can be hard to stitch back together without
having a reference to the aggregation request. In many cases this is not
available; many JSON serializer frameworks cache types globally or have a
static deserialisation override mechanism. In these cases, making the
original request available, if at all possible, would be a hack.
The old facets returned `_type` which was just enough metadata to know
what the originating facet type in the request was.
This PR takes `_type` one step further by introducing arbitrary metadata.
This could be further (ab)used, for instance by generic/automated
aggregations that include UI state (color information, thresholds,
user input states, etc.) per aggregation.
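A minimal sketch of how such per-aggregation state might be attached, assuming the new field is exposed as `meta` on each aggregation (the field name is an assumption here; the object would be echoed back unchanged in the response):
```json
GET /test/_search?search_type=count
{
  "aggs": {
    "colors": {
      "meta": { "chart": "pie", "palette": "pastel" },
      "terms": { "field": "color" }
    }
  }
}
```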
This adds HTTP pipelining support to netty. Previously pipelining was not
supported due to the asynchronous nature of Elasticsearch: the first response
produced by Elasticsearch was returned first, regardless of the order in
which the requests arrived.
The solution to this problem is to add a handler to the netty pipeline
that maintains an ordered list and thus orders the responses before
returning them to the client. This means we will always keep some state
on the server side, and it also requires some memory in order to hold the
responses there.
Pipelining is enabled by default, but can be configured by setting the
http.pipelining property to true|false. In addition the maximum size of
the event queue can be configured.
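A minimal elasticsearch.yml sketch; `http.pipelining` is the setting named above, while the queue-size setting name (`http.pipelining.max_events`) is an assumption for illustration:
```
# disable HTTP pipelining entirely
http.pipelining: false

# or keep it enabled but bound the number of buffered responses (setting name assumed)
http.pipelining.max_events: 10000
```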
The initial netty handler is copied from this repo:
https://github.com/typesafehub/netty-http-pipelining
Closes#2665
The MLT field query is simply replaced by an MLT query set to a specific field.
To simplify code maintenance we should deprecate it in 1.4 and remove it in
2.0.
Closes#8238
Fixes a bug where alias creation would allow `null` for index name, which thereby
applied the alias to _all_ indices. This patch makes the validator throw an
exception if the index is null.
```bash
POST /_aliases
{
"actions": [
{
"add": {
"alias": "empty-alias",
"index": null
}
}
]
}
```
```json
{
"error": "ActionRequestValidationException[Validation Failed: 1: Alias action [add]: [index] may not be null;]",
"status": 400
}
```
The reason this bug wasn't caught by the existing tests is because
the old test for nullness only validated against a cluster which had
zero indices. The null index is translated into "_all", and since
there are no indices, this fails because the index doesn't exist.
So the test passes.
However, as soon as you add an index, "_all" resolves and you get the
situation described in the original bug report: null index is
accepted by the alias, resolves to "_all" and gets applied to everything.
The REST tests, on the other hand, explicitly tested this bug as a real feature and therefore
passed. The REST tests were modified to change this behavior.
Fixes#7863
This commit adds a new field to the response of the terms aggregation called
`sum_other_doc_count` which is equal to the sum of the doc counts of the buckets
that did not make it to the list of top buckets. It is typically useful to have
a sector called e.g. `other` when using terms aggregations to build pie charts.
Example query and response:
```json
GET test/_search?search_type=count
{
"aggs": {
"colors": {
"terms": {
"field": "color",
"size": 3
}
}
}
}
```
```json
{
[...],
"aggregations": {
"colors": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 4,
"buckets": [
{
"key": "blue",
"doc_count": 65
},
{
"key": "red",
"doc_count": 14
},
{
"key": "brown",
"doc_count": 3
}
]
}
}
}
```
Close#8213
Add source_node and target_node fields to the recovery cat API. Also fixed and updated the documentation, which was incomplete concerning field names.
Closes#8041
The MLT query has a lot of parameters. For example, a set of documents is
specified with either `like_text`, `ids` or `docs`, with at least one
parameter required. This commit groups all the document specification
parameters under one called `like`. The syntax is described below and could
easily be extended to allow for new means of specifying document input. The
`like_text`, `ids` and `docs` parameters are deprecated.
As a single piece text:
{
"query": {
"more_like_this": {
"like": "some text here"
}
}
}
As a single item:
{
"query": {
"more_like_this": {
"like": {
"_index": "imdb",
"_type": "movies",
"_id": "88247"
}
}
}
}
Or as a mixture of all:
{
"query": {
"more_like_this": {
"like": [
"Some random text ...",
{
"_index": "imdb",
"_type": "movies",
"_id": "88247"
},
{
"_index": "imdb",
"_type": "movies",
"doc": {
"title": "Document with an artificial title!"
}
}
]
}
}
}
Closes#8039
When reading the [rolling upgrade process](http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup-upgrade.html#rolling-upgrades), you can see that we wrote:
* disable allocation
* upgrade node1
* upgrade node2
* upgrade node3
* ...
* enable allocation
That won't work: with allocation disabled, no shard will be allocated to a node after it has been restarted.
So shutting down node2 and the remaining nodes won't leave the cluster able to serve index and search requests anymore.
We should write:
* disable allocation
* upgrade node1
* enable allocation
* wait for shards being recovered on node1
* disable allocation
* upgrade node2
* enable allocation
* wait for shards being recovered on node2
* disable allocation
* upgrade node3
* enable allocation
* wait for shards being recovered on node3
* disable allocation
* ...
* enable allocation
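For reference, a sketch of the disable / enable steps above via the cluster settings API, assuming the `cluster.routing.allocation.enable` setting (older releases expose `cluster.routing.allocation.disable_allocation` for the same purpose):
```json
PUT /_cluster/settings
{
  "transient": { "cluster.routing.allocation.enable": "none" }
}

PUT /_cluster/settings
{
  "transient": { "cluster.routing.allocation.enable": "all" }
}
```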
I think this documentation update should go in 1.3, 1.4, 1.x and master branches.
Closes#8218Closes#7973.
This commit adds the ability to enable / disable relocations
on an entire cluster or on individual indices for either:
* `primaries` - only primaries can rebalance
* `replica` - only replicas can rebalance
* `all` - everything can rebalance (default)
* `none` - all rebalances are disabled
similar to the allocation enable / disable functionality.
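A sketch of what toggling this could look like via the cluster settings API, assuming the setting is exposed as `cluster.routing.rebalance.enable` (name assumed by analogy with the allocation enable setting):
```json
PUT /_cluster/settings
{
  "transient": { "cluster.routing.rebalance.enable": "primaries" }
}
```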
Relates to #7288
Query String query now supports a new `time_zone` option based on JODA time zones.
When using a range on a date field, the time zone is applied.
```json
{
"query": {
"query_string": {
"text": "date:[2012 TO 2014]",
"timezone": "Europe/Paris"
}
}
}
```
Closes#7880.
Storing `_timestamp` by default means that under the default configuration, you
would have all the information you need in order to reindex into a different
index.
Close#8139
This is functionally equivalent to before, so there should be no
user-visible impact, except I added a NOTE in the docs warning about
the interaction of pagination and rescoring.
Closes#6232Closes#7707
This patch allows creating several netty bootstraps, each listening on a
different port. This will potentially allow features to listen on different
network interfaces for node-to-node or node-to-client communication, and is
also the basis for listening on several interfaces so that those can be used
to speed up cluster communication in the future.
Closes#8098
It is strange to provide an example with `"store" : false` when talking about the possibility of enabling the field to be stored.
Broke the mapping line into two lines for better readability.
Made the sentence above the mapping more verbose.
Closes#7894
cat/nodes currently does not report any details related to file descriptors. This adds the current number in use, the maximum number available as well as their ratio (percentage) to cat/nodes as hidden-by-default metrics. In addition, this also adds current heap usage (as a non-percentage of its max) and ram usage (as a non-percentage of its max) to allow tools to provide more granularity.
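Since the new metrics are hidden by default, they have to be requested explicitly via the `h` parameter; the column names below (`file_desc.*`, `heap.current`, `ram.current`) are assumptions for illustration:
```
% curl 'localhost:9200/_cat/nodes?v&h=name,file_desc.current,file_desc.max,file_desc.percent,heap.current,ram.current'
```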
Closes#7652
This particular note was about fielddata filtering but could cause confusion
that fields that have doc values enabled cannot be used for filtering (as in
a `filtered_query`).
By letting the fetch phase understand the nested docs structure, we can serve nested docs as hits.
Because of this commit, the `top_hits` aggregation can be placed inside a `nested` or `reverse_nested` aggregation.
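A minimal sketch of the kind of request this enables, assuming a nested `comments` field:
```json
GET /test/_search?search_type=count
{
  "aggs": {
    "comments": {
      "nested": { "path": "comments" },
      "aggs": {
        "top_comments": {
          "top_hits": { "size": 3 }
        }
      }
    }
  }
}
```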
Closes#7164
This patch adds to `_cat/indices` information about memory usage per index by adding memory used by FieldData, IdCache, Percolate, Segments (memory, index writer, version map).
```
% curl 'localhost:9200/_cat/indices?v&h=i,tm'
i tm
wiki 8.1gb
test 30.5kb
user 1.9mb
```
Closes#7008
This commit does the following:
* Add the new API at the REST layer, backed by the optimize API with the
upgrade flag, and by the segments API to find the upgrade status.
* Add `upgrade` flag to optimize API, and deprecate `force` flag (will
remove in master)
* Add test for both synchronous and async upgrade
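A sketch of how the new endpoint might be used; the `_upgrade` path and its GET/POST split follow the description above, and the exact response fields should be treated as assumptions:
```
# kick off an upgrade of all segments to the latest format
% curl -XPOST 'localhost:9200/test/_upgrade'

# check the upgrade status of the index
% curl 'localhost:9200/test/_upgrade?pretty'
```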
closes#7884closes#7922
When the date format is defined in the mapping, you cannot use another format when querying with a range query or filter on a date field.
For example, this won't work:
```
DELETE /test
PUT /test/t/1
{
"date": "2014-01-01"
}
GET /test/_search
{
"query": {
"filtered": {
"filter": {
"range": {
"date": {
"from": "01/01/2014"
}
}
}
}
}
}
```
It causes:
```
Caused by: org.elasticsearch.ElasticsearchParseException: failed to parse date field [01/01/2014], tried both date format [dateOptionalTime], and timestamp number
```
It would be nice if we could support another date format at query time, just like we support `analyzer` at search time on String fields.
Something like:
```
GET /test/_search
{
"query": {
"filtered": {
"filter": {
"range": {
"date": {
"from": "01/01/2014",
"format": "dd/MM/yyyy"
}
}
}
}
}
}
```
Same for queries:
```
GET /test/_search
{
"query": {
"range": {
"date": {
"from": "01/01/2014",
"format": "dd/MM/yyyy"
}
}
}
}
```
Closes#7189.
This change removes the script_type parameter from the Scripted Metric Aggregation and adds support for _file and _id suffixes to the init_script, map_script, combine_script and reduce_script parameters, to make defining the source of the script consistent with the other APIs which use the ScriptService
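A sketch of the suffixed form described above, assuming script files such as `map_sum.groovy` have been placed under `config/scripts` (the script names are illustrative only):
```json
GET /test/_search?search_type=count
{
  "aggs": {
    "profit": {
      "scripted_metric": {
        "init_script_file": "init_sum",
        "map_script_file": "map_sum",
        "combine_script_file": "combine_sum",
        "reduce_script_file": "reduce_sum"
      }
    }
  }
}
```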
This adds a `per_field_analyzer` parameter to the Term Vectors API, which
allows overriding the default analyzer per field. If the field already
stores term vectors, then they will be re-generated. Since the MLT Query uses
the Term Vectors API under the hood, this commit also adds the same ability
to the MLT Query, thereby allowing users to fine-tune how each field item
should be processed and analyzed.
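A minimal sketch of the new parameter on the Term Vectors API; the endpoint name and field names are assumptions for illustration:
```json
GET /imdb/movies/88247/_termvector
{
  "fields": ["title", "plot"],
  "per_field_analyzer": {
    "plot": "standard"
  }
}
```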
Closes#7801
By default term vectors are now realtime, as opposed to previously near
realtime. If they are not found in the index, they will be generated on the
fly. The document is fetched from the transaction log and treated as an
artificial document. One can set the `realtime` parameter to `false` in order to
disable this functionality. This consequently makes the MLT query realtime in
fetching documents, as it previously used to be before switching from using
the multi get API to the multi term vectors API.
Closes#7846
Previously, the only way to specify a document not present in the index was to
use `like_text`. This would usually lead to complex queries made of multiple
MLT queries per document field. This commit adds the ability to the MLT query
to directly specify documents not present in the index (artificial documents).
The syntax is similar to the Percolator API or to the Multi Term Vector API.
Closes#7725
The minimum number of optional should clauses of the generated query to match
can now be set using the more extensive minimum should match syntax. This
makes the `percent_terms_to_match` parameter deprecated, and replaced in favor
to a new `minimum_should_match` parameter.
Closes#7898
On the page http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/analysis-synonym-tokenfilter.html
even on a huge monitor the text is wrapped the following way:
```
mapping:
ipod, i-pod, i pod => ipod, i-pod, i pod
mapping:
ipod, i-pod, i pod => ipod
```
So one could think that "mapping:" is not part of a comment but part of the syntax. The lines are less than 80 chars, though, so perhaps the problem is in the page layout, and there may be other pages in the reference where the text is also wrapped in an undesirable way.
Closes#7739
Today, when executing an action (mainly when using the Java API), a listener threaded flag can be set to true in order to execute the listener on a different thread pool. Today, this thread pool is the generic thread pool, which is cached. This can create problems for Java clients (mainly) around potential thread explosion.
Introduce a new thread pool called listener, which is fixed-size and defaults to half the number of cores, capped at 10, and use it wherever listeners are executed.
relates to #5152closes#7837
The parameter `percent_terms_to_match` (percentage of terms that must match in
the generated query) was wrongly applied to the top level boolean query. This
would lead to situations where either zero or all results were returned. This commit ensures that
the parameter is indeed applied to the query of generated terms.
Closes#7754
When using the DiskThresholdDecider, it's possible that shards could
already be marked as relocating to the node being evaluated. This commit
adds a new setting `cluster.routing.allocation.disk.include_relocations`
which adds the size of the shards currently being relocated to this node
to the node's used disk space.
This new option defaults to `true`, however it's possible to
over-estimate the usage for a node if the relocation is already
partially complete, for instance:
A node with a 10gb shard that's 45% of the way through a relocation
would add 10gb + (.45 * 10) = 14.5gb to the node's disk usage before
examining the watermarks to see if a new shard can be allocated.
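A sketch of turning the new behaviour off, assuming the setting is dynamically updatable via the cluster settings API:
```json
PUT /_cluster/settings
{
  "transient": { "cluster.routing.allocation.disk.include_relocations": false }
}
```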
Fixes#7753
Relates to #6168
Changes the name of the field in the scripted metrics aggregation from 'aggregation' to 'value' to be more in line with the other metrics aggregations like 'avg'
The terms aggregation now supports sorting on multiple criteria by replacing the sort object with an array of sort objects whose order signifies the priority of the sort. The existing syntax for sorting on a single criterion still works.
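A sketch of the array form; earlier criteria take priority, and the individual keys mirror the existing single-criterion order syntax (the field names are illustrative):
```json
GET /test/_search?search_type=count
{
  "aggs": {
    "colors": {
      "terms": {
        "field": "color",
        "order": [
          { "avg_price": "desc" },
          { "_count": "desc" }
        ]
      },
      "aggs": {
        "avg_price": { "avg": { "field": "price" } }
      }
    }
  }
}
```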
Contributes to #6917
Replaces #7588
Returns information about settings, aliases, warmers, and mappings. Basically returns the IndexMetadata. This new endpoint replaces the /{index}/_alias|_aliases|_mapping|_mappings|_settings|_warmer|_warmers and /_alias|_aliases|_mapping|_mappings|_settings|_warmer|_warmers endpoints whilst maintaining the same response formats. The only exception to this is on the /_alias|_aliases|_warmer|_warmers endpoint which will now return a section for 'aliases' or 'warmers' even if no aliases or warmers exist. This backwards compatibility change is documented in the reference docs.
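A sketch of the new endpoint; the per-feature filtering path (`/{index}/_settings,_mappings`) is an assumption for illustration:
```
# everything: settings, mappings, aliases and warmers
% curl 'localhost:9200/twitter?pretty'

# only a subset of the features
% curl 'localhost:9200/twitter/_settings,_mappings?pretty'
```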
Closes#4069
BlobContainer used to provide async APIs which are not used
internally. The implementations of these APIs are also not async
by nature, and neither are any of the pluggable BlobContainers. This
commit simplifies the API to a simple input / output stream and
reduces the hierarchy of BlobContainer dramatically.
NOTE: This is a breaking change!
Closes#7551
This adds the ability to the Term Vector API to generate term vectors for
artificial documents, that is for documents not present in the index. Following
a similar syntax to the Percolator API, a new 'doc' parameter is used, instead
of '_id', that specifies the document of interest. The parameters '_index' and
'_type' determine the mapping and therefore the analyzers to apply to each field
value.
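A minimal sketch following the description above; the `_termvector` endpoint name and field values are assumptions for illustration:
```json
GET /imdb/movies/_termvector
{
  "doc": {
    "title": "Document with an artificial title!",
    "plot": "Some random text ..."
  }
}
```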
Closes#7530
Aggregations are collection-wide statistics, which is incompatible with the
collection mode of search_type=SCAN since it doesn't collect all matches on
calls to the search API.
Close#7429
Aggregations are collection-wide statistics so they would always be the same.
In order to save CPU/bandwidth, we can just return them on the first page.
Same as #1642 but for aggregations.
RandomScoreFunction previously relied on the order the documents were
iterated in from Lucene. This caused changes in ordering, with the same
seed, if documents moved to different segments. With this change, a
murmur32 hash of the _uid for each document is used as the "random"
value. Also, the hash is adjusted so as to only return values between
0.0 and 1.0 to enable easier manipulation to fit into users' scoring
models.
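A sketch of how the function is typically used, with a fixed `seed` so that results stay reproducible:
```json
GET /test/_search
{
  "query": {
    "function_score": {
      "query": { "match_all": {} },
      "random_score": { "seed": 314159265 }
    }
  }
}
```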
closes#6907, #7446
Items with no specified field now default to all the possible fields from the
document source. Previously, we had required 'fields' to be specified either
as a top level parameter or for each item. The default behavior is now similar
to the MLT API.
Closes#7382
The term vector API can now generate term vectors on the fly, if the terms are
not already stored in the index. This commit exploits this new functionality
for the MLT query. Now the terms are directly retrieved using the multi
term vectors API, instead of generating them from the texts retrieved using the
multi get API.
Closes#7014
The current implementation of 'date_histogram' does not understand
the `factor` parameter. Since the docs shouldn't raise false hopes,
I removed the section.
Closes#7277
This change means that the default settings for expand_wildcards are only applied if the expand_wildcards parameter is not specified, rather than being set upfront. It also adds the none and all options to the parameter to allow the user to specify no expansion or expansion to all indices (equivalent to 'open,closed').
Closes#7258
This change stores the index creation time in the index metadata when an index is created. The creation time cannot be changed but can be set as part of the create index request to allow for correct creation times for historical data.
Closes#7119
This documentation was dangerous because it felt like it was possible to gain
substantial performance by just switching the codec of the index.
However, non-default codecs are dangerous to use since they are not supported
in terms of backward compatibility, and most improvements that they bring have
been folded into the default codec anyway (for example, the default codec
"pulses" postings lists that contain a single document).
In the case of inserts, the UpdateHelper class will now allow the script used to apply updates to run on the upsert doc provided by clients. This allows the logic for managing the internal state of the data item to live in the script, rather than relying on clients performing the initialisation of data structures managed by the script.
Closes#7143
This adds support to return the "Access-Control-Allow-Credentials" header
if needed, so CORS will work flawlessly with authenticated applications.
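A sketch of the relevant elasticsearch.yml settings; `http.cors.allow-credentials` is assumed to be the new flag, shown alongside the existing CORS toggles:
```
http.cors.enabled: true
http.cors.allow-origin: "http://app.example.com"
http.cors.allow-credentials: true
```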
Closes#6380
AND and OR filter docs talk about different targets for the operators. I
believe that both should be described in terms of modifying other 'filters'.
I also added articles for easier (human) parsing. This fixes#4762Closes#7165
Filters and Queries now support a `time_zone` parameter which defines the time zone that should be applied to the query or filter to convert it to a UTC time based value.
When applied to `date` fields, the `range` filter and queries also accept a `time_zone` parameter.
The `time_zone` parameter will be applied to your input lower and upper bounds and will move them to a UTC time based date:
[source,js]
--------------------------------------------------
{
"constant_score": {
"filter": {
"range" : {
"born" : {
"gte": "2012-01-01",
"lte": "now",
"time_zone": "+1:00"
}
}
}
}
}
{
"range" : {
"born" : {
"gte": "2012-01-01",
"lte": "now",
"time_zone": "+1:00"
}
}
}
--------------------------------------------------
In the above examples, `gte` will be actually moved to `2011-12-31T23:00:00` UTC date.
NOTE: if you give a date with a timezone explicitly defined and use the `time_zone` parameter, `time_zone` will be
ignored. For example, setting `from` to `2012-01-01T00:00:00+01:00` with `"time_zone":"+10:00"` will still use `+01:00` time zone.
Closes#3729.
Fields of type `token_count`, `murmur3`, `_all` and `_field_names` are generated only when indexing.
If a GET request accesses the transaction log (because there was no refresh
between indexing and the GET request) then these fields cannot be retrieved at all.
Previously the behavior was as follows:
`_all, _field_names`: The field was silently ignored
`murmur3, token_count`: `NumberFormatException` because GET tried to parse the values from the source.
In addition, if these fields were not stored, the same behavior occurred if the fields were
retrieved with GET after a `refresh()` because here also the source was used to get the fields.
Now, GET accepts a parameter `ignore_errors_on_generated_fields` which has
the following effect:
- Throw exception with meaningful error message explaining the problem if set to false (default)
- Ignore the field if set to true
- Always ignore the field if it was not set to stored
This changes the behavior for `_all` and `_field_names` as now an Exception is thrown if a user
tries to GET them before a `refresh()`.
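A sketch of the new parameter on a realtime GET; the field names are illustrative only:
```
% curl 'localhost:9200/test/doc/1?fields=_all,token_count_field&ignore_errors_on_generated_fields=true'
```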
closes#6676closes#6973
Allow the value `default` to be set for network.tcp.no_delay and network.tcp.keep_alive so they won't be set at all, since on Solaris setting tcpNoDelay can actually cause a failure
relates to #7115
A multi-bucket aggregation where multiple filters can be defined (each filter defines a bucket). The buckets will collect all the documents that match their associated filter.
This aggregation can be very useful when one wants to compare analytics between different criteria. The same result can also be accomplished using multiple definitions of the single filter aggregation, but here the user only needs to define the sub-aggregations once.
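A minimal sketch of the new aggregation with two named buckets (the field names are illustrative):
```json
GET /logs/_search?search_type=count
{
  "aggs": {
    "messages": {
      "filters": {
        "filters": {
          "errors":   { "term": { "level": "error" } },
          "warnings": { "term": { "level": "warn" } }
        }
      }
    }
  }
}
```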
Closes#6118
Implements a new Exists API allowing users to do a fast exists check on any matched documents for a given query.
This API should be faster than using the Count API as it will:
- early terminate the search execution once any document is found to exist
- return the response as soon as the first shard reports matched documents
closes#6995
The index process fails when `_timestamp` is enabled and the `path` option is set.
It fails with a `TimestampParsingException[failed to parse timestamp [null]]` message.
Reproduction:
```
DELETE test
PUT test
{
"mappings": {
"test": {
"_timestamp" : {
"enabled" : "yes",
"path" : "post_date"
}
}
}
}
PUT test/test/1
{
"foo": "bar"
}
```
You can define a default value for when timestamp is not provided
within the index request or in the `_source` document.
By default, the default value is `now` which means the date the document was processed by the indexing chain.
You can disable that default value by setting `default` to `null`. It means that `timestamp` is mandatory:
```
{
"tweet" : {
"_timestamp" : {
"enabled" : true,
"default" : null
}
}
}
```
If you don't provide any timestamp value, indexing will fail.
You can also set the default value to any date respecting timestamp format:
```
{
"tweet" : {
"_timestamp" : {
"enabled" : true,
"format" : "YYYY-MM-dd",
"default" : "1970-01-01"
}
}
}
```
If you don't provide any timestamp value, indexing will fail.
Closes#4718.
Closes#7036.
Adds a breaker for request BigArrays, which are used for parent/child
queries as well as some aggregations. Certain operations like Netty HTTP
responses and transport responses increment the breaker, but will not
trip it.
This also changes the output of the nodes' stats endpoint to show the
parent breaker as well as the fielddata and request breakers.
There are a number of new settings for breakers now:
`indices.breaker.total.limit`: starting limit for all memory-use breaker,
defaults to 70%
`indices.breaker.fielddata.limit`: starting limit for fielddata breaker,
defaults to 60%
`indices.breaker.fielddata.overhead`: overhead for fielddata breaker
estimations, defaults to 1.03
(the fielddata breaker settings also use the backwards-compatible
setting `indices.fielddata.breaker.limit` and
`indices.fielddata.breaker.overhead`)
`indices.breaker.request.limit`: starting limit for request breaker,
defaults to 40%
`indices.breaker.request.overhead`: request breaker estimation overhead,
defaults to 1.0
The breaker service infrastructure is now generic and opens the path to
adding additional circuit breakers in the future.
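The limits above should be adjustable at runtime; a sketch via the cluster settings API, assuming the new settings are dynamically updatable like the existing fielddata breaker settings (values illustrative):
```json
PUT /_cluster/settings
{
  "persistent": {
    "indices.breaker.total.limit": "70%",
    "indices.breaker.fielddata.limit": "60%",
    "indices.breaker.request.limit": "40%"
  }
}
```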
Fixes#6129
This is only applicable when the order is set to `_count`. The upper bound of the error in the doc count is calculated by summing, over each shard that did not return a given term, the doc count of the last term that shard did return. The implementation computes this by summing the doc count of the last term on each shard for which the term IS returned, and then subtracting this value from the sum of the doc counts of the last term from ALL shards.
Closes#6696
This commit adds regular expression support for the allow-origin
header depending on the value of the request `Origin` header.
The existing HttpRequestBuilder is also extended to support the
OPTIONS HTTP method.
Relates #5601Closes#6891
This commit adds the ability to force blocking on the flush operation
to make sure all files have been written and synced to disk. Without
this option another flush might be executing at the same time, causing
the current flush to fail and return before all files have been synced.
Closes#6996
Allow users to control document collection termination, if a specified terminate_after number is
set. Upon setting the newly added parameter, the response will include a boolean terminated_early
flag, indicating if the document collection for any shard terminated early.
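A sketch of the parameter in use, here as a request parameter on the search API (the exact placement is an assumption):
```
% curl 'localhost:9200/test/_search?terminate_after=1000&q=user:kimchy'
```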
closes#6876
This change just changes the default for index.codec.bloom.load to
false: with recent performance improvements to ID lookup, such as
#6298, bloom filters don't give much of a performance gain anymore,
and they can consume non-trivial RAM when there are many tiny
documents.
For now, we still index the bloom filters, so if a given app wants
them back, it can just update the index.codec.bloom.load to true.
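Since the text above says the setting can simply be updated, a sketch of switching it back on for an existing index, assuming it is dynamically updatable:
```json
PUT /test/_settings
{
  "index.codec.bloom.load": true
}
```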
Closes#6959
A new option `prune` has been added to allow users to control phrase suggestion pruning when `collate`
is set. If the new option is set, the phrase suggestion option will contain a boolean `collate_match`
indicating whether the respective result had hits in collation.
Closes#6927
Adds the ability to the Term Vector API to generate term vectors for some
chosen fields, even though they haven't been explicitly stored in the index.
Relates to #5184Closes#6567
These are javascript expressions, which can only access numeric
fielddata, parameters, and _score. They can only be used for searches (not document updates).
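A sketch of using an expression for scoring, assuming `lang: "expression"` selects the new script language and `popularity` is a numeric field:
```json
GET /test/_search
{
  "query": {
    "function_score": {
      "query": { "match": { "body": "elasticsearch" } },
      "script_score": {
        "lang": "expression",
        "script": "_score * doc['popularity'].value"
      }
    }
  }
}
```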
closes#6818
The newly added collate option will let the user provide a template query/filter which will be executed for every phrase suggestion generated, to ensure that the suggestion matches at least one document for the filter/query.
The user can also add a routing preference via `preference` to route the collate query/filter, and additional `params` to inject into the collate template.
Closes#3482
The Hunspell service would throw a confusing error message if more than
one affix file was present. This commit distinguishes between the two
error cases: where there are no affix files and when there are too many
affix files.
Also implements lazy dictionary loading, which was used in the tests
but not implemented.
Closes#6850
This commit adds the infrastructure to allow plugging in different
measures for computing the significance of a term.
Significance measures can be provided externally by overriding
- SignificanceHeuristic
- SignificanceHeuristicBuilder
- SignificanceHeuristicParser
closes#6561
The `recover_after_time` setting tells the gateway to wait before starting recovery from disk. The goal here is to allow more nodes to join the cluster and thus not start potentially unneeded replications. The `expected_nodes` setting (and friends) tells the gateway when it can start recovering even if `recover_after_time` has not yet elapsed. However, `expected_nodes` is useless if one doesn't also set `recover_after_time`. This commit changes that by setting a sensible default of 5m for `recover_after_time` *if* an `expected_nodes` setting is present.
Closes#6742
`mmapfs` is really good for random access but can have side effects if
memory maps are large, depending on the operating system etc. A hybrid
solution where only selected files are actually memory mapped, while others
that are mostly consumed sequentially are not, brings the best of both worlds and
minimizes the memory map impact.
This commit mmaps only the `dvd` and `tim` files for fast random access
on doc values and term dictionaries.
Closes#6636