From 610ce078fb3c84c47d6d32aff7d77ba850e28f9d Mon Sep 17 00:00:00 2001
From: Robert Muir
Date: Wed, 5 Nov 2014 15:48:51 -0500
Subject: [PATCH] Upgrade master to lucene 5.0 snapshot

This has a lot of improvements in lucene, particularly around memory usage,
merging, safety, compressed bitsets, etc.

On the elasticsearch side, summary of the larger changes:

API changes: postings API became a "pull" rather than "push", collector API
became per-segment, etc.

packaging changes: add lucene-backward-codecs.jar as a dependency.

improvements to boolean filtering: especially ensuring it will not be slow
for SparseBitSet.

use generic BitSet api in plumbing so that the concrete bitset type is an
implementation detail.

use generic BitDocIdSetFilter api for the dedicated bitset cache, so there
is type safety.

changes to support atomic commits.

implement Accountable.getChildResources (detailed memory usage API) for
fielddata, etc.

change handling of IndexFormatTooOld/New, since they no longer extend
CorruptIndexException.
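Of the API changes above, the collector split is the most mechanical to picture. The following is an illustrative sketch of the Lucene 5.x per-segment ("pull") shape, written against the released 5.0 interfaces rather than this exact snapshot (CountingCollector is a made-up example, and methods such as needsScores() may differ slightly in the snapshot):

```java
import java.io.IOException;

import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.search.Collector;
import org.apache.lucene.search.LeafCollector;
import org.apache.lucene.search.Scorer;

// The searcher pulls one LeafCollector per segment instead of pushing
// setNextReader() calls into one stateful collector, as in Lucene 4.
public class CountingCollector implements Collector {

    int count;

    @Override
    public LeafCollector getLeafCollector(LeafReaderContext context) throws IOException {
        // Doc ids passed to collect() are segment-relative;
        // add context.docBase to obtain a global doc id.
        return new LeafCollector() {
            @Override
            public void setScorer(Scorer scorer) throws IOException {
                // scores are not needed for counting
            }

            @Override
            public void collect(int doc) throws IOException {
                count++;
            }
        };
    }

    @Override
    public boolean needsScores() {
        return false; // lets Lucene skip scoring entirely
    }
}
```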
Closes #8347.

Squashed commit of the following:

commit d90d53f5f21b876efc1e09cbd6d63c538a16cd89
Author: Simon Willnauer    Date: Wed Nov 5 21:35:28 2014 +0100
    Make default codec/postings/docvalues format constants

commit cb66c22c71cd304a36e7371b199a8c279908ae37 (Merge: d4e2f6d ad4ff43)
Author: Robert Muir    Date: Wed Nov 5 11:41:13 2014 -0500
    Merge branch 'master' into enhancement/lucene_5_0_upgrade

commit d4e2f6dfe767a5128c9b9ae9e75036378de08f47 (Merge: 4e5445c 4111d93)
Author: Robert Muir    Date: Wed Nov 5 06:26:32 2014 -0500
    Merge branch 'master' into enhancement/lucene_5_0_upgrade

commit 4e5445c775f580730eb01360244e9330c0dc3958
Author: Robert Muir    Date: Tue Nov 4 16:19:19 2014 -0500
    FixedBitSet -> BitSet

commit 9887ea73e8b857eeda7f851ef3722ef580c92acf (Merge: 1bf8894 fc84666)
Author: Robert Muir    Date: Tue Nov 4 15:26:25 2014 -0500
    Merge branch 'master' into enhancement/lucene_5_0_upgrade

commit 1bf8894430de3e566d0dc5623b0cc28b0d674ebb
Author: Robert Muir    Date: Tue Nov 4 15:22:51 2014 -0500
    remove nocommit

commit a9c2a2259ff79c69bae7806b64e92d5f472c18c8
Author: Robert Muir    Date: Tue Nov 4 13:48:43 2014 -0500
    turn jenkins red again

commit 067baaaa4d52fce772c81654dcdb5051ea79139f
Author: Robert Muir    Date: Tue Nov 4 13:18:21 2014 -0500
    unzip from stream

commit 82b6fba33d362aca2313cc0ca495f28f5ebb9260 (Merge: b2214bb 6523cd9)
Author: Robert Muir    Date: Tue Nov 4 13:10:59 2014 -0500
    Merge branch 'master' into enhancement/lucene_5_0_upgrade

commit b2214bb093ec2f759003c488c3c403c8931db914
Author: Robert Muir    Date: Tue Nov 4 13:09:53 2014 -0500
    go back to my URL until we can figure out what is up with jenkins

commit e7d614172240175a51f580aeaefb6460d21cede9
Author: Robert Muir    Date: Tue Nov 4 10:52:54 2014 -0500
    try this jenkins

commit 337a3c7704efa7c9809bf373152d711ee55f876c
Author: Simon Willnauer    Date: Tue Nov 4 16:17:49 2014 +0100
    Rename temp-files under lock to prevent metadata reads while renaming

commit 77d5ba80d0a76efa549dd753b9f114b2f2d2d29c
Author: Robert Muir    Date: Tue Nov 4 10:07:11 2014 -0500
    continue to treat too-old/too-new as corruption for now

commit 98d0fd2f4851bc50e505a94ca592a694d502c51c
Author: Robert Muir    Date: Tue Nov 4 09:24:21 2014 -0500
    fix last nocommit

commit 643fceed66c8caf22b97fc489d67b4a2a90a1a1c
Author: Simon Willnauer    Date: Tue Nov 4 14:46:17 2014 +0100
    remove NoSuchDirectoryException

commit 2e43c4feba05cfaf451df70f946c0930cbcc4557 (Merge: 93826e4 8163107)
Author: Simon Willnauer    Date: Tue Nov 4 14:38:00 2014 +0100
    Merge branch 'master' into enhancement/lucene_5_0_upgrade

commit 93826e4d56a6a97c2074669014af77ff519bde63 (Merge: 7f10129 44e24d3)
Author: Simon Willnauer    Date: Tue Nov 4 12:54:27 2014 +0100
    Merge branch 'master' into enhancement/lucene_5_0_upgrade
    Conflicts:
        src/main/java/org/elasticsearch/index/store/DistributorDirectory.java
        src/main/java/org/elasticsearch/index/store/Store.java
        src/main/java/org/elasticsearch/indices/recovery/RecoveryStatus.java
        src/test/java/org/elasticsearch/index/store/DistributorDirectoryTest.java
        src/test/java/org/elasticsearch/index/store/StoreTest.java
        src/test/java/org/elasticsearch/indices/recovery/RecoveryStatusTests.java

commit 7f10129364623620575c109df725cf54488b3abb
Author: Adrien Grand    Date: Tue Nov 4 11:32:24 2014 +0100
    Fix TopHitsAggregator to not ignore the top-level/leaf collector split.

commit 042fadc8603b997bdfdc45ca44fec70dc86774a6
Author: Adrien Grand    Date: Tue Nov 4 11:31:20 2014 +0100
    Remove MatchDocIdSet in favor of DocValuesDocIdSet.

commit 7d877581ff5db585a674c95ac391ac78a0282826
Author: Adrien Grand    Date: Tue Nov 4 11:10:08 2014 +0100
    Make the and filter use the cost API.
    Lucene 5 ensured that cost() can safely be used, and this will have the
    benefit that the order in which filters are specified is not important
    anymore (only for slow random-access filters in practice).

commit 78f1718aa2cd82184db7c3a8393e6215f43eb4a8
Author: Robert Muir    Date: Mon Nov 3 23:55:17 2014 -0500
    fix previous eclipse import braindamage

commit 186c40e9258ce32f22a9a714ab442a310b6376e0
Author: Robert Muir    Date: Mon Nov 3 22:32:34 2014 -0500
    allow child queries to exhaust iterators again

commit b0b1271305e1b6d0c4c4da51a3c54df1aa5c0605
Author: Ryan Ernst    Date: Mon Nov 3 14:50:44 2014 -0800
    Fix nocommit for mapping output. index_options will not be printed if
    the field is not indexed.

commit ba223eb85e399c9620a347a983e29bf703953e7a
Author: Ryan Ernst    Date: Mon Nov 3 14:07:26 2014 -0800
    Remove no commit for chinese analyzer provider. We should have a separate
    issue to address not using this provider on new indexes.
commit ca554b03c4471797682b2fb724f25205cf040c4a
Author: Ryan Ernst    Date: Mon Nov 3 13:41:59 2014 -0800
    Fix stop tests

commit de67c4653ec47dee9c671390536110749d2bb05f
Author: Ryan Ernst    Date: Mon Nov 3 12:51:17 2014 -0800
    Remove analysis nocommits, switching over to Lucene43*Filters for backcompat

commit 50cae9bec72c25c33a1ab8a8931bccb3355171e2
Author: Robert Muir    Date: Mon Nov 3 15:32:25 2014 -0500
    add ram accounting and TODO lazy-loading (its no worse than master, can
    be a followup improvement) for suggesters

commit 7a7f0122f138684b312d0f0b03dc2a9c16c15f9c
Author: Robert Muir    Date: Mon Nov 3 15:11:26 2014 -0500
    bump lucene version

commit cd0cae5c35e7a9e049f49ae45431f658fb86676b (Merge: 446bc09 3c72073)
Author: Robert Muir    Date: Mon Nov 3 14:49:05 2014 -0500
    Merge branch 'master' into enhancement/lucene_5_0_upgrade

commit 446bc09b4e8bf4602d3c252b53ddaa0da65cce2f
Author: Robert Muir    Date: Mon Nov 3 14:46:30 2014 -0500
    remove hack

commit a19d85a968d82e6d00292b49630ef6ff2dbf2f32
Author: Robert Muir    Date: Mon Nov 3 12:53:11 2014 -0500
    dont create exceptions with circular references on corruption (will open
    a PR for this)

commit 0beefb9e821d97c37e90ec556d81ac7b00369b8a
Author: Robert Muir    Date: Mon Nov 3 11:47:14 2014 -0500
    temporarily add craptastic detector for this horrible bug

commit e9f2d298bff75f3d1591f8622441e459c3ce7ac3
Author: Robert Muir    Date: Mon Nov 3 10:56:01 2014 -0500
    add nocommit

commit e97f1d50a91a7129650b8effc7a9ecf74ca0569a (Merge: c57a3c8 f1f50ac)
Author: Robert Muir    Date: Mon Nov 3 10:12:12 2014 -0500
    Merge branch 'master' into enhancement/lucene_5_0_upgrade

commit c57a3c8341ed61dca62eaf77fad6b8b48aeb6940
Author: Robert Muir    Date: Mon Nov 3 10:11:46 2014 -0500
    fix nocommit

commit dd0e77e4ec07c7011ab5f6b60b2ead33dc2333d2
Author: Robert Muir    Date: Mon Nov 3 09:54:09 2014 -0500
    nocommit -> TODO, this is in much more places in the codebase, bigger issue

commit 3cc3bf56d72d642059f8fe220d6f2fed608363e9
Author: Ryan Ernst    Date: Sat Nov 1 23:59:17 2014 -0700
    Remove nocommit and awaitsfix for edge ngram filter test.

commit 89f115245155511c0fbc0d5ee62e63141c3700c1
Author: Ryan Ernst    Date: Sat Nov 1 23:57:44 2014 -0700
    Fix EdgeNGramTokenFilter logic for version <= 4.3, and fixed instanceof
    checks in corresponding tests to correctly check for reverse filter when
    applicable.
commit 112df869cd199e36aab0e1a7a288bb1fdb2ebf1c
Author: Robert Muir    Date: Sun Nov 2 00:08:30 2014 -0400
    execute geo disjoint query/filter as intersects

commit e5061273cc685f1252e9a3a9ae4877ec9bce7752
Author: Robert Muir    Date: Sat Nov 1 22:58:59 2014 -0400
    remove chinese analyzer from docs

commit ea1af11b8978fcc551f198e24fe21d52806993ef
Author: Robert Muir    Date: Sat Nov 1 22:29:00 2014 -0400
    fix ram accounting bug

commit 53c0a42c6aa81aa6bf81d3aa77b95efd513e0f81 (Merge: e3bcd3c 6011a18)
Author: Robert Muir    Date: Sat Nov 1 22:16:29 2014 -0400
    Merge branch 'master' into enhancement/lucene_5_0_upgrade

commit e3bcd3cc07a4957e12c7b3affc462c31290a9186
Author: Robert Muir    Date: Sat Nov 1 22:15:01 2014 -0400
    fix url-email back compat (thanks ryan)

commit 91d6b096a96c357755abee167098607223be1aad
Author: Robert Muir    Date: Sat Nov 1 22:11:26 2014 -0400
    bump lucene version

commit d2bb9568df72b37ec7050d25940160b8517394bc
Author: Robert Muir    Date: Sat Nov 1 20:33:07 2014 -0400
    remove nocommit

commit 1d049c471e19e5c457262c7399c5bad9e023b2e3
Author: Robert Muir    Date: Sat Nov 1 20:28:58 2014 -0400
    fix eclipse to group org/com imports together: without this, its madness

commit 09d8c1585ee99b6e63be032732c04ef6fed84ed2
Author: Robert Muir    Date: Sat Nov 1 14:27:41 2014 -0400
    remove nocommit, if you dont liek it, print assembly and tell me how it
    can be better

commit 8a6a294313fdf33b50c7126ec20c07867ecd637c
Author: Adrien Grand    Date: Fri Oct 31 20:01:55 2014 +0100
    Remove deprecated usage of DocIdSets.newDocIDSet.

commit 601bee60543610558403298124a84b1b3bbd1045
Author: Robert Muir    Date: Fri Oct 31 14:13:18 2014 -0400
    maybe one of these zillions of annotations will stop thread leaks

commit 9d3f69abc7267c5e455aefa26db95cb554b02d62
Author: Robert Muir    Date: Fri Oct 31 14:05:39 2014 -0400
    fix some analysis nocommits

commit 312e3a29c77214b8142d21c33a6b2c2b151acf9a
Author: Adrien Grand    Date: Fri Oct 31 18:28:45 2014 +0100
    Remove XConstantScoreQuery/XFilteredQuery/ApplyAcceptedDocsFilter.

commit 5a0cb9f8e167215df7f1b1fad11eec6e6c74940f
Author: Adrien Grand    Date: Fri Oct 31 17:06:45 2014 +0100
    Fix misleading documentation of DocIdSets.toCacheable.

commit 8b4ef2b5b476fff4c79c0c2a0e4769ead26cf82b
Author: Adrien Grand    Date: Fri Oct 31 17:05:59 2014 +0100
    Fix CustomRandomAccessFilterStrategy to override the right method.

commit d7a9a407a615987cfffc651f724fbd8795c9c671
Author: Adrien Grand    Date: Fri Oct 31 16:21:35 2014 +0100
    Better handle the special case when there is a single SHOULD clause.

commit 648ad389f07e92dfc451f345549c9841ba5e4c9a
Author: Adrien Grand    Date: Fri Oct 31 15:53:38 2014 +0100
    Cut over XBooleanFilter to BitDocIdSet.Builder.
    The idea is similar to what happened to Lucene's BooleanFilter. Yet
    XBooleanFilter is a bit more sophisticated and I had to slightly change
    the way it is implemented in order to make it work. The main difference
    with before is that slow filters are now applied lazily, so eg. if you
    have 3 MUST clauses, two with a fast iterator and the third with a slow
    iterator, the previous implementation used to apply the fast iterators
    first and then only check the slow filter for bits which were set in
    the bit set. Now we are computing a bit set based on the fast must
    clauses and then basically returning a
    BitsFilteredDocIdSet.wrap(bitset, slowClause).
    Other than that, BooleanFilter still uses the bitset optimizations when
    or-ing and and-ind filters. Another improvement is that BooleanFilter
    is now aware of the cost API.
commit b2dad312b4bc9f931dc3a25415dd81c0d9deee08
Author: Robert Muir    Date: Fri Oct 31 10:18:53 2014 -0400
    clear nocommit

commit 4851d2091e744294336dfade33906c75fbe695cd
Author: Simon Willnauer    Date: Fri Oct 31 15:15:16 2014 +0100
    cut over to RoaringDocIdSet

commit ca6aec24a901073e65ce4dd6b70964fd3612409e
Author: Simon Willnauer    Date: Fri Oct 31 14:57:30 2014 +0100
    make nocommit more explicit

commit d0742ee2cb7a6c48b0bbb31580b7fbcebdb6ec40
Author: Robert Muir    Date: Fri Oct 31 09:55:24 2014 -0400
    fix standardtokenizer nocommit

commit 7d6faccafff22a86af62af0384838391d46695ca
Author: Simon Willnauer    Date: Fri Oct 31 14:54:08 2014 +0100
    fix compilation

commit a038a405c1ff6458ad294e6b5bc469e622f699d0
Author: Simon Willnauer    Date: Fri Oct 31 14:53:43 2014 +0100
    fix compilation

commit 30c9e307b1f5d80e2deca3392c0298682241207f
Author: Simon Willnauer    Date: Fri Oct 31 14:52:35 2014 +0100
    fix compilation

commit e5139bc5a0a9abd2bdc6ba0dfbcb7e3c2e7b8481
Author: Robert Muir    Date: Fri Oct 31 09:52:16 2014 -0400
    clear nocommit here

commit 85dd2cedf7a7994bed871ac421cfda06aaf5c0a5
Author: Simon Willnauer    Date: Fri Oct 31 14:46:17 2014 +0100
    fix CompletionPostingsFormatTest

commit c0f3781f616c9b0ee3b5c4d0998810f595868649
Author: Robert Muir    Date: Fri Oct 31 09:38:00 2014 -0400
    add tests for these analyzers

commit 51f9999b4ad079c283ae762c862fd0e22d00445f
Author: Simon Willnauer    Date: Fri Oct 31 14:10:26 2014 +0100
    remove nocommit - this is not an issue

commit fd1388fa03e622b0738601c8aeb2dbf7949a6dd2
Author: Martijn van Groningen    Date: Fri Oct 31 14:07:01 2014 +0100
    Remove redundant null check

commit 3d6dd51b0927337ba941a235446b22e8cd500dc3
Author: Martijn van Groningen    Date: Fri Oct 31 14:01:37 2014 +0100
    Removed the work around to prevent p/c error when invoking #iterator()
    twice, because the custom query filter wrapper now doesn't transform the
    result to a cache doc id set any more. I think the transforming to a
    cachable doc id set in CustomQueryWrappingFilter isn't needed at all,
    because we use the DocIdSet only once and because of that is just slowed
    things down.

commit 821832a537e00cd1216064b379df3e01d2911d3a
Author: Simon Willnauer    Date: Fri Oct 31 13:54:33 2014 +0100
    one more nocommit

commit 77eb9ea4c4ea50afb2680c29682ddcb3851a9d4f
Author: Martijn van Groningen    Date: Fri Oct 31 13:52:29 2014 +0100
    Remove cast

commit a400573c034ed602221f801b20a58a9186a06eae
Author: Simon Willnauer    Date: Fri Oct 31 13:49:24 2014 +0100
    fix stop filter

commit 51746087cf8ec34c4d20aa05ba8dbff7b3b43eec
Author: Simon Willnauer    Date: Fri Oct 31 13:21:36 2014 +0100
    fix changed semantics of FBS.nextSetBit to check for NO_MORE_DOCS

commit 8d0a4e2511310f1293860823fe3ba80ac771bbe3
Author: Robert Muir    Date: Fri Oct 31 08:13:44 2014 -0400
    do the bogus cast differently

commit 46a5cc5732dea096c0c80ae5ce42911c9c51e44e
Author: Simon Willnauer    Date: Fri Oct 31 13:00:16 2014 +0100
    I hate it but P/C now passes

commit 580c0c2f82bbeacf217e594f22312b11d1bdb839 (Merge: a9d3c00 1645434)
Author: Robert Muir    Date: Fri Oct 31 06:54:31 2014 -0400
    fix nocommit/classcast

commit a9d3c004d62fe04989f49a897e6ff84973c06eb9
Author: Adrien Grand    Date: Fri Oct 31 08:49:31 2014 +0100
    Update TODO.
commit aa75af0b407792aeef32017f03a6f442ed970baa
Author: Robert Muir    Date: Thu Oct 30 19:18:25 2014 -0400
    clear obselete nocommits from lucene bump

commit d438534cf41fcbe2d88070e2f27c994625e082c2
Author: Robert Muir    Date: Thu Oct 30 18:53:20 2014 -0400
    throw classcastexception when ES abuses regular filtercache for nested docs

commit 2c751f3a8feda43ec127c34769b069de21f3d16f
Author: Robert Muir    Date: Thu Oct 30 18:31:34 2014 -0400
    bump lucene revision, fix tests

commit d6ef7f6304ae262bf6228a7d661b2a452df332be
Author: Simon Willnauer    Date: Thu Oct 30 22:37:58 2014 +0100
    fix merge problems

commit de9d361f88a9ce6bb3fba85285de41f223c95767 (Merge: 41f6aab f6b37a3)
Author: Simon Willnauer    Date: Thu Oct 30 22:28:59 2014 +0100
    Merge branch 'master' into enhancement/lucene_5_0_upgrade
    Conflicts:
        pom.xml
        src/main/java/org/elasticsearch/Version.java
        src/main/java/org/elasticsearch/gateway/local/state/meta/MetaDataStateFormat.java

commit 41f6aab388aa80c40b08a2facab2617576203a0d
Author: Simon Willnauer    Date: Thu Oct 30 17:48:46 2014 +0100
    fix potiential NPE

commit c4428b12e1ae838b91e847df8b4a8be7f49e10f4
Author: Simon Willnauer    Date: Thu Oct 30 17:38:46 2014 +0100
    don't advance iterator in a match(doc) method

commit 28ab948e99e3ea4497c9b1e468384806ba7e1790
Author: Simon Willnauer    Date: Thu Oct 30 17:34:58 2014 +0100
    don't advance iterator in a match(doc) method

commit eb0f33f6634fadfcf4b2bf7327400e568f0427bb
Author: Simon Willnauer    Date: Thu Oct 30 16:55:54 2014 +0100
    fix GeoUtilsTest

commit 7f711fe3eaf73b6c2268cf42d5a41132a61ad831
Author: Simon Willnauer    Date: Thu Oct 30 16:43:16 2014 +0100
    Use a dedicated default index option if field type is not indexed by default

commit 78e3f37ab779e3e1b25b45a742cc86ab5f975149
Author: Robert Muir    Date: Thu Oct 30 10:56:14 2014 -0400
    disable this test with AwaitsFix to reduce noise

commit 9a590f563c8e03a99ecf0505c92d12d7ab20d11d
Author: Simon Willnauer    Date: Thu Oct 30 09:38:49 2014 +0100
    fix lucene version

commit abe3ca1d8bb6b5101b545198f59aec44bacfa741
Author: Simon Willnauer    Date: Thu Oct 30 09:35:05 2014 +0100
    fix AnalyzingCompletionLookupProvider to wrok with new codec API

commit 464293b245852d60bde050c6d3feb5907dcfbf5f
Author: Robert Muir    Date: Thu Oct 30 00:26:00 2014 -0400
    don't try to write stuff to tests class directory

commit 031cc6c19f4fe4423a034b515f77e5a0e282a124
Author: Robert Muir    Date: Thu Oct 30 00:12:36 2014 -0400
    AwaitsFix these known issues to reduce noise

commit 4600d51891e35847f2d344247d6f915a0605c0d1
Author: Robert Muir    Date: Thu Oct 30 00:06:53 2014 -0400
    openbitset lives on

commit 8492bae056249e2555d24acd55f1046b66a667c4
Author: Robert Muir    Date: Wed Oct 29 23:42:54 2014 -0400
    fixes for filter tests

commit 31f24ce4efeda31f97eafdb122346c7047a53bf2
Author: Robert Muir    Date: Wed Oct 29 23:12:38 2014 -0400
    don't use fieldcache

commit 8480789942fdff14a6d2b2cd8134502fe62f20c8
Author: Robert Muir    Date: Wed Oct 29 23:04:29 2014 -0400
    ancient index no longer supported

commit 02e78dc7ebdd827533009f542582e8db44309c57
Author: Simon Willnauer    Date: Wed Oct 29 23:37:02 2014 +0100
    fix more tests

commit ff746c6df23c50b3f3ec24922413b962c8983080
Author: Simon Willnauer    Date: Wed Oct 29 23:08:19 2014 +0100
    fix all mapper

commit e4fb84b517107b25cb064c66f83c9aa814a311b2
Author: Simon Willnauer    Date: Wed Oct 29 22:55:54 2014 +0100
    fix distributor tests and cut over to FileStore API

commit 20c850e2cfe3210cd1fb9e232afed8d4ac045857
Author: Simon Willnauer    Date: Wed Oct 29 22:42:18 2014 +0100
    use DOCS_ONLY if index=true and current options == null
commit 44169c108418413cfe51f5ce23ab82047463e4c2
Author: Simon Willnauer    Date: Wed Oct 29 22:33:36 2014 +0100
    Fix index=yes|no settings in mappers

commit a3c5f77987461a18121156ed345d42ded301c566
Author: Simon Willnauer    Date: Wed Oct 29 21:51:41 2014 +0100
    fix several field mappers conversion from setIndexed to indexOptions

commit df84d736908e88a031d710f98e222be68ae96af1
Author: Simon Willnauer    Date: Wed Oct 29 21:33:35 2014 +0100
    fix SourceFieldMapper to be not indexed

commit b2bf01d12a8271a31fb2df601162d0e89924c8f5
Author: Simon Willnauer    Date: Wed Oct 29 21:23:08 2014 +0100
    Cut over to .liv files in store and corruption tests

commit 619004df436f9ef05d24bef1b6a7f084c6b0ad75
Author: Simon Willnauer    Date: Wed Oct 29 17:05:52 2014 +0100
    fix more tests

commit b7ed653a8b464de446e00456bce0a89e47627c38
Author: Simon Willnauer    Date: Wed Oct 29 16:19:08 2014 +0100
    [STORE] Add dedicated method to write temporary files
    Recovery writes temporary files which might not end up in the right
    distributor directories today. This commit adds a dedicated API that
    allows specifying the target file name in order to create the tempoary
    file in the correct directory.

commit 7d574659f6ae04adc2b857146ad0d8d56ca66f12
Author: Robert Muir    Date: Wed Oct 29 10:28:49 2014 -0400
    add some leniency to temporary bogus method

commit f97022ea7c2259f7a5cf97d924c59ed75ab65b32
Author: Robert Muir    Date: Wed Oct 29 10:24:17 2014 -0400
    fix MultiCollector bug

commit b760533128c2b4eb10ad76e9689ef714293dd819
Author: Simon Willnauer    Date: Wed Oct 29 14:56:08 2014 +0100
    CheckIndex is now closeable we need to close it

commit 9dae9fb6d63546a6c2427be2a2d5c8358f5b1934
Author: Simon Willnauer    Date: Wed Oct 29 14:45:11 2014 +0100
    s/Lucene51/Lucene50

commit 7aea9b86856a8c1b06a08e7c312ede1168af1287
Author: Simon Willnauer    Date: Wed Oct 29 14:42:30 2014 +0100
    fix BloomFilterPostingsFormat

commit 16fea6fe842e88665d59cc091e8224e8dc6ce08c
Author: Simon Willnauer    Date: Wed Oct 29 14:41:16 2014 +0100
    fix some codec format issues

commit 3d77aa97dd2c4012b63befef3f2ba2525965e8a6
Author: Simon Willnauer    Date: Wed Oct 29 14:30:43 2014 +0100
    fix CodecTests

commit 6ef823b1fde25657438ace1aabd9d552d6ae215e
Author: Simon Willnauer    Date: Wed Oct 29 14:26:47 2014 +0100
    make it compile

commit 9991eee1fe99435118d4dd42b297ffc83fce5ec5
Author: Robert Muir    Date: Wed Oct 29 09:12:43 2014 -0400
    add an ugly hack for TopHitsAggregator for now

commit 03e768a01fcae6b1f4cb50bcceec7d42977ac3e6
Author: Simon Willnauer    Date: Wed Oct 29 14:01:02 2014 +0100
    cut over ES090PostingsFormat

commit 463d281faadb794fdde3b469326bdaada25af048 (Merge: 0f8740a 8eac79c)
Author: Robert Muir    Date: Wed Oct 29 08:30:36 2014 -0400
    Merge branch 'master' into enhancement/lucene_5_0_upgrade

commit 0f8740a782455a63524a5a82169f6bbbfc613518
Author: Robert Muir    Date: Wed Oct 29 01:00:15 2014 -0400
    fix/hack remaining filter and analysis issues

commit df534488569da13b31d66e581456dfd4b55156b9
Author: Robert Muir    Date: Tue Oct 28 23:11:47 2014 -0400
    fix ngrams / openbitset usage

commit 11f5dc3b9887f4da80a0fa1818e1350b30599329
Author: Robert Muir    Date: Tue Oct 28 22:42:44 2014 -0400
    hack over sort comparators

commit 4ebdc754350f512596f6a02770d223e9f5f7975a
Author: Robert Muir    Date: Tue Oct 28 21:27:07 2014 -0400
    compiler errors < 100

commit 2d60c9e29de48ccb0347dd87f7201f47b67b83a0
Author: Robert Muir    Date: Tue Oct 28 03:13:08 2014 -0400
    clear some nocommits around ram usage

commit aaf47fe6c0aabcfb2581dd456fc50edf871da758
Author: Robert Muir    Date: Mon Oct 27 12:27:34 2014 -0400
    migrate fieldinfo handling
commit ef6ed6d15d8def71cd880d97249678136cd29fe3
Author: Robert Muir    Date: Mon Oct 27 12:07:13 2014 -0400
    more simple fixes

commit f475e1048ae697dd9da5bd9da445102b0b7bc5b3
Author: Robert Muir    Date: Mon Oct 27 11:58:21 2014 -0400
    more fielddata ram accounting fixes

commit 16b4239eaa9b4262df258257df4f31d39f28a3a2
Author: Simon Willnauer    Date: Mon Oct 27 16:47:32 2014 +0100
    add missing file

commit 5b542fa2a6da81e36a0c35b8e891a1d8bc58f663
Author: Simon Willnauer    Date: Mon Oct 27 16:43:29 2014 +0100
    cut over completion posting formats - still some nocommits

commit ecdea49404c4ec4e1b78fb54575825f21b4e096e
Author: Robert Muir    Date: Mon Oct 27 11:21:09 2014 -0400
    fielddata accountable fixes

commit d43da265718917e20c8264abd43342069198fe9c
Author: Simon Willnauer    Date: Mon Oct 27 16:19:53 2014 +0100
    cut over BloomFilterPostings to new API

commit 29b192ba621c14820175775d01242162b88bd364
Author: Robert Muir    Date: Mon Oct 27 10:22:51 2014 -0400
    fix more analyzers

commit 74b4a0c5283e323a7d02490df469497c722780d2
Author: Robert Muir    Date: Mon Oct 27 09:54:25 2014 -0400
    fix tests

commit 554084ccb4779dd6b1c65fa7212ad1f64f3a6968
Author: Simon Willnauer    Date: Mon Oct 27 14:51:48 2014 +0100
    maintain supressed exceptions on CorruptIndexException

commit cf882d9112c5e8ef1e9f2b0f800f7aa59001a4f2
Author: Simon Willnauer    Date: Mon Oct 27 14:47:17 2014 +0100
    commitOnClose=false

commit ebb2a9189ab2f459b7c6c9985be610fd90dfe410
Author: Simon Willnauer    Date: Mon Oct 27 14:46:06 2014 +0100
    cut over indexwriter closeing in InternalEngine

commit cd21b3d4706f0b562bd37792d077d60832aff65f
Author: Simon Willnauer    Date: Mon Oct 27 14:38:10 2014 +0100
    fix constant

commit f93f900c4a1c90af3a21a4af5735a7536423fe28
Author: Robert Muir    Date: Mon Oct 27 09:50:49 2014 -0400
    fix test

commit a9a752940b1ab4699a6a08ba8b34afca82b843fe
Author: Martijn van Groningen    Date: Mon Oct 27 09:26:18 2014 +0100
    Be explicit about the index options

commit d9ee815babd030fa2ceaec9f467c105ee755bf6b
Author: Simon Willnauer    Date: Sun Oct 26 20:03:44 2014 +0100
    cut over store and directory

commit b3f5c8e39039dd8f5caac0c4dd1fc3b1116e64ca
Author: Robert Muir    Date: Sun Oct 26 13:08:39 2014 -0400
    more test fixes

commit 8842f2684e3606aae0860c27f7a4c53e273d47fb
Author: Robert Muir    Date: Sun Oct 26 12:14:52 2014 -0400
    tests manual labor

commit c43de5aec337919a3fdc3638406dff17fc80bc98
Author: Robert Muir    Date: Sun Oct 26 11:04:13 2014 -0400
    BytesRef -> BytesRefBuilder

commit 020c0d087a2f37566a1db390b0e044ebab030138
Author: Martijn van Groningen    Date: Sun Oct 26 15:53:37 2014 +0100
    Moved over to BitSetFilter

commit 48dd1b909e6c52cef733961c9ecebfe4f67109fe
Author: Martijn van Groningen    Date: Sun Oct 26 15:53:11 2014 +0100
    Left over Collector api change in ScanContext

commit 6ec248ef63f262bcda400181b838fd9244752625
Author: Martijn van Groningen    Date: Sun Oct 26 15:47:40 2014 +0100
    Moved indexed() over to indexOptions != null or indexOptions == null

commit 9937aebfd8546ae4bb652cd976b3b43ac5ab7a63
Author: Martijn van Groningen    Date: Sun Oct 26 13:26:31 2014 +0100
    Fixed many compile errors. Mainly around the breaking Collector api
    change in 5.0.

commit fec32c4abc0e3309cf34260c8816305a6f820c9e
Author: Robert Muir    Date: Sat Oct 25 11:22:17 2014 -0400
    more easy fixes

commit dab22531d801800d17a65dc7c9464148ce8ebffd
Author: Robert Muir    Date: Sat Oct 25 09:33:41 2014 -0400
    more progress

commit 414767e9a955010076b0497cc4f6d0c1850b48d3
Author: Robert Muir    Date: Sat Oct 25 06:33:17 2014 -0400
    more progress

commit ad9d969fddf139a8830254d3eb36a908ba87cc12
Author: Robert Muir    Date: Fri Oct 24 14:28:01 2014 -0400
    current state of fun

commit 464475eecb0be15d7d084135ed16051f76a7e521
Author: Robert Muir    Date: Fri Oct 24 11:42:41 2014 -0400
    bump to 5.0 snapshot
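Two commits in the log above, "Make the and filter use the cost API." and "Cut over XBooleanFilter to BitDocIdSet.Builder.", lean on DocIdSetIterator.cost(). The following is a hypothetical sketch, not code from this patch (CostOrderedConjunction and forEachMatch are made-up names), of why clause order stops mattering once conjunction clauses are sorted by cost: the cheapest, most selective iterator always leads and the others only confirm.

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.Comparator;
import java.util.function.IntConsumer;

import org.apache.lucene.search.DocIdSetIterator;

final class CostOrderedConjunction {

    static void forEachMatch(DocIdSetIterator[] clauses, IntConsumer onMatch) throws IOException {
        // cost() is an upper bound on the number of matching docs; after this
        // sort, clauses[0] is the most selective iterator and drives the loop.
        Arrays.sort(clauses, Comparator.comparingLong(DocIdSetIterator::cost));
        int doc = clauses[0].nextDoc();
        while (doc != DocIdSetIterator.NO_MORE_DOCS) {
            boolean agreed = true;
            for (int i = 1; i < clauses.length; i++) {
                int other = clauses[i].docID() < doc ? clauses[i].advance(doc) : clauses[i].docID();
                if (other != doc) {
                    // A clause skipped past doc: leapfrog the lead iterator.
                    doc = clauses[0].advance(other);
                    agreed = false;
                    break;
                }
            }
            if (agreed) {
                onMatch.accept(doc);
                doc = clauses[0].nextDoc();
            }
        }
    }
}
```

Only slow random-access filters sit outside this scheme, which matches the caveat in the commit message: they expose no cheap iterator to sort, so they are applied lazily after the iterator-backed clauses.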
---
 .settings/org.eclipse.jdt.ui.prefs | 2 +-
 dev-tools/forbidden/core-signatures.txt | 4 -
 .../analysis/analyzers/lang-analyzer.asciidoc | 10 -
 .../query-dsl/queries/filtered-query.asciidoc | 5 +-
 pom.xml | 14 +-
 .../lucene/analysis/PrefixAnalyzer.java | 5 +-
 .../miscellaneous/UniqueTokenFilter.java | 4 +-
 .../lucene/queries/BlendedTermQuery.java | 2 +-
 .../classic/ExistsFieldQueryExtension.java | 4 +-
 .../classic/MapperQueryParser.java | 18 +-
 .../classic/MissingFieldQueryExtension.java | 4 +-
 .../classic/QueryParserSettings.java | 2 +-
 .../search/XFilteredDocIdSetIterator.java | 108 +++++
 .../CustomPostingsHighlighter.java | 4 +-
 .../XPostingsHighlighter.java | 10 +-
 .../analyzing/XAnalyzingSuggester.java | 20 +-
 .../vectorhighlight/CustomFieldQuery.java | 14 +-
 .../ElasticsearchCorruptionException.java | 12 +-
 .../org/elasticsearch/ExceptionsHelper.java | 17 +-
 src/main/java/org/elasticsearch/Version.java | 75 ++--
 .../mlt/TransportMoreLikeThisAction.java | 3 +-
 .../action/termvector/TermVectorFields.java | 10 -
 .../common/blobstore/fs/FsBlobContainer.java | 4 +-
 .../common/io/FileSystemUtils.java | 2 +-
 .../elasticsearch/common/lucene/Lucene.java | 42 +-
 .../common/lucene/MinimumScoreCollector.java | 20 +-
 .../common/lucene/MultiCollector.java | 40 +-
 .../common/lucene/ReaderContextAware.java | 4 +-
 .../common/lucene/SegmentReaderUtils.java | 18 +-
 .../common/lucene/all/AllField.java | 2 +-
 .../common/lucene/all/AllTermQuery.java | 6 +-
 .../common/lucene/docset/AndDocIdSet.java | 126 +++---
 .../common/lucene/docset/ContextDocIdSet.java | 8 +-
 .../common/lucene/docset/DocIdSets.java | 64 +--
 .../common/lucene/docset/MatchDocIdSet.java | 170 --------
 .../common/lucene/docset/OrDocIdSet.java | 7 +-
 .../lucene/index/FilterableTermsEnum.java | 52 +--
 .../common/lucene/search/AndFilter.java | 9 +-
 .../search/ApplyAcceptedDocsFilter.java | 217 ----------
 .../lucene/search/FilteredCollector.java | 20 +-
 .../common/lucene/search/LimitFilter.java | 22 +-
 .../lucene/search/MatchAllDocsFilter.java | 7 +-
 .../lucene/search/MatchNoDocsFilter.java | 4 +-
 .../lucene/search/MatchNoDocsQuery.java | 6 +-
 .../lucene/search/MultiPhrasePrefixQuery.java | 4 +-
 .../common/lucene/search/NoCacheFilter.java | 4 +-
 .../common/lucene/search/NoopCollector.java | 8 +-
 .../common/lucene/search/NotFilter.java | 14 +-
 .../common/lucene/search/OrFilter.java | 14 +-
 .../common/lucene/search/Queries.java | 6 +-
 .../common/lucene/search/RegexpFilter.java | 6 +-
 .../common/lucene/search/XBooleanFilter.java | 398 ++++++++----------
 .../common/lucene/search/XCollector.java | 6 +-
 .../lucene/search/XConstantScoreQuery.java | 44 --
 .../common/lucene/search/XFilteredQuery.java | 261 ------------
 .../search/function/BoostScoreFunction.java | 4 +-
 .../function/FieldValueFactorFunction.java | 4 +-
 .../function/FiltersFunctionScoreQuery.java | 6 +-
 .../search/function/FunctionScoreQuery.java | 6 +-
 .../search/function/RandomScoreFunction.java | 4 +-
 .../lucene/search/function/ScoreFunction.java | 4 +-
 .../search/function/ScriptScoreFunction.java | 4 +-
 .../search/function/WeightFactorFunction.java | 6 +-
 .../lucene/store/OutputStreamIndexOutput.java | 5 -
 .../uid/PerThreadIDAndVersionLookup.java | 13 +-
 .../common/lucene/uid/Versions.java | 6 +-
 .../common/util/AbstractArray.java | 8 +
 .../common/util/BloomFilter.java | 7 +
 .../elasticsearch/env/NodeEnvironment.java | 2 +-
 .../state/meta/CorruptStateException.java | 5 +-
 .../local/state/meta/MetaDataStateFormat.java | 18 +-
 .../index/analysis/Analysis.java | 38 +-
 .../analysis/ArabicAnalyzerProvider.java | 6 +-
 .../analysis/ArmenianAnalyzerProvider.java | 6 +-
 .../analysis/BasqueAnalyzerProvider.java | 6 +-
 .../analysis/BrazilianAnalyzerProvider.java | 6 +-
 .../BrazilianStemTokenFilterFactory.java | 2 +-
 .../analysis/BulgarianAnalyzerProvider.java | 6 +-
 .../analysis/CatalanAnalyzerProvider.java | 6 +-
 .../analysis/ChineseAnalyzerProvider.java | 17 +-
 .../index/analysis/CjkAnalyzerProvider.java | 5 +-
 .../analysis/ClassicTokenizerFactory.java | 4 +-
 .../CommonGramsTokenFilterFactory.java | 4 +-
 .../index/analysis/CustomAnalyzer.java | 4 +-
 .../index/analysis/CzechAnalyzerProvider.java | 6 +-
 .../analysis/DanishAnalyzerProvider.java | 6 +-
 .../index/analysis/DutchAnalyzerProvider.java | 6 +-
 .../analysis/DutchStemTokenFilterFactory.java | 8 +-
 .../analysis/EdgeNGramTokenFilterFactory.java | 48 ++-
 .../analysis/EdgeNGramTokenizerFactory.java | 12 +-
 .../analysis/ElisionTokenFilterFactory.java | 2 +-
 .../analysis/EnglishAnalyzerProvider.java | 6 +-
 .../analysis/FinnishAnalyzerProvider.java | 6 +-
 .../analysis/FrenchAnalyzerProvider.java | 6 +-
 .../FrenchStemTokenFilterFactory.java | 8 +-
 .../analysis/GalicianAnalyzerProvider.java | 6 +-
 .../analysis/GermanAnalyzerProvider.java | 6 +-
 .../GermanStemTokenFilterFactory.java | 2 +-
 .../index/analysis/GreekAnalyzerProvider.java | 4 +-
 .../index/analysis/HindiAnalyzerProvider.java | 6 +-
 .../analysis/HungarianAnalyzerProvider.java | 6 +-
 .../analysis/IndonesianAnalyzerProvider.java | 6 +-
 .../index/analysis/IrishAnalyzerProvider.java | 6 +-
 .../analysis/ItalianAnalyzerProvider.java | 6 +-
 .../analysis/KeepTypesFilterFactory.java | 2 +-
 .../index/analysis/KeepWordFilterFactory.java | 16 +-
 .../KeywordMarkerTokenFilterFactory.java | 4 +-
 .../analysis/KeywordTokenizerFactory.java | 6 +-
 .../analysis/LatvianAnalyzerProvider.java | 6 +-
 .../analysis/LengthTokenFilterFactory.java | 20 +-
 .../analysis/LetterTokenizerFactory.java | 6 +-
 .../analysis/LowerCaseTokenFilterFactory.java | 4 +-
 .../analysis/LowerCaseTokenizerFactory.java | 6 +-
 .../analysis/NGramTokenFilterFactory.java | 9 +-
 .../index/analysis/NGramTokenizerFactory.java | 12 +-
 .../analysis/NorwegianAnalyzerProvider.java | 6 +-
 .../index/analysis/NumericAnalyzer.java | 7 +-
 .../index/analysis/NumericDateAnalyzer.java | 8 +-
 .../index/analysis/NumericDateTokenizer.java | 5 +-
 .../index/analysis/NumericDoubleAnalyzer.java | 5 +-
 .../analysis/NumericDoubleTokenizer.java | 5 +-
 .../index/analysis/NumericFloatAnalyzer.java | 5 +-
 .../index/analysis/NumericFloatTokenizer.java | 5 +-
 .../analysis/NumericIntegerAnalyzer.java | 5 +-
 .../analysis/NumericIntegerTokenizer.java | 5 +-
 .../index/analysis/NumericLongAnalyzer.java | 5 +-
 .../index/analysis/NumericLongTokenizer.java | 4 +-
 .../index/analysis/NumericTokenizer.java | 5 +-
 .../PathHierarchyTokenizerFactory.java | 8 +-
 .../index/analysis/PatternAnalyzer.java | 56 +++
 .../analysis/PatternAnalyzerProvider.java | 35 +-
 .../analysis/PatternTokenizerFactory.java | 4 +-
 .../analysis/PersianAnalyzerProvider.java | 4 +-
 .../analysis/PortugueseAnalyzerProvider.java | 6 +-
 .../analysis/ReverseTokenFilterFactory.java | 2 +-
 .../analysis/RomanianAnalyzerProvider.java | 6 +-
 .../analysis/RussianAnalyzerProvider.java | 6 +-
 .../analysis/SimpleAnalyzerProvider.java | 3 +-
 .../index/analysis/SnowballAnalyzer.java | 86 ++++
 .../analysis/SnowballAnalyzerProvider.java | 9 +-
 .../analysis/SoraniAnalyzerProvider.java | 6 +-
 .../analysis/SpanishAnalyzerProvider.java | 6 +-
 .../analysis/StandardAnalyzerProvider.java | 5 +-
 .../analysis/StandardHtmlStripAnalyzer.java | 42 +-
 .../StandardHtmlStripAnalyzerProvider.java | 5 +-
 .../analysis/StandardTokenFilterFactory.java | 2 +-
 .../analysis/StandardTokenizerFactory.java | 18 +-
 .../analysis/StemmerTokenFilterFactory.java | 2 +-
 .../index/analysis/StopAnalyzerProvider.java | 5 +-
 .../analysis/StopTokenFilterFactory.java | 22 +-
 .../analysis/SwedishAnalyzerProvider.java | 6 +-
 .../analysis/SynonymTokenFilterFactory.java | 6 +-
 .../index/analysis/ThaiAnalyzerProvider.java | 3 +-
 .../index/analysis/ThaiTokenizerFactory.java | 6 +-
 .../index/analysis/TokenizerFactory.java | 2 +-
 .../analysis/TrimTokenFilterFactory.java | 12 +-
 .../analysis/TurkishAnalyzerProvider.java | 6 +-
 .../UAX29URLEmailTokenizerFactory.java | 18 +-
 .../analysis/UpperCaseTokenFilterFactory.java | 2 +-
 .../analysis/WhitespaceAnalyzerProvider.java | 3 +-
 .../analysis/WhitespaceTokenizerFactory.java | 6 +-
 .../WordDelimiterTokenFilterFactory.java | 6 +-
 ...bstractCompoundWordTokenFilterFactory.java | 2 +-
 ...tionaryCompoundWordTokenFilterFactory.java | 12 +-
 ...enationCompoundWordTokenFilterFactory.java | 13 +-
 .../elasticsearch/index/cache/IndexCache.java | 18 +-
 .../index/cache/IndexCacheModule.java | 4 +-
 .../BitsetFilterCache.java} | 93 ++--
 .../BitsetFilterCacheModule.java} | 11 +-
 .../ShardBitsetFilterCache.java} | 6 +-
 .../ShardBitsetFilterCacheModule.java} | 6 +-
 .../cache/filter/support/CacheKeyFilter.java | 9 +-
 .../filter/weighted/WeightedFilterCache.java | 10 +-
 .../cache/fixedbitset/FixedBitSetFilter.java | 37 --
 .../PerFieldMappingPostingFormatCodec.java | 10 +-
 .../docvaluesformat/DocValuesFormats.java | 3 +-
 .../BloomFilterPostingsFormat.java | 218 +++++-----
 .../DefaultPostingsFormatProvider.java | 8 +-
 .../Elasticsearch090PostingsFormat.java | 53 ++-
 .../codec/postingsformat/PostingFormats.java | 3 +-
 .../elasticsearch/index/engine/Engine.java | 8 +-
 .../index/engine/SegmentsStats.java | 24 +-
 .../index/engine/internal/InternalEngine.java | 78 ++--
 .../index/engine/internal/LiveVersionMap.java | 7 +
 .../index/engine/internal/VersionValue.java | 7 +
 .../index/fielddata/AtomicFieldData.java | 2 +-
 .../index/fielddata/IndexFieldData.java | 28 +-
 .../index/fielddata/IndexFieldDataCache.java | 6 +-
 .../index/fielddata/NumericDoubleValues.java | 24 ++
 .../BytesRefFieldComparatorSource.java | 20 +-
 .../DoubleValuesComparatorSource.java | 23 +-
 .../FloatValuesComparatorSource.java | 21 +-
 .../LongValuesComparatorSource.java | 20 +-
 .../GlobalOrdinalsIndexFieldData.java | 11 +-
 .../InternalGlobalOrdinalsIndexFieldData.java | 10 +-
 .../fielddata/ordinals/MultiOrdinals.java | 13 +
 .../fielddata/ordinals/OrdinalsBuilder.java | 10 +-
 .../ordinals/SinglePackedOrdinals.java | 9 +
 .../AbstractAtomicGeoPointFieldData.java | 9 +-
 .../AbstractAtomicOrdinalsFieldData.java | 9 +
 .../AbstractAtomicParentChildFieldData.java | 7 +
 .../plain/AbstractIndexFieldData.java | 4 +-
 .../plain/AbstractIndexOrdinalsFieldData.java | 6 +-
 .../plain/AtomicDoubleFieldData.java | 8 +
 .../fielddata/plain/AtomicLongFieldData.java | 8 +
 .../plain/BinaryDVAtomicFieldData.java | 13 +-
 .../plain/BinaryDVIndexFieldData.java | 6 +-
 .../plain/BinaryDVNumericIndexFieldData.java | 18 +-
 .../plain/BytesBinaryDVAtomicFieldData.java | 7 +
 .../plain/BytesBinaryDVIndexFieldData.java | 6 +-
 .../plain/DisabledIndexFieldData.java | 4 +-
 .../plain/DoubleArrayIndexFieldData.java | 38 +-
 .../plain/FSTBytesAtomicFieldData.java | 15 +
 .../plain/FSTBytesIndexFieldData.java | 4 +-
 .../plain/FloatArrayIndexFieldData.java | 38 +-
 .../GeoPointBinaryDVAtomicFieldData.java | 7 +
 .../plain/GeoPointBinaryDVIndexFieldData.java | 6 +-
 .../GeoPointCompressedAtomicFieldData.java | 31 +-
 .../GeoPointCompressedIndexFieldData.java | 12 +-
 .../GeoPointDoubleArrayAtomicFieldData.java | 31 +-
 .../GeoPointDoubleArrayIndexFieldData.java | 12 +-
 .../fielddata/plain/IndexIndexFieldData.java | 12 +-
 .../plain/NumericDVIndexFieldData.java | 13 +-
 .../plain/PackedArrayIndexFieldData.java | 53 ++-
 .../plain/PagedBytesAtomicFieldData.java | 15 +
 .../plain/PagedBytesIndexFieldData.java | 14 +-
 .../plain/ParentChildAtomicFieldData.java | 9 +
 .../plain/ParentChildIndexFieldData.java | 25 +-
 .../plain/ParentChildIntersectTermsEnum.java | 9 +-
 .../plain/SortedNumericDVIndexFieldData.java | 37 +-
 .../SortedSetDVBytesAtomicFieldData.java | 14 +-
 .../SortedSetDVOrdinalsIndexFieldData.java | 6 +-
 .../gateway/local/LocalIndexShardGateway.java | 2 +-
 .../index/mapper/DocumentMapper.java | 15 +-
 .../index/mapper/MapperService.java | 3 +-
 .../index/mapper/ParseContext.java | 3 +-
 .../mapper/core/AbstractFieldMapper.java | 42 +-
 .../index/mapper/core/BinaryFieldMapper.java | 7 +-
 .../index/mapper/core/BooleanFieldMapper.java | 6 +-
 .../index/mapper/core/ByteFieldMapper.java | 5 +-
 .../index/mapper/core/DateFieldMapper.java | 3 +-
 .../index/mapper/core/DoubleFieldMapper.java | 8 +-
 .../index/mapper/core/FloatFieldMapper.java | 9 +-
 .../index/mapper/core/IntegerFieldMapper.java | 5 +-
 .../index/mapper/core/LongFieldMapper.java | 5 +-
 .../index/mapper/core/NumberFieldMapper.java | 10 +-
 .../index/mapper/core/ShortFieldMapper.java | 5 +-
 .../index/mapper/core/StringFieldMapper.java | 14 +-
 .../mapper/core/TokenCountFieldMapper.java | 3 +-
 .../index/mapper/core/TypeParsers.java | 8 +-
 .../index/mapper/geo/GeoPointFieldMapper.java | 13 +-
 .../index/mapper/geo/GeoShapeFieldMapper.java | 5 +-
 .../index/mapper/internal/AllFieldMapper.java | 10 +-
 .../mapper/internal/BoostFieldMapper.java | 17 +-
 .../internal/FieldNamesFieldMapper.java | 14 +-
 .../index/mapper/internal/IdFieldMapper.java | 31 +-
 .../mapper/internal/IndexFieldMapper.java | 5 +-
 .../mapper/internal/ParentFieldMapper.java | 6 +-
 .../mapper/internal/RoutingFieldMapper.java | 15 +-
 .../mapper/internal/SourceFieldMapper.java | 5 +-
 .../index/mapper/internal/TTLFieldMapper.java | 5 +-
 .../mapper/internal/TimestampFieldMapper.java | 15 +-
 .../mapper/internal/TypeFieldMapper.java | 30 +-
 .../index/mapper/internal/UidFieldMapper.java | 5 +-
 .../index/mapper/ip/IpFieldMapper.java | 12 +-
 .../policy/ElasticsearchMergePolicy.java | 34 +-
 .../percolator/PercolatorQueriesRegistry.java | 4 +-
 .../percolator/QueriesLoaderCollector.java | 12 +-
 .../index/query/ConstantScoreQueryParser.java | 3 +-
 .../index/query/ExistsFilterParser.java | 3 +-
 .../index/query/FilteredQueryParser.java | 85 +++-
 .../index/query/GeoShapeFilterParser.java | 19 +-
 .../index/query/GeoShapeQueryParser.java | 19 +-
 .../index/query/HasChildFilterParser.java | 12 +-
 .../index/query/HasChildQueryParser.java | 12 +-
 .../index/query/HasParentQueryParser.java | 8 +-
 .../index/query/IndexQueryParserService.java | 8 +-
 .../index/query/MissingFilterParser.java | 4 +-
 .../index/query/NestedFilterParser.java | 21 +-
 .../index/query/NestedQueryParser.java | 33 +-
 .../index/query/QueryParseContext.java | 6 +-
 .../index/query/ScriptFilterParser.java | 19 +-
 .../index/query/TopChildrenQueryParser.java | 10 +-
 .../functionscore/DecayFunctionParser.java | 6 +-
 .../FunctionScoreQueryParser.java | 10 +-
 .../index/query/support/QueryParsers.java | 8 +-
 .../query/support/XContentStructure.java | 6 +-
 .../index/search/FieldDataTermsFilter.java | 26 +-
 .../search/NumericRangeFieldDataFilter.java | 24 +-
 .../child/ChildrenConstantScoreQuery.java | 42 +-
 .../index/search/child/ChildrenQuery.java | 40 +-
 .../child/CustomQueryWrappingFilter.java | 25 +-
 .../child/ParentConstantScoreQuery.java | 29 +-
 .../index/search/child/ParentIdsFilter.java | 49 ++-
 .../index/search/child/ParentQuery.java | 30 +-
 .../index/search/child/TopChildrenQuery.java | 28 +-
 .../index/search/geo/GeoDistanceFilter.java | 14 +-
 .../search/geo/GeoDistanceRangeFilter.java | 14 +-
 .../index/search/geo/GeoPolygonFilter.java | 14 +-
 .../geo/InMemoryGeoBoundingBoxFilter.java | 14 +-
 .../search/nested/IncludeNestedDocsQuery.java | 31 +-
 .../index/search/nested/NestedDocsFilter.java | 4 +-
 .../search/nested/NonNestedDocsFilter.java | 4 +-
 .../index/service/IndexService.java | 4 +-
 .../index/service/InternalIndexService.java | 18 +-
 .../settings/IndexDynamicSettingsModule.java | 1 -
 .../elasticsearch/index/shard/ShardUtils.java | 4 +-
 .../index/shard/service/IndexShard.java | 4 +-
 .../shard/service/InternalIndexShard.java | 51 ++-
 .../BlobStoreIndexShardRepository.java | 10 +-
 .../index/store/DirectoryService.java | 6 +-
 .../index/store/DirectoryUtils.java | 3 -
 .../index/store/DistributorDirectory.java | 101 ++---
 .../org/elasticsearch/index/store/Store.java | 116 +++--
 .../distributor/AbstractDistributor.java | 11 +-
 .../index/store/distributor/Distributor.java | 4 +-
 .../distributor/LeastUsedDistributor.java | 2 +-
 .../RandomWeightedDistributor.java | 2 +-
 .../store/fs/DefaultFsDirectoryService.java | 2 +-
 .../index/store/fs/FsDirectoryService.java | 48 ---
 .../store/fs/MmapFsDirectoryService.java | 2 +-
 .../index/store/fs/NioFsDirectoryService.java | 2 +-
 .../store/fs/SimpleFsDirectoryService.java | 2 +-
 .../index/store/ram/RamDirectoryService.java | 11 -
 .../termvectors/ShardTermVectorService.java | 4 +-
 .../index/translog/Translog.java | 6 +
 .../index/translog/TranslogStreams.java | 4 +-
 .../index/translog/fs/FsTranslog.java | 7 +
 .../analysis/IndicesAnalysisService.java | 32 --
 .../indices/analysis/PreBuiltAnalyzers.java | 177 +++++---
 .../analysis/PreBuiltTokenFilters.java | 63 ++-
 .../indices/analysis/PreBuiltTokenizers.java | 81 ++--
 .../cache/query/IndicesQueryCache.java | 7 +
 .../cache/IndicesFieldDataCache.java | 4 +-
 .../indices/recovery/RecoverySource.java | 8 +-
 .../indices/recovery/RecoveryStatus.java | 30 +-
 .../indices/ttl/IndicesTTLService.java | 10 +-
 .../MultiDocumentPercolatorIndex.java | 4 +-
 .../percolator/PercolateContext.java | 10 +-
 .../percolator/PercolatorService.java | 23 +-
 .../percolator/QueryCollector.java | 23 +-
 .../SingleDocumentPercolatorIndex.java | 3 +-
 .../rest/action/cat/RestIndicesAction.java | 4 +-
 .../rest/action/cat/RestNodesAction.java | 2 +-
 .../rest/action/cat/RestShardsAction.java | 2 +-
 .../script/AbstractSearchScript.java | 4 +-
 .../script/expression/ExpressionScript.java | 4 +-
 .../expression/FieldDataValueSource.java | 4 +-
 .../ReplaceableConstValueSource.java | 4 +-
 .../groovy/GroovyScriptEngineService.java | 4 +-
 .../elasticsearch/search/MultiValueMode.java | 14 +-
 .../elasticsearch/search/SearchService.java | 10 +-
 .../search/aggregations/AggregationPhase.java | 15 +-
 .../search/aggregations/Aggregator.java | 4 +-
 .../aggregations/AggregatorFactories.java | 4 +-
 .../search/aggregations/BucketCollector.java | 6 +-
 .../FilteringBucketCollector.java | 4 +-
 .../aggregations/NonCollectingAggregator.java | 4 +-
 .../RecordingPerReaderBucketCollector.java | 8 +-
 .../bucket/DeferringBucketCollector.java | 6 +-
 .../children/ParentToChildrenAggregator.java | 19 +-
 .../bucket/filter/FilterAggregator.java | 4 +-
 .../bucket/filters/FiltersAggregator.java | 4 +-
 .../bucket/geogrid/GeoHashGridAggregator.java | 4 +-
 .../bucket/global/GlobalAggregator.java | 4 +-
 .../bucket/histogram/HistogramAggregator.java | 4 +-
 .../bucket/missing/MissingAggregator.java | 4 +-
 .../bucket/nested/NestedAggregator.java | 36 +-
 .../nested/ReverseNestedAggregator.java | 12 +-
 .../bucket/range/RangeAggregator.java | 4 +-
 .../range/geodistance/GeoDistanceParser.java | 4 +-
 .../GlobalOrdinalsStringTermsAggregator.java | 6 +-
 .../bucket/terms/LongTermsAggregator.java | 6 +-
 .../bucket/terms/StringTermsAggregator.java | 6 +-
 .../metrics/avg/AvgAggregator.java | 4 +-
 .../cardinality/CardinalityAggregator.java | 11 +-
 .../cardinality/HyperLogLogPlusPlus.java | 29 +-
 .../geobounds/GeoBoundsAggregator.java | 4 +-
 .../metrics/max/MaxAggregator.java | 4 +-
 .../metrics/min/MinAggregator.java | 4 +-
 .../AbstractPercentilesAggregator.java | 4 +-
 .../scripted/ScriptedMetricAggregator.java | 4 +-
 .../metrics/stats/StatsAggegator.java | 4 +-
 .../extended/ExtendedStatsAggregator.java | 4 +-
 .../metrics/sum/SumAggregator.java | 4 +-
 .../metrics/tophits/TopHitsAggregator.java | 47 ++-
 .../metrics/tophits/TopHitsContext.java | 6 +-
 .../valuecount/ValueCountAggregator.java | 4 +-
 .../support/AggregationContext.java | 8 +-
 .../aggregations/support/ValuesSource.java | 16 +-
 .../search/dfs/CachedDfSource.java | 6 +-
 .../search/fetch/FetchPhase.java | 46 +-
 .../search/fetch/FetchSubPhase.java | 12 +-
 .../search/highlight/CustomQueryScorer.java | 6 +-
 .../search/highlight/HighlightPhase.java | 6 +-
 .../search/highlight/PostingsHighlighter.java | 26 +-
 .../FragmentBuilderHelper.java | 4 +-
 .../SourceScoreOrderFragmentsBuilder.java | 4 +-
 .../SourceSimpleFragmentsBuilder.java | 4 +-
 .../search/internal/ContextIndexSearcher.java | 15 +-
 .../search/internal/DefaultSearchContext.java | 14 +-
 .../search/internal/SearchContext.java | 4 +-
 .../search/lookup/DocLookup.java | 8 +-
 .../search/lookup/FieldsLookup.java | 8 +-
 .../search/lookup/IndexField.java | 4 +-
 .../search/lookup/IndexFieldTerm.java | 6 +-
 .../search/lookup/IndexLookup.java | 6 +-
 .../search/lookup/SearchLookup.java | 5 +-
 .../search/lookup/SourceLookup.java | 8 +-
 .../search/scan/ScanContext.java | 23 +-
 .../search/sort/GeoDistanceSortParser.java | 31 +-
 .../search/sort/ScriptSortParser.java | 16 +-
 .../search/sort/SortParseElement.java | 10 +-
 .../AnalyzingCompletionLookupProvider.java | 179 ++++----
 .../Completion090PostingsFormat.java | 103 ++---
 .../completion/CompletionSuggester.java | 8 +-
 .../context/GeolocationContextMapping.java | 3 +-
 .../TruncateTokenFilterTests.java | 9 +-
 .../miscellaneous/UniqueTokenFilterTests.java | 9 +-
 .../lucene/queries/BlendedTermQueryTest.java | 6 +-
 .../CustomPostingsHighlighterTests.java | 10 +-
 .../XPostingsHighlighterTests.java | 70 +--
 .../lucene/util/AbstractRandomizedTest.java | 1 -
 .../termvector/AbstractTermVectorTests.java | 10 +-
 .../termvector/TermVectorUnitTests.java | 4 +-
 .../uidscan/LuceneUidScanBenchmark.java | 4 +-
 .../org/elasticsearch/codecs/CodecTests.java | 15 +-
 .../common/blobstore/BlobStoreTest.java | 11 +-
 .../common/lucene/all/SimpleAllTests.java | 12 +-
 .../lucene/search/AndDocIdSetTests.java | 96 +++++
 .../search/MatchAllDocsFilterTests.java | 6 +-
 .../lucene/search/MoreLikeThisQueryTests.java | 2 +-
 .../search/MultiPhrasePrefixQueryTests.java | 2 +-
 .../lucene/search/TermsFilterTests.java | 32 +-
 .../search/XBooleanFilterLuceneTests.java | 38 +-
 .../lucene/search/XBooleanFilterTests.java | 29 +-
 .../common/lucene/uid/VersionsTests.java | 13 +-
 .../deps/lucene/SimpleLuceneTests.java | 26 +-
 .../deps/lucene/VectorHighlighterTests.java | 12 +-
 .../state/meta/MetaDataStateFormatTest.java | 13 +-
 .../ASCIIFoldingTokenFilterFactoryTests.java | 6 +-
 .../index/analysis/AnalysisFactoryTests.java | 4 +-
 .../index/analysis/AnalysisModuleTests.java | 11 +-
 .../index/analysis/AnalysisTests.java | 5 +-
 .../index/analysis/CJKFilterFactoryTests.java | 12 +-
 .../analysis/KeepFilterFactoryTests.java | 6 +-
 .../analysis/KeepTypesFilterFactoryTests.java | 3 +-
 .../LimitTokenCountFilterFactoryTests.java | 15 +-
 .../analysis/NGramTokenizerFactoryTests.java | 69 +--
 .../index/analysis/PatternAnalyzerTest.java | 154 +++++++
 .../ShingleTokenFilterFactoryTests.java | 15 +-
 .../index/analysis/SnowballAnalyzerTests.java | 59 +++
 .../StemmerTokenFilterFactoryTests.java | 9 +-
 .../index/analysis/StopTokenFilterTests.java | 36 +-
 .../WordDelimiterTokenFilterFactoryTests.java | 37 +-
 .../CommonGramsTokenFilterFactoryTests.java | 27 +-
 .../filter1/MyFilterTokenFilterFactory.java | 2 +-
 .../BitSetFilterCacheTest.java} | 29 +-
 .../elasticsearch/index/codec/CodecTests.java | 22 +-
 .../DefaultPostingsFormatTests.java | 16 +-
 .../SnapshotDeletionPolicyTests.java | 2 +-
 .../internal/InternalEngineSettingsTest.java | 7 -
 .../engine/internal/InternalEngineTests.java | 7 +-
 .../fielddata/AbstractFieldDataImplTests.java | 6 +-
 .../fielddata/AbstractFieldDataTests.java | 15 +-
 .../AbstractStringFieldDataTests.java | 40 +-
 .../fielddata/BinaryDVFieldDataTests.java | 4 +-
 .../index/fielddata/DuelFieldDataTests.java | 41 +-
 .../index/fielddata/FilterFieldDataTest.java | 6 +-
 .../fielddata/IndexFieldDataServiceTests.java | 8 +-
 .../NoOrdinalsStringFieldDataTests.java | 6 +-
 .../fieldcomparator/TestReplaceMissing.java | 2 +-
 .../index/mapper/boost/BoostMappingTests.java | 7 +-
 ...TokenCountFieldMapperIntegrationTests.java | 2 +-
 .../simple/SimpleDynamicTemplatesTests.java | 25 +-
 .../mapper/lucene/DoubleIndexingDocTest.java | 2 +-
 .../lucene/StoredNumericValuesTest.java | 2 +-
 .../mapper/multifield/MultiFieldTests.java | 83 ++--
 .../merge/JavaMultiFieldMergeTests.java | 29 +-
 .../mapper/numeric/SimpleNumericTests.java | 2 +-
 .../routing/RoutingTypeMapperTests.java | 3 +-
 .../string/SimpleStringMappingTests.java | 16 +-
 .../timestamp/TimestampMappingTests.java | 7 +-
 .../index/mapper/ttl/TTLMappingTests.java | 7 +-
 .../query/SimpleIndexQueryParserTests.java | 389 ++++++++-------
 .../search/FieldDataTermsFilterTests.java | 4 +-
 .../search/child/AbstractChildTests.java | 42 +-
 .../index/search/child/BitSetCollector.java | 4 +-
 .../ChildrenConstantScoreQueryTests.java | 44 +-
 .../search/child/ChildrenQueryTests.java | 46 +-
 .../child/ParentConstantScoreQueryTests.java | 40 +-
 .../index/search/child/ParentQueryTests.java | 39 +-
 .../index/search/geo/GeoUtilsTests.java | 6 +-
 .../AbstractNumberNestedSortingTests.java | 23 +-
 .../nested/DoubleNestedSortingTests.java | 15 +-
 .../nested/FloatNestedSortingTests.java | 15 +-
 .../search/nested/NestedSortingTests.java | 21 +-
 .../index/store/CorruptedFileTest.java | 35 +-
 .../index/store/DirectoryUtilsTest.java | 10 +-
 .../index/store/DistributorDirectoryTest.java | 35 +-
 .../index/store/DistributorInTheWildTest.java | 4 +-
 .../elasticsearch/index/store/StoreTest.java | 63 ++-
 .../store/distributor/DistributorTests.java | 54 +--
 .../indices/analysis/DummyAnalyzer.java | 5 +-
 .../analysis/DummyAnalyzerProvider.java | 3 +-
 .../analysis/DummyIndicesAnalysis.java | 3 +-
 .../analysis/DummyTokenizerFactory.java | 4 +-
 .../RandomExceptionCircuitBreakerTests.java | 12 +-
 .../indices/stats/IndexStatsTests.java | 2 +-
 .../indices/store/StrictDistributor.java | 2 +-
 .../nested/SimpleNestedTests.java | 8 +-
 .../upgrade/UpgradeReallyOldIndexTest.java | 92 ----
 .../support/ScriptValuesTests.java | 4 +-
 .../SearchWithRandomExceptionsTests.java | 12 +-
 .../child/SimpleChildQuerySearchTests.java | 10 +-
 .../search/geo/GeoFilterTests.java | 5 +
 .../search/query/SimpleQueryTests.java | 2 +-
 .../suggest/CompletionTokenStreamTest.java | 20 +-
 .../AnalyzingCompletionLookupProviderV1.java | 164 ++++----
 .../CompletionPostingsFormatTest.java | 268 +++++++++---
 .../phrase/NoisyChannelSpellCheckerTests.java | 70 +--
 .../test/ElasticsearchIntegrationTest.java | 5 +-
 .../test/ElasticsearchTestCase.java | 28 +-
 .../test/ExternalTestCluster.java | 2 +-
 .../test/InternalTestCluster.java | 2 +-
 .../elasticsearch/test/TestSearchContext.java | 8 +-
 .../test/cache/recycler/MockBigArrays.java | 29 +-
 ...er.java => ThrowingLeafReaderWrapper.java} | 10 +-
 .../test/store/MockFSDirectoryService.java | 53 +--
 .../test/store/MockRamDirectoryService.java | 10 -
 .../admin/indices/upgrade/index-0.20.zip | Bin 7539 -> 0 bytes
 531 files changed, 5572 insertions(+), 4613 deletions(-)
 create mode 100644 src/main/java/org/apache/lucene/search/XFilteredDocIdSetIterator.java
 delete mode 100644 src/main/java/org/elasticsearch/common/lucene/docset/MatchDocIdSet.java
 delete mode 100644 src/main/java/org/elasticsearch/common/lucene/search/ApplyAcceptedDocsFilter.java
 delete mode 100644 src/main/java/org/elasticsearch/common/lucene/search/XConstantScoreQuery.java
 delete mode 100644 src/main/java/org/elasticsearch/common/lucene/search/XFilteredQuery.java
 create mode 100644 src/main/java/org/elasticsearch/index/analysis/PatternAnalyzer.java
 create mode 100644 src/main/java/org/elasticsearch/index/analysis/SnowballAnalyzer.java
 rename src/main/java/org/elasticsearch/index/cache/{fixedbitset/FixedBitSetFilterCache.java => bitset/BitsetFilterCache.java} (76%)
 rename src/main/java/org/elasticsearch/index/cache/{fixedbitset/FixedBitSetFilterCacheModule.java => bitset/BitsetFilterCacheModule.java} (75%)
 rename src/main/java/org/elasticsearch/index/cache/{fixedbitset/ShardFixedBitSetFilterCache.java => bitset/ShardBitsetFilterCache.java} (86%)
 rename src/main/java/org/elasticsearch/index/cache/{fixedbitset/ShardFixedBitSetFilterCacheModule.java => bitset/ShardBitsetFilterCacheModule.java} (82%)
 delete mode 100644 src/main/java/org/elasticsearch/index/cache/fixedbitset/FixedBitSetFilter.java
 create mode 100644 src/test/java/org/elasticsearch/common/lucene/search/AndDocIdSetTests.java
 create mode 100644 src/test/java/org/elasticsearch/index/analysis/PatternAnalyzerTest.java
 create mode 100644 src/test/java/org/elasticsearch/index/analysis/SnowballAnalyzerTests.java
 rename src/test/java/org/elasticsearch/index/cache/{fixedbitset/FixedBitSetFilterCacheTest.java => bitset/BitSetFilterCacheTest.java} (74%)
 delete mode 100644 src/test/java/org/elasticsearch/rest/action/admin/indices/upgrade/UpgradeReallyOldIndexTest.java
 rename src/test/java/org/elasticsearch/test/engine/{ThrowingAtomicReaderWrapper.java => ThrowingLeafReaderWrapper.java} (95%)
 delete mode 100644 src/test/resources/org/elasticsearch/rest/action/admin/indices/upgrade/index-0.20.zip

diff --git a/.settings/org.eclipse.jdt.ui.prefs b/.settings/org.eclipse.jdt.ui.prefs
index 9cb87898888..7f6fe0688d1 100644
--- a/.settings/org.eclipse.jdt.ui.prefs
+++ b/.settings/org.eclipse.jdt.ui.prefs
@@ -1,6 +1,6 @@
 eclipse.preferences.version=1
 formatter_settings_version=12
 # Intellij IDEA import order
-org.eclipse.jdt.ui.importorder=;java;javax;\#;
+org.eclipse.jdt.ui.importorder=;java;javax;com;org;\#;
 # License header
 org.eclipse.jdt.ui.text.custom_code_templates=

diff --git a/dev-tools/forbidden/core-signatures.txt b/dev-tools/forbidden/core-signatures.txt
index c49d82ef9b7..e67f95a3b24 100644
--- a/dev-tools/forbidden/core-signatures.txt
+++ b/dev-tools/forbidden/core-signatures.txt
@@ -26,10 +26,6 @@ org.apache.lucene.index.IndexReader#tryIncRef()
 @defaultMessage QueryWrapperFilter is cachable by default - use Queries#wrap instead
 org.apache.lucene.search.QueryWrapperFilter#<init>(org.apache.lucene.search.Query)
 
-@defaultMessage Because the filtercache doesn't take deletes into account FilteredQuery can't be used - use XFilteredQuery instead
-org.apache.lucene.search.FilteredQuery#<init>(org.apache.lucene.search.Query,org.apache.lucene.search.Filter)
-org.apache.lucene.search.FilteredQuery#<init>(org.apache.lucene.search.Query,org.apache.lucene.search.Filter,org.apache.lucene.search.FilteredQuery$FilterStrategy)
-
 @defaultMessage Pass the precision step from the mappings explicitly instead
 org.apache.lucene.search.NumericRangeQuery#newDoubleRange(java.lang.String,java.lang.Double,java.lang.Double,boolean,boolean)
 org.apache.lucene.search.NumericRangeQuery#newFloatRange(java.lang.String,java.lang.Float,java.lang.Float,boolean,boolean)
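The forbidden signatures above steer callers to the NumericRangeQuery overloads that take an explicit precision step. A hypothetical example of the sanctioned form follows; the field name "price" and the step of 8 are made-up values, since the real step must come from the field's mapping so it matches how the field was indexed:

```java
import org.apache.lucene.search.NumericRangeQuery;
import org.apache.lucene.search.Query;

class PrecisionStepExample {
    // Same semantics as the forbidden five-argument overload, but the
    // precision step is passed explicitly instead of defaulting.
    static Query priceBetween(double min, double max) {
        return NumericRangeQuery.newDoubleRange("price", 8, min, max, true, true);
    }
}
```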
diff --git a/docs/reference/analysis/analyzers/lang-analyzer.asciidoc b/docs/reference/analysis/analyzers/lang-analyzer.asciidoc
index 1c5c2ceef9b..69388ffa7a1 100644
--- a/docs/reference/analysis/analyzers/lang-analyzer.asciidoc
+++ b/docs/reference/analysis/analyzers/lang-analyzer.asciidoc
@@ -9,7 +9,6 @@ following types are supported:
 <>,
 <>,
 <>,
-<>,
 <>,
 <>,
 <>,
@@ -339,15 +338,6 @@ The `catalan` analyzer could be reimplemented as a `custom` analyzer as follows:
 <2> This filter should be removed unless there are words which should
     be excluded from stemming.
 
-[[chinese-analyzer]]
-===== `chinese` analyzer
-
-The `chinese` analyzer cannot be reimplemented as a `custom` analyzer
-because it depends on the ChineseTokenizer and ChineseFilter classes,
-which are not exposed in Elasticsearch. These classes are
-deprecated in Lucene 4 and the `chinese` analyzer will be replaced
-with the <> in Lucene 5.
-
 [[cjk-analyzer]]
 ===== `cjk` analyzer

diff --git a/docs/reference/query-dsl/queries/filtered-query.asciidoc b/docs/reference/query-dsl/queries/filtered-query.asciidoc
index 951cbe25c2d..91ff294c2d9 100644
--- a/docs/reference/query-dsl/queries/filtered-query.asciidoc
+++ b/docs/reference/query-dsl/queries/filtered-query.asciidoc
@@ -144,8 +144,9 @@ The `strategy` parameter accepts the following options:
 
 `random_access_${threshold}`::
 
-    If the filter supports random access and if there is at least one matching
-    document among the first `threshold` ones, then apply the filter first.
+    If the filter supports random access and if the number of documents in the
+    index divided by the cardinality of the filter is greater than ${threshold},
+    then apply the filter first.
     Otherwise fall back to `leap_frog_query_first`.
     `${threshold}` must be greater than or equal to `1`.
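To make the new `random_access_${threshold}` semantics concrete, with illustrative numbers not taken from the patch: under `random_access_100`, an index of 10,000,000 documents and a filter matching 50,000 of them gives a ratio of 10,000,000 / 50,000 = 200, which is greater than 100, so the filter is applied first; a filter matching 5,000,000 documents gives a ratio of 2 and falls back to `leap_frog_query_first`.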
addAttribute(PositionIncrementAttribute.class); - // use a fixed version, as we don't care about case sensitivity. - private final CharArraySet previous = new CharArraySet(Version.LUCENE_31, 8, false); + private final CharArraySet previous = new CharArraySet(8, false); private final boolean onlyOnSamePosition; public UniqueTokenFilter(TokenStream in) { diff --git a/src/main/java/org/apache/lucene/queries/BlendedTermQuery.java b/src/main/java/org/apache/lucene/queries/BlendedTermQuery.java index 67812771621..50f54731970 100644 --- a/src/main/java/org/apache/lucene/queries/BlendedTermQuery.java +++ b/src/main/java/org/apache/lucene/queries/BlendedTermQuery.java @@ -162,7 +162,7 @@ public abstract class BlendedTermQuery extends Query { return termContext; } TermContext newTermContext = new TermContext(termContext.topReaderContext); - List<AtomicReaderContext> leaves = termContext.topReaderContext.leaves(); + List<LeafReaderContext> leaves = termContext.topReaderContext.leaves(); final int len; if (leaves == null) { len = 1; diff --git a/src/main/java/org/apache/lucene/queryparser/classic/ExistsFieldQueryExtension.java b/src/main/java/org/apache/lucene/queryparser/classic/ExistsFieldQueryExtension.java index 470f7b841ee..7ca24668206 100644 --- a/src/main/java/org/apache/lucene/queryparser/classic/ExistsFieldQueryExtension.java +++ b/src/main/java/org/apache/lucene/queryparser/classic/ExistsFieldQueryExtension.java @@ -19,8 +19,8 @@ package org.apache.lucene.queryparser.classic; +import org.apache.lucene.search.ConstantScoreQuery; import org.apache.lucene.search.Query; -import org.elasticsearch.common.lucene.search.XConstantScoreQuery; import org.elasticsearch.index.query.ExistsFilterParser; import org.elasticsearch.index.query.QueryParseContext; @@ -33,6 +33,6 @@ public class ExistsFieldQueryExtension implements FieldQueryExtension { @Override public Query query(QueryParseContext parseContext, String queryText) { - return new XConstantScoreQuery(ExistsFilterParser.newFilter(parseContext, queryText, null)); + return new ConstantScoreQuery(ExistsFilterParser.newFilter(parseContext, queryText, null)); } } diff --git a/src/main/java/org/apache/lucene/queryparser/classic/MapperQueryParser.java b/src/main/java/org/apache/lucene/queryparser/classic/MapperQueryParser.java index 2796649c380..2fac20445a9 100644 --- a/src/main/java/org/apache/lucene/queryparser/classic/MapperQueryParser.java +++ b/src/main/java/org/apache/lucene/queryparser/classic/MapperQueryParser.java @@ -25,12 +25,16 @@ import org.apache.lucene.analysis.Analyzer; import org.apache.lucene.analysis.TokenStream; import org.apache.lucene.analysis.tokenattributes.CharTermAttribute; import org.apache.lucene.index.Term; -import org.apache.lucene.search.*; +import org.apache.lucene.search.BooleanClause; +import org.apache.lucene.search.DisjunctionMaxQuery; +import org.apache.lucene.search.FilteredQuery; +import org.apache.lucene.search.FuzzyQuery; +import org.apache.lucene.search.MultiPhraseQuery; +import org.apache.lucene.search.PhraseQuery; +import org.apache.lucene.search.Query; import org.apache.lucene.util.automaton.RegExp; -import org.elasticsearch.common.lucene.Lucene; import org.elasticsearch.common.lucene.search.MatchNoDocsQuery; import org.elasticsearch.common.lucene.search.Queries; -import org.elasticsearch.common.lucene.search.XFilteredQuery; import org.elasticsearch.common.unit.Fuzziness; import org.elasticsearch.index.mapper.FieldMapper; import org.elasticsearch.index.mapper.MapperService; @@ -79,12 +83,12 @@ public class MapperQueryParser extends QueryParser {
private String quoteFieldSuffix; public MapperQueryParser(QueryParseContext parseContext) { - super(Lucene.QUERYPARSER_VERSION, null, null); + super(null, null); this.parseContext = parseContext; } public MapperQueryParser(QueryParserSettings settings, QueryParseContext parseContext) { - super(Lucene.QUERYPARSER_VERSION, settings.defaultField(), settings.defaultAnalyzer()); + super(settings.defaultField(), settings.defaultAnalyzer()); this.parseContext = parseContext; reset(settings); } @@ -855,8 +859,8 @@ public class MapperQueryParser extends QueryParser { } private void applySlop(Query q, int slop) { - if (q instanceof XFilteredQuery) { - applySlop(((XFilteredQuery)q).getQuery(), slop); + if (q instanceof FilteredQuery) { + applySlop(((FilteredQuery)q).getQuery(), slop); } if (q instanceof PhraseQuery) { ((PhraseQuery) q).setSlop(slop); diff --git a/src/main/java/org/apache/lucene/queryparser/classic/MissingFieldQueryExtension.java b/src/main/java/org/apache/lucene/queryparser/classic/MissingFieldQueryExtension.java index ad200d4407d..c2212e97690 100644 --- a/src/main/java/org/apache/lucene/queryparser/classic/MissingFieldQueryExtension.java +++ b/src/main/java/org/apache/lucene/queryparser/classic/MissingFieldQueryExtension.java @@ -19,8 +19,8 @@ package org.apache.lucene.queryparser.classic; +import org.apache.lucene.search.ConstantScoreQuery; import org.apache.lucene.search.Query; -import org.elasticsearch.common.lucene.search.XConstantScoreQuery; import org.elasticsearch.index.query.MissingFilterParser; import org.elasticsearch.index.query.QueryParseContext; @@ -33,7 +33,7 @@ public class MissingFieldQueryExtension implements FieldQueryExtension { @Override public Query query(QueryParseContext parseContext, String queryText) { - return new XConstantScoreQuery(MissingFilterParser.newFilter(parseContext, queryText, + return new ConstantScoreQuery(MissingFilterParser.newFilter(parseContext, queryText, MissingFilterParser.DEFAULT_EXISTENCE_VALUE, MissingFilterParser.DEFAULT_NULL_VALUE, null)); } } diff --git a/src/main/java/org/apache/lucene/queryparser/classic/QueryParserSettings.java b/src/main/java/org/apache/lucene/queryparser/classic/QueryParserSettings.java index 9b54bef889f..8316b2cc68f 100644 --- a/src/main/java/org/apache/lucene/queryparser/classic/QueryParserSettings.java +++ b/src/main/java/org/apache/lucene/queryparser/classic/QueryParserSettings.java @@ -58,7 +58,7 @@ public class QueryParserSettings { private Analyzer forcedAnalyzer = null; private Analyzer forcedQuoteAnalyzer = null; private String quoteFieldSuffix = null; - private MultiTermQuery.RewriteMethod rewriteMethod = MultiTermQuery.CONSTANT_SCORE_AUTO_REWRITE_DEFAULT; + private MultiTermQuery.RewriteMethod rewriteMethod = MultiTermQuery.CONSTANT_SCORE_FILTER_REWRITE; private String minimumShouldMatch; private boolean lenient; private Locale locale; diff --git a/src/main/java/org/apache/lucene/search/XFilteredDocIdSetIterator.java b/src/main/java/org/apache/lucene/search/XFilteredDocIdSetIterator.java new file mode 100644 index 00000000000..0b3600e5717 --- /dev/null +++ b/src/main/java/org/apache/lucene/search/XFilteredDocIdSetIterator.java @@ -0,0 +1,108 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. 
Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.apache.lucene.search; + +import java.io.IOException; + +// this is just one possible solution for "early termination"! + +/** + * Abstract decorator class of a DocIdSetIterator + * implementation that provides on-demand filter/validation + * mechanism on an underlying DocIdSetIterator. See {@link + * FilteredDocIdSet}. + */ +public abstract class XFilteredDocIdSetIterator extends DocIdSetIterator { + protected DocIdSetIterator _innerIter; + private int doc; + + /** + * Constructor. + * @param innerIter Underlying DocIdSetIterator. + */ + public XFilteredDocIdSetIterator(DocIdSetIterator innerIter) { + if (innerIter == null) { + throw new IllegalArgumentException("null iterator"); + } + _innerIter = innerIter; + doc = -1; + } + + /** Return the wrapped {@link DocIdSetIterator}. */ + public DocIdSetIterator getDelegate() { + return _innerIter; + }
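A usage sketch for the class above (not part of the patch): subclasses implement match(int) and may throw CollectionTerminatedException to stop iteration early, which nextDoc() and advance() below translate into NO_MORE_DOCS. The delegate iterator and the cutoff are hypothetical:

    // Hypothetical wrapper: accept everything the delegate produces until a
    // cutoff doc id, then terminate the whole iteration instead of rejecting
    // each remaining doc one by one.
    static DocIdSetIterator upTo(DocIdSetIterator inner, final int cutoffDoc) {
        return new XFilteredDocIdSetIterator(inner) {
            @Override
            protected boolean match(int doc) {
                if (doc >= cutoffDoc) {
                    throw new CollectionTerminatedException();
                }
                return true;
            }
        };
    }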
+ + /** + * Validation method to determine whether a docid should be in the result set. + * @param doc docid to be tested + * @return true if input docid should be in the result set, false otherwise. + * @see #XFilteredDocIdSetIterator(DocIdSetIterator) + * @throws CollectionTerminatedException if the underlying iterator is exhausted. + */ + protected abstract boolean match(int doc); + + @Override + public int docID() { + return doc; + } + + @Override + public int nextDoc() throws IOException { + try { + while ((doc = _innerIter.nextDoc()) != NO_MORE_DOCS) { + if (match(doc)) { + return doc; + } + } + } catch (CollectionTerminatedException e) { + return doc = NO_MORE_DOCS; + } + return doc; + } + + @Override + public int advance(int target) throws IOException { + doc = _innerIter.advance(target); + try { + if (doc != NO_MORE_DOCS) { + if (match(doc)) { + return doc; + } else { + while ((doc = _innerIter.nextDoc()) != NO_MORE_DOCS) { + if (match(doc)) { + return doc; + } + } + return doc; + } + } + } catch (CollectionTerminatedException e) { + return doc = NO_MORE_DOCS; + } + return doc; + } + + @Override + public long cost() { + return _innerIter.cost(); + } +} + diff --git a/src/main/java/org/apache/lucene/search/postingshighlight/CustomPostingsHighlighter.java b/src/main/java/org/apache/lucene/search/postingshighlight/CustomPostingsHighlighter.java index be5ad66bbeb..7528206f6ae 100644 --- a/src/main/java/org/apache/lucene/search/postingshighlight/CustomPostingsHighlighter.java +++ b/src/main/java/org/apache/lucene/search/postingshighlight/CustomPostingsHighlighter.java @@ -18,7 +18,7 @@ package org.apache.lucene.search.postingshighlight; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.IndexReader; import org.apache.lucene.index.IndexReaderContext; import org.apache.lucene.search.IndexSearcher; @@ -94,7 +94,7 @@ public final class CustomPostingsHighlighter extends XPostingsHighlighter { public Snippet[] highlightDoc(String field, BytesRef[] terms, IndexSearcher searcher, int docId, int maxPassages) throws IOException { IndexReader reader = searcher.getIndexReader(); IndexReaderContext readerContext = reader.getContext(); - List<AtomicReaderContext> leaves = readerContext.leaves(); + List<LeafReaderContext> leaves = readerContext.leaves(); String[] contents = new String[]{loadCurrentFieldValue()}; Map<Integer,Object> snippetsMap = highlightField(field, contents, getBreakIterator(field), terms, new int[]{docId}, leaves, maxPassages); diff --git a/src/main/java/org/apache/lucene/search/postingshighlight/XPostingsHighlighter.java b/src/main/java/org/apache/lucene/search/postingshighlight/XPostingsHighlighter.java index 6e8e032b877..3144ed96b76 100644 --- a/src/main/java/org/apache/lucene/search/postingshighlight/XPostingsHighlighter.java +++ b/src/main/java/org/apache/lucene/search/postingshighlight/XPostingsHighlighter.java @@ -289,7 +289,7 @@ public class XPostingsHighlighter { query.extractTerms(queryTerms); IndexReaderContext readerContext = reader.getContext(); - List<AtomicReaderContext> leaves = readerContext.leaves(); + List<LeafReaderContext> leaves = readerContext.leaves(); // Make our own copies because we sort in-place: int[] docids = new int[docidsIn.length]; @@ -384,8 +384,8 @@ public class XPostingsHighlighter { } //BEGIN EDIT: made protected so that we can call from our subclass and pass in the terms by ourselves - protected Map<Integer,Object> highlightField(String field, String contents[], BreakIterator bi, BytesRef terms[], int[] docids, List<AtomicReaderContext> leaves, int maxPassages) throws IOException { - //private Map<Integer,Object> highlightField(String field, String contents[], BreakIterator bi, BytesRef terms[], int[] docids, List<AtomicReaderContext> leaves, int maxPassages) throws IOException { + protected Map<Integer,Object> highlightField(String field, String contents[], BreakIterator bi, BytesRef terms[], int[] docids, List<LeafReaderContext> leaves, int maxPassages) throws IOException { + //private Map<Integer,Object>
highlightField(String field, String contents[], BreakIterator bi, BytesRef terms[], int[] docids, List<LeafReaderContext> leaves, int maxPassages) throws IOException { //END EDIT Map<Integer,Object> highlights = new HashMap<>(); @@ -408,8 +408,8 @@ public class XPostingsHighlighter { bi.setText(content); int doc = docids[i]; int leaf = ReaderUtil.subIndex(doc, leaves); - AtomicReaderContext subContext = leaves.get(leaf); - AtomicReader r = subContext.reader(); + LeafReaderContext subContext = leaves.get(leaf); + LeafReader r = subContext.reader(); Terms t = r.terms(field); if (t == null) { continue; // nothing to do diff --git a/src/main/java/org/apache/lucene/search/suggest/analyzing/XAnalyzingSuggester.java b/src/main/java/org/apache/lucene/search/suggest/analyzing/XAnalyzingSuggester.java index 221defdad03..0c2973373e8 100644 --- a/src/main/java/org/apache/lucene/search/suggest/analyzing/XAnalyzingSuggester.java +++ b/src/main/java/org/apache/lucene/search/suggest/analyzing/XAnalyzingSuggester.java @@ -36,10 +36,11 @@ import org.apache.lucene.util.fst.Util.Result; import org.apache.lucene.util.fst.Util.TopResults; import org.elasticsearch.common.collect.HppcMaps; -import java.io.File; import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; +import java.nio.file.Files; +import java.nio.file.Path; import java.util.*; /** @@ -444,9 +445,9 @@ public class XAnalyzingSuggester extends Lookup { @Override public void build(InputIterator iterator) throws IOException { String prefix = getClass().getSimpleName(); - File directory = OfflineSorter.defaultTempDir(); - File tempInput = File.createTempFile(prefix, ".input", directory); - File tempSorted = File.createTempFile(prefix, ".sorted", directory); + Path directory = OfflineSorter.defaultTempDir(); + Path tempInput = Files.createTempFile(directory, prefix, ".input"); + Path tempSorted = Files.createTempFile(directory, prefix, ".sorted"); hasPayloads = iterator.hasPayloads(); @@ -530,7 +531,7 @@ public class XAnalyzingSuggester extends Lookup { new OfflineSorter(new AnalyzingComparator(hasPayloads)).sort(tempInput, tempSorted); // Free disk space: - tempInput.delete(); + Files.delete(tempInput); reader = new OfflineSorter.ByteSequencesReader(tempSorted); @@ -625,14 +626,13 @@ public class XAnalyzingSuggester extends Lookup { success = true; } finally { + IOUtils.closeWhileHandlingException(reader, writer); + if (success) { - IOUtils.close(reader, writer); + IOUtils.deleteFilesIfExist(tempInput, tempSorted); } else { - IOUtils.closeWhileHandlingException(reader, writer); + IOUtils.deleteFilesIgnoringExceptions(tempInput, tempSorted); } - - tempInput.delete(); - tempSorted.delete(); } } diff --git a/src/main/java/org/apache/lucene/search/vectorhighlight/CustomFieldQuery.java b/src/main/java/org/apache/lucene/search/vectorhighlight/CustomFieldQuery.java index fb98a6dbbe4..7b59c422541 100644 --- a/src/main/java/org/apache/lucene/search/vectorhighlight/CustomFieldQuery.java +++ b/src/main/java/org/apache/lucene/search/vectorhighlight/CustomFieldQuery.java @@ -24,11 +24,18 @@ import org.apache.lucene.index.Term; import org.apache.lucene.queries.BlendedTermQuery; import org.apache.lucene.queries.FilterClause; import org.apache.lucene.queries.TermFilter; -import org.apache.lucene.search.*; +import org.apache.lucene.search.BooleanClause; +import org.apache.lucene.search.ConstantScoreQuery; +import org.apache.lucene.search.Filter; +import org.apache.lucene.search.FilteredQuery; +import org.apache.lucene.search.MultiPhraseQuery; +import
org.apache.lucene.search.MultiTermQueryWrapperFilter; +import org.apache.lucene.search.PhraseQuery; +import org.apache.lucene.search.Query; +import org.apache.lucene.search.TermQuery; import org.apache.lucene.search.spans.SpanTermQuery; import org.elasticsearch.common.lucene.search.MultiPhrasePrefixQuery; import org.elasticsearch.common.lucene.search.XBooleanFilter; -import org.elasticsearch.common.lucene.search.XFilteredQuery; import org.elasticsearch.common.lucene.search.function.FiltersFunctionScoreQuery; import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery; @@ -81,9 +88,6 @@ public class CustomFieldQuery extends FieldQuery { } else if (sourceQuery instanceof FilteredQuery) { flatten(((FilteredQuery) sourceQuery).getQuery(), reader, flatQueries); flatten(((FilteredQuery) sourceQuery).getFilter(), reader, flatQueries); - } else if (sourceQuery instanceof XFilteredQuery) { - flatten(((XFilteredQuery) sourceQuery).getQuery(), reader, flatQueries); - flatten(((XFilteredQuery) sourceQuery).getFilter(), reader, flatQueries); } else if (sourceQuery instanceof MultiPhrasePrefixQuery) { flatten(sourceQuery.rewrite(reader), reader, flatQueries); } else if (sourceQuery instanceof FiltersFunctionScoreQuery) { diff --git a/src/main/java/org/elasticsearch/ElasticsearchCorruptionException.java b/src/main/java/org/elasticsearch/ElasticsearchCorruptionException.java index 3e9c2992e6e..350bbc31121 100644 --- a/src/main/java/org/elasticsearch/ElasticsearchCorruptionException.java +++ b/src/main/java/org/elasticsearch/ElasticsearchCorruptionException.java @@ -18,8 +18,6 @@ */ package org.elasticsearch; -import org.apache.lucene.index.CorruptIndexException; - import java.io.IOException; /** @@ -39,14 +37,20 @@ public class ElasticsearchCorruptionException extends IOException { /** * Creates a new {@link ElasticsearchCorruptionException} with the given exceptions stacktrace. * This constructor copies the stacktrace as well as the message from the given - * {@link org.apache.lucene.index.CorruptIndexException} into this exception. + * {@code Throwable} into this exception. 
* * @param ex the exception cause */ - public ElasticsearchCorruptionException(CorruptIndexException ex) { + public ElasticsearchCorruptionException(Throwable ex) { this(ex.getMessage()); if (ex != null) { this.setStackTrace(ex.getStackTrace()); } + Throwable[] suppressed = ex.getSuppressed(); + if (suppressed != null) { + for (Throwable suppressedExc : suppressed) { + addSuppressed(suppressedExc); + } + } } } \ No newline at end of file diff --git a/src/main/java/org/elasticsearch/ExceptionsHelper.java b/src/main/java/org/elasticsearch/ExceptionsHelper.java index b97daa1fc43..e97eb41d0f1 100644 --- a/src/main/java/org/elasticsearch/ExceptionsHelper.java +++ b/src/main/java/org/elasticsearch/ExceptionsHelper.java @@ -19,11 +19,15 @@ package org.elasticsearch; +import org.apache.lucene.index.CorruptIndexException; +import org.apache.lucene.index.IndexFormatTooNewException; +import org.apache.lucene.index.IndexFormatTooOldException; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.logging.ESLogger; import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.rest.RestStatus; +import java.io.IOException; import java.io.PrintWriter; import java.io.StringWriter; import java.util.List; @@ -161,12 +165,19 @@ public final class ExceptionsHelper { return first; } + public static IOException unwrapCorruption(Throwable t) { + return (IOException) unwrap(t, CorruptIndexException.class, + IndexFormatTooOldException.class, + IndexFormatTooNewException.class); + } - public static <T extends Throwable> T unwrap(Throwable t, Class<T> clazz) { + public static Throwable unwrap(Throwable t, Class<?>... clazzes) { if (t != null) { do { - if (clazz.isInstance(t)) { - return clazz.cast(t); + for (Class<?> clazz : clazzes) { + if (clazz.isInstance(t)) { + return t; + } } } while ((t = t.getCause()) != null); } diff --git a/src/main/java/org/elasticsearch/Version.java b/src/main/java/org/elasticsearch/Version.java index 74d0d80b743..237c123cae1 100644 --- a/src/main/java/org/elasticsearch/Version.java +++ b/src/main/java/org/elasticsearch/Version.java @@ -40,82 +40,85 @@ public class Version implements Serializable { // The logic for ID is: XXYYZZAA, where XX is major version, YY is minor version, ZZ is revision, and AA is Beta/RC indicator // AA values below 50 are beta builds, and below 99 are RC builds, with 99 indicating a release // the (internal) format of the id is there so we can easily do after/before checks on the id + + // NOTE: indexes created with 3.6 use this constant for e.g.
analysis chain emulation (imperfect) + public static final org.apache.lucene.util.Version LUCENE_3_EMULATION_VERSION = org.apache.lucene.util.Version.LUCENE_4_0_0; public static final int V_0_18_0_ID = /*00*/180099; - public static final Version V_0_18_0 = new Version(V_0_18_0_ID, false, org.apache.lucene.util.Version.LUCENE_3_6); + public static final Version V_0_18_0 = new Version(V_0_18_0_ID, false, LUCENE_3_EMULATION_VERSION); public static final int V_0_18_1_ID = /*00*/180199; - public static final Version V_0_18_1 = new Version(V_0_18_1_ID, false, org.apache.lucene.util.Version.LUCENE_3_6); + public static final Version V_0_18_1 = new Version(V_0_18_1_ID, false, LUCENE_3_EMULATION_VERSION); public static final int V_0_18_2_ID = /*00*/180299; - public static final Version V_0_18_2 = new Version(V_0_18_2_ID, false, org.apache.lucene.util.Version.LUCENE_3_6); + public static final Version V_0_18_2 = new Version(V_0_18_2_ID, false, LUCENE_3_EMULATION_VERSION); public static final int V_0_18_3_ID = /*00*/180399; - public static final Version V_0_18_3 = new Version(V_0_18_3_ID, false, org.apache.lucene.util.Version.LUCENE_3_6); + public static final Version V_0_18_3 = new Version(V_0_18_3_ID, false, LUCENE_3_EMULATION_VERSION); public static final int V_0_18_4_ID = /*00*/180499; - public static final Version V_0_18_4 = new Version(V_0_18_4_ID, false, org.apache.lucene.util.Version.LUCENE_3_6); + public static final Version V_0_18_4 = new Version(V_0_18_4_ID, false, LUCENE_3_EMULATION_VERSION); public static final int V_0_18_5_ID = /*00*/180599; - public static final Version V_0_18_5 = new Version(V_0_18_5_ID, false, org.apache.lucene.util.Version.LUCENE_3_6); + public static final Version V_0_18_5 = new Version(V_0_18_5_ID, false, LUCENE_3_EMULATION_VERSION); public static final int V_0_18_6_ID = /*00*/180699; - public static final Version V_0_18_6 = new Version(V_0_18_6_ID, false, org.apache.lucene.util.Version.LUCENE_3_6); + public static final Version V_0_18_6 = new Version(V_0_18_6_ID, false, LUCENE_3_EMULATION_VERSION); public static final int V_0_18_7_ID = /*00*/180799; - public static final Version V_0_18_7 = new Version(V_0_18_7_ID, false, org.apache.lucene.util.Version.LUCENE_3_6); + public static final Version V_0_18_7 = new Version(V_0_18_7_ID, false, LUCENE_3_EMULATION_VERSION); public static final int V_0_18_8_ID = /*00*/180899; - public static final Version V_0_18_8 = new Version(V_0_18_8_ID, false, org.apache.lucene.util.Version.LUCENE_3_6); + public static final Version V_0_18_8 = new Version(V_0_18_8_ID, false, LUCENE_3_EMULATION_VERSION); public static final int V_0_19_0_RC1_ID = /*00*/190051; - public static final Version V_0_19_0_RC1 = new Version(V_0_19_0_RC1_ID, false, org.apache.lucene.util.Version.LUCENE_3_6); + public static final Version V_0_19_0_RC1 = new Version(V_0_19_0_RC1_ID, false, LUCENE_3_EMULATION_VERSION); public static final int V_0_19_0_RC2_ID = /*00*/190052; - public static final Version V_0_19_0_RC2 = new Version(V_0_19_0_RC2_ID, false, org.apache.lucene.util.Version.LUCENE_3_6); + public static final Version V_0_19_0_RC2 = new Version(V_0_19_0_RC2_ID, false, LUCENE_3_EMULATION_VERSION); public static final int V_0_19_0_RC3_ID = /*00*/190053; - public static final Version V_0_19_0_RC3 = new Version(V_0_19_0_RC3_ID, false, org.apache.lucene.util.Version.LUCENE_3_6); + public static final Version V_0_19_0_RC3 = new Version(V_0_19_0_RC3_ID, false, LUCENE_3_EMULATION_VERSION); public static final int V_0_19_0_ID = /*00*/190099; - public static final 
Version V_0_19_0 = new Version(V_0_19_0_ID, false, org.apache.lucene.util.Version.LUCENE_3_6); + public static final Version V_0_19_0 = new Version(V_0_19_0_ID, false, LUCENE_3_EMULATION_VERSION); public static final int V_0_19_1_ID = /*00*/190199; - public static final Version V_0_19_1 = new Version(V_0_19_1_ID, false, org.apache.lucene.util.Version.LUCENE_3_6); + public static final Version V_0_19_1 = new Version(V_0_19_1_ID, false, LUCENE_3_EMULATION_VERSION); public static final int V_0_19_2_ID = /*00*/190299; - public static final Version V_0_19_2 = new Version(V_0_19_2_ID, false, org.apache.lucene.util.Version.LUCENE_3_6); + public static final Version V_0_19_2 = new Version(V_0_19_2_ID, false, LUCENE_3_EMULATION_VERSION); public static final int V_0_19_3_ID = /*00*/190399; - public static final Version V_0_19_3 = new Version(V_0_19_3_ID, false, org.apache.lucene.util.Version.LUCENE_3_6); + public static final Version V_0_19_3 = new Version(V_0_19_3_ID, false, LUCENE_3_EMULATION_VERSION); public static final int V_0_19_4_ID = /*00*/190499; - public static final Version V_0_19_4 = new Version(V_0_19_4_ID, false, org.apache.lucene.util.Version.LUCENE_3_6); + public static final Version V_0_19_4 = new Version(V_0_19_4_ID, false, LUCENE_3_EMULATION_VERSION); public static final int V_0_19_5_ID = /*00*/190599; - public static final Version V_0_19_5 = new Version(V_0_19_5_ID, false, org.apache.lucene.util.Version.LUCENE_3_6); + public static final Version V_0_19_5 = new Version(V_0_19_5_ID, false, LUCENE_3_EMULATION_VERSION); public static final int V_0_19_6_ID = /*00*/190699; - public static final Version V_0_19_6 = new Version(V_0_19_6_ID, false, org.apache.lucene.util.Version.LUCENE_3_6); + public static final Version V_0_19_6 = new Version(V_0_19_6_ID, false, LUCENE_3_EMULATION_VERSION); public static final int V_0_19_7_ID = /*00*/190799; - public static final Version V_0_19_7 = new Version(V_0_19_7_ID, false, org.apache.lucene.util.Version.LUCENE_3_6); + public static final Version V_0_19_7 = new Version(V_0_19_7_ID, false, LUCENE_3_EMULATION_VERSION); public static final int V_0_19_8_ID = /*00*/190899; - public static final Version V_0_19_8 = new Version(V_0_19_8_ID, false, org.apache.lucene.util.Version.LUCENE_3_6); + public static final Version V_0_19_8 = new Version(V_0_19_8_ID, false, LUCENE_3_EMULATION_VERSION); public static final int V_0_19_9_ID = /*00*/190999; - public static final Version V_0_19_9 = new Version(V_0_19_9_ID, false, org.apache.lucene.util.Version.LUCENE_3_6); + public static final Version V_0_19_9 = new Version(V_0_19_9_ID, false, LUCENE_3_EMULATION_VERSION); public static final int V_0_19_10_ID = /*00*/191099; - public static final Version V_0_19_10 = new Version(V_0_19_10_ID, false, org.apache.lucene.util.Version.LUCENE_3_6); + public static final Version V_0_19_10 = new Version(V_0_19_10_ID, false, LUCENE_3_EMULATION_VERSION); public static final int V_0_19_11_ID = /*00*/191199; - public static final Version V_0_19_11 = new Version(V_0_19_11_ID, false, org.apache.lucene.util.Version.LUCENE_3_6); + public static final Version V_0_19_11 = new Version(V_0_19_11_ID, false, LUCENE_3_EMULATION_VERSION); public static final int V_0_19_12_ID = /*00*/191299; - public static final Version V_0_19_12 = new Version(V_0_19_12_ID, false, org.apache.lucene.util.Version.LUCENE_3_6); + public static final Version V_0_19_12 = new Version(V_0_19_12_ID, false, LUCENE_3_EMULATION_VERSION); public static final int V_0_19_13_ID = /*00*/191399; - public static final Version 
V_0_19_13 = new Version(V_0_19_13_ID, false, org.apache.lucene.util.Version.LUCENE_3_6); + public static final Version V_0_19_13 = new Version(V_0_19_13_ID, false, LUCENE_3_EMULATION_VERSION); public static final int V_0_20_0_RC1_ID = /*00*/200051; - public static final Version V_0_20_0_RC1 = new Version(V_0_20_0_RC1_ID, false, org.apache.lucene.util.Version.LUCENE_3_6); + public static final Version V_0_20_0_RC1 = new Version(V_0_20_0_RC1_ID, false, LUCENE_3_EMULATION_VERSION); public static final int V_0_20_0_ID = /*00*/200099; - public static final Version V_0_20_0 = new Version(V_0_20_0_ID, false, org.apache.lucene.util.Version.LUCENE_3_6); + public static final Version V_0_20_0 = new Version(V_0_20_0_ID, false, LUCENE_3_EMULATION_VERSION); public static final int V_0_20_1_ID = /*00*/200199; - public static final Version V_0_20_1 = new Version(V_0_20_1_ID, false, org.apache.lucene.util.Version.LUCENE_3_6); + public static final Version V_0_20_1 = new Version(V_0_20_1_ID, false, LUCENE_3_EMULATION_VERSION); public static final int V_0_20_2_ID = /*00*/200299; - public static final Version V_0_20_2 = new Version(V_0_20_2_ID, false, org.apache.lucene.util.Version.LUCENE_3_6); + public static final Version V_0_20_2 = new Version(V_0_20_2_ID, false, LUCENE_3_EMULATION_VERSION); public static final int V_0_20_3_ID = /*00*/200399; - public static final Version V_0_20_3 = new Version(V_0_20_3_ID, false, org.apache.lucene.util.Version.LUCENE_3_6); + public static final Version V_0_20_3 = new Version(V_0_20_3_ID, false, LUCENE_3_EMULATION_VERSION); public static final int V_0_20_4_ID = /*00*/200499; - public static final Version V_0_20_4 = new Version(V_0_20_4_ID, false, org.apache.lucene.util.Version.LUCENE_3_6); + public static final Version V_0_20_4 = new Version(V_0_20_4_ID, false, LUCENE_3_EMULATION_VERSION); public static final int V_0_20_5_ID = /*00*/200599; - public static final Version V_0_20_5 = new Version(V_0_20_5_ID, false, org.apache.lucene.util.Version.LUCENE_3_6); + public static final Version V_0_20_5 = new Version(V_0_20_5_ID, false, LUCENE_3_EMULATION_VERSION); public static final int V_0_20_6_ID = /*00*/200699; - public static final Version V_0_20_6 = new Version(V_0_20_6_ID, false, org.apache.lucene.util.Version.LUCENE_3_6); + public static final Version V_0_20_6 = new Version(V_0_20_6_ID, false, LUCENE_3_EMULATION_VERSION); public static final int V_0_20_7_ID = /*00*/200799; - public static final Version V_0_20_7 = new Version(V_0_20_7_ID, false, org.apache.lucene.util.Version.LUCENE_3_6); + public static final Version V_0_20_7 = new Version(V_0_20_7_ID, false, LUCENE_3_EMULATION_VERSION); public static final int V_0_90_0_Beta1_ID = /*00*/900001; public static final Version V_0_90_0_Beta1 = new Version(V_0_90_0_Beta1_ID, false, org.apache.lucene.util.Version.LUCENE_4_1); @@ -213,7 +216,7 @@ public class Version implements Serializable { public static final int V_1_5_0_ID = /*00*/1050099; public static final Version V_1_5_0 = new Version(V_1_5_0_ID, false, org.apache.lucene.util.Version.LUCENE_4_10_2); public static final int V_2_0_0_ID = /*00*/2000099; - public static final Version V_2_0_0 = new Version(V_2_0_0_ID, true, org.apache.lucene.util.Version.LUCENE_4_10_2); + public static final Version V_2_0_0 = new Version(V_2_0_0_ID, true, org.apache.lucene.util.Version.LUCENE_5_0_0); public static final Version CURRENT = V_2_0_0; diff --git a/src/main/java/org/elasticsearch/action/mlt/TransportMoreLikeThisAction.java 
b/src/main/java/org/elasticsearch/action/mlt/TransportMoreLikeThisAction.java index 39ac5614159..7b835fe3fea 100644 --- a/src/main/java/org/elasticsearch/action/mlt/TransportMoreLikeThisAction.java +++ b/src/main/java/org/elasticsearch/action/mlt/TransportMoreLikeThisAction.java @@ -20,6 +20,7 @@ package org.elasticsearch.action.mlt; import org.apache.lucene.document.Field; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.Term; import org.apache.lucene.util.BytesRef; import org.elasticsearch.ElasticsearchException; @@ -284,7 +285,7 @@ public class TransportMoreLikeThisAction extends HandledTransportAction getComparator() { - return BytesRef.getUTF8SortedAsUnicodeComparator(); - } - @Override public SeekStatus seekCeil(BytesRef text) throws IOException { throw new UnsupportedOperationException(); @@ -345,11 +340,6 @@ public final class TermVectorFields extends Fields { }; } - @Override - public Comparator getComparator() { - return BytesRef.getUTF8SortedAsUnicodeComparator(); - } - @Override public long size() throws IOException { return numTerms; diff --git a/src/main/java/org/elasticsearch/common/blobstore/fs/FsBlobContainer.java b/src/main/java/org/elasticsearch/common/blobstore/fs/FsBlobContainer.java index 0e331a55b24..12119bce910 100644 --- a/src/main/java/org/elasticsearch/common/blobstore/fs/FsBlobContainer.java +++ b/src/main/java/org/elasticsearch/common/blobstore/fs/FsBlobContainer.java @@ -86,8 +86,8 @@ public class FsBlobContainer extends AbstractBlobContainer { @Override public void close() throws IOException { super.close(); - IOUtils.fsync(file, false); - IOUtils.fsync(path, true); + IOUtils.fsync(file.toPath(), false); + IOUtils.fsync(path.toPath(), true); } }, blobStore.bufferSizeInBytes()); } diff --git a/src/main/java/org/elasticsearch/common/io/FileSystemUtils.java b/src/main/java/org/elasticsearch/common/io/FileSystemUtils.java index cd1b70270f8..d7a9156a5f3 100644 --- a/src/main/java/org/elasticsearch/common/io/FileSystemUtils.java +++ b/src/main/java/org/elasticsearch/common/io/FileSystemUtils.java @@ -154,7 +154,7 @@ public class FileSystemUtils { * because not all file systems and operating systems allow to fsync on a directory) */ public static void syncFile(File fileToSync, boolean isDir) throws IOException { - IOUtils.fsync(fileToSync, isDir); + IOUtils.fsync(fileToSync.toPath(), isDir); } /** diff --git a/src/main/java/org/elasticsearch/common/lucene/Lucene.java b/src/main/java/org/elasticsearch/common/lucene/Lucene.java index 880e763e436..9bb3788b4aa 100644 --- a/src/main/java/org/elasticsearch/common/lucene/Lucene.java +++ b/src/main/java/org/elasticsearch/common/lucene/Lucene.java @@ -21,7 +21,10 @@ package org.elasticsearch.common.lucene; import org.apache.lucene.analysis.core.KeywordAnalyzer; import org.apache.lucene.analysis.standard.StandardAnalyzer; +import org.apache.lucene.codecs.Codec; import org.apache.lucene.codecs.CodecUtil; +import org.apache.lucene.codecs.DocValuesFormat; +import org.apache.lucene.codecs.PostingsFormat; import org.apache.lucene.index.*; import org.apache.lucene.search.*; import org.apache.lucene.store.Directory; @@ -56,8 +59,18 @@ public class Lucene { public static final Version VERSION = Version.LATEST; public static final Version ANALYZER_VERSION = VERSION; public static final Version QUERYPARSER_VERSION = VERSION; + public static final String LATEST_DOC_VALUES_FORMAT = "Lucene50"; + public static final String LATEST_POSTINGS_FORMAT = "Lucene50"; + public static final String LATEST_CODEC = 
Codec.getDefault().getName(); - public static final NamedAnalyzer STANDARD_ANALYZER = new NamedAnalyzer("_standard", AnalyzerScope.GLOBAL, new StandardAnalyzer(ANALYZER_VERSION)); + static { + Deprecated annotation = PostingsFormat.forName(LATEST_POSTINGS_FORMAT).getClass().getAnnotation(Deprecated.class); + assert annotation == null : "PostingsFormat " + LATEST_POSTINGS_FORMAT + " is deprecated"; + annotation = DocValuesFormat.forName(LATEST_DOC_VALUES_FORMAT).getClass().getAnnotation(Deprecated.class); + assert annotation == null : "DocValuesFormat " + LATEST_DOC_VALUES_FORMAT + " is deprecated"; + } + + public static final NamedAnalyzer STANDARD_ANALYZER = new NamedAnalyzer("_standard", AnalyzerScope.GLOBAL, new StandardAnalyzer()); public static final NamedAnalyzer KEYWORD_ANALYZER = new NamedAnalyzer("_keyword", AnalyzerScope.GLOBAL, new KeywordAnalyzer()); public static final ScoreDoc[] EMPTY_SCORE_DOCS = new ScoreDoc[0]; @@ -81,18 +94,14 @@ public class Lucene { * Reads the segments infos, failing if it fails to load */ public static SegmentInfos readSegmentInfos(Directory directory) throws IOException { - final SegmentInfos sis = new SegmentInfos(); - sis.read(directory); - return sis; + return SegmentInfos.readLatestCommit(directory); } /** * Reads the segments infos from the given commit, failing if it fails to load */ public static SegmentInfos readSegmentInfos(IndexCommit commit, Directory directory) throws IOException { - final SegmentInfos sis = new SegmentInfos(); - sis.read(directory, commit.getSegmentsFileName()); - return sis; + return SegmentInfos.readCommit(directory, commit.getSegmentsFileName()); } public static void checkSegmentInfoIntegrity(final Directory directory) throws IOException { @@ -483,11 +492,13 @@ public class Lucene { * A collector that terminates early by throwing {@link org.elasticsearch.common.lucene.Lucene.EarlyTerminationException} * when count of matched documents has reached maxCountHits */ - public final static class EarlyTerminatingCollector extends Collector { + public final static class EarlyTerminatingCollector extends SimpleCollector { private final int maxCountHits; private final Collector delegate; + private int count = 0; + private LeafCollector leafCollector; EarlyTerminatingCollector(int maxCountHits) { this.maxCountHits = maxCountHits; @@ -512,12 +523,12 @@ @Override public void setScorer(Scorer scorer) throws IOException { - delegate.setScorer(scorer); + leafCollector.setScorer(scorer); } @Override public void collect(int doc) throws IOException { - delegate.collect(doc); + leafCollector.collect(doc); if (++count >= maxCountHits) { throw new EarlyTerminationException("early termination [CountBased]"); @@ -525,13 +536,13 @@ } @Override - public void setNextReader(AtomicReaderContext atomicReaderContext) throws IOException { - delegate.setNextReader(atomicReaderContext); + public void doSetNextReader(LeafReaderContext atomicReaderContext) throws IOException { + leafCollector = delegate.getLeafCollector(atomicReaderContext); } @Override public boolean acceptsDocsOutOfOrder() { - return delegate.acceptsDocsOutOfOrder(); + return leafCollector.acceptsDocsOutOfOrder(); } }
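The collector conversions in this patch all follow the shape above: in Lucene 5 a Collector hands out one LeafCollector per segment through getLeafCollector(LeafReaderContext) instead of being mutated through setNextReader(AtomicReaderContext). A minimal sketch of the new contract, assuming the collector API as of this Lucene 5 snapshot (the class itself is illustrative, not part of the patch):

    import java.io.IOException;
    import org.apache.lucene.search.SimpleCollector;

    // SimpleCollector adapts the per-segment API for the common case of a
    // single stateful object; doSetNextReader(LeafReaderContext) is the
    // per-segment hook when segment state is needed.
    final class HitCountCollector extends SimpleCollector {
        private int count;

        @Override
        public void collect(int doc) throws IOException {
            count++; // doc is segment-relative; plain counting needs no docBase
        }

        @Override
        public boolean acceptsDocsOutOfOrder() {
            return true; // counting is order-independent
        }

        int count() {
            return count;
        }
    }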
@@ -545,10 +556,11 @@ /** * Returns true iff the given exception or - * one of it's causes is an instance of {@link CorruptIndexException} otherwise false. + * one of its causes is an instance of {@link CorruptIndexException}, + * {@link IndexFormatTooOldException}, or {@link IndexFormatTooNewException} otherwise false. */ public static boolean isCorruptionException(Throwable t) { - return ExceptionsHelper.unwrap(t, CorruptIndexException.class) != null; + return ExceptionsHelper.unwrapCorruption(t) != null; } /** diff --git a/src/main/java/org/elasticsearch/common/lucene/MinimumScoreCollector.java b/src/main/java/org/elasticsearch/common/lucene/MinimumScoreCollector.java index 91500cef107..8233942e1a6 100644 --- a/src/main/java/org/elasticsearch/common/lucene/MinimumScoreCollector.java +++ b/src/main/java/org/elasticsearch/common/lucene/MinimumScoreCollector.java @@ -19,23 +19,21 @@ package org.elasticsearch.common.lucene; -import org.apache.lucene.index.AtomicReaderContext; -import org.apache.lucene.search.Collector; -import org.apache.lucene.search.ScoreCachingWrappingScorer; -import org.apache.lucene.search.Scorer; +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.search.*; import java.io.IOException; /** * */ -public class MinimumScoreCollector extends Collector { +public class MinimumScoreCollector extends SimpleCollector { private final Collector collector; - private final float minimumScore; private Scorer scorer; + private LeafCollector leafCollector; public MinimumScoreCollector(Collector collector, float minimumScore) { this.collector = collector; @@ -48,23 +46,23 @@ public class MinimumScoreCollector extends Collector { scorer = new ScoreCachingWrappingScorer(scorer); } this.scorer = scorer; - collector.setScorer(scorer); + leafCollector.setScorer(scorer); } @Override public void collect(int doc) throws IOException { if (scorer.score() >= minimumScore) { - collector.collect(doc); + leafCollector.collect(doc); } } @Override - public void setNextReader(AtomicReaderContext context) throws IOException { - collector.setNextReader(context); + public void doSetNextReader(LeafReaderContext context) throws IOException { + leafCollector = collector.getLeafCollector(context); } @Override public boolean acceptsDocsOutOfOrder() { - return collector.acceptsDocsOutOfOrder(); + return leafCollector.acceptsDocsOutOfOrder(); } } \ No newline at end of file diff --git a/src/main/java/org/elasticsearch/common/lucene/MultiCollector.java b/src/main/java/org/elasticsearch/common/lucene/MultiCollector.java index 7ec8cef7f69..90dfbf9a8d3 100644 --- a/src/main/java/org/elasticsearch/common/lucene/MultiCollector.java +++ b/src/main/java/org/elasticsearch/common/lucene/MultiCollector.java @@ -19,10 +19,8 @@ package org.elasticsearch.common.lucene; -import org.apache.lucene.index.AtomicReaderContext; -import org.apache.lucene.search.Collector; -import org.apache.lucene.search.ScoreCachingWrappingScorer; -import org.apache.lucene.search.Scorer; +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.search.*; import org.elasticsearch.common.lucene.search.XCollector; import java.io.IOException; @@ -30,15 +28,19 @@ import java.io.IOException; /** * */ -public class MultiCollector extends XCollector { +public class MultiCollector extends SimpleCollector implements XCollector { private final Collector collector; - private final Collector[] collectors; + private LeafCollector leafCollector; + private final LeafCollector[] leafCollectors; + + public MultiCollector(Collector collector, Collector[] collectors) { this.collector = collector; this.collectors = collectors; + this.leafCollectors = new
LeafCollector[collectors.length]; } @Override @@ -47,35 +49,35 @@ public class MultiCollector extends XCollector { if (!(scorer instanceof ScoreCachingWrappingScorer)) { scorer = new ScoreCachingWrappingScorer(scorer); } - collector.setScorer(scorer); - for (Collector collector : collectors) { - collector.setScorer(scorer); + leafCollector.setScorer(scorer); + for (LeafCollector leafCollector : leafCollectors) { + leafCollector.setScorer(scorer); } } @Override public void collect(int doc) throws IOException { - collector.collect(doc); - for (Collector collector : collectors) { - collector.collect(doc); + leafCollector.collect(doc); + for (LeafCollector leafCollector : leafCollectors) { + leafCollector.collect(doc); } } @Override - public void setNextReader(AtomicReaderContext context) throws IOException { - collector.setNextReader(context); - for (Collector collector : collectors) { - collector.setNextReader(context); + public void doSetNextReader(LeafReaderContext context) throws IOException { + leafCollector = collector.getLeafCollector(context); + for (int i = 0; i < collectors.length; i++) { + leafCollectors[i] = collectors[i].getLeafCollector(context); } } @Override public boolean acceptsDocsOutOfOrder() { - if (!collector.acceptsDocsOutOfOrder()) { + if (!leafCollector.acceptsDocsOutOfOrder()) { return false; } - for (Collector collector : collectors) { - if (!collector.acceptsDocsOutOfOrder()) { + for (LeafCollector leafCollector : leafCollectors) { + if (!leafCollector.acceptsDocsOutOfOrder()) { return false; } } diff --git a/src/main/java/org/elasticsearch/common/lucene/ReaderContextAware.java b/src/main/java/org/elasticsearch/common/lucene/ReaderContextAware.java index 8dbb3cf1b72..e580909e990 100644 --- a/src/main/java/org/elasticsearch/common/lucene/ReaderContextAware.java +++ b/src/main/java/org/elasticsearch/common/lucene/ReaderContextAware.java @@ -18,12 +18,12 @@ */ package org.elasticsearch.common.lucene; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; /** * */ public interface ReaderContextAware { - public void setNextReader(AtomicReaderContext reader); + public void setNextReader(LeafReaderContext reader); } diff --git a/src/main/java/org/elasticsearch/common/lucene/SegmentReaderUtils.java b/src/main/java/org/elasticsearch/common/lucene/SegmentReaderUtils.java index 5efe64fcd20..892a4fb333d 100644 --- a/src/main/java/org/elasticsearch/common/lucene/SegmentReaderUtils.java +++ b/src/main/java/org/elasticsearch/common/lucene/SegmentReaderUtils.java @@ -18,8 +18,8 @@ */ package org.elasticsearch.common.lucene; -import org.apache.lucene.index.AtomicReader; -import org.apache.lucene.index.FilterAtomicReader; +import org.apache.lucene.index.FilterLeafReader; +import org.apache.lucene.index.LeafReader; import org.apache.lucene.index.SegmentReader; import org.elasticsearch.ElasticsearchIllegalStateException; import org.elasticsearch.common.Nullable; @@ -31,7 +31,7 @@ public class SegmentReaderUtils { * If no SegmentReader can be extracted an {@link org.elasticsearch.ElasticsearchIllegalStateException} is thrown. 
*/ @Nullable - public static SegmentReader segmentReader(AtomicReader reader) { + public static SegmentReader segmentReader(LeafReader reader) { return internalSegmentReader(reader, true); } @@ -40,24 +40,24 @@ public class SegmentReaderUtils { * is returned */ @Nullable - public static SegmentReader segmentReaderOrNull(AtomicReader reader) { + public static SegmentReader segmentReaderOrNull(LeafReader reader) { return internalSegmentReader(reader, false); } - public static boolean registerCoreListener(AtomicReader reader, SegmentReader.CoreClosedListener listener) { + public static boolean registerCoreListener(LeafReader reader, SegmentReader.CoreClosedListener listener) { reader.addCoreClosedListener(listener); return true; } - private static SegmentReader internalSegmentReader(AtomicReader reader, boolean fail) { + private static SegmentReader internalSegmentReader(LeafReader reader, boolean fail) { if (reader == null) { return null; } if (reader instanceof SegmentReader) { return (SegmentReader) reader; - } else if (reader instanceof FilterAtomicReader) { - final FilterAtomicReader fReader = (FilterAtomicReader) reader; - return segmentReader(FilterAtomicReader.unwrap(fReader)); + } else if (reader instanceof FilterLeafReader) { + final FilterLeafReader fReader = (FilterLeafReader) reader; + return segmentReader(FilterLeafReader.unwrap(fReader)); } if (fail) { // hard fail - we can't get a SegmentReader diff --git a/src/main/java/org/elasticsearch/common/lucene/all/AllField.java b/src/main/java/org/elasticsearch/common/lucene/all/AllField.java index f1bd209eaf2..f6d0a13c396 100644 --- a/src/main/java/org/elasticsearch/common/lucene/all/AllField.java +++ b/src/main/java/org/elasticsearch/common/lucene/all/AllField.java @@ -23,7 +23,7 @@ import org.apache.lucene.analysis.Analyzer; import org.apache.lucene.analysis.TokenStream; import org.apache.lucene.document.Field; import org.apache.lucene.document.FieldType; -import org.apache.lucene.index.FieldInfo.IndexOptions; +import org.apache.lucene.index.IndexOptions; import org.elasticsearch.ElasticsearchException; import java.io.IOException; diff --git a/src/main/java/org/elasticsearch/common/lucene/all/AllTermQuery.java b/src/main/java/org/elasticsearch/common/lucene/all/AllTermQuery.java index 8f9c4243045..7f2b5c71448 100644 --- a/src/main/java/org/elasticsearch/common/lucene/all/AllTermQuery.java +++ b/src/main/java/org/elasticsearch/common/lucene/all/AllTermQuery.java @@ -19,7 +19,7 @@ package org.elasticsearch.common.lucene.all; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.DocsAndPositionsEnum; import org.apache.lucene.index.Term; import org.apache.lucene.search.ComplexExplanation; @@ -62,7 +62,7 @@ public class AllTermQuery extends SpanTermQuery { } @Override - public AllTermSpanScorer scorer(AtomicReaderContext context, Bits acceptDocs) throws IOException { + public AllTermSpanScorer scorer(LeafReaderContext context, Bits acceptDocs) throws IOException { if (this.stats == null) { return null; } @@ -145,7 +145,7 @@ public class AllTermQuery extends SpanTermQuery { } @Override - public Explanation explain(AtomicReaderContext context, int doc) throws IOException{ + public Explanation explain(LeafReaderContext context, int doc) throws IOException{ AllTermSpanScorer scorer = scorer(context, context.reader().getLiveDocs()); if (scorer != null) { int newDoc = scorer.advance(doc); diff --git 
a/src/main/java/org/elasticsearch/common/lucene/docset/AndDocIdSet.java b/src/main/java/org/elasticsearch/common/lucene/docset/AndDocIdSet.java index 97a1ab8228f..d988fe17997 100644 --- a/src/main/java/org/elasticsearch/common/lucene/docset/AndDocIdSet.java +++ b/src/main/java/org/elasticsearch/common/lucene/docset/AndDocIdSet.java @@ -21,11 +21,15 @@ package org.elasticsearch.common.lucene.docset; import org.apache.lucene.search.DocIdSet; import org.apache.lucene.search.DocIdSetIterator; +import org.apache.lucene.util.ArrayUtil; import org.apache.lucene.util.Bits; +import org.apache.lucene.util.InPlaceMergeSorter; import org.apache.lucene.util.RamUsageEstimator; import java.io.IOException; import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collection; import java.util.List; /** @@ -89,7 +93,7 @@ public class AndDocIdSet extends DocIdSet { } } if (bits.isEmpty()) { - return IteratorBasedIterator.newDocIdSetIterator(iterators.toArray(new DocIdSet[iterators.size()])); + return IteratorBasedIterator.newDocIdSetIterator(iterators); } if (iterators.isEmpty()) { return new BitsDocIdSetIterator(new AndBits(bits.toArray(new Bits[bits.size()]))); @@ -97,16 +101,17 @@ public class AndDocIdSet extends DocIdSet { // combination of both..., first iterating over the "fast" ones, and then checking on the more // expensive ones return new BitsDocIdSetIterator.FilteredIterator( - IteratorBasedIterator.newDocIdSetIterator(iterators.toArray(new DocIdSet[iterators.size()])), + IteratorBasedIterator.newDocIdSetIterator(iterators), new AndBits(bits.toArray(new Bits[bits.size()])) ); } - static class AndBits implements Bits { + /** A conjunction between several {@link Bits} instances with short-circuit logic. */ + public static class AndBits implements Bits { private final Bits[] bits; - AndBits(Bits[] bits) { + public AndBits(Bits[] bits) { this.bits = bits; } @@ -127,18 +132,17 @@ public class AndDocIdSet extends DocIdSet { } static class IteratorBasedIterator extends DocIdSetIterator { - private int lastReturn = -1; - private final DocIdSetIterator[] iterators; - private final long cost; + private int doc = -1; + private final DocIdSetIterator lead; + private final DocIdSetIterator[] otherIterators; - public static DocIdSetIterator newDocIdSetIterator(DocIdSet[] sets) throws IOException { - if (sets.length == 0) { + public static DocIdSetIterator newDocIdSetIterator(Collection sets) throws IOException { + if (sets.isEmpty()) { return DocIdSetIterator.empty(); } - final DocIdSetIterator[] iterators = new DocIdSetIterator[sets.length]; + final DocIdSetIterator[] iterators = new DocIdSetIterator[sets.size()]; int j = 0; - long cost = Integer.MAX_VALUE; for (DocIdSet set : sets) { if (set == null) { return DocIdSetIterator.empty(); @@ -148,94 +152,74 @@ public class AndDocIdSet extends DocIdSet { return DocIdSetIterator.empty();// non matching } iterators[j++] = docIdSetIterator; - cost = Math.min(cost, docIdSetIterator.cost()); } } - if (sets.length == 1) { + if (sets.size() == 1) { // shortcut if there is only one valid iterator. 
return iterators[0]; } - return new IteratorBasedIterator(iterators, cost); + return new IteratorBasedIterator(iterators); } - private IteratorBasedIterator(DocIdSetIterator[] iterators, long cost) throws IOException { - this.iterators = iterators; - this.cost = cost; + private IteratorBasedIterator(DocIdSetIterator[] iterators) throws IOException { + final DocIdSetIterator[] sortedIterators = Arrays.copyOf(iterators, iterators.length); + new InPlaceMergeSorter() { + + @Override + protected int compare(int i, int j) { + return Long.compare(sortedIterators[i].cost(), sortedIterators[j].cost()); + } + + @Override + protected void swap(int i, int j) { + ArrayUtil.swap(sortedIterators, i, j); + } + + }.sort(0, sortedIterators.length); + lead = sortedIterators[0]; + this.otherIterators = Arrays.copyOfRange(sortedIterators, 1, sortedIterators.length); } @Override public final int docID() { - return lastReturn; + return doc; } @Override public final int nextDoc() throws IOException { - - if (lastReturn == DocIdSetIterator.NO_MORE_DOCS) { - assert false : "Illegal State - DocIdSetIterator is already exhausted"; - return DocIdSetIterator.NO_MORE_DOCS; - } - - DocIdSetIterator dcit = iterators[0]; - int target = dcit.nextDoc(); - int size = iterators.length; - int skip = 0; - int i = 1; - while (i < size) { - if (i != skip) { - dcit = iterators[i]; - int docid = dcit.advance(target); - if (docid > target) { - target = docid; - if (i != 0) { - skip = i; - i = 0; - continue; - } else - skip = 0; - } - } - i++; - } - return (lastReturn = target); + doc = lead.nextDoc(); + return doNext(); } @Override public final int advance(int target) throws IOException { + doc = lead.advance(target); + return doNext(); + } - if (lastReturn == DocIdSetIterator.NO_MORE_DOCS) { - assert false : "Illegal State - DocIdSetIterator is already exhausted"; - return DocIdSetIterator.NO_MORE_DOCS; - } - - DocIdSetIterator dcit = iterators[0]; - target = dcit.advance(target); - int size = iterators.length; - int skip = 0; - int i = 1; - while (i < size) { - if (i != skip) { - dcit = iterators[i]; - int docid = dcit.advance(target); - if (docid > target) { - target = docid; - if (i != 0) { - skip = i; - i = 0; - continue; - } else { - skip = 0; + private int doNext() throws IOException { + main: + while (true) { + for (DocIdSetIterator otherIterator : otherIterators) { + // the following assert is the invariant of the loop + assert otherIterator.docID() <= doc; + // the current doc might already be equal to doc if it broke the loop + // at the previous iteration + if (otherIterator.docID() < doc) { + final int advanced = otherIterator.advance(doc); + if (advanced > doc) { + doc = lead.advance(advanced); + continue main; } } } - i++; + return doc; } - return (lastReturn = target); } @Override public long cost() { - return cost; + return lead.cost(); } } } diff --git a/src/main/java/org/elasticsearch/common/lucene/docset/ContextDocIdSet.java b/src/main/java/org/elasticsearch/common/lucene/docset/ContextDocIdSet.java index b56579c8cfe..76a1c1595aa 100644 --- a/src/main/java/org/elasticsearch/common/lucene/docset/ContextDocIdSet.java +++ b/src/main/java/org/elasticsearch/common/lucene/docset/ContextDocIdSet.java @@ -19,18 +19,18 @@ package org.elasticsearch.common.lucene.docset; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.search.DocIdSet; /** - * A holder for a {@link DocIdSet} and the {@link AtomicReaderContext} it is associated with. 
+ * A holder for a {@link DocIdSet} and the {@link LeafReaderContext} it is associated with. */ public class ContextDocIdSet { - public final AtomicReaderContext context; + public final LeafReaderContext context; public final DocIdSet docSet; - public ContextDocIdSet(AtomicReaderContext context, DocIdSet docSet) { + public ContextDocIdSet(LeafReaderContext context, DocIdSet docSet) { this.context = context; this.docSet = docSet; } diff --git a/src/main/java/org/elasticsearch/common/lucene/docset/DocIdSets.java b/src/main/java/org/elasticsearch/common/lucene/docset/DocIdSets.java index 69d3ca4f32e..4102c645206 100644 --- a/src/main/java/org/elasticsearch/common/lucene/docset/DocIdSets.java +++ b/src/main/java/org/elasticsearch/common/lucene/docset/DocIdSets.java @@ -19,12 +19,16 @@ package org.elasticsearch.common.lucene.docset; -import org.apache.lucene.index.AtomicReader; +import org.apache.lucene.index.LeafReader; +import org.apache.lucene.search.BitsFilteredDocIdSet; import org.apache.lucene.search.DocIdSet; import org.apache.lucene.search.DocIdSetIterator; +import org.apache.lucene.util.BitDocIdSet; +import org.apache.lucene.util.BitSet; import org.apache.lucene.util.Bits; -import org.apache.lucene.util.FixedBitSet; import org.apache.lucene.util.RamUsageEstimator; +import org.apache.lucene.util.RoaringDocIdSet; +import org.apache.lucene.util.SparseFixedBitSet; import org.elasticsearch.common.Nullable; import java.io.IOException; @@ -52,44 +56,47 @@ public class DocIdSets { * For example, it does not ends up iterating one doc at a time check for its "value". */ public static boolean isFastIterator(DocIdSet set) { - return set instanceof FixedBitSet; + // TODO: this is really horrible + while (set instanceof BitsFilteredDocIdSet) { + set = ((BitsFilteredDocIdSet) set).getDelegate(); + } + return set instanceof BitDocIdSet || set instanceof RoaringDocIdSet; } /** * Converts to a cacheable {@link DocIdSet} *
<p/>
- * Note, we don't use {@link org.apache.lucene.search.DocIdSet#isCacheable()} because execution - * might be expensive even if its cacheable (i.e. not going back to the reader to execute). We effectively - * always either return an empty {@link DocIdSet} or {@link FixedBitSet} but never null. + * This never returns null. */ - public static DocIdSet toCacheable(AtomicReader reader, @Nullable DocIdSet set) throws IOException { + public static DocIdSet toCacheable(LeafReader reader, @Nullable DocIdSet set) throws IOException { if (set == null || set == DocIdSet.EMPTY) { return DocIdSet.EMPTY; } - DocIdSetIterator it = set.iterator(); + final DocIdSetIterator it = set.iterator(); if (it == null) { return DocIdSet.EMPTY; } - int doc = it.nextDoc(); - if (doc == DocIdSetIterator.NO_MORE_DOCS) { + final int firstDoc = it.nextDoc(); + if (firstDoc == DocIdSetIterator.NO_MORE_DOCS) { return DocIdSet.EMPTY; } - if (set instanceof FixedBitSet) { + if (set instanceof BitDocIdSet) { return set; } - // TODO: should we use WAH8DocIdSet like Lucene? - FixedBitSet fixedBitSet = new FixedBitSet(reader.maxDoc()); - do { - fixedBitSet.set(doc); - doc = it.nextDoc(); - } while (doc != DocIdSetIterator.NO_MORE_DOCS); - return fixedBitSet; + + final RoaringDocIdSet.Builder builder = new RoaringDocIdSet.Builder(reader.maxDoc()); + builder.add(firstDoc); + for (int doc = it.nextDoc(); doc != DocIdSetIterator.NO_MORE_DOCS; doc = it.nextDoc()) { + builder.add(doc); + } + + return builder.build(); } /** * Gets a set to bits. */ - public static Bits toSafeBits(AtomicReader reader, @Nullable DocIdSet set) throws IOException { + public static Bits toSafeBits(LeafReader reader, @Nullable DocIdSet set) throws IOException { if (set == null) { return new Bits.MatchNoBits(reader.maxDoc()); } @@ -101,18 +108,21 @@ public class DocIdSets { if (iterator == null) { return new Bits.MatchNoBits(reader.maxDoc()); } - return toFixedBitSet(iterator, reader.maxDoc()); + return toBitSet(iterator, reader.maxDoc()); } /** - * Creates a {@link FixedBitSet} from an iterator. + * Creates a {@link BitSet} from an iterator. */ - public static FixedBitSet toFixedBitSet(DocIdSetIterator iterator, int numBits) throws IOException { - FixedBitSet set = new FixedBitSet(numBits); - int doc; - while ((doc = iterator.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) { - set.set(doc); + public static BitSet toBitSet(DocIdSetIterator iterator, int numBits) throws IOException { + BitDocIdSet.Builder builder = new BitDocIdSet.Builder(numBits); + builder.or(iterator); + BitDocIdSet result = builder.build(); + if (result != null) { + return result.bits(); + } else { + return new SparseFixedBitSet(numBits); } - return set; } + } diff --git a/src/main/java/org/elasticsearch/common/lucene/docset/MatchDocIdSet.java b/src/main/java/org/elasticsearch/common/lucene/docset/MatchDocIdSet.java deleted file mode 100644 index 3e193ea1c53..00000000000 --- a/src/main/java/org/elasticsearch/common/lucene/docset/MatchDocIdSet.java +++ /dev/null @@ -1,170 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. 
- * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.common.lucene.docset; - -import org.apache.lucene.search.DocIdSet; -import org.apache.lucene.search.DocIdSetIterator; -import org.apache.lucene.search.FilteredDocIdSetIterator; -import org.apache.lucene.util.Bits; -import org.apache.lucene.util.FixedBitSet; -import org.elasticsearch.common.Nullable; - -import java.io.IOException; - -/** - * A {@link DocIdSet} that works on a "doc" level by checking if it matches or not. - */ -public abstract class MatchDocIdSet extends DocIdSet implements Bits { - - private final int maxDoc; - private final Bits acceptDocs; - - protected MatchDocIdSet(int maxDoc, @Nullable Bits acceptDocs) { - this.maxDoc = maxDoc; - this.acceptDocs = acceptDocs; - } - - /** - * Does this document match? - */ - protected abstract boolean matchDoc(int doc); - - @Override - public DocIdSetIterator iterator() throws IOException { - if (acceptDocs == null) { - return new NoAcceptDocsIterator(maxDoc); - } else if (acceptDocs instanceof FixedBitSet) { - return new FixedBitSetIterator(((DocIdSet) acceptDocs).iterator()); - } else { - return new BothIterator(maxDoc, acceptDocs); - } - } - - @Override - public Bits bits() throws IOException { - return this; - } - - @Override - public boolean get(int index) { - return matchDoc(index); - } - - @Override - public int length() { - return maxDoc; - } - - final class NoAcceptDocsIterator extends DocIdSetIterator { - - private final int maxDoc; - private int doc = -1; - - NoAcceptDocsIterator(int maxDoc) { - this.maxDoc = maxDoc; - } - - @Override - public int docID() { - return doc; - } - - @Override - public int nextDoc() { - do { - doc++; - if (doc >= maxDoc) { - return doc = NO_MORE_DOCS; - } - } while (!matchDoc(doc)); - return doc; - } - - @Override - public int advance(int target) { - for (doc = target; doc < maxDoc; doc++) { - if (matchDoc(doc)) { - return doc; - } - } - return doc = NO_MORE_DOCS; - } - - @Override - public long cost() { - return maxDoc; - } - - } - - final class FixedBitSetIterator extends FilteredDocIdSetIterator { - - FixedBitSetIterator(DocIdSetIterator innerIter) { - super(innerIter); - } - - @Override - protected boolean match(int doc) { - return matchDoc(doc); - } - } - - final class BothIterator extends DocIdSetIterator { - private final int maxDoc; - private final Bits acceptDocs; - private int doc = -1; - - BothIterator(int maxDoc, Bits acceptDocs) { - this.maxDoc = maxDoc; - this.acceptDocs = acceptDocs; - } - - @Override - public int docID() { - return doc; - } - - @Override - public int nextDoc() { - do { - doc++; - if (doc >= maxDoc) { - return doc = NO_MORE_DOCS; - } - } while (!(acceptDocs.get(doc) && matchDoc(doc))); - return doc; - } - - @Override - public int advance(int target) { - for (doc = target; doc < maxDoc; doc++) { - if (acceptDocs.get(doc) && matchDoc(doc)) { - return doc; - } - } - return doc = NO_MORE_DOCS; - } - - @Override - public long cost() { - return maxDoc; - } - } -} diff --git a/src/main/java/org/elasticsearch/common/lucene/docset/OrDocIdSet.java b/src/main/java/org/elasticsearch/common/lucene/docset/OrDocIdSet.java 
index 845f038627e..dfb6157fb36 100644 --- a/src/main/java/org/elasticsearch/common/lucene/docset/OrDocIdSet.java +++ b/src/main/java/org/elasticsearch/common/lucene/docset/OrDocIdSet.java @@ -25,6 +25,7 @@ import org.apache.lucene.util.Bits; import org.apache.lucene.util.RamUsageEstimator; import java.io.IOException; +import java.util.Collection; /** * @@ -73,10 +74,12 @@ public class OrDocIdSet extends DocIdSet { return new IteratorBasedIterator(sets); } - static class OrBits implements Bits { + /** A disjunction between several {@link Bits} instances with short-circuit logic. */ + public static class OrBits implements Bits { + private final Bits[] bits; - OrBits(Bits[] bits) { + public OrBits(Bits[] bits) { this.bits = bits; } diff --git a/src/main/java/org/elasticsearch/common/lucene/index/FilterableTermsEnum.java b/src/main/java/org/elasticsearch/common/lucene/index/FilterableTermsEnum.java index cc11b309968..f8e84d9a6de 100644 --- a/src/main/java/org/elasticsearch/common/lucene/index/FilterableTermsEnum.java +++ b/src/main/java/org/elasticsearch/common/lucene/index/FilterableTermsEnum.java @@ -20,7 +20,12 @@ package org.elasticsearch.common.lucene.index; import com.google.common.collect.Lists; -import org.apache.lucene.index.*; +import org.apache.lucene.index.DocsAndPositionsEnum; +import org.apache.lucene.index.DocsEnum; +import org.apache.lucene.index.IndexReader; +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.index.Terms; +import org.apache.lucene.index.TermsEnum; import org.apache.lucene.search.DocIdSet; import org.apache.lucene.search.DocIdSetIterator; import org.apache.lucene.search.Filter; @@ -29,11 +34,8 @@ import org.apache.lucene.util.BytesRef; import org.elasticsearch.ElasticsearchIllegalArgumentException; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.lucene.docset.DocIdSets; -import org.elasticsearch.common.lucene.search.ApplyAcceptedDocsFilter; -import org.elasticsearch.common.lucene.search.Queries; import java.io.IOException; -import java.util.Comparator; import java.util.List; /** @@ -75,10 +77,9 @@ public class FilterableTermsEnum extends TermsEnum { // or we have this issue: https://github.com/elasticsearch/elasticsearch/issues/7951 numDocs = reader.maxDoc(); } - ApplyAcceptedDocsFilter acceptedDocsFilter = filter == null ? null : new ApplyAcceptedDocsFilter(filter); - List leaves = reader.leaves(); + List leaves = reader.leaves(); List enums = Lists.newArrayListWithExpectedSize(leaves.size()); - for (AtomicReaderContext context : leaves) { + for (LeafReaderContext context : leaves) { Terms terms = context.reader().terms(field); if (terms == null) { continue; @@ -88,24 +89,20 @@ public class FilterableTermsEnum extends TermsEnum { continue; } Bits bits = null; - if (acceptedDocsFilter != null) { - if (acceptedDocsFilter.filter() == Queries.MATCH_ALL_FILTER) { - bits = context.reader().getLiveDocs(); - } else { - // we want to force apply deleted docs - DocIdSet docIdSet = acceptedDocsFilter.getDocIdSet(context, context.reader().getLiveDocs()); - if (DocIdSets.isEmpty(docIdSet)) { - // fully filtered, none matching, no need to iterate on this - continue; - } - bits = DocIdSets.toSafeBits(context.reader(), docIdSet); - // Count how many docs are in our filtered set - // TODO make this lazy-loaded only for those that need it? 
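As a reviewer aid, the two DocIdSets conversions introduced earlier in this patch can be read in isolation roughly as below. This is a self-contained sketch against the Lucene 5.0 snapshot this patch builds on; the class and method names are illustrative, only the Lucene types and calls are real.

    import java.io.IOException;
    import org.apache.lucene.search.DocIdSet;
    import org.apache.lucene.search.DocIdSetIterator;
    import org.apache.lucene.util.BitDocIdSet;
    import org.apache.lucene.util.BitSet;
    import org.apache.lucene.util.RoaringDocIdSet;
    import org.apache.lucene.util.SparseFixedBitSet;

    class DocIdSetConversionSketch {
        // Same idea as the new toCacheable: drain the iterator into a compressed RoaringDocIdSet.
        static DocIdSet cacheable(DocIdSetIterator it, int maxDoc) throws IOException {
            RoaringDocIdSet.Builder builder = new RoaringDocIdSet.Builder(maxDoc);
            for (int doc = it.nextDoc(); doc != DocIdSetIterator.NO_MORE_DOCS; doc = it.nextDoc()) {
                builder.add(doc); // doc ids must be added in increasing order
            }
            return builder.build();
        }

        // Same idea as the new toBitSet: BitDocIdSet.Builder picks a dense or sparse
        // representation based on observed density, and build() returns null for an empty set.
        static BitSet bitSet(DocIdSetIterator it, int maxDoc) throws IOException {
            BitDocIdSet.Builder builder = new BitDocIdSet.Builder(maxDoc);
            builder.or(it);
            BitDocIdSet result = builder.build();
            return result != null ? result.bits() : new SparseFixedBitSet(maxDoc);
        }
    }

RoaringDocIdSet is what gives the compressed, cache-friendly representation that motivated dropping the unconditional FixedBitSet in toCacheable.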
- DocIdSetIterator iterator = docIdSet.iterator(); - if (iterator != null) { - while (iterator.nextDoc() != DocIdSetIterator.NO_MORE_DOCS) { - numDocs++; - } + if (filter != null) { + // we want to force apply deleted docs + DocIdSet docIdSet = filter.getDocIdSet(context, context.reader().getLiveDocs()); + if (DocIdSets.isEmpty(docIdSet)) { + // fully filtered, none matching, no need to iterate on this + continue; + } + bits = DocIdSets.toSafeBits(context.reader(), docIdSet); + // Count how many docs are in our filtered set + // TODO make this lazy-loaded only for those that need it? + DocIdSetIterator iterator = docIdSet.iterator(); + if (iterator != null) { + while (iterator.nextDoc() != DocIdSetIterator.NO_MORE_DOCS) { + numDocs++; } } } @@ -210,9 +207,4 @@ public class FilterableTermsEnum extends TermsEnum { public BytesRef next() throws IOException { throw new UnsupportedOperationException(UNSUPPORTED_MESSAGE); } - - @Override - public Comparator getComparator() { - throw new UnsupportedOperationException(UNSUPPORTED_MESSAGE); - } } \ No newline at end of file diff --git a/src/main/java/org/elasticsearch/common/lucene/search/AndFilter.java b/src/main/java/org/elasticsearch/common/lucene/search/AndFilter.java index 51f91ede86f..0dee394ac3a 100644 --- a/src/main/java/org/elasticsearch/common/lucene/search/AndFilter.java +++ b/src/main/java/org/elasticsearch/common/lucene/search/AndFilter.java @@ -19,7 +19,8 @@ package org.elasticsearch.common.lucene.search; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.search.BitsFilteredDocIdSet; import org.apache.lucene.search.DocIdSet; import org.apache.lucene.search.Filter; import org.apache.lucene.util.Bits; @@ -45,19 +46,19 @@ public class AndFilter extends Filter { } @Override - public DocIdSet getDocIdSet(AtomicReaderContext context, Bits acceptDocs) throws IOException { + public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptDocs) throws IOException { if (filters.size() == 1) { return filters.get(0).getDocIdSet(context, acceptDocs); } DocIdSet[] sets = new DocIdSet[filters.size()]; for (int i = 0; i < filters.size(); i++) { - DocIdSet set = filters.get(i).getDocIdSet(context, acceptDocs); + DocIdSet set = filters.get(i).getDocIdSet(context, null); if (DocIdSets.isEmpty(set)) { // none matching for this filter, we AND, so return EMPTY return null; } sets[i] = set; } - return new AndDocIdSet(sets); + return BitsFilteredDocIdSet.wrap(new AndDocIdSet(sets), acceptDocs); } @Override diff --git a/src/main/java/org/elasticsearch/common/lucene/search/ApplyAcceptedDocsFilter.java b/src/main/java/org/elasticsearch/common/lucene/search/ApplyAcceptedDocsFilter.java deleted file mode 100644 index 097584bf0e0..00000000000 --- a/src/main/java/org/elasticsearch/common/lucene/search/ApplyAcceptedDocsFilter.java +++ /dev/null @@ -1,217 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. 
- * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.common.lucene.search; - -import org.apache.lucene.index.AtomicReaderContext; -import org.apache.lucene.search.*; -import org.apache.lucene.util.Bits; -import org.apache.lucene.util.FixedBitSet; -import org.apache.lucene.util.RamUsageEstimator; -import org.elasticsearch.common.lucene.docset.DocIdSets; - -import java.io.IOException; - -/** - * The assumption is that the underlying filter might not apply the accepted docs, so this filter helps to wrap - * the actual filter and apply the actual accepted docs. - */ -// TODO: we can try and be smart, and only apply if if a filter is cached (down the "chain") since that's the only place that acceptDocs are not applied in ES -public class ApplyAcceptedDocsFilter extends Filter { - - private final Filter filter; - - public ApplyAcceptedDocsFilter(Filter filter) { - this.filter = filter; - } - - @Override - public DocIdSet getDocIdSet(AtomicReaderContext context, Bits acceptDocs) throws IOException { - DocIdSet docIdSet = filter.getDocIdSet(context, acceptDocs); - if (DocIdSets.isEmpty(docIdSet)) { - return null; - } - if (acceptDocs == null) { - return docIdSet; - } - if (acceptDocs == context.reader().getLiveDocs()) { - // optimized wrapper for not deleted cases - return new NotDeletedDocIdSet(docIdSet, acceptDocs); - } - // we wrap this to make sure we can unwrap the inner docIDset in #unwrap - return new WrappedDocIdSet(BitsFilteredDocIdSet.wrap(docIdSet, acceptDocs), docIdSet); - } - - public Filter filter() { - return this.filter; - } - - @Override - public String toString() { - return filter.toString(); - } - - public static DocIdSet unwrap(DocIdSet docIdSet) { - if (docIdSet instanceof NotDeletedDocIdSet) { - return ((NotDeletedDocIdSet) docIdSet).innerSet; - } else if (docIdSet instanceof WrappedDocIdSet) { - return ((WrappedDocIdSet) docIdSet).innerSet; - } - return docIdSet; - } - - static class NotDeletedDocIdSet extends DocIdSet { - - private final DocIdSet innerSet; - private final Bits liveDocs; - - NotDeletedDocIdSet(DocIdSet innerSet, Bits liveDocs) { - this.innerSet = innerSet; - this.liveDocs = liveDocs; - } - - @Override - public boolean isCacheable() { - return innerSet.isCacheable(); - } - - @Override - public long ramBytesUsed() { - return RamUsageEstimator.NUM_BYTES_OBJECT_REF + innerSet.ramBytesUsed(); - } - - @Override - public Bits bits() throws IOException { - Bits bits = innerSet.bits(); - if (bits == null) { - return null; - } - return new NotDeleteBits(bits, liveDocs); - } - - @Override - public DocIdSetIterator iterator() throws IOException { - if (!DocIdSets.isFastIterator(innerSet) && liveDocs instanceof FixedBitSet) { - // might as well iterate over the live docs..., since the iterator is not fast enough - // but we can only do that if we have Bits..., in short, we reverse the order... 
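The deleted ApplyAcceptedDocsFilter is superseded by a much simpler idiom that the AndFilter, NotFilter and OrFilter changes in this patch all follow: build the DocIdSet while ignoring acceptDocs, then apply them exactly once at the end. A minimal sketch, assuming a hypothetical filter subclass; BitsFilteredDocIdSet.wrap is the only real API used:

    import java.io.IOException;
    import org.apache.lucene.index.LeafReaderContext;
    import org.apache.lucene.search.BitsFilteredDocIdSet;
    import org.apache.lucene.search.DocIdSet;
    import org.apache.lucene.search.Filter;
    import org.apache.lucene.util.Bits;

    abstract class AcceptDocsLastFilterSketch extends Filter {
        // subclasses compute their matches without looking at acceptDocs
        protected abstract DocIdSet compute(LeafReaderContext context) throws IOException;

        @Override
        public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptDocs) throws IOException {
            DocIdSet set = compute(context);
            if (set == null) {
                return null; // nothing matches, acceptDocs cannot change that
            }
            // wrap() is a no-op when acceptDocs is null, and otherwise filters lazily
            return BitsFilteredDocIdSet.wrap(set, acceptDocs);
        }
    }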
- Bits bits = innerSet.bits(); - if (bits != null) { - return new NotDeletedDocIdSetIterator(((FixedBitSet) liveDocs).iterator(), bits); - } - } - DocIdSetIterator iterator = innerSet.iterator(); - if (iterator == null) { - return null; - } - return new NotDeletedDocIdSetIterator(iterator, liveDocs); - } - } - - static class NotDeleteBits implements Bits { - - private final Bits bits; - private final Bits liveDocs; - - NotDeleteBits(Bits bits, Bits liveDocs) { - this.bits = bits; - this.liveDocs = liveDocs; - } - - @Override - public boolean get(int index) { - return liveDocs.get(index) && bits.get(index); - } - - @Override - public int length() { - return bits.length(); - } - } - - static class NotDeletedDocIdSetIterator extends FilteredDocIdSetIterator { - - private final Bits match; - - NotDeletedDocIdSetIterator(DocIdSetIterator innerIter, Bits match) { - super(innerIter); - this.match = match; - } - - @Override - protected boolean match(int doc) { - return match.get(doc); - } - } - - @Override - public int hashCode() { - final int prime = 31; - int result = 1; - result = prime * result + ((filter == null) ? 0 : filter.hashCode()); - return result; - } - - @Override - public boolean equals(Object obj) { - if (this == obj) - return true; - if (obj == null) - return false; - if (getClass() != obj.getClass()) - return false; - ApplyAcceptedDocsFilter other = (ApplyAcceptedDocsFilter) obj; - if (filter == null) { - if (other.filter != null) - return false; - } else if (!filter.equals(other.filter)) - return false; - return true; - } - - private static final class WrappedDocIdSet extends DocIdSet { - private final DocIdSet delegate; - private final DocIdSet innerSet; - - private WrappedDocIdSet(DocIdSet delegate, DocIdSet innerSet) { - this.delegate = delegate; - this.innerSet = innerSet; - } - - - @Override - public DocIdSetIterator iterator() throws IOException { - return delegate.iterator(); - } - - @Override - public Bits bits() throws IOException { - return delegate.bits(); - } - - @Override - public boolean isCacheable() { - return delegate.isCacheable(); - } - - @Override - public long ramBytesUsed() { - return RamUsageEstimator.NUM_BYTES_OBJECT_REF + delegate.ramBytesUsed(); - } - } -} diff --git a/src/main/java/org/elasticsearch/common/lucene/search/FilteredCollector.java b/src/main/java/org/elasticsearch/common/lucene/search/FilteredCollector.java index cbb10a43873..491b3c5cfda 100644 --- a/src/main/java/org/elasticsearch/common/lucene/search/FilteredCollector.java +++ b/src/main/java/org/elasticsearch/common/lucene/search/FilteredCollector.java @@ -18,10 +18,8 @@ */ package org.elasticsearch.common.lucene.search; -import org.apache.lucene.index.AtomicReaderContext; -import org.apache.lucene.search.Collector; -import org.apache.lucene.search.Filter; -import org.apache.lucene.search.Scorer; +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.search.*; import org.apache.lucene.util.Bits; import org.elasticsearch.common.lucene.docset.DocIdSets; @@ -30,13 +28,13 @@ import java.io.IOException; /** * */ -public class FilteredCollector extends XCollector { +public class FilteredCollector extends SimpleCollector implements XCollector { private final Collector collector; - private final Filter filter; private Bits docSet; + private LeafCollector leafCollector; public FilteredCollector(Collector collector, Filter filter) { this.collector = collector; @@ -52,24 +50,24 @@ public class FilteredCollector extends XCollector { @Override public void setScorer(Scorer 
scorer) throws IOException { - collector.setScorer(scorer); + leafCollector.setScorer(scorer); } @Override public void collect(int doc) throws IOException { if (docSet.get(doc)) { - collector.collect(doc); + leafCollector.collect(doc); } } @Override - public void setNextReader(AtomicReaderContext context) throws IOException { - collector.setNextReader(context); + public void doSetNextReader(LeafReaderContext context) throws IOException { + leafCollector = collector.getLeafCollector(context); docSet = DocIdSets.toSafeBits(context.reader(), filter.getDocIdSet(context, null)); } @Override public boolean acceptsDocsOutOfOrder() { - return collector.acceptsDocsOutOfOrder(); + return leafCollector.acceptsDocsOutOfOrder(); } } \ No newline at end of file diff --git a/src/main/java/org/elasticsearch/common/lucene/search/LimitFilter.java b/src/main/java/org/elasticsearch/common/lucene/search/LimitFilter.java index df835943dd7..c767f9f9e57 100644 --- a/src/main/java/org/elasticsearch/common/lucene/search/LimitFilter.java +++ b/src/main/java/org/elasticsearch/common/lucene/search/LimitFilter.java @@ -19,14 +19,15 @@ package org.elasticsearch.common.lucene.search; -import org.apache.lucene.index.AtomicReaderContext; -import org.apache.lucene.search.DocIdSet; -import org.apache.lucene.util.Bits; -import org.elasticsearch.common.Nullable; -import org.elasticsearch.common.lucene.docset.MatchDocIdSet; - import java.io.IOException; +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.search.DocIdSet; +import org.apache.lucene.search.DocValuesDocIdSet; +import org.apache.lucene.util.Bits; +import org.apache.lucene.util.RamUsageEstimator; +import org.elasticsearch.common.Nullable; + public class LimitFilter extends NoCacheFilter { private final int limit; @@ -41,14 +42,14 @@ public class LimitFilter extends NoCacheFilter { } @Override - public DocIdSet getDocIdSet(AtomicReaderContext context, Bits acceptDocs) throws IOException { + public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptDocs) throws IOException { if (counter > limit) { return null; } return new LimitDocIdSet(context.reader().maxDoc(), acceptDocs, limit); } - public class LimitDocIdSet extends MatchDocIdSet { + public class LimitDocIdSet extends DocValuesDocIdSet { private final int limit; @@ -64,5 +65,10 @@ public class LimitFilter extends NoCacheFilter { } return true; } + + @Override + public long ramBytesUsed() { + return RamUsageEstimator.NUM_BYTES_INT; + } } } \ No newline at end of file diff --git a/src/main/java/org/elasticsearch/common/lucene/search/MatchAllDocsFilter.java b/src/main/java/org/elasticsearch/common/lucene/search/MatchAllDocsFilter.java index 41f21787588..eb62abbdb1a 100644 --- a/src/main/java/org/elasticsearch/common/lucene/search/MatchAllDocsFilter.java +++ b/src/main/java/org/elasticsearch/common/lucene/search/MatchAllDocsFilter.java @@ -19,7 +19,8 @@ package org.elasticsearch.common.lucene.search; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.search.BitsFilteredDocIdSet; import org.apache.lucene.search.DocIdSet; import org.apache.lucene.search.Filter; import org.apache.lucene.util.Bits; @@ -33,8 +34,8 @@ import java.io.IOException; public class MatchAllDocsFilter extends Filter { @Override - public DocIdSet getDocIdSet(AtomicReaderContext context, Bits acceptDocs) throws IOException { - return new AllDocIdSet(context.reader().maxDoc()); + public DocIdSet getDocIdSet(LeafReaderContext context, Bits 
acceptDocs) throws IOException { + return BitsFilteredDocIdSet.wrap(new AllDocIdSet(context.reader().maxDoc()), acceptDocs); } @Override diff --git a/src/main/java/org/elasticsearch/common/lucene/search/MatchNoDocsFilter.java b/src/main/java/org/elasticsearch/common/lucene/search/MatchNoDocsFilter.java index f130a19929d..c00650cda0e 100644 --- a/src/main/java/org/elasticsearch/common/lucene/search/MatchNoDocsFilter.java +++ b/src/main/java/org/elasticsearch/common/lucene/search/MatchNoDocsFilter.java @@ -19,7 +19,7 @@ package org.elasticsearch.common.lucene.search; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.search.DocIdSet; import org.apache.lucene.search.Filter; import org.apache.lucene.util.Bits; @@ -32,7 +32,7 @@ import java.io.IOException; public class MatchNoDocsFilter extends Filter { @Override - public DocIdSet getDocIdSet(AtomicReaderContext context, Bits acceptDocs) throws IOException { + public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptDocs) throws IOException { return null; } diff --git a/src/main/java/org/elasticsearch/common/lucene/search/MatchNoDocsQuery.java b/src/main/java/org/elasticsearch/common/lucene/search/MatchNoDocsQuery.java index 1eaf882508e..7fc4b2ccabf 100644 --- a/src/main/java/org/elasticsearch/common/lucene/search/MatchNoDocsQuery.java +++ b/src/main/java/org/elasticsearch/common/lucene/search/MatchNoDocsQuery.java @@ -19,7 +19,7 @@ package org.elasticsearch.common.lucene.search; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.Term; import org.apache.lucene.search.*; import org.apache.lucene.util.Bits; @@ -57,12 +57,12 @@ public final class MatchNoDocsQuery extends Query { } @Override - public Scorer scorer(AtomicReaderContext context, Bits acceptDocs) throws IOException { + public Scorer scorer(LeafReaderContext context, Bits acceptDocs) throws IOException { return null; } @Override - public Explanation explain(final AtomicReaderContext context, + public Explanation explain(final LeafReaderContext context, final int doc) { return new ComplexExplanation(false, 0, "MatchNoDocs matches nothing"); } diff --git a/src/main/java/org/elasticsearch/common/lucene/search/MultiPhrasePrefixQuery.java b/src/main/java/org/elasticsearch/common/lucene/search/MultiPhrasePrefixQuery.java index 9168b3b199e..8e75507be38 100644 --- a/src/main/java/org/elasticsearch/common/lucene/search/MultiPhrasePrefixQuery.java +++ b/src/main/java/org/elasticsearch/common/lucene/search/MultiPhrasePrefixQuery.java @@ -156,8 +156,8 @@ public class MultiPhrasePrefixQuery extends Query { // SlowCompositeReaderWrapper could be used... but this would merge all terms from each segment into one terms // instance, which is very expensive. Therefore I think it is better to iterate over each leaf individually. 
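The comment above on avoiding SlowCompositeReaderWrapper translates into the following per-segment loop under the renamed Lucene 5 API. A sketch only, assuming a field name and using the reuse-style Terms.iterator(TermsEnum) that this snapshot still exposes:

    import java.io.IOException;
    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.index.LeafReaderContext;
    import org.apache.lucene.index.Terms;
    import org.apache.lucene.index.TermsEnum;

    class PerLeafTermsSketch {
        static void visitTerms(IndexReader reader, String field) throws IOException {
            TermsEnum termsEnum = null;
            for (LeafReaderContext leaf : reader.leaves()) { // one entry per segment
                Terms terms = leaf.reader().terms(field);
                if (terms == null) {
                    continue; // this segment has no postings for the field
                }
                termsEnum = terms.iterator(termsEnum); // reuse the enum across segments
                // seek/next on termsEnum here, scoped to this segment only
            }
        }
    }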
TermsEnum termsEnum = null; - List leaves = reader.leaves(); - for (AtomicReaderContext leaf : leaves) { + List leaves = reader.leaves(); + for (LeafReaderContext leaf : leaves) { Terms _terms = leaf.reader().terms(field); if (_terms == null) { continue; diff --git a/src/main/java/org/elasticsearch/common/lucene/search/NoCacheFilter.java b/src/main/java/org/elasticsearch/common/lucene/search/NoCacheFilter.java index 6ec76d98ac9..879e6376ced 100644 --- a/src/main/java/org/elasticsearch/common/lucene/search/NoCacheFilter.java +++ b/src/main/java/org/elasticsearch/common/lucene/search/NoCacheFilter.java @@ -19,7 +19,7 @@ package org.elasticsearch.common.lucene.search; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.search.DocIdSet; import org.apache.lucene.search.Filter; import org.apache.lucene.util.Bits; @@ -39,7 +39,7 @@ public abstract class NoCacheFilter extends Filter { } @Override - public DocIdSet getDocIdSet(AtomicReaderContext context, Bits acceptDocs) throws IOException { + public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptDocs) throws IOException { return delegate.getDocIdSet(context, acceptDocs); } diff --git a/src/main/java/org/elasticsearch/common/lucene/search/NoopCollector.java b/src/main/java/org/elasticsearch/common/lucene/search/NoopCollector.java index 908ff52e447..52b631f16b8 100644 --- a/src/main/java/org/elasticsearch/common/lucene/search/NoopCollector.java +++ b/src/main/java/org/elasticsearch/common/lucene/search/NoopCollector.java @@ -19,16 +19,16 @@ package org.elasticsearch.common.lucene.search; -import org.apache.lucene.index.AtomicReaderContext; -import org.apache.lucene.search.Collector; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.search.Scorer; +import org.apache.lucene.search.SimpleCollector; import java.io.IOException; /** * */ -public class NoopCollector extends Collector { +public class NoopCollector extends SimpleCollector { public static final NoopCollector NOOP_COLLECTOR = new NoopCollector(); @@ -41,7 +41,7 @@ public class NoopCollector extends Collector { } @Override - public void setNextReader(AtomicReaderContext context) throws IOException { + protected void doSetNextReader(LeafReaderContext context) throws IOException { } @Override diff --git a/src/main/java/org/elasticsearch/common/lucene/search/NotFilter.java b/src/main/java/org/elasticsearch/common/lucene/search/NotFilter.java index 3c8867e9245..e1ddd51bbab 100644 --- a/src/main/java/org/elasticsearch/common/lucene/search/NotFilter.java +++ b/src/main/java/org/elasticsearch/common/lucene/search/NotFilter.java @@ -19,7 +19,8 @@ package org.elasticsearch.common.lucene.search; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.search.BitsFilteredDocIdSet; import org.apache.lucene.search.DocIdSet; import org.apache.lucene.search.Filter; import org.apache.lucene.util.Bits; @@ -45,12 +46,15 @@ public class NotFilter extends Filter { } @Override - public DocIdSet getDocIdSet(AtomicReaderContext context, Bits acceptDocs) throws IOException { - DocIdSet set = filter.getDocIdSet(context, acceptDocs); + public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptDocs) throws IOException { + DocIdSet set = filter.getDocIdSet(context, null); + DocIdSet notSet; if (DocIdSets.isEmpty(set)) { - return new AllDocIdSet(context.reader().maxDoc()); + notSet = new AllDocIdSet(context.reader().maxDoc()); + 
} else { + notSet = new NotDocIdSet(set, context.reader().maxDoc()); } - return new NotDocIdSet(set, context.reader().maxDoc()); + return BitsFilteredDocIdSet.wrap(notSet, acceptDocs); } @Override diff --git a/src/main/java/org/elasticsearch/common/lucene/search/OrFilter.java b/src/main/java/org/elasticsearch/common/lucene/search/OrFilter.java index 2d9ba6e0cc4..1a42957e817 100644 --- a/src/main/java/org/elasticsearch/common/lucene/search/OrFilter.java +++ b/src/main/java/org/elasticsearch/common/lucene/search/OrFilter.java @@ -19,7 +19,8 @@ package org.elasticsearch.common.lucene.search; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.search.BitsFilteredDocIdSet; import org.apache.lucene.search.DocIdSet; import org.apache.lucene.search.Filter; import org.apache.lucene.util.Bits; @@ -46,13 +47,13 @@ public class OrFilter extends Filter { } @Override - public DocIdSet getDocIdSet(AtomicReaderContext context, Bits acceptDocs) throws IOException { + public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptDocs) throws IOException { if (filters.size() == 1) { return filters.get(0).getDocIdSet(context, acceptDocs); } List sets = new ArrayList<>(filters.size()); for (int i = 0; i < filters.size(); i++) { - DocIdSet set = filters.get(i).getDocIdSet(context, acceptDocs); + DocIdSet set = filters.get(i).getDocIdSet(context, null); if (DocIdSets.isEmpty(set)) { // none matching for this filter, continue continue; } @@ -61,10 +62,13 @@ public class OrFilter extends Filter { if (sets.size() == 0) { return null; } + DocIdSet set; if (sets.size() == 1) { - return sets.get(0); + set = sets.get(0); + } else { + set = new OrDocIdSet(sets.toArray(new DocIdSet[sets.size()])); } - return new OrDocIdSet(sets.toArray(new DocIdSet[sets.size()])); + return BitsFilteredDocIdSet.wrap(set, acceptDocs); } @Override diff --git a/src/main/java/org/elasticsearch/common/lucene/search/Queries.java b/src/main/java/org/elasticsearch/common/lucene/search/Queries.java index 3e1030c9941..d8d88b89886 100644 --- a/src/main/java/org/elasticsearch/common/lucene/search/Queries.java +++ b/src/main/java/org/elasticsearch/common/lucene/search/Queries.java @@ -42,7 +42,7 @@ public class Queries { // We don't use MatchAllDocsQuery, its slower than the one below ... (much slower) // NEVER cache this XConstantScore Query it's not immutable and based on #3521 // some code might set a boost on this query. - return new XConstantScoreQuery(MATCH_ALL_FILTER); + return new ConstantScoreQuery(MATCH_ALL_FILTER); } /** Return a query that matches no document. 
*/ @@ -74,8 +74,8 @@ public class Queries { } public static boolean isConstantMatchAllQuery(Query query) { - if (query instanceof XConstantScoreQuery) { - XConstantScoreQuery scoreQuery = (XConstantScoreQuery) query; + if (query instanceof ConstantScoreQuery) { + ConstantScoreQuery scoreQuery = (ConstantScoreQuery) query; if (scoreQuery.getFilter() instanceof MatchAllDocsFilter) { return true; } diff --git a/src/main/java/org/elasticsearch/common/lucene/search/RegexpFilter.java b/src/main/java/org/elasticsearch/common/lucene/search/RegexpFilter.java index 9af6bee51de..9950c6df9be 100644 --- a/src/main/java/org/elasticsearch/common/lucene/search/RegexpFilter.java +++ b/src/main/java/org/elasticsearch/common/lucene/search/RegexpFilter.java @@ -18,7 +18,7 @@ */ package org.elasticsearch.common.lucene.search; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.Term; import org.apache.lucene.search.DocIdSet; import org.apache.lucene.search.Filter; @@ -30,7 +30,7 @@ import org.apache.lucene.util.automaton.RegExp; import java.io.IOException; /** - * A lazy regexp filter which only builds the automaton on the first call to {@link #getDocIdSet(AtomicReaderContext, Bits)}. + * A lazy regexp filter which only builds the automaton on the first call to {@link #getDocIdSet(LeafReaderContext, Bits)}. * It is not thread safe (so can't be applied on multiple segments concurrently) */ public class RegexpFilter extends Filter { @@ -65,7 +65,7 @@ public class RegexpFilter extends Filter { } @Override - public DocIdSet getDocIdSet(AtomicReaderContext context, Bits acceptDocs) throws IOException { + public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptDocs) throws IOException { return filter.getDocIdSet(context, acceptDocs); } diff --git a/src/main/java/org/elasticsearch/common/lucene/search/XBooleanFilter.java b/src/main/java/org/elasticsearch/common/lucene/search/XBooleanFilter.java index 2940ef765db..46740fcf898 100644 --- a/src/main/java/org/elasticsearch/common/lucene/search/XBooleanFilter.java +++ b/src/main/java/org/elasticsearch/common/lucene/search/XBooleanFilter.java @@ -17,21 +17,27 @@ package org.elasticsearch.common.lucene.search; * limitations under the License. */ -import org.apache.lucene.index.AtomicReader; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.queries.FilterClause; +import org.apache.lucene.search.BitsFilteredDocIdSet; import org.apache.lucene.search.BooleanClause.Occur; import org.apache.lucene.search.DocIdSet; import org.apache.lucene.search.DocIdSetIterator; import org.apache.lucene.search.Filter; +import org.apache.lucene.util.BitDocIdSet; import org.apache.lucene.util.Bits; -import org.apache.lucene.util.FixedBitSet; +import org.apache.lucene.util.CollectionUtil; import org.elasticsearch.common.lucene.docset.AllDocIdSet; +import org.elasticsearch.common.lucene.docset.AndDocIdSet; import org.elasticsearch.common.lucene.docset.DocIdSets; import org.elasticsearch.common.lucene.docset.NotDocIdSet; +import org.elasticsearch.common.lucene.docset.OrDocIdSet.OrBits; import java.io.IOException; -import java.util.*; +import java.util.ArrayList; +import java.util.Comparator; +import java.util.Iterator; +import java.util.List; /** * Similar to {@link org.apache.lucene.queries.BooleanFilter}. 
@@ -42,6 +48,19 @@ import java.util.*; */ public class XBooleanFilter extends Filter implements Iterable { + private static final Comparator COST_DESCENDING = new Comparator() { + @Override + public int compare(DocIdSetIterator o1, DocIdSetIterator o2) { + return Long.compare(o2.cost(), o1.cost()); + } + }; + private static final Comparator COST_ASCENDING = new Comparator() { + @Override + public int compare(DocIdSetIterator o1, DocIdSetIterator o2) { + return Long.compare(o1.cost(), o2.cost()); + } + }; + final List clauses = new ArrayList<>(); /** @@ -49,9 +68,14 @@ public class XBooleanFilter extends Filter implements Iterable { * of the filters that have been added. */ @Override - public DocIdSet getDocIdSet(AtomicReaderContext context, Bits acceptDocs) throws IOException { - FixedBitSet res = null; - final AtomicReader reader = context.reader(); + public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptDocs) throws IOException { + final int maxDoc = context.reader().maxDoc(); + + // the 0-clauses case is ambiguous because an empty OR filter should return nothing + // while an empty AND filter should return all docs, so we handle this case explicitly + if (clauses.isEmpty()) { + return null; + } // optimize single case... if (clauses.size() == 1) { @@ -59,9 +83,9 @@ public class XBooleanFilter extends Filter implements Iterable { DocIdSet set = clause.getFilter().getDocIdSet(context, acceptDocs); if (clause.getOccur() == Occur.MUST_NOT) { if (DocIdSets.isEmpty(set)) { - return new AllDocIdSet(reader.maxDoc()); + return new AllDocIdSet(maxDoc); } else { - return new NotDocIdSet(set, reader.maxDoc()); + return new NotDocIdSet(set, maxDoc); } } // SHOULD or MUST, just return the set... @@ -71,241 +95,177 @@ public class XBooleanFilter extends Filter implements Iterable { return set; } - // first, go over and see if we can shortcut the execution - // and gather Bits if we need to - List results = new ArrayList<>(clauses.size()); + // We have several clauses, try to organize things to make it easier to process + List shouldIterators = new ArrayList<>(); + List shouldBits = new ArrayList<>(); boolean hasShouldClauses = false; - boolean hasNonEmptyShouldClause = false; - boolean hasMustClauses = false; - boolean hasMustNotClauses = false; - for (int i = 0; i < clauses.size(); i++) { - FilterClause clause = clauses.get(i); - DocIdSet set = clause.getFilter().getDocIdSet(context, acceptDocs); - if (clause.getOccur() == Occur.MUST) { - hasMustClauses = true; - if (DocIdSets.isEmpty(set)) { - return null; - } - } else if (clause.getOccur() == Occur.SHOULD) { - hasShouldClauses = true; - if (DocIdSets.isEmpty(set)) { - continue; - } - hasNonEmptyShouldClause = true; - } else if (clause.getOccur() == Occur.MUST_NOT) { - hasMustNotClauses = true; - if (DocIdSets.isEmpty(set)) { - // we mark empty ones as null for must_not, handle it in the next run...
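The COST_ASCENDING and COST_DESCENDING comparators added above lean on Lucene 5 making DocIdSetIterator.cost() safe to use: sorting required clauses by ascending cost lets the smallest set lead the intersection, regardless of the order in which the filters were specified. A sketch of that ordering step, with an illustrative helper name:

    import java.util.Comparator;
    import java.util.List;
    import org.apache.lucene.search.DocIdSetIterator;
    import org.apache.lucene.util.CollectionUtil;

    class CostOrderingSketch {
        // cheapest iterators first, so the intersection advances over the smallest set
        static void sortForIntersection(List<DocIdSetIterator> required) {
            CollectionUtil.timSort(required, new Comparator<DocIdSetIterator>() {
                @Override
                public int compare(DocIdSetIterator a, DocIdSetIterator b) {
                    return Long.compare(a.cost(), b.cost());
                }
            });
        }
    }

timSort is presumably chosen because these clause lists are tiny and often already in a useful order, where timsort degrades gracefully.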
- results.add(new ResultClause(null, null, clause)); - continue; - } - } + + List requiredIterators = new ArrayList<>(); + List excludedIterators = new ArrayList<>(); + + List requiredBits = new ArrayList<>(); + List excludedBits = new ArrayList<>(); + + for (FilterClause clause : clauses) { + DocIdSet set = clause.getFilter().getDocIdSet(context, null); + DocIdSetIterator it = null; Bits bits = null; - if (!DocIdSets.isFastIterator(set)) { - bits = set.bits(); - } - results.add(new ResultClause(set, bits, clause)); - } - - if (hasShouldClauses && !hasNonEmptyShouldClause) { - return null; - } - - // now, go over the clauses and apply the "fast" ones first... - hasNonEmptyShouldClause = false; - boolean hasBits = false; - // But first we need to handle the "fast" should clauses, otherwise a should clause can unset docs - // that don't match with a must or must_not clause. - List fastOrClauses = new ArrayList<>(); - for (int i = 0; i < results.size(); i++) { - ResultClause clause = results.get(i); - // we apply bits in based ones (slow) in the second run - if (clause.bits != null) { - hasBits = true; - continue; - } - if (clause.clause.getOccur() == Occur.SHOULD) { - if (hasMustClauses || hasMustNotClauses) { - fastOrClauses.add(clause); - } else if (res == null) { - DocIdSetIterator it = clause.docIdSet.iterator(); - if (it != null) { - hasNonEmptyShouldClause = true; - res = new FixedBitSet(reader.maxDoc()); - res.or(it); - } - } else { - DocIdSetIterator it = clause.docIdSet.iterator(); - if (it != null) { - hasNonEmptyShouldClause = true; - res.or(it); - } + if (DocIdSets.isEmpty(set) == false) { + it = set.iterator(); + if (it != null) { + bits = set.bits(); } } - } - // Now we safely handle the "fast" must and must_not clauses. - for (int i = 0; i < results.size(); i++) { - ResultClause clause = results.get(i); - // we apply bits in based ones (slow) in the second run - if (clause.bits != null) { - hasBits = true; - continue; - } - if (clause.clause.getOccur() == Occur.MUST) { - DocIdSetIterator it = clause.docIdSet.iterator(); + switch (clause.getOccur()) { + case SHOULD: + hasShouldClauses = true; if (it == null) { + // continue, but we recorded that there is at least one should clause + // so that if all iterators are null we know that nothing matches this + // filter since at least one SHOULD clause needs to match + } else if (bits == null || DocIdSets.isFastIterator(set)) { + shouldIterators.add(it); + } else { + shouldBits.add(bits); + } + break; + case MUST: + if (it == null) { + // no documents matched a clause that is compulsory, then nothing matches at all return null; - } - if (res == null) { - res = new FixedBitSet(reader.maxDoc()); - res.or(it); + } else if (bits == null || DocIdSets.isFastIterator(set)) { + requiredIterators.add(it); } else { - res.and(it); + requiredBits.add(bits); } - } else if (clause.clause.getOccur() == Occur.MUST_NOT) { - if (res == null) { - res = new FixedBitSet(reader.maxDoc()); - res.set(0, reader.maxDoc()); // NOTE: may set bits on deleted docs - } - if (clause.docIdSet != null) { - DocIdSetIterator it = clause.docIdSet.iterator(); - if (it != null) { - res.andNot(it); - } + break; + case MUST_NOT: + if (it == null) { + // ignore + } else if (bits == null || DocIdSets.isFastIterator(set)) { + excludedIterators.add(it); + } else { + excludedBits.add(bits); } + break; + default: + throw new AssertionError(); } } - if (!hasBits) { - if (!fastOrClauses.isEmpty()) { - DocIdSetIterator it = res.iterator(); - at_least_one_should_clause_iter: - for 
(int setDoc = it.nextDoc(); setDoc != DocIdSetIterator.NO_MORE_DOCS; setDoc = it.nextDoc()) { - for (ResultClause fastOrClause : fastOrClauses) { - DocIdSetIterator clauseIterator = fastOrClause.iterator(); - if (clauseIterator == null) { - continue; - } - if (iteratorMatch(clauseIterator, setDoc)) { - hasNonEmptyShouldClause = true; - continue at_least_one_should_clause_iter; - } - } - res.clear(setDoc); - } - } + // Since BooleanFilter requires that at least one SHOULD clause matches, + // transform the SHOULD clauses into a MUST clause - if (hasShouldClauses && !hasNonEmptyShouldClause) { + if (hasShouldClauses) { + if (shouldIterators.isEmpty() && shouldBits.isEmpty()) { + // we had should clauses, but they all produced empty sets + // yet BooleanFilter requires that at least one clause matches + // so it means we do not match anything return null; + } else if (shouldIterators.size() == 1 && shouldBits.isEmpty()) { + requiredIterators.add(shouldIterators.get(0)); } else { - return res; - } - } + // apply high-cardinality should clauses first + CollectionUtil.timSort(shouldIterators, COST_DESCENDING); - // we have some clauses with bits, apply them... - // we let the "res" drive the computation, and check Bits for that - List slowOrClauses = new ArrayList<>(); - for (int i = 0; i < results.size(); i++) { - ResultClause clause = results.get(i); - if (clause.bits == null) { - continue; - } - if (clause.clause.getOccur() == Occur.SHOULD) { - if (hasMustClauses || hasMustNotClauses) { - slowOrClauses.add(clause); - } else { - if (res == null) { - DocIdSetIterator it = clause.docIdSet.iterator(); - if (it == null) { - continue; - } - hasNonEmptyShouldClause = true; - res = new FixedBitSet(reader.maxDoc()); - res.or(it); + BitDocIdSet.Builder shouldBuilder = null; + for (DocIdSetIterator it : shouldIterators) { + if (shouldBuilder == null) { + shouldBuilder = new BitDocIdSet.Builder(maxDoc); + } + shouldBuilder.or(it); + } + + if (shouldBuilder != null && shouldBits.isEmpty() == false) { + // we have both iterators and bits, there is no way to compute + // the union efficiently, so we just transform the iterators into + // bits + // add first since these are fast bits + shouldBits.add(0, shouldBuilder.build().bits()); + shouldBuilder = null; + } + + if (shouldBuilder == null) { + // only bits + assert shouldBits.size() >= 1; + if (shouldBits.size() == 1) { + requiredBits.add(shouldBits.get(0)); } else { - for (int doc = 0; doc < reader.maxDoc(); doc++) { - if (!res.get(doc) && clause.bits.get(doc)) { - hasNonEmptyShouldClause = true; - res.set(doc); - } - } - } - } - } else if (clause.clause.getOccur() == Occur.MUST) { - if (res == null) { - // nothing we can do, just or it... 
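The rewritten clause handling replaces the old two-pass FixedBitSet juggling with one rule: since at least one SHOULD clause must match, the SHOULD iterators are merged into a single required clause. Roughly, and ignoring the Bits-only SHOULD clauses that the real code additionally folds in through OrBits:

    import java.io.IOException;
    import java.util.List;
    import org.apache.lucene.search.DocIdSetIterator;
    import org.apache.lucene.util.BitDocIdSet;

    class ShouldUnionSketch {
        // OR all SHOULD iterators into one bit set and treat the result as required
        static DocIdSetIterator unionShould(List<DocIdSetIterator> should, int maxDoc) throws IOException {
            BitDocIdSet.Builder builder = new BitDocIdSet.Builder(maxDoc);
            for (DocIdSetIterator it : should) {
                builder.or(it); // consumes each iterator
            }
            BitDocIdSet union = builder.build();
            return union == null ? null : union.iterator(); // null: no SHOULD clause matched anything
        }
    }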
- res = new FixedBitSet(reader.maxDoc()); - DocIdSetIterator it = clause.docIdSet.iterator(); - if (it == null) { - return null; - } - res.or(it); - } else { - Bits bits = clause.bits; - // use the "res" to drive the iteration - DocIdSetIterator it = res.iterator(); - for (int doc = it.nextDoc(); doc != DocIdSetIterator.NO_MORE_DOCS; doc = it.nextDoc()) { - if (!bits.get(doc)) { - res.clear(doc); - } - } - } - } else if (clause.clause.getOccur() == Occur.MUST_NOT) { - if (res == null) { - res = new FixedBitSet(reader.maxDoc()); - res.set(0, reader.maxDoc()); // NOTE: may set bits on deleted docs - DocIdSetIterator it = clause.docIdSet.iterator(); - if (it != null) { - res.andNot(it); + requiredBits.add(new OrBits(shouldBits.toArray(new Bits[shouldBits.size()]))); } } else { - Bits bits = clause.bits; - // let res drive the iteration - DocIdSetIterator it = res.iterator(); - for (int doc = it.nextDoc(); doc != DocIdSetIterator.NO_MORE_DOCS; doc = it.nextDoc()) { - if (bits.get(doc)) { - res.clear(doc); - } - } + assert shouldBits.isEmpty(); + // only iterators, we can add the merged iterator to the list of required iterators + requiredIterators.add(shouldBuilder.build().iterator()); } } - } - - // From a boolean_logic behavior point of view a should clause doesn't have impact on a bool filter if there - // is already a must or must_not clause. However in the current ES bool filter behaviour at least one should - // clause must match in order for a doc to be a match. What we do here is checking if matched docs match with - // any should filter. TODO: Add an option to have disable minimum_should_match=1 behaviour - if (!slowOrClauses.isEmpty() || !fastOrClauses.isEmpty()) { - DocIdSetIterator it = res.iterator(); - at_least_one_should_clause_iter: - for (int setDoc = it.nextDoc(); setDoc != DocIdSetIterator.NO_MORE_DOCS; setDoc = it.nextDoc()) { - for (ResultClause fastOrClause : fastOrClauses) { - DocIdSetIterator clauseIterator = fastOrClause.iterator(); - if (clauseIterator == null) { - continue; - } - if (iteratorMatch(clauseIterator, setDoc)) { - hasNonEmptyShouldClause = true; - continue at_least_one_should_clause_iter; - } - } - for (ResultClause slowOrClause : slowOrClauses) { - if (slowOrClause.bits.get(setDoc)) { - hasNonEmptyShouldClause = true; - continue at_least_one_should_clause_iter; - } - } - res.clear(setDoc); - } - } - - if (hasShouldClauses && !hasNonEmptyShouldClause) { - return null; } else { - return res; + assert shouldIterators.isEmpty(); + assert shouldBits.isEmpty(); } + // From now on, we don't have to care about SHOULD clauses anymore since we upgraded + // them to required clauses (if necessary) + + // cheap iterators first to make intersection faster + CollectionUtil.timSort(requiredIterators, COST_ASCENDING); + CollectionUtil.timSort(excludedIterators, COST_ASCENDING); + + // Intersect iterators + BitDocIdSet.Builder res = null; + for (DocIdSetIterator iterator : requiredIterators) { + if (res == null) { + res = new BitDocIdSet.Builder(maxDoc); + res.or(iterator); + } else { + res.and(iterator); + } + } + for (DocIdSetIterator iterator : excludedIterators) { + if (res == null) { + res = new BitDocIdSet.Builder(maxDoc, true); + } + res.andNot(iterator); + } + + // Transform the excluded bits into required bits + if (excludedBits.isEmpty() == false) { + Bits excluded; + if (excludedBits.size() == 1) { + excluded = excludedBits.get(0); + } else { + excluded = new OrBits(excludedBits.toArray(new Bits[excludedBits.size()])); + } + requiredBits.add(new 
NotDocIdSet.NotBits(excluded)); + } + + // The only thing left to do is to intersect 'res' with 'requiredBits' + + // the main doc id set that will drive iteration + DocIdSet main; + if (res == null) { + main = new AllDocIdSet(maxDoc); + } else { + main = res.build(); + } + + // apply accepted docs and compute the bits to filter with + // accepted docs are added first since they are fast and will help not computing anything on deleted docs + if (acceptDocs != null) { + requiredBits.add(0, acceptDocs); + } + // the random-access filter that we will apply to 'main' + Bits filter; + if (requiredBits.isEmpty()) { + filter = null; + } else if (requiredBits.size() == 1) { + filter = requiredBits.get(0); + } else { + filter = new AndDocIdSet.AndBits(requiredBits.toArray(new Bits[requiredBits.size()])); + } + + return BitsFilteredDocIdSet.wrap(main, filter); } /** diff --git a/src/main/java/org/elasticsearch/common/lucene/search/XCollector.java b/src/main/java/org/elasticsearch/common/lucene/search/XCollector.java index b022626d842..c796d40a62b 100644 --- a/src/main/java/org/elasticsearch/common/lucene/search/XCollector.java +++ b/src/main/java/org/elasticsearch/common/lucene/search/XCollector.java @@ -26,9 +26,7 @@ import java.io.IOException; * An extension to {@link Collector} that allows for a callback when * collection is done. */ -public abstract class XCollector extends Collector { +public interface XCollector extends Collector { - public void postCollection() throws IOException { - - } + public void postCollection() throws IOException; } diff --git a/src/main/java/org/elasticsearch/common/lucene/search/XConstantScoreQuery.java b/src/main/java/org/elasticsearch/common/lucene/search/XConstantScoreQuery.java deleted file mode 100644 index 4d041082781..00000000000 --- a/src/main/java/org/elasticsearch/common/lucene/search/XConstantScoreQuery.java +++ /dev/null @@ -1,44 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
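Since XCollector is now a plain interface, an implementation extends a Lucene base class and mixes the post-collection callback in. A hypothetical minimal example against this snapshot's SimpleCollector (which, at this point, still has acceptsDocsOutOfOrder):

    import java.io.IOException;
    import org.apache.lucene.index.LeafReaderContext;
    import org.apache.lucene.search.SimpleCollector;
    import org.elasticsearch.common.lucene.search.XCollector;

    class PostCollectingSketch extends SimpleCollector implements XCollector {
        @Override
        protected void doSetNextReader(LeafReaderContext context) throws IOException {
            // per-segment setup, if any
        }

        @Override
        public void collect(int doc) throws IOException {
            // gather per-document state here; doc is relative to the current leaf
        }

        @Override
        public boolean acceptsDocsOutOfOrder() {
            return true; // no ordering requirement for this sketch
        }

        @Override
        public void postCollection() throws IOException {
            // flush the gathered state once collection is done
        }
    }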
- */ - -package org.elasticsearch.common.lucene.search; - -import org.apache.lucene.search.ConstantScoreQuery; -import org.apache.lucene.search.Filter; - -/** - * We still need sometimes to exclude deletes, because we don't remove them always with acceptDocs on filters - */ -public class XConstantScoreQuery extends ConstantScoreQuery { - - private final Filter actualFilter; - - public XConstantScoreQuery(Filter filter) { - super(new ApplyAcceptedDocsFilter(filter)); - this.actualFilter = filter; - } - - // trick so any external systems still think that its the actual filter we use, and not the - // deleted filter - @Override - public Filter getFilter() { - return this.actualFilter; - } -} - diff --git a/src/main/java/org/elasticsearch/common/lucene/search/XFilteredQuery.java b/src/main/java/org/elasticsearch/common/lucene/search/XFilteredQuery.java deleted file mode 100644 index 67abce84bfc..00000000000 --- a/src/main/java/org/elasticsearch/common/lucene/search/XFilteredQuery.java +++ /dev/null @@ -1,261 +0,0 @@ -package org.elasticsearch.common.lucene.search; - -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -import org.apache.lucene.index.AtomicReaderContext; -import org.apache.lucene.index.IndexReader; -import org.apache.lucene.index.Term; -import org.apache.lucene.search.*; -import org.apache.lucene.search.FilteredQuery.FilterStrategy; -import org.apache.lucene.util.Bits; -import org.elasticsearch.common.lucene.docset.DocIdSets; - -import java.io.IOException; -import java.util.Set; - - -/** - * A query that applies a filter to the results of another query. - *
<p/>
- *
<p/>
Note: the bits are retrieved from the filter each time this - * query is used in a search - use a CachingWrapperFilter to avoid - * regenerating the bits every time. - * - * @see CachingWrapperFilter - * @since 1.4 - */ -// Changes are marked with //CHANGE: -// Delegate to FilteredQuery - this version fixes the bug in LUCENE-4705 and uses ApplyAcceptedDocsFilter internally -public final class XFilteredQuery extends Query { - private final Filter rawFilter; - private final FilteredQuery delegate; - private final FilterStrategy strategy; - - /** - * Constructs a new query which applies a filter to the results of the original query. - * {@link Filter#getDocIdSet} will be called every time this query is used in a search. - * - * @param query Query to be filtered, cannot be null. - * @param filter Filter to apply to query results, cannot be null. - */ - public XFilteredQuery(Query query, Filter filter) { - this(query, filter, FilteredQuery.RANDOM_ACCESS_FILTER_STRATEGY); - } - - /** - * Expert: Constructs a new query which applies a filter to the results of the original query. - * {@link Filter#getDocIdSet} will be called every time this query is used in a search. - * - * @param query Query to be filtered, cannot be null. - * @param filter Filter to apply to query results, cannot be null. - * @param strategy a filter strategy used to create a filtered scorer. - * @see FilterStrategy - */ - public XFilteredQuery(Query query, Filter filter, FilterStrategy strategy) { - this(new FilteredQuery(query, new ApplyAcceptedDocsFilter(filter), strategy), filter, strategy); - } - - private XFilteredQuery(FilteredQuery delegate, Filter filter, FilterStrategy strategy) { - this.delegate = delegate; - // CHANGE: we need to wrap it in post application of accepted docs - this.rawFilter = filter; - this.strategy = strategy; - } - - /** - * Returns a Weight that applies the filter to the enclosed query's Weight. - * This is accomplished by overriding the Scorer returned by the Weight. - */ - @Override - public Weight createWeight(final IndexSearcher searcher) throws IOException { - return delegate.createWeight(searcher); - } - - /** - * Rewrites the query. If the wrapped is an instance of - * {@link MatchAllDocsQuery} it returns a {@link ConstantScoreQuery}. Otherwise - * it returns a new {@code FilteredQuery} wrapping the rewritten query. - */ - @Override - public Query rewrite(IndexReader reader) throws IOException { - Query query = delegate.getQuery(); - final Query queryRewritten = query.rewrite(reader); - - // CHANGE: if we push back to Lucene, would love to have an extension for "isMatchAllQuery" - if (queryRewritten instanceof MatchAllDocsQuery || Queries.isConstantMatchAllQuery(queryRewritten)) { - // Special case: If the query is a MatchAllDocsQuery, we only - // return a CSQ(filter). - final Query rewritten = new ConstantScoreQuery(delegate.getFilter()); - // Combine boost of MatchAllDocsQuery and the wrapped rewritten query: - rewritten.setBoost(delegate.getBoost() * queryRewritten.getBoost()); - return rewritten; - } - - if (queryRewritten != query) { - // rewrite to a new FilteredQuery wrapping the rewritten query - final Query rewritten = new XFilteredQuery(queryRewritten, rawFilter, strategy); - rewritten.setBoost(delegate.getBoost()); - return rewritten; - } else { - // nothing to rewrite, we are done! 
- return this; - } - } - - @Override - public void setBoost(float b) { - delegate.setBoost(b); - } - - @Override - public float getBoost() { - return delegate.getBoost(); - } - - /** - * Returns this FilteredQuery's (unfiltered) Query - */ - public final Query getQuery() { - return delegate.getQuery(); - } - - /** - * Returns this FilteredQuery's filter - */ - public final Filter getFilter() { - // CHANGE: unwrap the accepted docs filter - if (rawFilter instanceof ApplyAcceptedDocsFilter) { - return ((ApplyAcceptedDocsFilter) rawFilter).filter(); - } - return rawFilter; - } - - // inherit javadoc - @Override - public void extractTerms(Set terms) { - delegate.extractTerms(terms); - } - - /** - * Prints a user-readable version of this query. - */ - @Override - public String toString(String s) { - return delegate.toString(s); - } - - /** - * Returns true iff o is equal to this. - */ - @Override - public boolean equals(Object o) { - if (!(o instanceof XFilteredQuery)) { - return false; - } else { - return delegate.equals(((XFilteredQuery)o).delegate); - } - } - - /** - * Returns a hash code value for this object. - */ - @Override - public int hashCode() { - return delegate.hashCode(); - } - - // CHANGE: Add custom random access strategy, allowing to set the threshold - // CHANGE: Add filter first filter strategy - public static final FilterStrategy ALWAYS_RANDOM_ACCESS_FILTER_STRATEGY = new CustomRandomAccessFilterStrategy(0); - - public static final CustomRandomAccessFilterStrategy CUSTOM_FILTER_STRATEGY = new CustomRandomAccessFilterStrategy(); - - /** - * Extends {@link org.apache.lucene.search.FilteredQuery.RandomAccessFilterStrategy}. - *
<p/>
- * Adds a threshold value, which defaults to -1. When set to -1, it will check if the filter docSet is - * *not* a fast docSet, and if not, it will use {@link FilteredQuery#QUERY_FIRST_FILTER_STRATEGY} (since - * the assumption is that its a "slow" filter and better computed only on whatever matched the query). - *
<p/>
- * If the threshold value is 0, it always tries to pass "down" the filter as acceptDocs, and it the filter - * can't be represented as Bits (never really), then it uses {@link FilteredQuery#LEAP_FROG_QUERY_FIRST_STRATEGY}. - *
<p/>
- * If the above conditions are not met, then it reverts to the {@link FilteredQuery.RandomAccessFilterStrategy} logic, - * with the threshold used to control {@link #useRandomAccess(org.apache.lucene.util.Bits, int)}. - */ - public static class CustomRandomAccessFilterStrategy extends FilteredQuery.RandomAccessFilterStrategy { - - private final int threshold; - - public CustomRandomAccessFilterStrategy() { - this.threshold = -1; - } - - public CustomRandomAccessFilterStrategy(int threshold) { - this.threshold = threshold; - } - - @Override - public Scorer filteredScorer(AtomicReaderContext context, Weight weight, DocIdSet docIdSet) throws IOException { - // CHANGE: If threshold is 0, always pass down the accept docs, don't pay the price of calling nextDoc even... - if (threshold == 0) { - final Bits filterAcceptDocs = docIdSet.bits(); - if (filterAcceptDocs != null) { - return weight.scorer(context, filterAcceptDocs); - } else { - return FilteredQuery.LEAP_FROG_QUERY_FIRST_STRATEGY.filteredScorer(context, weight, docIdSet); - } - } - - // CHANGE: handle "default" value - if (threshold == -1) { - // default value, don't iterate on only apply filter after query if its not a "fast" docIdSet - if (!DocIdSets.isFastIterator(ApplyAcceptedDocsFilter.unwrap(docIdSet))) { - return FilteredQuery.QUERY_FIRST_FILTER_STRATEGY.filteredScorer(context, weight, docIdSet); - } - } - - return super.filteredScorer(context, weight, docIdSet); - } - - /** - * Expert: decides if a filter should be executed as "random-access" or not. - * random-access means the filter "filters" in a similar way as deleted docs are filtered - * in Lucene. This is faster when the filter accepts many documents. - * However, when the filter is very sparse, it can be faster to execute the query+filter - * as a conjunction in some cases. - *

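Putting the pieces together, the strategy selection described in this javadoc boils down to the following decision table. This is an editorial sketch with invented names (Mode, choose); it summarizes, rather than reproduces, the filteredScorer logic above.

class StrategySelectionSketch {
    enum Mode { PASS_AS_ACCEPT_DOCS, LEAP_FROG, QUERY_FIRST, RANDOM_ACCESS }

    static Mode choose(int threshold, boolean bitsAvailable, boolean fastIterator, int firstFilterDoc) {
        if (threshold == 0) {
            // always try to push the filter down as acceptDocs
            return bitsAvailable ? Mode.PASS_AS_ACCEPT_DOCS : Mode.LEAP_FROG;
        }
        if (threshold == -1 && !fastIterator) {
            // slow filter: evaluate it only on documents the query matched
            return Mode.QUERY_FIRST;
        }
        // otherwise use the random-access heuristic, with 100 as the default cut-off
        int effectiveThreshold = threshold == -1 ? 100 : threshold;
        return firstFilterDoc < effectiveThreshold ? Mode.RANDOM_ACCESS : Mode.LEAP_FROG;
    }
}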
- * The default implementation returns true if the first document accepted by the - * filter is < threshold; if threshold is -1 (the default), it checks for < 100. - */ - protected boolean useRandomAccess(Bits bits, int firstFilterDoc) { - // "default" - if (threshold == -1) { - return firstFilterDoc < 100; - } - //TODO once we have a cost API on filters and scorers we should rethink this heuristic - return firstFilterDoc < threshold; - } - } - - @Override - public Query clone() { - return new XFilteredQuery((FilteredQuery) delegate.clone(), rawFilter, strategy); - } - -} diff --git a/src/main/java/org/elasticsearch/common/lucene/search/function/BoostScoreFunction.java b/src/main/java/org/elasticsearch/common/lucene/search/function/BoostScoreFunction.java index fe50984e1a3..b22f50c1e11 100644 --- a/src/main/java/org/elasticsearch/common/lucene/search/function/BoostScoreFunction.java +++ b/src/main/java/org/elasticsearch/common/lucene/search/function/BoostScoreFunction.java @@ -19,7 +19,7 @@ package org.elasticsearch.common.lucene.search.function; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.search.Explanation; import org.elasticsearch.ElasticsearchIllegalArgumentException; @@ -43,7 +43,7 @@ public class BoostScoreFunction extends ScoreFunction { } @Override - public void setNextReader(AtomicReaderContext context) { + public void setNextReader(LeafReaderContext context) { // nothing to do here... } diff --git a/src/main/java/org/elasticsearch/common/lucene/search/function/FieldValueFactorFunction.java b/src/main/java/org/elasticsearch/common/lucene/search/function/FieldValueFactorFunction.java index 1f89a456637..87a75c241a5 100644 --- a/src/main/java/org/elasticsearch/common/lucene/search/function/FieldValueFactorFunction.java +++ b/src/main/java/org/elasticsearch/common/lucene/search/function/FieldValueFactorFunction.java @@ -19,7 +19,7 @@ package org.elasticsearch.common.lucene.search.function; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.search.Explanation; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.index.fielddata.IndexNumericFieldData; @@ -48,7 +48,7 @@ public class FieldValueFactorFunction extends ScoreFunction { } @Override - public void setNextReader(AtomicReaderContext context) { + public void setNextReader(LeafReaderContext context) { this.values = this.indexFieldData.load(context).getDoubleValues(); } diff --git a/src/main/java/org/elasticsearch/common/lucene/search/function/FiltersFunctionScoreQuery.java b/src/main/java/org/elasticsearch/common/lucene/search/function/FiltersFunctionScoreQuery.java index ef8489ef542..8cedd928926 100644 --- a/src/main/java/org/elasticsearch/common/lucene/search/function/FiltersFunctionScoreQuery.java +++ b/src/main/java/org/elasticsearch/common/lucene/search/function/FiltersFunctionScoreQuery.java @@ -19,7 +19,7 @@ package org.elasticsearch.common.lucene.search.function; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.IndexReader; import org.apache.lucene.index.Term; import org.apache.lucene.search.*; @@ -150,7 +150,7 @@ public class FiltersFunctionScoreQuery extends Query { } @Override - public Scorer scorer(AtomicReaderContext context, Bits acceptDocs) throws IOException {
// we ignore scoreDocsInOrder parameter, because we need to score in // order if documents are scored with a script. The // ShardLookup depends on in order scoring. @@ -167,7 +167,7 @@ public class FiltersFunctionScoreQuery extends Query { } @Override - public Explanation explain(AtomicReaderContext context, int doc) throws IOException { + public Explanation explain(LeafReaderContext context, int doc) throws IOException { Explanation subQueryExpl = subQueryWeight.explain(context, doc); if (!subQueryExpl.isMatch()) { diff --git a/src/main/java/org/elasticsearch/common/lucene/search/function/FunctionScoreQuery.java b/src/main/java/org/elasticsearch/common/lucene/search/function/FunctionScoreQuery.java index 0e5dfb73474..5f730fc7fc3 100644 --- a/src/main/java/org/elasticsearch/common/lucene/search/function/FunctionScoreQuery.java +++ b/src/main/java/org/elasticsearch/common/lucene/search/function/FunctionScoreQuery.java @@ -19,7 +19,7 @@ package org.elasticsearch.common.lucene.search.function; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.IndexReader; import org.apache.lucene.index.Term; import org.apache.lucene.search.*; @@ -112,7 +112,7 @@ public class FunctionScoreQuery extends Query { } @Override - public Scorer scorer(AtomicReaderContext context, Bits acceptDocs) throws IOException { + public Scorer scorer(LeafReaderContext context, Bits acceptDocs) throws IOException { // we ignore scoreDocsInOrder parameter, because we need to score in // order if documents are scored with a script. The // ShardLookup depends on in order scoring. @@ -125,7 +125,7 @@ public class FunctionScoreQuery extends Query { } @Override - public Explanation explain(AtomicReaderContext context, int doc) throws IOException { + public Explanation explain(LeafReaderContext context, int doc) throws IOException { Explanation subQueryExpl = subQueryWeight.explain(context, doc); if (!subQueryExpl.isMatch()) { return subQueryExpl; diff --git a/src/main/java/org/elasticsearch/common/lucene/search/function/RandomScoreFunction.java b/src/main/java/org/elasticsearch/common/lucene/search/function/RandomScoreFunction.java index 82444b7eaa5..276220851fe 100644 --- a/src/main/java/org/elasticsearch/common/lucene/search/function/RandomScoreFunction.java +++ b/src/main/java/org/elasticsearch/common/lucene/search/function/RandomScoreFunction.java @@ -18,7 +18,7 @@ */ package org.elasticsearch.common.lucene.search.function; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.search.Explanation; import org.apache.lucene.util.StringHelper; import org.elasticsearch.index.fielddata.AtomicFieldData; @@ -59,7 +59,7 @@ public class RandomScoreFunction extends ScoreFunction { } @Override - public void setNextReader(AtomicReaderContext context) { + public void setNextReader(LeafReaderContext context) { AtomicFieldData leafData = uidFieldData.load(context); uidByteData = leafData.getBytesValues(); if (uidByteData == null) throw new NullPointerException("failed to get uid byte data"); diff --git a/src/main/java/org/elasticsearch/common/lucene/search/function/ScoreFunction.java b/src/main/java/org/elasticsearch/common/lucene/search/function/ScoreFunction.java index 391e64648b3..91b73e970ed 100644 --- a/src/main/java/org/elasticsearch/common/lucene/search/function/ScoreFunction.java +++ b/src/main/java/org/elasticsearch/common/lucene/search/function/ScoreFunction.java @@ -19,7 +19,7 
@@ package org.elasticsearch.common.lucene.search.function; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.search.Explanation; /** @@ -29,7 +29,7 @@ public abstract class ScoreFunction { private final CombineFunction scoreCombiner; - public abstract void setNextReader(AtomicReaderContext context); + public abstract void setNextReader(LeafReaderContext context); public abstract double score(int docId, float subQueryScore); diff --git a/src/main/java/org/elasticsearch/common/lucene/search/function/ScriptScoreFunction.java b/src/main/java/org/elasticsearch/common/lucene/search/function/ScriptScoreFunction.java index 2e6d8f1fe8c..5345f60b85b 100644 --- a/src/main/java/org/elasticsearch/common/lucene/search/function/ScriptScoreFunction.java +++ b/src/main/java/org/elasticsearch/common/lucene/search/function/ScriptScoreFunction.java @@ -19,7 +19,7 @@ package org.elasticsearch.common.lucene.search.function; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.search.Explanation; import org.apache.lucene.search.Scorer; import org.elasticsearch.script.SearchScript; @@ -87,7 +87,7 @@ public class ScriptScoreFunction extends ScoreFunction { } @Override - public void setNextReader(AtomicReaderContext ctx) { + public void setNextReader(LeafReaderContext ctx) { script.setNextReader(ctx); } diff --git a/src/main/java/org/elasticsearch/common/lucene/search/function/WeightFactorFunction.java b/src/main/java/org/elasticsearch/common/lucene/search/function/WeightFactorFunction.java index 79abf028424..fba7e0ae194 100644 --- a/src/main/java/org/elasticsearch/common/lucene/search/function/WeightFactorFunction.java +++ b/src/main/java/org/elasticsearch/common/lucene/search/function/WeightFactorFunction.java @@ -19,7 +19,7 @@ package org.elasticsearch.common.lucene.search.function; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.search.ComplexExplanation; import org.apache.lucene.search.Explanation; import org.elasticsearch.ElasticsearchIllegalArgumentException; @@ -53,7 +53,7 @@ public class WeightFactorFunction extends ScoreFunction { } @Override - public void setNextReader(AtomicReaderContext context) { + public void setNextReader(LeafReaderContext context) { scoreFunction.setNextReader(context); } @@ -87,7 +87,7 @@ public class WeightFactorFunction extends ScoreFunction { } @Override - public void setNextReader(AtomicReaderContext context) { + public void setNextReader(LeafReaderContext context) { } diff --git a/src/main/java/org/elasticsearch/common/lucene/store/OutputStreamIndexOutput.java b/src/main/java/org/elasticsearch/common/lucene/store/OutputStreamIndexOutput.java index 61c42abf504..156ddb5f3fd 100644 --- a/src/main/java/org/elasticsearch/common/lucene/store/OutputStreamIndexOutput.java +++ b/src/main/java/org/elasticsearch/common/lucene/store/OutputStreamIndexOutput.java @@ -49,11 +49,6 @@ public class OutputStreamIndexOutput extends OutputStream { out.writeBytes(b, off, len); } - @Override - public void flush() throws IOException { - out.flush(); - } - @Override public void close() throws IOException { out.close(); diff --git a/src/main/java/org/elasticsearch/common/lucene/uid/PerThreadIDAndVersionLookup.java b/src/main/java/org/elasticsearch/common/lucene/uid/PerThreadIDAndVersionLookup.java index 7508463ae71..30d8e196885 100644 --- 
a/src/main/java/org/elasticsearch/common/lucene/uid/PerThreadIDAndVersionLookup.java +++ b/src/main/java/org/elasticsearch/common/lucene/uid/PerThreadIDAndVersionLookup.java @@ -21,21 +21,18 @@ package org.elasticsearch.common.lucene.uid; import java.io.IOException; import java.util.ArrayList; -import java.util.Collections; -import java.util.Comparator; import java.util.List; -import org.apache.lucene.index.AtomicReaderContext; import org.apache.lucene.index.DocsAndPositionsEnum; import org.apache.lucene.index.DocsEnum; import org.apache.lucene.index.Fields; import org.apache.lucene.index.IndexReader; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.NumericDocValues; import org.apache.lucene.index.Terms; import org.apache.lucene.index.TermsEnum; import org.apache.lucene.util.Bits; import org.apache.lucene.util.BytesRef; -import org.apache.lucene.util.CollectionUtil; import org.elasticsearch.common.Numbers; import org.elasticsearch.common.lucene.uid.Versions.DocIdAndVersion; import org.elasticsearch.index.mapper.internal.UidFieldMapper; @@ -51,7 +48,7 @@ import org.elasticsearch.index.mapper.internal.VersionFieldMapper; final class PerThreadIDAndVersionLookup { - private final AtomicReaderContext[] readerContexts; + private final LeafReaderContext[] readerContexts; private final TermsEnum[] termsEnums; private final DocsEnum[] docsEnums; // Only used for back compat, to lookup a version from payload: @@ -64,9 +61,9 @@ final class PerThreadIDAndVersionLookup { public PerThreadIDAndVersionLookup(IndexReader r) throws IOException { - List leaves = new ArrayList<>(r.leaves()); + List leaves = new ArrayList<>(r.leaves()); - readerContexts = leaves.toArray(new AtomicReaderContext[leaves.size()]); + readerContexts = leaves.toArray(new LeafReaderContext[leaves.size()]); termsEnums = new TermsEnum[leaves.size()]; docsEnums = new DocsEnum[leaves.size()]; posEnums = new DocsAndPositionsEnum[leaves.size()]; @@ -78,7 +75,7 @@ final class PerThreadIDAndVersionLookup { // iterate backwards to optimize for the frequently updated documents // which are likely to be in the last segments for(int i=leaves.size()-1;i>=0;i--) { - AtomicReaderContext readerContext = leaves.get(i); + LeafReaderContext readerContext = leaves.get(i); Fields fields = readerContext.reader().fields(); if (fields != null) { Terms terms = fields.terms(UidFieldMapper.NAME); diff --git a/src/main/java/org/elasticsearch/common/lucene/uid/Versions.java b/src/main/java/org/elasticsearch/common/lucene/uid/Versions.java index 57246a2633b..44df71616fe 100644 --- a/src/main/java/org/elasticsearch/common/lucene/uid/Versions.java +++ b/src/main/java/org/elasticsearch/common/lucene/uid/Versions.java @@ -126,13 +126,13 @@ public class Versions { private Versions() { } - /** Wraps an {@link AtomicReaderContext}, a doc ID relative to the context doc base and a version. */ + /** Wraps an {@link LeafReaderContext}, a doc ID relative to the context doc base and a version. 
*/ public static class DocIdAndVersion { public final int docId; public final long version; - public final AtomicReaderContext context; + public final LeafReaderContext context; - public DocIdAndVersion(int docId, long version, AtomicReaderContext context) { + public DocIdAndVersion(int docId, long version, LeafReaderContext context) { this.docId = docId; this.version = version; this.context = context; diff --git a/src/main/java/org/elasticsearch/common/util/AbstractArray.java b/src/main/java/org/elasticsearch/common/util/AbstractArray.java index 0c00a897333..348197fc144 100644 --- a/src/main/java/org/elasticsearch/common/util/AbstractArray.java +++ b/src/main/java/org/elasticsearch/common/util/AbstractArray.java @@ -19,6 +19,10 @@ package org.elasticsearch.common.util; +import java.util.Collections; + +import org.apache.lucene.util.Accountable; + abstract class AbstractArray implements BigArray { @@ -41,4 +45,8 @@ abstract class AbstractArray implements BigArray { protected abstract void doClose(); + @Override + public Iterable getChildResources() { + return Collections.emptyList(); + } } diff --git a/src/main/java/org/elasticsearch/common/util/BloomFilter.java b/src/main/java/org/elasticsearch/common/util/BloomFilter.java index 7b375bb5ec8..5ab047f98d6 100644 --- a/src/main/java/org/elasticsearch/common/util/BloomFilter.java +++ b/src/main/java/org/elasticsearch/common/util/BloomFilter.java @@ -22,6 +22,7 @@ import com.google.common.math.LongMath; import com.google.common.primitives.Ints; import org.apache.lucene.store.DataInput; import org.apache.lucene.store.DataOutput; +import org.apache.lucene.store.IndexInput; import org.apache.lucene.util.BytesRef; import org.elasticsearch.ElasticsearchIllegalArgumentException; import org.elasticsearch.common.Nullable; @@ -171,6 +172,12 @@ public class BloomFilter { } } + public static void skipBloom(IndexInput in) throws IOException { + int version = in.readInt(); // we do nothing with this now..., defaults to 0 + final int numLongs = in.readInt(); + in.seek(in.getFilePointer() + (numLongs * 8) + 4 + 4); // filter + numberOfHashFunctions + hashType + } + public static BloomFilter deserialize(DataInput in) throws IOException { int version = in.readInt(); // we do nothing with this now..., defaults to 0 int numLongs = in.readInt(); diff --git a/src/main/java/org/elasticsearch/env/NodeEnvironment.java b/src/main/java/org/elasticsearch/env/NodeEnvironment.java index 803cb0ce1fd..ec86d55d6ea 100644 --- a/src/main/java/org/elasticsearch/env/NodeEnvironment.java +++ b/src/main/java/org/elasticsearch/env/NodeEnvironment.java @@ -83,7 +83,7 @@ public class NodeEnvironment extends AbstractComponent { } logger.trace("obtaining node lock on {} ...", dir.getAbsolutePath()); try { - NativeFSLockFactory lockFactory = new NativeFSLockFactory(dir); + NativeFSLockFactory lockFactory = new NativeFSLockFactory(dir.toPath()); Lock tmpLock = lockFactory.makeLock("node.lock"); boolean obtained = tmpLock.obtain(); if (obtained) { diff --git a/src/main/java/org/elasticsearch/gateway/local/state/meta/CorruptStateException.java b/src/main/java/org/elasticsearch/gateway/local/state/meta/CorruptStateException.java index 3f2af87672b..8af8e0df41c 100644 --- a/src/main/java/org/elasticsearch/gateway/local/state/meta/CorruptStateException.java +++ b/src/main/java/org/elasticsearch/gateway/local/state/meta/CorruptStateException.java @@ -18,7 +18,6 @@ */ package org.elasticsearch.gateway.local.state.meta; -import org.apache.lucene.index.CorruptIndexException; import 
org.elasticsearch.ElasticsearchCorruptionException; /** @@ -37,12 +36,12 @@ public class CorruptStateException extends ElasticsearchCorruptionException { /** * Creates a new {@link CorruptStateException} with the given exceptions stacktrace. - * This constructor copies the stacktrace as well as the message from the given {@link CorruptIndexException} + * This constructor copies the stacktrace as well as the message from the given {@link Throwable} * into this exception. * * @param ex the exception cause */ - public CorruptStateException(CorruptIndexException ex) { + public CorruptStateException(Throwable ex) { super(ex); } } diff --git a/src/main/java/org/elasticsearch/gateway/local/state/meta/MetaDataStateFormat.java b/src/main/java/org/elasticsearch/gateway/local/state/meta/MetaDataStateFormat.java index 853f9a5e7ea..8dbff01cd8d 100644 --- a/src/main/java/org/elasticsearch/gateway/local/state/meta/MetaDataStateFormat.java +++ b/src/main/java/org/elasticsearch/gateway/local/state/meta/MetaDataStateFormat.java @@ -22,11 +22,9 @@ import com.google.common.base.Predicate; import com.google.common.collect.Collections2; import org.apache.lucene.codecs.CodecUtil; import org.apache.lucene.index.CorruptIndexException; -import org.apache.lucene.store.Directory; -import org.apache.lucene.store.IOContext; -import org.apache.lucene.store.IndexInput; -import org.apache.lucene.store.OutputStreamIndexOutput; -import org.apache.lucene.store.SimpleFSDirectory; +import org.apache.lucene.index.IndexFormatTooNewException; +import org.apache.lucene.index.IndexFormatTooOldException; +import org.apache.lucene.store.*; import org.apache.lucene.util.IOUtils; import org.elasticsearch.ElasticsearchIllegalStateException; import org.elasticsearch.ExceptionsHelper; @@ -125,9 +123,9 @@ public abstract class MetaDataStateFormat { } CodecUtil.writeFooter(out); } - IOUtils.fsync(tmpStatePath.toFile(), false); // fsync the state file + IOUtils.fsync(tmpStatePath, false); // fsync the state file Files.move(tmpStatePath, finalStatePath, StandardCopyOption.ATOMIC_MOVE); - IOUtils.fsync(stateLocation.toFile(), true); + IOUtils.fsync(stateLocation, true); for (int i = 1; i < locations.length; i++) { stateLocation = Paths.get(locations[i].getPath(), STATE_DIR_NAME); Files.createDirectories(stateLocation); @@ -136,7 +134,7 @@ public abstract class MetaDataStateFormat { try { Files.copy(finalStatePath, tmpPath); Files.move(tmpPath, finalPath, StandardCopyOption.ATOMIC_MOVE); // we are on the same FileSystem / Partition here we can do an atomic move - IOUtils.fsync(stateLocation.toFile(), true); // we just fsync the dir here.. + IOUtils.fsync(stateLocation, true); // we just fsync the dir here.. 
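For reference, the skipBloom method added to BloomFilter above implies a fixed serialized layout; the helper below is our illustration (not part of the patch) and just spells out the byte arithmetic behind the seek.

class BloomLayoutSketch {
    // Layout read by skipBloom/deserialize:
    //   int version | int numLongs | long[numLongs] bits | int numberOfHashFunctions | int hashType
    static long serializedBloomBytes(int numLongs) {
        return 4L            // version
             + 4L            // numLongs
             + numLongs * 8L // the bit set itself
             + 4L            // numberOfHashFunctions
             + 4L;           // hashType
    }
    // e.g. serializedBloomBytes(16) == 144: skipBloom reads the two leading ints
    // (8 bytes) and then seeks past the remaining 136 bytes in a single call.
}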
} finally { Files.deleteIfExists(tmpPath); } @@ -187,7 +185,7 @@ public abstract class MetaDataStateFormat { return fromXContent(parser); } } - } catch(CorruptIndexException ex) { + } catch(CorruptIndexException | IndexFormatTooOldException | IndexFormatTooNewException ex) { // we trick this into a dedicated exception with the original stacktrace throw new CorruptStateException(ex); } @@ -195,7 +193,7 @@ public abstract class MetaDataStateFormat { } protected Directory newDirectory(File dir) throws IOException { - return new SimpleFSDirectory(dir); + return new SimpleFSDirectory(dir.toPath()); } private void cleanupOldFiles(String prefix, String fileName, File[] locations) throws IOException { diff --git a/src/main/java/org/elasticsearch/index/analysis/Analysis.java b/src/main/java/org/elasticsearch/index/analysis/Analysis.java index fc1f897e32c..cab37d4e4c8 100644 --- a/src/main/java/org/elasticsearch/index/analysis/Analysis.java +++ b/src/main/java/org/elasticsearch/index/analysis/Analysis.java @@ -100,20 +100,20 @@ public class Analysis { return value != null && "_none_".equals(value); } - public static CharArraySet parseStemExclusion(Settings settings, CharArraySet defaultStemExclusion, Version version) { + public static CharArraySet parseStemExclusion(Settings settings, CharArraySet defaultStemExclusion) { String value = settings.get("stem_exclusion"); if (value != null) { if ("_none_".equals(value)) { return CharArraySet.EMPTY_SET; } else { // LUCENE 4 UPGRADE: Should be settings.getAsBoolean("stem_exclusion_case", false)? - return new CharArraySet(version, Strings.commaDelimitedListToSet(value), false); + return new CharArraySet(Strings.commaDelimitedListToSet(value), false); } } String[] stemExclusion = settings.getAsArray("stem_exclusion", null); if (stemExclusion != null) { // LUCENE 4 UPGRADE: Should be settings.getAsBoolean("stem_exclusion_case", false)? 
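The MetaDataStateFormat changes above follow the classic atomic-publish pattern: write to a temporary file, fsync it, rename it into place atomically, then fsync the parent directory so the rename itself is durable. A stripped-down sketch of that pattern under invented names; the real code additionally writes a codec header/footer and replicates the file to the other data locations.

import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;

class AtomicStateWriteSketch {
    static void writeAtomically(Path dir, String fileName, byte[] contents) throws IOException {
        Path tmp = dir.resolve(fileName + ".tmp");
        Path dest = dir.resolve(fileName);
        Files.write(tmp, contents);
        try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.WRITE)) {
            ch.force(true); // fsync the file contents, as IOUtils.fsync(tmpStatePath, false) does
        }
        // readers now see either the old or the new file, never a half-written one
        Files.move(tmp, dest, StandardCopyOption.ATOMIC_MOVE);
        try (FileChannel ch = FileChannel.open(dir, StandardOpenOption.READ)) {
            ch.force(true); // fsync the directory so the rename survives a crash (POSIX; Lucene's IOUtils handles platform quirks)
        }
    }
}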
- return new CharArraySet(version, Arrays.asList(stemExclusion), false); + return new CharArraySet(Arrays.asList(stemExclusion), false); } else { return defaultStemExclusion; } @@ -153,43 +153,43 @@ public class Analysis { .put("_turkish_", TurkishAnalyzer.getDefaultStopSet()) .immutableMap(); - public static CharArraySet parseWords(Environment env, Settings settings, String name, CharArraySet defaultWords, ImmutableMap> namedWords, Version version, boolean ignoreCase) { + public static CharArraySet parseWords(Environment env, Settings settings, String name, CharArraySet defaultWords, ImmutableMap> namedWords, boolean ignoreCase) { String value = settings.get(name); if (value != null) { if ("_none_".equals(value)) { return CharArraySet.EMPTY_SET; } else { - return resolveNamedWords(Strings.commaDelimitedListToSet(value), namedWords, version, ignoreCase); + return resolveNamedWords(Strings.commaDelimitedListToSet(value), namedWords, ignoreCase); } } List pathLoadedWords = getWordList(env, settings, name); if (pathLoadedWords != null) { - return resolveNamedWords(pathLoadedWords, namedWords, version, ignoreCase); + return resolveNamedWords(pathLoadedWords, namedWords, ignoreCase); } return defaultWords; } - public static CharArraySet parseCommonWords(Environment env, Settings settings, CharArraySet defaultCommonWords, Version version, boolean ignoreCase) { - return parseWords(env, settings, "common_words", defaultCommonWords, namedStopWords, version, ignoreCase); + public static CharArraySet parseCommonWords(Environment env, Settings settings, CharArraySet defaultCommonWords, boolean ignoreCase) { + return parseWords(env, settings, "common_words", defaultCommonWords, namedStopWords, ignoreCase); } - public static CharArraySet parseArticles(Environment env, Settings settings, Version version) { - return parseWords(env, settings, "articles", null, null, version, settings.getAsBoolean("articles_case", false)); + public static CharArraySet parseArticles(Environment env, Settings settings) { + return parseWords(env, settings, "articles", null, null, settings.getAsBoolean("articles_case", false)); } - public static CharArraySet parseStopWords(Environment env, Settings settings, CharArraySet defaultStopWords, Version version) { - return parseStopWords(env, settings, defaultStopWords, version, settings.getAsBoolean("stopwords_case", false)); + public static CharArraySet parseStopWords(Environment env, Settings settings, CharArraySet defaultStopWords) { + return parseStopWords(env, settings, defaultStopWords, settings.getAsBoolean("stopwords_case", false)); } - public static CharArraySet parseStopWords(Environment env, Settings settings, CharArraySet defaultStopWords, Version version, boolean ignoreCase) { - return parseWords(env, settings, "stopwords", defaultStopWords, namedStopWords, version, ignoreCase); + public static CharArraySet parseStopWords(Environment env, Settings settings, CharArraySet defaultStopWords, boolean ignoreCase) { + return parseWords(env, settings, "stopwords", defaultStopWords, namedStopWords, ignoreCase); } - private static CharArraySet resolveNamedWords(Collection words, ImmutableMap> namedWords, Version version, boolean ignoreCase) { + private static CharArraySet resolveNamedWords(Collection words, ImmutableMap> namedWords, boolean ignoreCase) { if (namedWords == null) { - return new CharArraySet(version, words, ignoreCase); + return new CharArraySet(words, ignoreCase); } - CharArraySet setWords = new CharArraySet(version, words.size(), ignoreCase); + CharArraySet 
setWords = new CharArraySet(words.size(), ignoreCase); for (String word : words) { if (namedWords.containsKey(word)) { setWords.addAll(namedWords.get(word)); @@ -200,12 +200,12 @@ public class Analysis { return setWords; } - public static CharArraySet getWordSet(Environment env, Settings settings, String settingsPrefix, Version version) { + public static CharArraySet getWordSet(Environment env, Settings settings, String settingsPrefix) { List wordList = getWordList(env, settings, settingsPrefix); if (wordList == null) { return null; } - return new CharArraySet(version, wordList, settings.getAsBoolean(settingsPrefix + "_case", false)); + return new CharArraySet(wordList, settings.getAsBoolean(settingsPrefix + "_case", false)); } /** diff --git a/src/main/java/org/elasticsearch/index/analysis/ArabicAnalyzerProvider.java b/src/main/java/org/elasticsearch/index/analysis/ArabicAnalyzerProvider.java index 9bf4bb07814..c532204b164 100644 --- a/src/main/java/org/elasticsearch/index/analysis/ArabicAnalyzerProvider.java +++ b/src/main/java/org/elasticsearch/index/analysis/ArabicAnalyzerProvider.java @@ -38,9 +38,9 @@ public class ArabicAnalyzerProvider extends AbstractIndexAnalyzerProvider { +public class ChineseAnalyzerProvider extends AbstractIndexAnalyzerProvider { - private final ChineseAnalyzer analyzer; + private final StandardAnalyzer analyzer; @Inject public ChineseAnalyzerProvider(Index index, @IndexSettings Settings indexSettings, @Assisted String name, @Assisted Settings settings) { super(index, indexSettings, name, settings); - analyzer = new ChineseAnalyzer(); + // old index: best effort + analyzer = new StandardAnalyzer(); + analyzer.setVersion(version); + } @Override - public ChineseAnalyzer get() { + public StandardAnalyzer get() { return this.analyzer; } } \ No newline at end of file diff --git a/src/main/java/org/elasticsearch/index/analysis/CjkAnalyzerProvider.java b/src/main/java/org/elasticsearch/index/analysis/CjkAnalyzerProvider.java index 0a9fae93ec9..e3815b2aee8 100644 --- a/src/main/java/org/elasticsearch/index/analysis/CjkAnalyzerProvider.java +++ b/src/main/java/org/elasticsearch/index/analysis/CjkAnalyzerProvider.java @@ -38,9 +38,10 @@ public class CjkAnalyzerProvider extends AbstractIndexAnalyzerProvider rules = Analysis.getWordSet(env, settings, "keywords", version); + Set rules = Analysis.getWordSet(env, settings, "keywords"); if (rules == null) { throw new ElasticsearchIllegalArgumentException("keyword filter requires either `keywords` or `keywords_path` to be configured"); } - keywordLookup = new CharArraySet(version, rules, ignoreCase); + keywordLookup = new CharArraySet(rules, ignoreCase); } @Override diff --git a/src/main/java/org/elasticsearch/index/analysis/KeywordTokenizerFactory.java b/src/main/java/org/elasticsearch/index/analysis/KeywordTokenizerFactory.java index e97ea416b34..44ed001c2d4 100644 --- a/src/main/java/org/elasticsearch/index/analysis/KeywordTokenizerFactory.java +++ b/src/main/java/org/elasticsearch/index/analysis/KeywordTokenizerFactory.java @@ -27,8 +27,6 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.Index; import org.elasticsearch.index.settings.IndexSettings; -import java.io.Reader; - /** * */ @@ -43,7 +41,7 @@ public class KeywordTokenizerFactory extends AbstractTokenizerFactory { } @Override - public Tokenizer create(Reader reader) { - return new KeywordTokenizer(reader, bufferSize); + public Tokenizer create() { + return new KeywordTokenizer(bufferSize); } } diff --git 
a/src/main/java/org/elasticsearch/index/analysis/LatvianAnalyzerProvider.java b/src/main/java/org/elasticsearch/index/analysis/LatvianAnalyzerProvider.java index 573aa195083..236676e4b5b 100644 --- a/src/main/java/org/elasticsearch/index/analysis/LatvianAnalyzerProvider.java +++ b/src/main/java/org/elasticsearch/index/analysis/LatvianAnalyzerProvider.java @@ -38,9 +38,9 @@ public class LatvianAnalyzerProvider extends AbstractIndexAnalyzerProvider extends Analyzer { @Override - protected TokenStreamComponents createComponents(String fieldName, Reader reader) { + protected TokenStreamComponents createComponents(String fieldName) { try { // LUCENE 4 UPGRADE: in reusableTokenStream the buffer size was char[120] // Not sure if this is intentional or not - return new TokenStreamComponents(createNumericTokenizer(reader, new char[32])); + return new TokenStreamComponents(createNumericTokenizer(new char[32])); } catch (IOException e) { throw new RuntimeException("Failed to create numeric tokenizer", e); } } - protected abstract T createNumericTokenizer(Reader reader, char[] buffer) throws IOException; + protected abstract T createNumericTokenizer(char[] buffer) throws IOException; } diff --git a/src/main/java/org/elasticsearch/index/analysis/NumericDateAnalyzer.java b/src/main/java/org/elasticsearch/index/analysis/NumericDateAnalyzer.java index ebb3c441337..6860e6560b0 100644 --- a/src/main/java/org/elasticsearch/index/analysis/NumericDateAnalyzer.java +++ b/src/main/java/org/elasticsearch/index/analysis/NumericDateAnalyzer.java @@ -22,14 +22,10 @@ package org.elasticsearch.index.analysis; import com.carrotsearch.hppc.IntObjectOpenHashMap; import com.google.common.collect.Maps; import org.elasticsearch.common.joda.FormatDateTimeFormatter; -import org.elasticsearch.common.util.concurrent.ConcurrentCollections; -import org.elasticsearch.common.util.concurrent.ConcurrentMapLong; import org.joda.time.format.DateTimeFormatter; import java.io.IOException; -import java.io.Reader; import java.util.Map; -import java.util.concurrent.ConcurrentMap; /** * @@ -62,7 +58,7 @@ public class NumericDateAnalyzer extends NumericAnalyzer { } @Override - protected NumericDateTokenizer createNumericTokenizer(Reader reader, char[] buffer) throws IOException { - return new NumericDateTokenizer(reader, precisionStep, buffer, dateTimeFormatter); + protected NumericDateTokenizer createNumericTokenizer(char[] buffer) throws IOException { + return new NumericDateTokenizer(precisionStep, buffer, dateTimeFormatter); } } \ No newline at end of file diff --git a/src/main/java/org/elasticsearch/index/analysis/NumericDateTokenizer.java b/src/main/java/org/elasticsearch/index/analysis/NumericDateTokenizer.java index 82f83fd1a1f..03b502d4478 100644 --- a/src/main/java/org/elasticsearch/index/analysis/NumericDateTokenizer.java +++ b/src/main/java/org/elasticsearch/index/analysis/NumericDateTokenizer.java @@ -23,15 +23,14 @@ import org.apache.lucene.analysis.NumericTokenStream; import org.joda.time.format.DateTimeFormatter; import java.io.IOException; -import java.io.Reader; /** * */ public class NumericDateTokenizer extends NumericTokenizer { - public NumericDateTokenizer(Reader reader, int precisionStep, char[] buffer, DateTimeFormatter dateTimeFormatter) throws IOException { - super(reader, new NumericTokenStream(precisionStep), buffer, dateTimeFormatter); + public NumericDateTokenizer(int precisionStep, char[] buffer, DateTimeFormatter dateTimeFormatter) throws IOException { + super(new NumericTokenStream(precisionStep), buffer, 
dateTimeFormatter); } @Override diff --git a/src/main/java/org/elasticsearch/index/analysis/NumericDoubleAnalyzer.java b/src/main/java/org/elasticsearch/index/analysis/NumericDoubleAnalyzer.java index 6d7dc9145a9..1067aa8efcb 100644 --- a/src/main/java/org/elasticsearch/index/analysis/NumericDoubleAnalyzer.java +++ b/src/main/java/org/elasticsearch/index/analysis/NumericDoubleAnalyzer.java @@ -22,7 +22,6 @@ package org.elasticsearch.index.analysis; import com.carrotsearch.hppc.IntObjectOpenHashMap; import java.io.IOException; -import java.io.Reader; /** * @@ -54,7 +53,7 @@ public class NumericDoubleAnalyzer extends NumericAnalyzer } @Override - protected NumericFloatTokenizer createNumericTokenizer(Reader reader, char[] buffer) throws IOException { - return new NumericFloatTokenizer(reader, precisionStep, buffer); + protected NumericFloatTokenizer createNumericTokenizer(char[] buffer) throws IOException { + return new NumericFloatTokenizer(precisionStep, buffer); } } \ No newline at end of file diff --git a/src/main/java/org/elasticsearch/index/analysis/NumericFloatTokenizer.java b/src/main/java/org/elasticsearch/index/analysis/NumericFloatTokenizer.java index 900bbe13928..02d42b8eef8 100644 --- a/src/main/java/org/elasticsearch/index/analysis/NumericFloatTokenizer.java +++ b/src/main/java/org/elasticsearch/index/analysis/NumericFloatTokenizer.java @@ -22,15 +22,14 @@ package org.elasticsearch.index.analysis; import org.apache.lucene.analysis.NumericTokenStream; import java.io.IOException; -import java.io.Reader; /** * */ public class NumericFloatTokenizer extends NumericTokenizer { - public NumericFloatTokenizer(Reader reader, int precisionStep, char[] buffer) throws IOException { - super(reader, new NumericTokenStream(precisionStep), buffer, null); + public NumericFloatTokenizer(int precisionStep, char[] buffer) throws IOException { + super(new NumericTokenStream(precisionStep), buffer, null); } @Override diff --git a/src/main/java/org/elasticsearch/index/analysis/NumericIntegerAnalyzer.java b/src/main/java/org/elasticsearch/index/analysis/NumericIntegerAnalyzer.java index 5e095e31007..6dff17cdb6f 100644 --- a/src/main/java/org/elasticsearch/index/analysis/NumericIntegerAnalyzer.java +++ b/src/main/java/org/elasticsearch/index/analysis/NumericIntegerAnalyzer.java @@ -22,7 +22,6 @@ package org.elasticsearch.index.analysis; import com.carrotsearch.hppc.IntObjectOpenHashMap; import java.io.IOException; -import java.io.Reader; /** * @@ -54,7 +53,7 @@ public class NumericIntegerAnalyzer extends NumericAnalyzer { } @Override - protected NumericLongTokenizer createNumericTokenizer(Reader reader, char[] buffer) throws IOException { - return new NumericLongTokenizer(reader, precisionStep, buffer); + protected NumericLongTokenizer createNumericTokenizer(char[] buffer) throws IOException { + return new NumericLongTokenizer(precisionStep, buffer); } } \ No newline at end of file diff --git a/src/main/java/org/elasticsearch/index/analysis/NumericLongTokenizer.java b/src/main/java/org/elasticsearch/index/analysis/NumericLongTokenizer.java index 5ca94926377..7262b0ad9da 100644 --- a/src/main/java/org/elasticsearch/index/analysis/NumericLongTokenizer.java +++ b/src/main/java/org/elasticsearch/index/analysis/NumericLongTokenizer.java @@ -29,8 +29,8 @@ import java.io.Reader; */ public class NumericLongTokenizer extends NumericTokenizer { - public NumericLongTokenizer(Reader reader, int precisionStep, char[] buffer) throws IOException { - super(reader, new NumericTokenStream(precisionStep), buffer, null); 
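All of these tokenizer constructor changes have the same cause: in Lucene 5.0 a Tokenizer no longer takes a Reader at construction time, and input is attached afterwards via setReader. A minimal consumption example under the new API; this is illustrative usage, not code from the patch.

import java.io.IOException;
import java.io.StringReader;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.KeywordTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

class TokenizerUsageSketch {
    static void printTokens(String input) throws IOException {
        try (Tokenizer tokenizer = new KeywordTokenizer()) { // no Reader argument anymore
            tokenizer.setReader(new StringReader(input));    // input is supplied separately
            CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);
            tokenizer.reset();
            while (tokenizer.incrementToken()) {
                System.out.println(term.toString());
            }
            tokenizer.end();
        }
    }
}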
+ public NumericLongTokenizer(int precisionStep, char[] buffer) throws IOException { + super(new NumericTokenStream(precisionStep), buffer, null); } @Override diff --git a/src/main/java/org/elasticsearch/index/analysis/NumericTokenizer.java b/src/main/java/org/elasticsearch/index/analysis/NumericTokenizer.java index acb9cb47f60..ccd87628988 100644 --- a/src/main/java/org/elasticsearch/index/analysis/NumericTokenizer.java +++ b/src/main/java/org/elasticsearch/index/analysis/NumericTokenizer.java @@ -28,7 +28,6 @@ import org.apache.lucene.util.AttributeSource; import org.elasticsearch.common.io.Streams; import java.io.IOException; -import java.io.Reader; import java.util.Iterator; /** @@ -51,8 +50,8 @@ public abstract class NumericTokenizer extends Tokenizer { protected final Object extra; private boolean started; - protected NumericTokenizer(Reader reader, NumericTokenStream numericTokenStream, char[] buffer, Object extra) throws IOException { - super(delegatingAttributeFactory(numericTokenStream), reader); + protected NumericTokenizer(NumericTokenStream numericTokenStream, char[] buffer, Object extra) throws IOException { + super(delegatingAttributeFactory(numericTokenStream)); this.numericTokenStream = numericTokenStream; // Add attributes from the numeric token stream, this works fine because the attribute factory delegates to numericTokenStream for (Iterator> it = numericTokenStream.getAttributeClassesIterator(); it.hasNext();) { diff --git a/src/main/java/org/elasticsearch/index/analysis/PathHierarchyTokenizerFactory.java b/src/main/java/org/elasticsearch/index/analysis/PathHierarchyTokenizerFactory.java index 4b11fb0e26b..fb1fda8ac9d 100644 --- a/src/main/java/org/elasticsearch/index/analysis/PathHierarchyTokenizerFactory.java +++ b/src/main/java/org/elasticsearch/index/analysis/PathHierarchyTokenizerFactory.java @@ -29,8 +29,6 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.Index; import org.elasticsearch.index.settings.IndexSettings; -import java.io.Reader; - public class PathHierarchyTokenizerFactory extends AbstractTokenizerFactory { private final int bufferSize; @@ -66,10 +64,10 @@ public class PathHierarchyTokenizerFactory extends AbstractTokenizerFactory { } @Override - public Tokenizer create(Reader reader) { + public Tokenizer create() { if (reverse) { - return new ReversePathHierarchyTokenizer(reader, bufferSize, delimiter, replacement, skip); + return new ReversePathHierarchyTokenizer(bufferSize, delimiter, replacement, skip); } - return new PathHierarchyTokenizer(reader, bufferSize, delimiter, replacement, skip); + return new PathHierarchyTokenizer(bufferSize, delimiter, replacement, skip); } } diff --git a/src/main/java/org/elasticsearch/index/analysis/PatternAnalyzer.java b/src/main/java/org/elasticsearch/index/analysis/PatternAnalyzer.java new file mode 100644 index 00000000000..43378411ae4 --- /dev/null +++ b/src/main/java/org/elasticsearch/index/analysis/PatternAnalyzer.java @@ -0,0 +1,56 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.index.analysis; + +import org.apache.lucene.analysis.Analyzer; +import org.apache.lucene.analysis.Tokenizer; +import org.apache.lucene.analysis.TokenStream; +import org.apache.lucene.analysis.core.LowerCaseFilter; +import org.apache.lucene.analysis.core.StopFilter; +import org.apache.lucene.analysis.pattern.PatternTokenizer; +import org.apache.lucene.analysis.util.CharArraySet; + +import java.util.regex.Pattern; + +/** Simple regex-based analyzer based on PatternTokenizer + lowercase + stopwords */ +public final class PatternAnalyzer extends Analyzer { + private final Pattern pattern; + private final boolean lowercase; + private final CharArraySet stopWords; + + public PatternAnalyzer(Pattern pattern, boolean lowercase, CharArraySet stopWords) { + this.pattern = pattern; + this.lowercase = lowercase; + this.stopWords = stopWords; + } + + @Override + protected TokenStreamComponents createComponents(String s) { + final Tokenizer tokenizer = new PatternTokenizer(pattern, -1); + TokenStream stream = tokenizer; + if (lowercase) { + stream = new LowerCaseFilter(stream); + } + if (stopWords != null) { + stream = new StopFilter(stream, stopWords); + } + return new TokenStreamComponents(tokenizer, stream); + } +} \ No newline at end of file diff --git a/src/main/java/org/elasticsearch/index/analysis/PatternAnalyzerProvider.java b/src/main/java/org/elasticsearch/index/analysis/PatternAnalyzerProvider.java index 50f563ede11..1996aff8c92 100644 --- a/src/main/java/org/elasticsearch/index/analysis/PatternAnalyzerProvider.java +++ b/src/main/java/org/elasticsearch/index/analysis/PatternAnalyzerProvider.java @@ -20,15 +20,10 @@ package org.elasticsearch.index.analysis; import org.apache.lucene.analysis.Analyzer; -import org.apache.lucene.analysis.TokenStream; -import org.apache.lucene.analysis.core.LowerCaseFilter; import org.apache.lucene.analysis.core.StopAnalyzer; -import org.apache.lucene.analysis.core.StopFilter; -import org.apache.lucene.analysis.pattern.PatternTokenizer; import org.apache.lucene.analysis.util.CharArraySet; import org.elasticsearch.ElasticsearchIllegalArgumentException; import org.elasticsearch.Version; -import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.inject.assistedinject.Assisted; import org.elasticsearch.common.regex.Regex; @@ -37,7 +32,6 @@ import org.elasticsearch.env.Environment; import org.elasticsearch.index.Index; import org.elasticsearch.index.settings.IndexSettings; -import java.io.Reader; import java.util.regex.Pattern; /** @@ -47,31 +41,6 @@ public class PatternAnalyzerProvider extends AbstractIndexAnalyzerProvider "O", "Neil" flags |= getFlag(STEM_ENGLISH_POSSESSIVE, settings, "stem_english_possessive", true); // If not null is the set of tokens to protect from being delimited - Set protectedWords = Analysis.getWordSet(env, settings, "protected_words", version); - this.protoWords = protectedWords == null ? 
null : CharArraySet.copy(Lucene.VERSION, protectedWords); + Set protectedWords = Analysis.getWordSet(env, settings, "protected_words"); + this.protoWords = protectedWords == null ? null : CharArraySet.copy(protectedWords); this.flags = flags; } @Override public TokenStream create(TokenStream tokenStream) { if (version.onOrAfter(Version.LUCENE_4_8)) { - return new WordDelimiterFilter(version, tokenStream, + return new WordDelimiterFilter(tokenStream, charTypeTable, flags, protoWords); diff --git a/src/main/java/org/elasticsearch/index/analysis/compound/AbstractCompoundWordTokenFilterFactory.java b/src/main/java/org/elasticsearch/index/analysis/compound/AbstractCompoundWordTokenFilterFactory.java index 8505d501099..0d5ceb77a99 100644 --- a/src/main/java/org/elasticsearch/index/analysis/compound/AbstractCompoundWordTokenFilterFactory.java +++ b/src/main/java/org/elasticsearch/index/analysis/compound/AbstractCompoundWordTokenFilterFactory.java @@ -50,7 +50,7 @@ public abstract class AbstractCompoundWordTokenFilterFactory extends AbstractTok minSubwordSize = settings.getAsInt("min_subword_size", CompoundWordTokenFilterBase.DEFAULT_MIN_SUBWORD_SIZE); maxSubwordSize = settings.getAsInt("max_subword_size", CompoundWordTokenFilterBase.DEFAULT_MAX_SUBWORD_SIZE); onlyLongestMatch = settings.getAsBoolean("only_longest_match", false); - wordList = Analysis.getWordSet(env, settings, "word_list", version); + wordList = Analysis.getWordSet(env, settings, "word_list"); if (wordList == null) { throw new ElasticsearchIllegalArgumentException("word_list must be provided for [" + name + "], either as a path to a file, or directly"); } diff --git a/src/main/java/org/elasticsearch/index/analysis/compound/DictionaryCompoundWordTokenFilterFactory.java b/src/main/java/org/elasticsearch/index/analysis/compound/DictionaryCompoundWordTokenFilterFactory.java index 4ef9123d6b8..55c1b4e3df1 100644 --- a/src/main/java/org/elasticsearch/index/analysis/compound/DictionaryCompoundWordTokenFilterFactory.java +++ b/src/main/java/org/elasticsearch/index/analysis/compound/DictionaryCompoundWordTokenFilterFactory.java @@ -21,6 +21,9 @@ package org.elasticsearch.index.analysis.compound; import org.apache.lucene.analysis.TokenStream; import org.apache.lucene.analysis.compound.DictionaryCompoundWordTokenFilter; +import org.apache.lucene.analysis.compound.Lucene43DictionaryCompoundWordTokenFilter; +import org.apache.lucene.util.Version; + import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.inject.assistedinject.Assisted; import org.elasticsearch.common.settings.Settings; @@ -45,7 +48,12 @@ public class DictionaryCompoundWordTokenFilterFactory extends AbstractCompoundWo @Override public TokenStream create(TokenStream tokenStream) { - return new DictionaryCompoundWordTokenFilter(version, tokenStream, wordList, - minWordSize, minSubwordSize, maxSubwordSize, onlyLongestMatch); + if (version.onOrAfter(Version.LUCENE_4_4_0)) { + return new DictionaryCompoundWordTokenFilter(tokenStream, wordList, minWordSize, + minSubwordSize, maxSubwordSize, onlyLongestMatch); + } else { + return new Lucene43DictionaryCompoundWordTokenFilter(tokenStream, wordList, minWordSize, + minSubwordSize, maxSubwordSize, onlyLongestMatch); + } } } \ No newline at end of file diff --git a/src/main/java/org/elasticsearch/index/analysis/compound/HyphenationCompoundWordTokenFilterFactory.java b/src/main/java/org/elasticsearch/index/analysis/compound/HyphenationCompoundWordTokenFilterFactory.java index da7444ea64d..db491b8bc58 100644 --- 
a/src/main/java/org/elasticsearch/index/analysis/compound/HyphenationCompoundWordTokenFilterFactory.java +++ b/src/main/java/org/elasticsearch/index/analysis/compound/HyphenationCompoundWordTokenFilterFactory.java @@ -21,7 +21,10 @@ package org.elasticsearch.index.analysis.compound; import org.apache.lucene.analysis.TokenStream; import org.apache.lucene.analysis.compound.HyphenationCompoundWordTokenFilter; +import org.apache.lucene.analysis.compound.Lucene43HyphenationCompoundWordTokenFilter; import org.apache.lucene.analysis.compound.hyphenation.HyphenationTree; +import org.apache.lucene.util.Version; + import org.elasticsearch.ElasticsearchIllegalArgumentException; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.inject.assistedinject.Assisted; @@ -64,8 +67,12 @@ public class HyphenationCompoundWordTokenFilterFactory extends AbstractCompoundW @Override public TokenStream create(TokenStream tokenStream) { - return new HyphenationCompoundWordTokenFilter(version, tokenStream, - hyphenationTree, wordList, - minWordSize, minSubwordSize, maxSubwordSize, onlyLongestMatch); + if (version.onOrAfter(Version.LUCENE_4_4_0)) { + return new HyphenationCompoundWordTokenFilter(tokenStream, hyphenationTree, wordList, minWordSize, + minSubwordSize, maxSubwordSize, onlyLongestMatch); + } else { + return new Lucene43HyphenationCompoundWordTokenFilter(tokenStream, hyphenationTree, wordList, minWordSize, + minSubwordSize, maxSubwordSize, onlyLongestMatch); + } } } \ No newline at end of file diff --git a/src/main/java/org/elasticsearch/index/cache/IndexCache.java b/src/main/java/org/elasticsearch/index/cache/IndexCache.java index 589c449b417..4374d0ac5dd 100644 --- a/src/main/java/org/elasticsearch/index/cache/IndexCache.java +++ b/src/main/java/org/elasticsearch/index/cache/IndexCache.java @@ -29,8 +29,8 @@ import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.AbstractIndexComponent; import org.elasticsearch.index.Index; +import org.elasticsearch.index.cache.bitset.BitsetFilterCache; import org.elasticsearch.index.cache.filter.FilterCache; -import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilterCache; import org.elasticsearch.index.cache.query.parser.QueryParserCache; import org.elasticsearch.index.settings.IndexSettings; @@ -41,16 +41,16 @@ public class IndexCache extends AbstractIndexComponent implements CloseableCompo private final FilterCache filterCache; private final QueryParserCache queryParserCache; - private final FixedBitSetFilterCache fixedBitSetFilterCache; + private final BitsetFilterCache bitsetFilterCache; private ClusterService clusterService; @Inject - public IndexCache(Index index, @IndexSettings Settings indexSettings, FilterCache filterCache, QueryParserCache queryParserCache, FixedBitSetFilterCache fixedBitSetFilterCache) { + public IndexCache(Index index, @IndexSettings Settings indexSettings, FilterCache filterCache, QueryParserCache queryParserCache, BitsetFilterCache bitsetFilterCache) { super(index, indexSettings); this.filterCache = filterCache; this.queryParserCache = queryParserCache; - this.fixedBitSetFilterCache = fixedBitSetFilterCache; + this.bitsetFilterCache = bitsetFilterCache; } @Inject(optional = true) @@ -66,10 +66,10 @@ public class IndexCache extends AbstractIndexComponent implements CloseableCompo } /** - * Return the {@link FixedBitSetFilterCache} for this index. + * Return the {@link BitsetFilterCache} for this index. 
*/ - public FixedBitSetFilterCache fixedBitSetFilterCache() { - return fixedBitSetFilterCache; + public BitsetFilterCache bitsetFilterCache() { + return bitsetFilterCache; } public QueryParserCache queryParserCache() { @@ -80,7 +80,7 @@ public class IndexCache extends AbstractIndexComponent implements CloseableCompo public void close() throws ElasticsearchException { filterCache.close(); queryParserCache.close(); - fixedBitSetFilterCache.close(); + bitsetFilterCache.close(); if (clusterService != null) { clusterService.remove(this); } @@ -89,7 +89,7 @@ public class IndexCache extends AbstractIndexComponent implements CloseableCompo public void clear(String reason) { filterCache.clear(reason); queryParserCache.clear(); - fixedBitSetFilterCache.clear(reason); + bitsetFilterCache.clear(reason); } @Override diff --git a/src/main/java/org/elasticsearch/index/cache/IndexCacheModule.java b/src/main/java/org/elasticsearch/index/cache/IndexCacheModule.java index b986ae2f21d..796ad7388b4 100644 --- a/src/main/java/org/elasticsearch/index/cache/IndexCacheModule.java +++ b/src/main/java/org/elasticsearch/index/cache/IndexCacheModule.java @@ -21,8 +21,8 @@ package org.elasticsearch.index.cache; import org.elasticsearch.common.inject.AbstractModule; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.index.cache.bitset.BitsetFilterCacheModule; import org.elasticsearch.index.cache.filter.FilterCacheModule; -import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilterCacheModule; import org.elasticsearch.index.cache.query.parser.QueryParserCacheModule; /** @@ -40,7 +40,7 @@ public class IndexCacheModule extends AbstractModule { protected void configure() { new FilterCacheModule(settings).configure(binder()); new QueryParserCacheModule(settings).configure(binder()); - new FixedBitSetFilterCacheModule(settings).configure(binder()); + new BitsetFilterCacheModule(settings).configure(binder()); bind(IndexCache.class).asEagerSingleton(); } diff --git a/src/main/java/org/elasticsearch/index/cache/fixedbitset/FixedBitSetFilterCache.java b/src/main/java/org/elasticsearch/index/cache/bitset/BitsetFilterCache.java similarity index 76% rename from src/main/java/org/elasticsearch/index/cache/fixedbitset/FixedBitSetFilterCache.java rename to src/main/java/org/elasticsearch/index/cache/bitset/BitsetFilterCache.java index cbd37c8461f..03b9fab695d 100644 --- a/src/main/java/org/elasticsearch/index/cache/fixedbitset/FixedBitSetFilterCache.java +++ b/src/main/java/org/elasticsearch/index/cache/bitset/BitsetFilterCache.java @@ -17,19 +17,19 @@ * under the License. 
*/ -package org.elasticsearch.index.cache.fixedbitset; +package org.elasticsearch.index.cache.bitset; import com.google.common.cache.Cache; import com.google.common.cache.CacheBuilder; import com.google.common.cache.RemovalListener; import com.google.common.cache.RemovalNotification; -import org.apache.lucene.index.AtomicReader; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReader; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.search.DocIdSet; -import org.apache.lucene.search.DocIdSetIterator; import org.apache.lucene.search.Filter; -import org.apache.lucene.util.Bits; -import org.apache.lucene.util.FixedBitSet; +import org.apache.lucene.search.join.BitDocIdSetFilter; +import org.apache.lucene.util.BitDocIdSet; +import org.apache.lucene.util.SparseFixedBitSet; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.ExceptionsHelper; import org.elasticsearch.cluster.metadata.IndexMetaData; @@ -66,29 +66,29 @@ import java.util.concurrent.ExecutionException; import java.util.concurrent.Executor; /** - * This is a cache for {@link FixedBitSet} based filters and is unbounded by size or time. + * This is a cache for {@link BitDocIdSet} based filters and is unbounded by size or time. *

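The conversion at the heart of this rename is visible further down in getAndLoadIfNotPresent: an arbitrary DocIdSet is materialized into a BitDocIdSet through its Builder, which chooses between sparse and dense storage. Note that Builder.build() returns null for an empty set, which is why the cache falls back to an empty SparseFixedBitSet. A condensed sketch of that conversion follows; the helper name is ours, but the API calls mirror the patch.

import java.io.IOException;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.util.BitDocIdSet;
import org.apache.lucene.util.SparseFixedBitSet;

class BitDocIdSetSketch {
    static BitDocIdSet toBitDocIdSet(DocIdSet docIdSet, int maxDoc) throws IOException {
        if (docIdSet instanceof BitDocIdSet) {
            return (BitDocIdSet) docIdSet; // already bit-based, use as-is
        }
        BitDocIdSet.Builder builder = new BitDocIdSet.Builder(maxDoc);
        DocIdSetIterator it = docIdSet == null ? null : docIdSet.iterator();
        if (it != null) {
            builder.or(it); // the builder decides between sparse and dense storage
        }
        BitDocIdSet bits = builder.build();
        if (bits == null) {
            // build() returns null for an empty set; callers expect non-null
            bits = new BitDocIdSet(new SparseFixedBitSet(maxDoc), 0);
        }
        return bits;
    }
}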
- * Use this cache with care, only components that require that a filter is to be materialized as a {@link FixedBitSet} + * Use this cache with care, only components that require that a filter is to be materialized as a {@link BitDocIdSet} * and require that it should always be around should use this cache, otherwise the * {@link org.elasticsearch.index.cache.filter.FilterCache} should be used instead. */ -public class FixedBitSetFilterCache extends AbstractIndexComponent implements AtomicReader.CoreClosedListener, RemovalListener>, CloseableComponent { +public class BitsetFilterCache extends AbstractIndexComponent implements LeafReader.CoreClosedListener, RemovalListener>, CloseableComponent { public static final String LOAD_RANDOM_ACCESS_FILTERS_EAGERLY = "index.load_fixed_bitset_filters_eagerly"; private final boolean loadRandomAccessFiltersEagerly; private final Cache> loadedFilters; - private final FixedBitSetFilterWarmer warmer; + private final BitDocIdSetFilterWarmer warmer; private IndexService indexService; private IndicesWarmer indicesWarmer; @Inject - public FixedBitSetFilterCache(Index index, @IndexSettings Settings indexSettings) { + public BitsetFilterCache(Index index, @IndexSettings Settings indexSettings) { super(index, indexSettings); this.loadRandomAccessFiltersEagerly = indexSettings.getAsBoolean(LOAD_RANDOM_ACCESS_FILTERS_EAGERLY, true); this.loadedFilters = CacheBuilder.newBuilder().removalListener(this).build(); - this.warmer = new FixedBitSetFilterWarmer(); + this.warmer = new BitDocIdSetFilterWarmer(); } @Inject(optional = true) @@ -104,10 +104,10 @@ public class FixedBitSetFilterCache extends AbstractIndexComponent implements At indicesWarmer.addListener(warmer); } - public FixedBitSetFilter getFixedBitSetFilter(Filter filter) { + public BitDocIdSetFilter getBitDocIdSetFilter(Filter filter) { assert filter != null; assert !(filter instanceof NoCacheFilter); - return new FixedBitSetFilterWrapper(filter); + return new BitDocIdSetFilterWrapper(filter); } @Override @@ -121,18 +121,18 @@ public class FixedBitSetFilterCache extends AbstractIndexComponent implements At } public void clear(String reason) { - logger.debug("Clearing all FixedBitSets because [{}]", reason); + logger.debug("Clearing all Bitsets because [{}]", reason); loadedFilters.invalidateAll(); loadedFilters.cleanUp(); } - private FixedBitSet getAndLoadIfNotPresent(final Filter filter, final AtomicReaderContext context) throws IOException, ExecutionException { + private BitDocIdSet getAndLoadIfNotPresent(final Filter filter, final LeafReaderContext context) throws IOException, ExecutionException { final Object coreCacheReader = context.reader().getCoreCacheKey(); final ShardId shardId = ShardUtils.extractShardId(context.reader()); Cache filterToFbs = loadedFilters.get(coreCacheReader, new Callable>() { @Override public Cache call() throws Exception { - SegmentReaderUtils.registerCoreListener(context.reader(), FixedBitSetFilterCache.this); + SegmentReaderUtils.registerCoreListener(context.reader(), BitsetFilterCache.this); return CacheBuilder.newBuilder().build(); } }); @@ -140,35 +140,32 @@ public class FixedBitSetFilterCache extends AbstractIndexComponent implements At @Override public Value call() throws Exception { DocIdSet docIdSet = filter.getDocIdSet(context, null); - final FixedBitSet fixedBitSet; - if (docIdSet instanceof FixedBitSet) { - fixedBitSet = (FixedBitSet) docIdSet; + final BitDocIdSet bitSet; + if (docIdSet instanceof BitDocIdSet) { + bitSet = (BitDocIdSet) docIdSet; } else { - 
fixedBitSet = new FixedBitSet(context.reader().maxDoc()); + BitDocIdSet.Builder builder = new BitDocIdSet.Builder(context.reader().maxDoc()); if (docIdSet != null && docIdSet != DocIdSet.EMPTY) { - DocIdSetIterator iterator = docIdSet.iterator(); - if (iterator != null) { - int doc = iterator.nextDoc(); - if (doc != DocIdSetIterator.NO_MORE_DOCS) { - do { - fixedBitSet.set(doc); - doc = iterator.nextDoc(); - } while (doc != DocIdSetIterator.NO_MORE_DOCS); - } - } + builder.or(docIdSet.iterator()); } + BitDocIdSet bits = builder.build(); + // code expects this to be non-null + if (bits == null) { + bits = new BitDocIdSet(new SparseFixedBitSet(context.reader().maxDoc()), 0); + } + bitSet = bits; } - Value value = new Value(fixedBitSet, shardId); + Value value = new Value(bitSet, shardId); if (shardId != null) { IndexShard shard = indexService.shard(shardId.id()); if (shard != null) { - shard.shardFixedBitSetFilterCache().onCached(value.fixedBitSet.ramBytesUsed()); + shard.shardBitsetFilterCache().onCached(value.bitset.ramBytesUsed()); } } return value; } - }).fixedBitSet; + }).bitset; } @Override @@ -189,8 +186,8 @@ public class FixedBitSetFilterCache extends AbstractIndexComponent implements At } IndexShard shard = indexService.shard(entry.getValue().shardId.id()); if (shard != null) { - ShardFixedBitSetFilterCache shardFixedBitSetFilterCache = shard.shardFixedBitSetFilterCache(); - shardFixedBitSetFilterCache.onRemoval(entry.getValue().fixedBitSet.ramBytesUsed()); + ShardBitsetFilterCache shardBitsetFilterCache = shard.shardBitsetFilterCache(); + shardBitsetFilterCache.onRemoval(entry.getValue().bitset.ramBytesUsed()); } // if null then this means the shard has already been removed and the stats are 0 anyway for the shard this key belongs to } @@ -198,25 +195,25 @@ public class FixedBitSetFilterCache extends AbstractIndexComponent implements At public static final class Value { - final FixedBitSet fixedBitSet; + final BitDocIdSet bitset; final ShardId shardId; - public Value(FixedBitSet fixedBitSet, ShardId shardId) { - this.fixedBitSet = fixedBitSet; + public Value(BitDocIdSet bitset, ShardId shardId) { + this.bitset = bitset; this.shardId = shardId; } } - final class FixedBitSetFilterWrapper extends FixedBitSetFilter { + final class BitDocIdSetFilterWrapper extends BitDocIdSetFilter { final Filter filter; - FixedBitSetFilterWrapper(Filter filter) { + BitDocIdSetFilterWrapper(Filter filter) { this.filter = filter; } @Override - public FixedBitSet getDocIdSet(AtomicReaderContext context, Bits acceptDocs) throws IOException { + public BitDocIdSet getDocIdSet(LeafReaderContext context) throws IOException { try { return getAndLoadIfNotPresent(filter, context); } catch (ExecutionException e) { @@ -229,8 +226,8 @@ public class FixedBitSetFilterCache extends AbstractIndexComponent implements At } public boolean equals(Object o) { - if (!(o instanceof FixedBitSetFilterWrapper)) return false; - return this.filter.equals(((FixedBitSetFilterWrapper) o).filter); + if (!(o instanceof BitDocIdSetFilterWrapper)) return false; + return this.filter.equals(((BitDocIdSetFilterWrapper) o).filter); } public int hashCode() { @@ -238,7 +235,7 @@ public class FixedBitSetFilterCache extends AbstractIndexComponent implements At } } - final class FixedBitSetFilterWarmer extends IndicesWarmer.Listener { + final class BitDocIdSetFilterWarmer extends IndicesWarmer.Listener { @Override public TerminationHandle warmNewReaders(final IndexShard indexShard, IndexMetaData indexMetaData, IndicesWarmer.WarmerContext context, 
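The conversion performed in getAndLoadIfNotPresent above, reduced to a standalone sketch of the Lucene 5 idiom; the helper name toBitDocIdSet and its inputs are assumptions for illustration:

import java.io.IOException;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.util.BitDocIdSet;
import org.apache.lucene.util.SparseFixedBitSet;

class BitDocIdSetConversionSketch {
    static BitDocIdSet toBitDocIdSet(DocIdSet docIdSet, int maxDoc) throws IOException {
        if (docIdSet instanceof BitDocIdSet) {
            // already materialized as a bit set, cache it as-is
            return (BitDocIdSet) docIdSet;
        }
        BitDocIdSet.Builder builder = new BitDocIdSet.Builder(maxDoc);
        if (docIdSet != null && docIdSet != DocIdSet.EMPTY) {
            builder.or(docIdSet.iterator()); // the builder consumes the iterator once
        }
        BitDocIdSet bits = builder.build();
        if (bits == null) {
            // build() may return null for empty sets; callers expect non-null
            bits = new BitDocIdSet(new SparseFixedBitSet(maxDoc), 0);
        }
        return bits;
    }
}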
ThreadPool threadPool) { @@ -276,7 +273,7 @@ public class FixedBitSetFilterCache extends AbstractIndexComponent implements At final Executor executor = threadPool.executor(executor()); final CountDownLatch latch = new CountDownLatch(context.searcher().reader().leaves().size() * warmUp.size()); - for (final AtomicReaderContext ctx : context.searcher().reader().leaves()) { + for (final LeafReaderContext ctx : context.searcher().reader().leaves()) { for (final Filter filterToWarm : warmUp) { executor.execute(new Runnable() { @@ -286,10 +283,10 @@ public class FixedBitSetFilterCache extends AbstractIndexComponent implements At final long start = System.nanoTime(); getAndLoadIfNotPresent(filterToWarm, ctx); if (indexShard.warmerService().logger().isTraceEnabled()) { - indexShard.warmerService().logger().trace("warmed fixed bitset for [{}], took [{}]", filterToWarm, TimeValue.timeValueNanos(System.nanoTime() - start)); + indexShard.warmerService().logger().trace("warmed bitset for [{}], took [{}]", filterToWarm, TimeValue.timeValueNanos(System.nanoTime() - start)); } } catch (Throwable t) { - indexShard.warmerService().logger().warn("failed to load fixed bitset for [{}]", t, filterToWarm); + indexShard.warmerService().logger().warn("failed to load bitset for [{}]", t, filterToWarm); } finally { latch.countDown(); } diff --git a/src/main/java/org/elasticsearch/index/cache/fixedbitset/FixedBitSetFilterCacheModule.java b/src/main/java/org/elasticsearch/index/cache/bitset/BitsetFilterCacheModule.java similarity index 75% rename from src/main/java/org/elasticsearch/index/cache/fixedbitset/FixedBitSetFilterCacheModule.java rename to src/main/java/org/elasticsearch/index/cache/bitset/BitsetFilterCacheModule.java index 60fcf3f41a8..3ecccf1a49a 100644 --- a/src/main/java/org/elasticsearch/index/cache/fixedbitset/FixedBitSetFilterCacheModule.java +++ b/src/main/java/org/elasticsearch/index/cache/bitset/BitsetFilterCacheModule.java @@ -17,23 +17,20 @@ * under the License. */ -package org.elasticsearch.index.cache.fixedbitset; +package org.elasticsearch.index.cache.bitset; import org.elasticsearch.common.inject.AbstractModule; import org.elasticsearch.common.settings.Settings; /** */ -public class FixedBitSetFilterCacheModule extends AbstractModule { +public class BitsetFilterCacheModule extends AbstractModule { - private final Settings settings; - - public FixedBitSetFilterCacheModule(Settings settings) { - this.settings = settings; + public BitsetFilterCacheModule(Settings settings) { } @Override protected void configure() { - bind(FixedBitSetFilterCache.class).asEagerSingleton(); + bind(BitsetFilterCache.class).asEagerSingleton(); } } diff --git a/src/main/java/org/elasticsearch/index/cache/fixedbitset/ShardFixedBitSetFilterCache.java b/src/main/java/org/elasticsearch/index/cache/bitset/ShardBitsetFilterCache.java similarity index 86% rename from src/main/java/org/elasticsearch/index/cache/fixedbitset/ShardFixedBitSetFilterCache.java rename to src/main/java/org/elasticsearch/index/cache/bitset/ShardBitsetFilterCache.java index a1aa5501a5c..f5827dcf4cf 100644 --- a/src/main/java/org/elasticsearch/index/cache/fixedbitset/ShardFixedBitSetFilterCache.java +++ b/src/main/java/org/elasticsearch/index/cache/bitset/ShardBitsetFilterCache.java @@ -17,7 +17,7 @@ * under the License. 
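The warmer above fans out one task per segment/filter pair and joins on a latch. Stripped of the Elasticsearch plumbing, the pattern is roughly the following; WarmerSketch and FilterLoader are illustrative stand-ins, with load standing in for getAndLoadIfNotPresent:

import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executor;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.search.Filter;

class WarmerSketch {
    interface FilterLoader {
        void load(Filter filter, LeafReaderContext ctx) throws Exception;
    }

    static void warm(Executor executor, List<LeafReaderContext> leaves, List<Filter> filters, final FilterLoader loader) throws InterruptedException {
        final CountDownLatch latch = new CountDownLatch(leaves.size() * filters.size());
        for (final LeafReaderContext ctx : leaves) {
            for (final Filter filter : filters) {
                executor.execute(new Runnable() {
                    @Override
                    public void run() {
                        try {
                            loader.load(filter, ctx); // populate the cache for this segment
                        } catch (Exception e) {
                            // warming is best-effort: log and continue
                        } finally {
                            latch.countDown(); // always count down, even on failure
                        }
                    }
                });
            }
        }
        latch.await(); // the patch returns a TerminationHandle wrapping this await
    }
}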
*/ -package org.elasticsearch.index.cache.fixedbitset; +package org.elasticsearch.index.cache.bitset; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.metrics.CounterMetric; @@ -28,12 +28,12 @@ import org.elasticsearch.index.shard.ShardId; /** */ -public class ShardFixedBitSetFilterCache extends AbstractIndexShardComponent { +public class ShardBitsetFilterCache extends AbstractIndexShardComponent { private final CounterMetric totalMetric = new CounterMetric(); @Inject - public ShardFixedBitSetFilterCache(ShardId shardId, @IndexSettings Settings indexSettings) { + public ShardBitsetFilterCache(ShardId shardId, @IndexSettings Settings indexSettings) { super(shardId, indexSettings); } diff --git a/src/main/java/org/elasticsearch/index/cache/fixedbitset/ShardFixedBitSetFilterCacheModule.java b/src/main/java/org/elasticsearch/index/cache/bitset/ShardBitsetFilterCacheModule.java similarity index 82% rename from src/main/java/org/elasticsearch/index/cache/fixedbitset/ShardFixedBitSetFilterCacheModule.java rename to src/main/java/org/elasticsearch/index/cache/bitset/ShardBitsetFilterCacheModule.java index 2583a5999ba..c0087119f66 100644 --- a/src/main/java/org/elasticsearch/index/cache/fixedbitset/ShardFixedBitSetFilterCacheModule.java +++ b/src/main/java/org/elasticsearch/index/cache/bitset/ShardBitsetFilterCacheModule.java @@ -17,16 +17,16 @@ * under the License. */ -package org.elasticsearch.index.cache.fixedbitset; +package org.elasticsearch.index.cache.bitset; import org.elasticsearch.common.inject.AbstractModule; /** */ -public class ShardFixedBitSetFilterCacheModule extends AbstractModule { +public class ShardBitsetFilterCacheModule extends AbstractModule { @Override protected void configure() { - bind(ShardFixedBitSetFilterCache.class).asEagerSingleton(); + bind(ShardBitsetFilterCache.class).asEagerSingleton(); } } diff --git a/src/main/java/org/elasticsearch/index/cache/filter/support/CacheKeyFilter.java b/src/main/java/org/elasticsearch/index/cache/filter/support/CacheKeyFilter.java index fcbd1b074d5..0b0b4e7c4e5 100644 --- a/src/main/java/org/elasticsearch/index/cache/filter/support/CacheKeyFilter.java +++ b/src/main/java/org/elasticsearch/index/cache/filter/support/CacheKeyFilter.java @@ -19,7 +19,7 @@ package org.elasticsearch.index.cache.filter.support; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.search.DocIdSet; import org.apache.lucene.search.Filter; import org.apache.lucene.util.Bits; @@ -87,7 +87,7 @@ public interface CacheKeyFilter { } @Override - public DocIdSet getDocIdSet(AtomicReaderContext context, Bits acceptDocs) throws IOException { + public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptDocs) throws IOException { return filter.getDocIdSet(context, acceptDocs); } @@ -98,7 +98,10 @@ public interface CacheKeyFilter { @Override public boolean equals(Object obj) { - return filter.equals(obj); + if (obj instanceof Wrapper == false) { + return false; + } + return filter.equals(((Wrapper) obj).filter); } @Override diff --git a/src/main/java/org/elasticsearch/index/cache/filter/weighted/WeightedFilterCache.java b/src/main/java/org/elasticsearch/index/cache/filter/weighted/WeightedFilterCache.java index 42023732f09..cc2c805ead3 100644 --- a/src/main/java/org/elasticsearch/index/cache/filter/weighted/WeightedFilterCache.java +++ b/src/main/java/org/elasticsearch/index/cache/filter/weighted/WeightedFilterCache.java @@ -22,9 +22,10 @@ package 
org.elasticsearch.index.cache.filter.weighted; import com.google.common.cache.Cache; import com.google.common.cache.RemovalListener; import com.google.common.cache.Weigher; -import org.apache.lucene.index.AtomicReaderContext; import org.apache.lucene.index.IndexReader; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.SegmentReader; +import org.apache.lucene.search.BitsFilteredDocIdSet; import org.apache.lucene.search.DocIdSet; import org.apache.lucene.search.Filter; import org.apache.lucene.util.Bits; @@ -154,7 +155,7 @@ public class WeightedFilterCache extends AbstractIndexComponent implements Filte @Override - public DocIdSet getDocIdSet(AtomicReaderContext context, Bits acceptDocs) throws IOException { + public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptDocs) throws IOException { Object filterKey = filter; if (filter instanceof CacheKeyFilter) { filterKey = ((CacheKeyFilter) filter).cacheKey(); @@ -188,10 +189,7 @@ public class WeightedFilterCache extends AbstractIndexComponent implements Filte innerCache.put(cacheKey, cacheValue); } - // note, we don't wrap the return value with a BitsFilteredDocIdSet.wrap(docIdSet, acceptDocs) because - // we rely on our custom XFilteredQuery to do the wrapping if needed, so we don't have the wrap each - // filter on its own - return DocIdSets.isEmpty(cacheValue) ? null : cacheValue; + return BitsFilteredDocIdSet.wrap(DocIdSets.isEmpty(cacheValue) ? null : cacheValue, acceptDocs); } public String toString() { diff --git a/src/main/java/org/elasticsearch/index/cache/fixedbitset/FixedBitSetFilter.java b/src/main/java/org/elasticsearch/index/cache/fixedbitset/FixedBitSetFilter.java deleted file mode 100644 index 152b721cc23..00000000000 --- a/src/main/java/org/elasticsearch/index/cache/fixedbitset/FixedBitSetFilter.java +++ /dev/null @@ -1,37 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.index.cache.fixedbitset; - -import org.apache.lucene.index.AtomicReaderContext; -import org.apache.lucene.search.Filter; -import org.apache.lucene.util.Bits; -import org.apache.lucene.util.FixedBitSet; - -import java.io.IOException; - -/** - * A filter that always returns a {@link FixedBitSet}. 
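WeightedFilterCache above now applies acceptDocs itself instead of deferring to the removed XFilteredQuery convention. A minimal sketch of what the new return statement does, with cached and acceptDocs as assumed inputs:

import org.apache.lucene.search.BitsFilteredDocIdSet;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.util.Bits;

class AcceptDocsSketch {
    static DocIdSet visible(DocIdSet cached, Bits acceptDocs) {
        // wrap() leaves the set untouched when either argument is null and
        // otherwise hides documents the acceptDocs (e.g. live docs) reject.
        return BitsFilteredDocIdSet.wrap(cached, acceptDocs);
    }
}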
- */ -public abstract class FixedBitSetFilter extends Filter { - - @Override - public abstract FixedBitSet getDocIdSet(AtomicReaderContext context, Bits acceptDocs) throws IOException; - -} diff --git a/src/main/java/org/elasticsearch/index/codec/PerFieldMappingPostingFormatCodec.java b/src/main/java/org/elasticsearch/index/codec/PerFieldMappingPostingFormatCodec.java index 19be4b26ba3..26278c80ae7 100644 --- a/src/main/java/org/elasticsearch/index/codec/PerFieldMappingPostingFormatCodec.java +++ b/src/main/java/org/elasticsearch/index/codec/PerFieldMappingPostingFormatCodec.java @@ -19,10 +19,12 @@ package org.elasticsearch.index.codec; +import org.apache.lucene.codecs.Codec; import org.apache.lucene.codecs.DocValuesFormat; import org.apache.lucene.codecs.PostingsFormat; -import org.apache.lucene.codecs.lucene410.Lucene410Codec; +import org.apache.lucene.codecs.lucene50.Lucene50Codec; import org.elasticsearch.common.logging.ESLogger; +import org.elasticsearch.common.lucene.Lucene; import org.elasticsearch.index.codec.docvaluesformat.DocValuesFormatProvider; import org.elasticsearch.index.codec.postingsformat.PostingsFormatProvider; import org.elasticsearch.index.mapper.FieldMappers; @@ -37,12 +39,16 @@ import org.elasticsearch.index.mapper.MapperService; * configured for a specific field the default postings format is used. */ // LUCENE UPGRADE: make sure to move to a new codec depending on the lucene version -public class PerFieldMappingPostingFormatCodec extends Lucene410Codec { +public class PerFieldMappingPostingFormatCodec extends Lucene50Codec { private final ESLogger logger; private final MapperService mapperService; private final PostingsFormat defaultPostingFormat; private final DocValuesFormat defaultDocValuesFormat; + static { + assert Codec.forName(Lucene.LATEST_CODEC).getClass().isAssignableFrom(PerFieldMappingPostingFormatCodec.class) : "PerFieldMappingPostingFormatCodec must subclass default codec"; + } + public PerFieldMappingPostingFormatCodec(MapperService mapperService, PostingsFormat defaultPostingFormat, DocValuesFormat defaultDocValuesFormat, ESLogger logger) { this.mapperService = mapperService; this.logger = logger; diff --git a/src/main/java/org/elasticsearch/index/codec/docvaluesformat/DocValuesFormats.java b/src/main/java/org/elasticsearch/index/codec/docvaluesformat/DocValuesFormats.java index 1e93ab34be6..b032833084f 100644 --- a/src/main/java/org/elasticsearch/index/codec/docvaluesformat/DocValuesFormats.java +++ b/src/main/java/org/elasticsearch/index/codec/docvaluesformat/DocValuesFormats.java @@ -23,6 +23,7 @@ import com.google.common.collect.ImmutableCollection; import com.google.common.collect.ImmutableMap; import org.apache.lucene.codecs.DocValuesFormat; import org.elasticsearch.common.collect.MapBuilder; +import org.elasticsearch.common.lucene.Lucene; /** * This class represents the set of Elasticsearch "built-in" @@ -38,7 +39,7 @@ public class DocValuesFormats { builtInDocValuesFormatsX.put(name, new PreBuiltDocValuesFormatProvider.Factory(DocValuesFormat.forName(name))); } // LUCENE UPGRADE: update those DVF if necessary - builtInDocValuesFormatsX.put(DocValuesFormatService.DEFAULT_FORMAT, new PreBuiltDocValuesFormatProvider.Factory(DocValuesFormatService.DEFAULT_FORMAT, DocValuesFormat.forName("Lucene410"))); + builtInDocValuesFormatsX.put(DocValuesFormatService.DEFAULT_FORMAT, new PreBuiltDocValuesFormatProvider.Factory(DocValuesFormatService.DEFAULT_FORMAT, DocValuesFormat.forName(Lucene.LATEST_DOC_VALUES_FORMAT))); builtInDocValuesFormats = 
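PerFieldMappingPostingFormatCodec above extends the new Lucene50Codec, which resolves formats through overridable per-field hooks. A minimal sketch of that pattern; the _uid routing shown is hypothetical, not the codec's actual mapping lookup:

import org.apache.lucene.codecs.PostingsFormat;
import org.apache.lucene.codecs.lucene50.Lucene50Codec;

class PerFieldCodecSketch extends Lucene50Codec {
    @Override
    public PostingsFormat getPostingsFormatForField(String field) {
        // hypothetical routing: send _uid through the bloom-filtered "es090" format
        if ("_uid".equals(field)) {
            return PostingsFormat.forName("es090");
        }
        return super.getPostingsFormatForField(field); // fall back to the default
    }
}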
builtInDocValuesFormatsX.immutableMap(); } diff --git a/src/main/java/org/elasticsearch/index/codec/postingsformat/BloomFilterPostingsFormat.java b/src/main/java/org/elasticsearch/index/codec/postingsformat/BloomFilterPostingsFormat.java index a01c3909af8..9c203339dc5 100644 --- a/src/main/java/org/elasticsearch/index/codec/postingsformat/BloomFilterPostingsFormat.java +++ b/src/main/java/org/elasticsearch/index/codec/postingsformat/BloomFilterPostingsFormat.java @@ -21,12 +21,8 @@ package org.elasticsearch.index.codec.postingsformat; import org.apache.lucene.codecs.*; import org.apache.lucene.index.*; -import org.apache.lucene.store.ChecksumIndexInput; -import org.apache.lucene.store.IOContext; -import org.apache.lucene.store.IndexOutput; -import org.apache.lucene.util.Bits; -import org.apache.lucene.util.BytesRef; -import org.apache.lucene.util.IOUtils; +import org.apache.lucene.store.*; +import org.apache.lucene.util.*; import org.elasticsearch.common.util.BloomFilter; import org.elasticsearch.index.store.DirectoryUtils; import org.elasticsearch.index.store.Store; @@ -104,9 +100,46 @@ public final class BloomFilterPostingsFormat extends PostingsFormat { return new BloomFilteredFieldsProducer(state); } + public PostingsFormat getDelegate() { + return delegatePostingsFormat; + } + + private final class LazyBloomLoader implements Accountable { + private final long offset; + private final IndexInput indexInput; + private BloomFilter filter; + + private LazyBloomLoader(long offset, IndexInput original) { + this.offset = offset; + this.indexInput = original.clone(); + } + + synchronized BloomFilter get() throws IOException { + if (filter == null) { + try (final IndexInput input = indexInput) { + input.seek(offset); + this.filter = BloomFilter.deserialize(input); + } + } + return filter; + } + + @Override + public long ramBytesUsed() { + return filter == null ?
0L : filter.getSizeInBytes(); + } + + @Override + public Iterable getChildResources() { + return Collections.singleton(Accountables.namedAccountable("bloom", ramBytesUsed())); + } + } + public final class BloomFilteredFieldsProducer extends FieldsProducer { private FieldsProducer delegateFieldsProducer; - HashMap<String, BloomFilter> bloomsByFieldName = new HashMap<>(); + HashMap<String, LazyBloomLoader> bloomsByFieldName = new HashMap<>(); + private final int version; + private final IndexInput data; // for internal use only FieldsProducer getDelegate() { @@ -116,22 +149,19 @@ public final class BloomFilterPostingsFormat extends PostingsFormat { public BloomFilteredFieldsProducer(SegmentReadState state) throws IOException { - String bloomFileName = IndexFileNames.segmentFileName( + final String bloomFileName = IndexFileNames.segmentFileName( state.segmentInfo.name, state.segmentSuffix, BLOOM_EXTENSION); - ChecksumIndexInput bloomIn = null; - boolean success = false; + final Directory directory = state.directory; + IndexInput dataInput = directory.openInput(bloomFileName, state.context); try { - bloomIn = state.directory.openChecksumInput(bloomFileName, state.context); - int version = CodecUtil.checkHeader(bloomIn, BLOOM_CODEC_NAME, BLOOM_CODEC_VERSION, + ChecksumIndexInput bloomIn = new BufferedChecksumIndexInput(dataInput.clone()); + version = CodecUtil.checkHeader(bloomIn, BLOOM_CODEC_NAME, BLOOM_CODEC_VERSION, BLOOM_CODEC_VERSION_CURRENT); // // Load the hash function used in the BloomFilter // hashFunction = HashFunction.forName(bloomIn.readString()); // Load the delegate postings format - PostingsFormat delegatePostingsFormat = PostingsFormat.forName(bloomIn - .readString()); - - this.delegateFieldsProducer = delegatePostingsFormat - .fieldsProducer(state); + final String delegatePostings = bloomIn + .readString(); int numBlooms = bloomIn.readInt(); boolean load = true; @@ -140,13 +170,13 @@ public final class BloomFilterPostingsFormat extends PostingsFormat { load = storeDir.codecService().isLoadBloomFilter(); } - if (load && state.context.context != IOContext.Context.MERGE) { - // if we merge we don't need to load the bloom filters + if (load) { for (int i = 0; i < numBlooms; i++) { int fieldNum = bloomIn.readInt(); - BloomFilter bloom = BloomFilter.deserialize(bloomIn); FieldInfo fieldInfo = state.fieldInfos.fieldInfo(fieldNum); - bloomsByFieldName.put(fieldInfo.name, bloom); + LazyBloomLoader loader = new LazyBloomLoader(bloomIn.getFilePointer(), dataInput); + bloomsByFieldName.put(fieldInfo.name, loader); + BloomFilter.skipBloom(bloomIn); } if (version >= BLOOM_CODEC_VERSION_CHECKSUM) { CodecUtil.checkFooter(bloomIn); @@ -154,12 +184,12 @@ public final class BloomFilterPostingsFormat extends PostingsFormat { CodecUtil.checkEOF(bloomIn); } } - IOUtils.close(bloomIn); - success = true; + this.delegateFieldsProducer = PostingsFormat.forName(delegatePostings) + .fieldsProducer(state); + this.data = dataInput; + dataInput = null; // null it out such that we don't close it } finally { - if (!success) { - IOUtils.closeWhileHandlingException(bloomIn, delegateFieldsProducer); - } + IOUtils.closeWhileHandlingException(dataInput); } } @@ -170,12 +200,12 @@ public final class BloomFilterPostingsFormat extends PostingsFormat { @Override public void close() throws IOException { - delegateFieldsProducer.close(); + IOUtils.close(data, delegateFieldsProducer); } @Override public Terms terms(String field) throws IOException { - BloomFilter filter = bloomsByFieldName.get(field); + LazyBloomLoader filter = bloomsByFieldName.get(field); if
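The producer constructor above uses a common Lucene ownership-transfer idiom: on success the local reference is nulled out, so the finally block closes the input only on failure. As a generic sketch with illustrative names:

import java.io.Closeable;
import java.io.IOException;
import org.apache.lucene.util.IOUtils;

class OwnershipTransferSketch {
    private Closeable data; // stays open for the lifetime of the owner

    void init(Closeable input) throws IOException {
        Closeable pending = input;
        try {
            // ... header checks and other work that may throw ...
            this.data = pending;
            pending = null; // success: ownership transferred; finally must not close it
        } finally {
            IOUtils.closeWhileHandlingException(pending); // no-op on success, closes on failure
        }
    }
}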
(filter == null) { return delegateFieldsProducer.terms(field); } else { @@ -183,7 +213,7 @@ public final class BloomFilterPostingsFormat extends PostingsFormat { if (result == null) { return null; } - return new BloomFilteredTerms(result, filter); + return new BloomFilteredTerms(result, filter.get()); } } @@ -192,26 +222,40 @@ public final class BloomFilterPostingsFormat extends PostingsFormat { return delegateFieldsProducer.size(); } - public long getUniqueTermCount() throws IOException { - return delegateFieldsProducer.getUniqueTermCount(); - } - @Override public long ramBytesUsed() { long size = delegateFieldsProducer.ramBytesUsed(); - for (BloomFilter bloomFilter : bloomsByFieldName.values()) { - size += bloomFilter.getSizeInBytes(); + for (LazyBloomLoader bloomFilter : bloomsByFieldName.values()) { + size += bloomFilter.ramBytesUsed(); } return size; } + @Override + public Iterable getChildResources() { + List resources = new ArrayList<>(); + resources.addAll(Accountables.namedAccountables("field", bloomsByFieldName)); + if (delegateFieldsProducer != null) { + resources.add(Accountables.namedAccountable("delegate", delegateFieldsProducer)); + } + return Collections.unmodifiableList(resources); + } + @Override public void checkIntegrity() throws IOException { delegateFieldsProducer.checkIntegrity(); + if (version >= BLOOM_CODEC_VERSION_CHECKSUM) { + CodecUtil.checksumEntireFile(data); + } + } + + @Override + public FieldsProducer getMergeInstance() throws IOException { + return delegateFieldsProducer.getMergeInstance(); } } - public static final class BloomFilteredTerms extends FilterAtomicReader.FilterTerms { + public static final class BloomFilteredTerms extends FilterLeafReader.FilterTerms { private BloomFilter filter; public BloomFilteredTerms(Terms terms, BloomFilter filter) { @@ -278,11 +322,6 @@ public final class BloomFilterPostingsFormat extends PostingsFormat { return getDelegate().next(); } - @Override - public final Comparator getComparator() { - return delegateTerms.getComparator(); - } - @Override public final boolean seekExact(BytesRef text) throws IOException { @@ -364,17 +403,45 @@ public final class BloomFilterPostingsFormat extends PostingsFormat { return delegateFieldsConsumer; } + @Override - public TermsConsumer addField(FieldInfo field) throws IOException { - BloomFilter bloomFilter = bloomFilterFactory.createFilter(state.segmentInfo.getDocCount()); - if (bloomFilter != null) { - assert bloomFilters.containsKey(field) == false; - bloomFilters.put(field, bloomFilter); - return new WrappedTermsConsumer(delegateFieldsConsumer.addField(field), bloomFilter); - } else { - // No, use the unfiltered fieldsConsumer - we are not interested in - // recording any term Bitsets. - return delegateFieldsConsumer.addField(field); + public void write(Fields fields) throws IOException { + + // Delegate must write first: it may have opened files + // on creating the class + // (e.g. 
Lucene41PostingsConsumer), and write() will + // close them; alternatively, if we delayed pulling + // the fields consumer until here, we could do it + // afterwards: + delegateFieldsConsumer.write(fields); + + for(String field : fields) { + Terms terms = fields.terms(field); + if (terms == null) { + continue; + } + FieldInfo fieldInfo = state.fieldInfos.fieldInfo(field); + TermsEnum termsEnum = terms.iterator(null); + + BloomFilter bloomFilter = null; + + DocsEnum docsEnum = null; + while (true) { + BytesRef term = termsEnum.next(); + if (term == null) { + break; + } + if (bloomFilter == null) { + bloomFilter = bloomFilterFactory.createFilter(state.segmentInfo.getDocCount()); + assert bloomFilters.containsKey(field) == false; + bloomFilters.put(fieldInfo, bloomFilter); + } + // Make sure there's at least one doc for this term: + docsEnum = termsEnum.docs(null, docsEnum, 0); + if (docsEnum.nextDoc() != DocsEnum.NO_MORE_DOCS) { + bloomFilter.put(term); + } + } } } @@ -416,57 +483,8 @@ public final class BloomFilterPostingsFormat extends PostingsFormat { private void saveAppropriatelySizedBloomFilter(IndexOutput bloomOutput, BloomFilter bloomFilter, FieldInfo fieldInfo) throws IOException { - -// FuzzySet rightSizedSet = bloomFilterFactory.downsize(fieldInfo, -// bloomFilter); -// if (rightSizedSet == null) { -// rightSizedSet = bloomFilter; -// } -// rightSizedSet.serialize(bloomOutput); BloomFilter.serilaize(bloomFilter, bloomOutput); } } - - class WrappedTermsConsumer extends TermsConsumer { - private TermsConsumer delegateTermsConsumer; - private BloomFilter bloomFilter; - - public WrappedTermsConsumer(TermsConsumer termsConsumer, BloomFilter bloomFilter) { - this.delegateTermsConsumer = termsConsumer; - this.bloomFilter = bloomFilter; - } - - @Override - public PostingsConsumer startTerm(BytesRef text) throws IOException { - return delegateTermsConsumer.startTerm(text); - } - - @Override - public void finishTerm(BytesRef text, TermStats stats) throws IOException { - - // Record this term in our BloomFilter - if (stats.docFreq > 0) { - bloomFilter.put(text); - } - delegateTermsConsumer.finishTerm(text, stats); - } - - @Override - public void finish(long sumTotalTermFreq, long sumDocFreq, int docCount) - throws IOException { - delegateTermsConsumer.finish(sumTotalTermFreq, sumDocFreq, docCount); - } - - @Override - public Comparator getComparator() throws IOException { - return delegateTermsConsumer.getComparator(); - } - - } - - public PostingsFormat getDelegate() { - return this.delegatePostingsFormat; - } - } diff --git a/src/main/java/org/elasticsearch/index/codec/postingsformat/DefaultPostingsFormatProvider.java b/src/main/java/org/elasticsearch/index/codec/postingsformat/DefaultPostingsFormatProvider.java index 42b2306608a..477d0c2b0b3 100644 --- a/src/main/java/org/elasticsearch/index/codec/postingsformat/DefaultPostingsFormatProvider.java +++ b/src/main/java/org/elasticsearch/index/codec/postingsformat/DefaultPostingsFormatProvider.java @@ -21,13 +21,13 @@ package org.elasticsearch.index.codec.postingsformat; import org.apache.lucene.codecs.PostingsFormat; import org.apache.lucene.codecs.blocktree.BlockTreeTermsWriter; -import org.apache.lucene.codecs.lucene41.Lucene41PostingsFormat; +import org.apache.lucene.codecs.lucene50.Lucene50PostingsFormat; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.inject.assistedinject.Assisted; import org.elasticsearch.common.settings.Settings; /** - * The default postingsformat, maps to {@link 
Lucene41PostingsFormat}. + * The default postingsformat, maps to {@link Lucene50PostingsFormat}. *

 * <ul>
 * <li><tt>min_block_size</tt>: the minimum block size the default Lucene term
 * dictionary uses to encode on-disk blocks.</li>
 * <li><tt>max_block_size</tt>: the maximum block size the default Lucene term
 * dictionary uses to encode on-disk blocks.</li>
 * </ul>
@@ -41,14 +41,14 @@ public class DefaultPostingsFormatProvider extends AbstractPostingsFormatProvide private final int minBlockSize; private final int maxBlockSize; - private final Lucene41PostingsFormat postingsFormat; + private final Lucene50PostingsFormat postingsFormat; @Inject public DefaultPostingsFormatProvider(@Assisted String name, @Assisted Settings postingsFormatSettings) { super(name); this.minBlockSize = postingsFormatSettings.getAsInt("min_block_size", BlockTreeTermsWriter.DEFAULT_MIN_BLOCK_SIZE); this.maxBlockSize = postingsFormatSettings.getAsInt("max_block_size", BlockTreeTermsWriter.DEFAULT_MAX_BLOCK_SIZE); - this.postingsFormat = new Lucene41PostingsFormat(minBlockSize, maxBlockSize); + this.postingsFormat = new Lucene50PostingsFormat(minBlockSize, maxBlockSize); } public int minBlockSize() { diff --git a/src/main/java/org/elasticsearch/index/codec/postingsformat/Elasticsearch090PostingsFormat.java b/src/main/java/org/elasticsearch/index/codec/postingsformat/Elasticsearch090PostingsFormat.java index 9ad8a9db355..b533cebd7f8 100644 --- a/src/main/java/org/elasticsearch/index/codec/postingsformat/Elasticsearch090PostingsFormat.java +++ b/src/main/java/org/elasticsearch/index/codec/postingsformat/Elasticsearch090PostingsFormat.java @@ -18,25 +18,30 @@ */ package org.elasticsearch.index.codec.postingsformat; +import com.google.common.base.Predicate; +import com.google.common.base.Predicates; +import com.google.common.collect.Iterators; import org.apache.lucene.codecs.FieldsConsumer; import org.apache.lucene.codecs.FieldsProducer; import org.apache.lucene.codecs.PostingsFormat; -import org.apache.lucene.codecs.TermsConsumer; -import org.apache.lucene.codecs.lucene41.Lucene41PostingsFormat; -import org.apache.lucene.index.FieldInfo; +import org.apache.lucene.codecs.lucene50.Lucene50PostingsFormat; +import org.apache.lucene.index.Fields; +import org.apache.lucene.index.FilterLeafReader; import org.apache.lucene.index.SegmentReadState; import org.apache.lucene.index.SegmentWriteState; +import org.elasticsearch.common.lucene.Lucene; import org.elasticsearch.common.util.BloomFilter; import org.elasticsearch.index.codec.postingsformat.BloomFilterPostingsFormat.BloomFilteredFieldsConsumer; import org.elasticsearch.index.mapper.internal.UidFieldMapper; import java.io.IOException; +import java.util.Iterator; /** * This is the default postings format for Elasticsearch that special cases * the _uid field to use a bloom filter while all other fields - * will use a {@link Lucene41PostingsFormat}. This format will reuse the underlying - * {@link Lucene41PostingsFormat} and its files also for the _uid saving up to + * will use a {@link Lucene50PostingsFormat}. This format will reuse the underlying + * {@link Lucene50PostingsFormat} and its files also for the _uid saving up to * 5 files per segment in the default case.
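For reference, the two settings documented above map one-to-one onto the Lucene 5.0 postings format constructor; a sketch assuming an Elasticsearch Settings instance:

import org.apache.lucene.codecs.blocktree.BlockTreeTermsWriter;
import org.apache.lucene.codecs.lucene50.Lucene50PostingsFormat;
import org.elasticsearch.common.settings.Settings;

class BlockSizeSketch {
    static Lucene50PostingsFormat fromSettings(Settings settings) {
        int minBlockSize = settings.getAsInt("min_block_size", BlockTreeTermsWriter.DEFAULT_MIN_BLOCK_SIZE);
        int maxBlockSize = settings.getAsInt("max_block_size", BlockTreeTermsWriter.DEFAULT_MAX_BLOCK_SIZE);
        // larger blocks trade lookup speed for a smaller on-disk term dictionary
        return new Lucene50PostingsFormat(minBlockSize, maxBlockSize);
    }
}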
*/ public final class Elasticsearch090PostingsFormat extends PostingsFormat { @@ -44,12 +49,21 @@ public final class Elasticsearch090PostingsFormat extends PostingsFormat { public Elasticsearch090PostingsFormat() { super("es090"); - bloomPostings = new BloomFilterPostingsFormat(new Lucene41PostingsFormat(), BloomFilter.Factory.DEFAULT); + Lucene50PostingsFormat delegate = new Lucene50PostingsFormat(); + assert delegate.getName().equals(Lucene.LATEST_POSTINGS_FORMAT); + bloomPostings = new BloomFilterPostingsFormat(delegate, BloomFilter.Factory.DEFAULT); } public PostingsFormat getDefaultWrapped() { return bloomPostings.getDelegate(); } + private static final Predicate UID_FIELD_FILTER = new Predicate() { + + @Override + public boolean apply(String s) { + return UidFieldMapper.NAME.equals(s); + } + }; @Override public FieldsConsumer fieldsConsumer(SegmentWriteState state) throws IOException { @@ -57,17 +71,28 @@ public final class Elasticsearch090PostingsFormat extends PostingsFormat { return new FieldsConsumer() { @Override - public void close() throws IOException { - fieldsConsumer.close(); + public void write(Fields fields) throws IOException { + + Fields maskedFields = new FilterLeafReader.FilterFields(fields) { + @Override + public Iterator iterator() { + return Iterators.filter(this.in.iterator(), Predicates.not(UID_FIELD_FILTER)); + } + }; + fieldsConsumer.getDelegate().write(maskedFields); + maskedFields = new FilterLeafReader.FilterFields(fields) { + @Override + public Iterator iterator() { + return Iterators.singletonIterator(UidFieldMapper.NAME); + } + }; + // only go through bloom for the UID field + fieldsConsumer.write(maskedFields); } @Override - public TermsConsumer addField(FieldInfo field) throws IOException { - if (UidFieldMapper.NAME.equals(field.name)) { - // only go through bloom for the UID field - return fieldsConsumer.addField(field); - } - return fieldsConsumer.getDelegate().addField(field); + public void close() throws IOException { + fieldsConsumer.close(); } }; } diff --git a/src/main/java/org/elasticsearch/index/codec/postingsformat/PostingFormats.java b/src/main/java/org/elasticsearch/index/codec/postingsformat/PostingFormats.java index 96765063d43..c60ebee2493 100644 --- a/src/main/java/org/elasticsearch/index/codec/postingsformat/PostingFormats.java +++ b/src/main/java/org/elasticsearch/index/codec/postingsformat/PostingFormats.java @@ -23,6 +23,7 @@ import com.google.common.collect.ImmutableCollection; import com.google.common.collect.ImmutableMap; import org.apache.lucene.codecs.PostingsFormat; import org.elasticsearch.common.collect.MapBuilder; +import org.elasticsearch.common.lucene.Lucene; import org.elasticsearch.common.util.BloomFilter; /** @@ -54,7 +55,7 @@ public class PostingFormats { builtInPostingFormatsX.put(PostingsFormatService.DEFAULT_FORMAT, new PreBuiltPostingsFormatProvider.Factory(PostingsFormatService.DEFAULT_FORMAT, defaultFormat)); - builtInPostingFormatsX.put("bloom_default", new PreBuiltPostingsFormatProvider.Factory("bloom_default", wrapInBloom(PostingsFormat.forName("Lucene41")))); + builtInPostingFormatsX.put("bloom_default", new PreBuiltPostingsFormatProvider.Factory("bloom_default", wrapInBloom(PostingsFormat.forName(Lucene.LATEST_POSTINGS_FORMAT)))); builtInPostingFormats = builtInPostingFormatsX.immutableMap(); } diff --git a/src/main/java/org/elasticsearch/index/engine/Engine.java b/src/main/java/org/elasticsearch/index/engine/Engine.java index 0d99041bdaf..c44d22864b5 100644 --- 
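Elasticsearch090PostingsFormat above writes twice through FilterLeafReader.FilterFields masks, once without _uid and once with only _uid; both calls land on the pull-based write(Fields) API that replaced the addField/TermsConsumer push chain. A condensed consumer sketch, with delegate as an assumed input:

import java.io.IOException;
import org.apache.lucene.codecs.FieldsConsumer;
import org.apache.lucene.index.Fields;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.util.BytesRef;

class PullWriteSketch extends FieldsConsumer {
    private final FieldsConsumer delegate;

    PullWriteSketch(FieldsConsumer delegate) {
        this.delegate = delegate;
    }

    @Override
    public void write(Fields fields) throws IOException {
        delegate.write(fields); // the delegate owns the underlying files, so it writes first
        for (String field : fields) {
            Terms terms = fields.terms(field);
            if (terms == null) {
                continue;
            }
            TermsEnum termsEnum = terms.iterator(null);
            for (BytesRef term = termsEnum.next(); term != null; term = termsEnum.next()) {
                // per-term side work goes here (the patch feeds the _uid bloom filter)
            }
        }
    }

    @Override
    public void close() throws IOException {
        delegate.close();
    }
}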
a/src/main/java/org/elasticsearch/index/engine/Engine.java +++ b/src/main/java/org/elasticsearch/index/engine/Engine.java @@ -25,6 +25,7 @@ import org.apache.lucene.index.Term; import org.apache.lucene.search.Filter; import org.apache.lucene.search.IndexSearcher; import org.apache.lucene.search.Query; +import org.apache.lucene.search.join.BitDocIdSetFilter; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.bytes.BytesReference; @@ -34,7 +35,6 @@ import org.elasticsearch.common.lucene.uid.Versions; import org.elasticsearch.common.unit.ByteSizeValue; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.index.VersionType; -import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilter; import org.elasticsearch.index.deletionpolicy.SnapshotIndexCommit; import org.elasticsearch.index.mapper.DocumentMapper; import org.elasticsearch.index.mapper.ParseContext.Document; @@ -651,13 +651,13 @@ public interface Engine extends IndexShardComponent, CloseableComponent { private final String[] filteringAliases; private final Filter aliasFilter; private final String[] types; - private final FixedBitSetFilter parentFilter; + private final BitDocIdSetFilter parentFilter; private final Operation.Origin origin; private final long startTime; private long endTime; - public DeleteByQuery(Query query, BytesReference source, @Nullable String[] filteringAliases, @Nullable Filter aliasFilter, FixedBitSetFilter parentFilter, Operation.Origin origin, long startTime, String... types) { + public DeleteByQuery(Query query, BytesReference source, @Nullable String[] filteringAliases, @Nullable Filter aliasFilter, BitDocIdSetFilter parentFilter, Operation.Origin origin, long startTime, String... 
types) { this.query = query; this.source = source; this.types = types; @@ -692,7 +692,7 @@ public interface Engine extends IndexShardComponent, CloseableComponent { return parentFilter != null; } - public FixedBitSetFilter parentFilter() { + public BitDocIdSetFilter parentFilter() { return parentFilter; } diff --git a/src/main/java/org/elasticsearch/index/engine/SegmentsStats.java b/src/main/java/org/elasticsearch/index/engine/SegmentsStats.java index 991c1b67d2f..3daf9129328 100644 --- a/src/main/java/org/elasticsearch/index/engine/SegmentsStats.java +++ b/src/main/java/org/elasticsearch/index/engine/SegmentsStats.java @@ -40,7 +40,7 @@ public class SegmentsStats implements Streamable, ToXContent { private long indexWriterMemoryInBytes; private long indexWriterMaxMemoryInBytes; private long versionMapMemoryInBytes; - private long fixedBitSetMemoryInBytes; + private long bitsetMemoryInBytes; public SegmentsStats() { @@ -63,8 +63,8 @@ public class SegmentsStats implements Streamable, ToXContent { this.versionMapMemoryInBytes += versionMapMemoryInBytes; } - public void addFixedBitSetMemoryInBytes(long fixedBitSetMemoryInBytes) { - this.fixedBitSetMemoryInBytes += fixedBitSetMemoryInBytes; + public void addBitsetMemoryInBytes(long bitsetMemoryInBytes) { + this.bitsetMemoryInBytes += bitsetMemoryInBytes; } public void add(SegmentsStats mergeStats) { @@ -75,7 +75,7 @@ public class SegmentsStats implements Streamable, ToXContent { addIndexWriterMemoryInBytes(mergeStats.indexWriterMemoryInBytes); addIndexWriterMaxMemoryInBytes(mergeStats.indexWriterMaxMemoryInBytes); addVersionMapMemoryInBytes(mergeStats.versionMapMemoryInBytes); - addFixedBitSetMemoryInBytes(mergeStats.fixedBitSetMemoryInBytes); + addBitsetMemoryInBytes(mergeStats.bitsetMemoryInBytes); } /** @@ -130,14 +130,14 @@ public class SegmentsStats implements Streamable, ToXContent { } /** - * Estimation of how much the cached fixed bit sets are taking. (which nested and p/c rely on) + * Estimation of how much the cached bit sets are taking. 
(which nested and p/c rely on) */ - public long getFixedBitSetMemoryInBytes() { - return fixedBitSetMemoryInBytes; + public long getBitsetMemoryInBytes() { + return bitsetMemoryInBytes; } - public ByteSizeValue getFixedBitSetMemory() { - return new ByteSizeValue(fixedBitSetMemoryInBytes); + public ByteSizeValue getBitsetMemory() { + return new ByteSizeValue(bitsetMemoryInBytes); } public static SegmentsStats readSegmentsStats(StreamInput in) throws IOException { @@ -154,7 +154,7 @@ public class SegmentsStats implements Streamable, ToXContent { builder.byteSizeField(Fields.INDEX_WRITER_MEMORY_IN_BYTES, Fields.INDEX_WRITER_MEMORY, indexWriterMemoryInBytes); builder.byteSizeField(Fields.INDEX_WRITER_MAX_MEMORY_IN_BYTES, Fields.INDEX_WRITER_MAX_MEMORY, indexWriterMaxMemoryInBytes); builder.byteSizeField(Fields.VERSION_MAP_MEMORY_IN_BYTES, Fields.VERSION_MAP_MEMORY, versionMapMemoryInBytes); - builder.byteSizeField(Fields.FIXED_BIT_SET_MEMORY_IN_BYTES, Fields.FIXED_BIT_SET, fixedBitSetMemoryInBytes); + builder.byteSizeField(Fields.FIXED_BIT_SET_MEMORY_IN_BYTES, Fields.FIXED_BIT_SET, bitsetMemoryInBytes); builder.endObject(); return builder; } @@ -186,7 +186,7 @@ public class SegmentsStats implements Streamable, ToXContent { indexWriterMaxMemoryInBytes = in.readLong(); } if (in.getVersion().onOrAfter(Version.V_1_4_0_Beta1)) { - fixedBitSetMemoryInBytes = in.readLong(); + bitsetMemoryInBytes = in.readLong(); } } @@ -202,7 +202,7 @@ public class SegmentsStats implements Streamable, ToXContent { out.writeLong(indexWriterMaxMemoryInBytes); } if (out.getVersion().onOrAfter(Version.V_1_4_0_Beta1)) { - out.writeLong(fixedBitSetMemoryInBytes); + out.writeLong(bitsetMemoryInBytes); } } } diff --git a/src/main/java/org/elasticsearch/index/engine/internal/InternalEngine.java b/src/main/java/org/elasticsearch/index/engine/internal/InternalEngine.java index cd105892abd..135abe86512 100644 --- a/src/main/java/org/elasticsearch/index/engine/internal/InternalEngine.java +++ b/src/main/java/org/elasticsearch/index/engine/internal/InternalEngine.java @@ -20,8 +20,21 @@ package org.elasticsearch.index.engine.internal; import com.google.common.collect.Lists; -import org.apache.lucene.index.*; +import org.apache.lucene.index.CorruptIndexException; +import org.apache.lucene.index.IndexReader; +import org.apache.lucene.index.IndexWriter; import org.apache.lucene.index.IndexWriter.IndexReaderWarmer; +import org.apache.lucene.index.IndexWriterConfig; +import org.apache.lucene.index.LeafReader; +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.index.LiveIndexWriterConfig; +import org.apache.lucene.index.MergePolicy; +import org.apache.lucene.index.MultiReader; +import org.apache.lucene.index.SegmentCommitInfo; +import org.apache.lucene.index.SegmentInfos; +import org.apache.lucene.index.SegmentReader; +import org.apache.lucene.index.Term; +import org.apache.lucene.search.FilteredQuery; import org.apache.lucene.search.IndexSearcher; import org.apache.lucene.search.Query; import org.apache.lucene.search.SearcherFactory; @@ -36,7 +49,6 @@ import org.elasticsearch.ExceptionsHelper; import org.elasticsearch.cluster.routing.operation.hash.djb.DjbHashFunction; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.Preconditions; -import org.elasticsearch.common.collect.MapBuilder; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.lease.Releasable; import org.elasticsearch.common.lease.Releasables; @@ -44,7 +56,6 @@ import 
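SegmentsStats above keeps the wire format backwards compatible by gating the renamed counter on the 1.4.0.Beta1 stream version; the general shape of such version-gated serialization, in a stripped-down sketch:

import java.io.IOException;
import org.elasticsearch.Version;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;

class VersionGatedStatsSketch {
    long bitsetMemoryInBytes;

    void readFrom(StreamInput in) throws IOException {
        if (in.getVersion().onOrAfter(Version.V_1_4_0_Beta1)) {
            bitsetMemoryInBytes = in.readLong(); // older senders never include this field
        }
    }

    void writeTo(StreamOutput out) throws IOException {
        if (out.getVersion().onOrAfter(Version.V_1_4_0_Beta1)) {
            out.writeLong(bitsetMemoryInBytes); // never send what an old node cannot parse
        }
    }
}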
org.elasticsearch.common.logging.ESLogger; import org.elasticsearch.common.lucene.LoggerInfoStream; import org.elasticsearch.common.lucene.Lucene; import org.elasticsearch.common.lucene.SegmentReaderUtils; -import org.elasticsearch.common.lucene.search.XFilteredQuery; import org.elasticsearch.common.lucene.uid.Versions; import org.elasticsearch.common.math.MathUtils; import org.elasticsearch.common.settings.Settings; @@ -58,7 +69,25 @@ import org.elasticsearch.index.analysis.AnalysisService; import org.elasticsearch.index.codec.CodecService; import org.elasticsearch.index.deletionpolicy.SnapshotDeletionPolicy; import org.elasticsearch.index.deletionpolicy.SnapshotIndexCommit; -import org.elasticsearch.index.engine.*; +import org.elasticsearch.index.engine.CreateFailedEngineException; +import org.elasticsearch.index.engine.DeleteByQueryFailedEngineException; +import org.elasticsearch.index.engine.DeleteFailedEngineException; +import org.elasticsearch.index.engine.DocumentAlreadyExistsException; +import org.elasticsearch.index.engine.Engine; +import org.elasticsearch.index.engine.EngineAlreadyStartedException; +import org.elasticsearch.index.engine.EngineClosedException; +import org.elasticsearch.index.engine.EngineCreationFailureException; +import org.elasticsearch.index.engine.EngineException; +import org.elasticsearch.index.engine.FlushFailedEngineException; +import org.elasticsearch.index.engine.FlushNotAllowedEngineException; +import org.elasticsearch.index.engine.IndexFailedEngineException; +import org.elasticsearch.index.engine.OptimizeFailedEngineException; +import org.elasticsearch.index.engine.RecoveryEngineException; +import org.elasticsearch.index.engine.RefreshFailedEngineException; +import org.elasticsearch.index.engine.Segment; +import org.elasticsearch.index.engine.SegmentsStats; +import org.elasticsearch.index.engine.SnapshotFailedEngineException; +import org.elasticsearch.index.engine.VersionConflictEngineException; import org.elasticsearch.index.indexing.ShardIndexingService; import org.elasticsearch.index.mapper.Uid; import org.elasticsearch.index.merge.OnGoingMerge; @@ -78,7 +107,13 @@ import org.elasticsearch.indices.warmer.InternalIndicesWarmer; import org.elasticsearch.threadpool.ThreadPool; import java.io.IOException; -import java.util.*; +import java.util.Arrays; +import java.util.Collections; +import java.util.Comparator; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Set; import java.util.concurrent.CopyOnWriteArrayList; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicBoolean; @@ -202,7 +237,6 @@ public class InternalEngine extends AbstractIndexShardComponent implements Engin this.similarityService = similarityService; this.codecService = codecService; this.compoundOnFlush = indexSettings.getAsBoolean(INDEX_COMPOUND_ON_FLUSH, this.compoundOnFlush); - this.checksumOnMerge = indexSettings.getAsBoolean(INDEX_CHECKSUM_ON_MERGE, this.checksumOnMerge); this.indexConcurrency = indexSettings.getAsInt(INDEX_INDEX_CONCURRENCY, Math.max(IndexWriterConfig.DEFAULT_MAX_THREAD_STATES, (int) (EsExecutors.boundedNumberOfProcessors(indexSettings) * 0.65))); this.versionMap = new LiveVersionMap(); this.dirtyLocks = new Object[indexConcurrency * 50]; // we multiply it to have enough... 
@@ -668,11 +702,11 @@ public class InternalEngine extends AbstractIndexShardComponent implements Engin Query query; if (delete.nested() && delete.aliasFilter() != null) { - query = new IncludeNestedDocsQuery(new XFilteredQuery(delete.query(), delete.aliasFilter()), delete.parentFilter()); + query = new IncludeNestedDocsQuery(new FilteredQuery(delete.query(), delete.aliasFilter()), delete.parentFilter()); } else if (delete.nested()) { query = new IncludeNestedDocsQuery(delete.query(), delete.parentFilter()); } else if (delete.aliasFilter() != null) { - query = new XFilteredQuery(delete.query(), delete.aliasFilter()); + query = new FilteredQuery(delete.query(), delete.aliasFilter()); } else { query = delete.query(); } @@ -1174,7 +1208,7 @@ public class InternalEngine extends AbstractIndexShardComponent implements Engin return t; } - private static long getReaderRamBytesUsed(AtomicReaderContext reader) { + private static long getReaderRamBytesUsed(LeafReaderContext reader) { final SegmentReader segmentReader = SegmentReaderUtils.segmentReader(reader.reader()); return segmentReader.ramBytesUsed(); } @@ -1185,7 +1219,7 @@ public class InternalEngine extends AbstractIndexShardComponent implements Engin ensureOpen(); try (final Searcher searcher = acquireSearcher("segments_stats")) { SegmentsStats stats = new SegmentsStats(); - for (AtomicReaderContext reader : searcher.reader().leaves()) { + for (LeafReaderContext reader : searcher.reader().leaves()) { stats.add(1, getReaderRamBytesUsed(reader)); } stats.addVersionMapMemoryInBytes(versionMap.ramBytesUsed()); @@ -1205,7 +1239,7 @@ public class InternalEngine extends AbstractIndexShardComponent implements Engin // first, go over and compute the search ones... Searcher searcher = acquireSearcher("segments"); try { - for (AtomicReaderContext reader : searcher.reader().leaves()) { + for (LeafReaderContext reader : searcher.reader().leaves()) { assert reader.reader() instanceof SegmentReader; SegmentCommitInfo info = SegmentReaderUtils.segmentReader(reader.reader()).getSegmentInfo(); assert !segments.containsKey(info.info.name); @@ -1340,7 +1374,7 @@ public class InternalEngine extends AbstractIndexShardComponent implements Engin // the shard is initializing if (Lucene.isCorruptionException(failure)) { try { - store.markStoreCorrupted(ExceptionsHelper.unwrap(failure, CorruptIndexException.class)); + store.markStoreCorrupted(ExceptionsHelper.unwrapCorruption(failure)); } catch (IOException e) { logger.warn("Couldn't mark store corrupted", e); } @@ -1385,7 +1419,7 @@ public class InternalEngine extends AbstractIndexShardComponent implements Engin /** * Returns whether a leaf reader comes from a merge (versus flush or addIndexes). */ - private static boolean isMergedSegment(AtomicReader reader) { + private static boolean isMergedSegment(LeafReader reader) { // We expect leaves to be segment readers final Map<String, String> diagnostics = SegmentReaderUtils.segmentReader(reader).getSegmentInfo().info.getDiagnostics(); final String source = diagnostics.get(IndexWriter.SOURCE); @@ -1401,7 +1435,8 @@ public class InternalEngine extends AbstractIndexShardComponent implements Engin IndexWriter.unlock(store.directory()); } boolean create = !Lucene.indexExists(store.directory()); - IndexWriterConfig config = new IndexWriterConfig(Lucene.VERSION, analysisService.defaultIndexAnalyzer()); + IndexWriterConfig config = new IndexWriterConfig(analysisService.defaultIndexAnalyzer()); + config.setCommitOnClose(false); // we by default don't commit on close config.setOpenMode(create ?
IndexWriterConfig.OpenMode.CREATE : IndexWriterConfig.OpenMode.APPEND); config.setIndexDeletionPolicy(deletionPolicy); config.setInfoStream(new LoggerInfoStream(indexSettings, shardId)); @@ -1422,12 +1457,11 @@ public class InternalEngine extends AbstractIndexShardComponent implements Engin * in combination with the default writelock timeout*/ config.setWriteLockTimeout(5000); config.setUseCompoundFile(this.compoundOnFlush); - config.setCheckIntegrityAtMerge(checksumOnMerge); // Warm-up hook for newly-merged segments. Warming up segments here is better since it will be performed at the end // of the merge operation and won't slow down _refresh config.setMergedSegmentWarmer(new IndexReaderWarmer() { @Override - public void warm(AtomicReader reader) throws IOException { + public void warm(LeafReader reader) throws IOException { try { assert isMergedSegment(reader); if (warmer != null) { @@ -1457,7 +1491,6 @@ public class InternalEngine extends AbstractIndexShardComponent implements Engin public static final String INDEX_INDEX_CONCURRENCY = "index.index_concurrency"; public static final String INDEX_COMPOUND_ON_FLUSH = "index.compound_on_flush"; - public static final String INDEX_CHECKSUM_ON_MERGE = "index.checksum_on_merge"; public static final String INDEX_GC_DELETES = "index.gc_deletes"; public static final String INDEX_FAIL_ON_MERGE_FAILURE = "index.fail_on_merge_failure"; public static final String INDEX_FAIL_ON_CORRUPTION = "index.fail_on_corruption"; @@ -1479,13 +1512,6 @@ public class InternalEngine extends AbstractIndexShardComponent implements Engin InternalEngine.this.compoundOnFlush = compoundOnFlush; indexWriter.getConfig().setUseCompoundFile(compoundOnFlush); } - - final boolean checksumOnMerge = settings.getAsBoolean(INDEX_CHECKSUM_ON_MERGE, InternalEngine.this.checksumOnMerge); - if (checksumOnMerge != InternalEngine.this.checksumOnMerge) { - logger.info("updating {} from [{}] to [{}]", InternalEngine.INDEX_CHECKSUM_ON_MERGE, InternalEngine.this.checksumOnMerge, checksumOnMerge); - InternalEngine.this.checksumOnMerge = checksumOnMerge; - indexWriter.getConfig().setCheckIntegrityAtMerge(checksumOnMerge); - } InternalEngine.this.failEngineOnCorruption = settings.getAsBoolean(INDEX_FAIL_ON_CORRUPTION, InternalEngine.this.failEngineOnCorruption); int indexConcurrency = settings.getAsInt(INDEX_INDEX_CONCURRENCY, InternalEngine.this.indexConcurrency); @@ -1602,13 +1628,13 @@ public class InternalEngine extends AbstractIndexShardComponent implements Engin try (final Searcher currentSearcher = acquireSearcher("search_factory")) { // figure out the newSearcher, with only the new readers that are relevant for us List readers = Lists.newArrayList(); - for (AtomicReaderContext newReaderContext : searcher.getIndexReader().leaves()) { + for (LeafReaderContext newReaderContext : searcher.getIndexReader().leaves()) { if (isMergedSegment(newReaderContext.reader())) { // merged segments are already handled by IndexWriterConfig.setMergedSegmentWarmer continue; } boolean found = false; - for (AtomicReaderContext currentReaderContext : currentSearcher.reader().leaves()) { + for (LeafReaderContext currentReaderContext : currentSearcher.reader().leaves()) { if (currentReaderContext.reader().getCoreCacheKey().equals(newReaderContext.reader().getCoreCacheKey())) { found = true; break; diff --git a/src/main/java/org/elasticsearch/index/engine/internal/LiveVersionMap.java b/src/main/java/org/elasticsearch/index/engine/internal/LiveVersionMap.java index 29dc863db03..5407877c18a 100644 --- 
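The writer setup above reflects two Lucene 5 API changes: IndexWriterConfig no longer takes a Version argument, and commit-on-close must be disabled explicitly for the new atomic-commit handling. A condensed sketch with analyzer, deletionPolicy, and create as assumed inputs:

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.index.IndexDeletionPolicy;
import org.apache.lucene.index.IndexWriterConfig;

class WriterConfigSketch {
    static IndexWriterConfig configure(Analyzer analyzer, IndexDeletionPolicy deletionPolicy, boolean create) {
        IndexWriterConfig config = new IndexWriterConfig(analyzer); // no Version parameter anymore
        config.setCommitOnClose(false); // commits are issued explicitly, never implicitly on close
        config.setOpenMode(create ? IndexWriterConfig.OpenMode.CREATE : IndexWriterConfig.OpenMode.APPEND);
        config.setIndexDeletionPolicy(deletionPolicy);
        return config;
    }
}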
a/src/main/java/org/elasticsearch/index/engine/internal/LiveVersionMap.java +++ b/src/main/java/org/elasticsearch/index/engine/internal/LiveVersionMap.java @@ -20,6 +20,7 @@ package org.elasticsearch.index.engine.internal; import java.io.IOException; +import java.util.Collections; import java.util.Map; import java.util.concurrent.atomic.AtomicLong; @@ -248,4 +249,10 @@ class LiveVersionMap implements ReferenceManager.RefreshListener, Accountable { long ramBytesUsedForRefresh() { return ramBytesUsedCurrent.get(); } + + @Override + public Iterable getChildResources() { + // TODO: useful to break down RAM usage here? + return Collections.emptyList(); + } } diff --git a/src/main/java/org/elasticsearch/index/engine/internal/VersionValue.java b/src/main/java/org/elasticsearch/index/engine/internal/VersionValue.java index cb4bd27c657..b9876805ddf 100644 --- a/src/main/java/org/elasticsearch/index/engine/internal/VersionValue.java +++ b/src/main/java/org/elasticsearch/index/engine/internal/VersionValue.java @@ -19,6 +19,8 @@ package org.elasticsearch.index.engine.internal; +import java.util.Collections; + import org.apache.lucene.util.Accountable; import org.apache.lucene.util.RamUsageEstimator; import org.elasticsearch.Version; @@ -54,4 +56,9 @@ class VersionValue implements Accountable { public long ramBytesUsed() { return RamUsageEstimator.NUM_BYTES_OBJECT_HEADER + RamUsageEstimator.NUM_BYTES_LONG + RamUsageEstimator.NUM_BYTES_OBJECT_REF + translogLocation.ramBytesUsed(); } + + @Override + public Iterable getChildResources() { + return Collections.emptyList(); + } } diff --git a/src/main/java/org/elasticsearch/index/fielddata/AtomicFieldData.java b/src/main/java/org/elasticsearch/index/fielddata/AtomicFieldData.java index a14fcb634fe..b119d3a3221 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/AtomicFieldData.java +++ b/src/main/java/org/elasticsearch/index/fielddata/AtomicFieldData.java @@ -23,7 +23,7 @@ import org.apache.lucene.util.Accountable; import org.elasticsearch.common.lease.Releasable; /** - * The thread safe {@link org.apache.lucene.index.AtomicReader} level cache of the data. + * The thread safe {@link org.apache.lucene.index.LeafReader} level cache of the data. 
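LiveVersionMap and VersionValue above implement the new Accountable.getChildResources hook. A sketch of how a component with sub-structures reports a named breakdown; the field names are illustrative, and the generic signature follows this snapshot's usage:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import org.apache.lucene.util.Accountable;
import org.apache.lucene.util.Accountables;

class ChildResourcesSketch {
    long termsBytes;
    long postingsBytes;

    public long ramBytesUsed() {
        return termsBytes + postingsBytes;
    }

    // Mirrors Accountable.getChildResources(): expose a named breakdown of the
    // total so the detailed memory-usage API can render a resource tree.
    public Iterable<Accountable> getChildResources() {
        List<Accountable> resources = new ArrayList<>();
        resources.add(Accountables.namedAccountable("terms", termsBytes));
        resources.add(Accountables.namedAccountable("postings", postingsBytes));
        return Collections.unmodifiableList(resources);
    }
}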
*/ public interface AtomicFieldData extends Accountable, Releasable { diff --git a/src/main/java/org/elasticsearch/index/fielddata/IndexFieldData.java b/src/main/java/org/elasticsearch/index/fielddata/IndexFieldData.java index 7c8a6d709a6..250f34766ef 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/IndexFieldData.java +++ b/src/main/java/org/elasticsearch/index/fielddata/IndexFieldData.java @@ -19,17 +19,17 @@ package org.elasticsearch.index.fielddata; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.IndexReader; import org.apache.lucene.search.*; +import org.apache.lucene.search.join.BitDocIdSetFilter; +import org.apache.lucene.util.BitDocIdSet; import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.BytesRefBuilder; -import org.apache.lucene.util.FixedBitSet; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.Index; import org.elasticsearch.index.IndexComponent; -import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilter; import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested; import org.elasticsearch.index.mapper.FieldMapper; import org.elasticsearch.index.mapper.MapperService; @@ -41,7 +41,7 @@ import java.io.IOException; /** * Thread-safe utility class that allows to get per-segment values via the - * {@link #load(AtomicReaderContext)} method. + * {@link #load(LeafReaderContext)} method. */ public interface IndexFieldData extends IndexComponent { @@ -87,12 +87,12 @@ public interface IndexFieldData extends IndexCompone /** * Loads the atomic field data for the reader, possibly cached. */ - FD load(AtomicReaderContext context); + FD load(LeafReaderContext context); /** * Loads directly the atomic field data for the reader, ignoring any caching involved. */ - FD loadDirect(AtomicReaderContext context) throws Exception; + FD loadDirect(LeafReaderContext context) throws Exception; /** * Comparator used for sorting. @@ -129,25 +129,25 @@ public interface IndexFieldData extends IndexCompone * parent + 1, or 0 if there is no previous parent, and R (excluded). */ public static class Nested { - private final FixedBitSetFilter rootFilter, innerFilter; + private final BitDocIdSetFilter rootFilter, innerFilter; - public Nested(FixedBitSetFilter rootFilter, FixedBitSetFilter innerFilter) { + public Nested(BitDocIdSetFilter rootFilter, BitDocIdSetFilter innerFilter) { this.rootFilter = rootFilter; this.innerFilter = innerFilter; } /** - * Get a {@link FixedBitSet} that matches the root documents. + * Get a {@link BitDocIdSet} that matches the root documents. */ - public FixedBitSet rootDocs(AtomicReaderContext ctx) throws IOException { - return rootFilter.getDocIdSet(ctx, null); + public BitDocIdSet rootDocs(LeafReaderContext ctx) throws IOException { + return rootFilter.getDocIdSet(ctx); } /** - * Get a {@link FixedBitSet} that matches the inner documents. + * Get a {@link BitDocIdSet} that matches the inner documents. 
*/ - public FixedBitSet innerDocs(AtomicReaderContext ctx) throws IOException { - return innerFilter.getDocIdSet(ctx, null); + public BitDocIdSet innerDocs(LeafReaderContext ctx) throws IOException { + return innerFilter.getDocIdSet(ctx); } } diff --git a/src/main/java/org/elasticsearch/index/fielddata/IndexFieldDataCache.java b/src/main/java/org/elasticsearch/index/fielddata/IndexFieldDataCache.java index fe9700265ee..a2b73221d91 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/IndexFieldDataCache.java +++ b/src/main/java/org/elasticsearch/index/fielddata/IndexFieldDataCache.java @@ -19,7 +19,7 @@ package org.elasticsearch.index.fielddata; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.IndexReader; import org.apache.lucene.util.Accountable; import org.elasticsearch.index.mapper.FieldMapper; @@ -29,7 +29,7 @@ import org.elasticsearch.index.mapper.FieldMapper; */ public interface IndexFieldDataCache { - > FD load(AtomicReaderContext context, IFD indexFieldData) throws Exception; + > FD load(LeafReaderContext context, IFD indexFieldData) throws Exception; > IFD load(final IndexReader indexReader, final IFD indexFieldData) throws Exception; @@ -55,7 +55,7 @@ public interface IndexFieldDataCache { class None implements IndexFieldDataCache { @Override - public > FD load(AtomicReaderContext context, IFD indexFieldData) throws Exception { + public > FD load(LeafReaderContext context, IFD indexFieldData) throws Exception { return indexFieldData.loadDirect(context); } diff --git a/src/main/java/org/elasticsearch/index/fielddata/NumericDoubleValues.java b/src/main/java/org/elasticsearch/index/fielddata/NumericDoubleValues.java index 830126237e1..2cbbb0064f4 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/NumericDoubleValues.java +++ b/src/main/java/org/elasticsearch/index/fielddata/NumericDoubleValues.java @@ -19,6 +19,8 @@ package org.elasticsearch.index.fielddata; +import org.apache.lucene.index.NumericDocValues; + /** * A per-document numeric value. */ @@ -35,4 +37,26 @@ public abstract class NumericDoubleValues { * @return numeric value */ public abstract double get(int docID); + + // TODO: this interaction with sort comparators is really ugly... + /** Returns numeric docvalues view of raw double bits */ + public NumericDocValues getRawDoubleValues() { + return new NumericDocValues() { + @Override + public long get(int docID) { + return Double.doubleToRawLongBits(NumericDoubleValues.this.get(docID)); + } + }; + } + + // yes... this is doing what the previous code was doing... 
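// Aside (not part of this patch; the class and values below are illustrative): the
// raw-bits view above is lossless because the Lucene 5 numeric comparators are
// expected to decode the long back with Double.longBitsToDouble, so this
// NumericDocValues view is only a transport encoding, never compared as a number.
// A minimal sketch of that round trip:
class RawDoubleBitsSketch {
    public static void main(String[] args) {
        for (double d : new double[] { -1.5, 0.0, Math.PI, Double.NaN }) {
            long bits = Double.doubleToRawLongBits(d);   // what getRawDoubleValues() exposes
            double back = Double.longBitsToDouble(bits); // what the comparator decodes
            // NaN != NaN, so verify the round trip on the bit patterns rather than with ==
            assert Double.doubleToRawLongBits(back) == bits;
        }
        System.out.println("raw double bits survive the round trip");
    }
}
// The float variant below does the same through Float.floatToRawIntBits.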
+ /** Returns numeric docvalues view of raw float bits */ + public NumericDocValues getRawFloatValues() { + return new NumericDocValues() { + @Override + public long get(int docID) { + return Float.floatToRawIntBits((float)NumericDoubleValues.this.get(docID)); + } + }; + } } diff --git a/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/BytesRefFieldComparatorSource.java b/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/BytesRefFieldComparatorSource.java index 276a091966d..7e64720c581 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/BytesRefFieldComparatorSource.java +++ b/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/BytesRefFieldComparatorSource.java @@ -19,7 +19,7 @@ package org.elasticsearch.index.fielddata.fieldcomparator; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.BinaryDocValues; import org.apache.lucene.index.RandomAccessOrds; import org.apache.lucene.index.SortedDocValues; @@ -27,8 +27,8 @@ import org.apache.lucene.search.FieldComparator; import org.apache.lucene.search.Scorer; import org.apache.lucene.search.SortField; import org.apache.lucene.util.Bits; +import org.apache.lucene.util.BitSet; import org.apache.lucene.util.BytesRef; -import org.apache.lucene.util.FixedBitSet; import org.elasticsearch.index.fielddata.IndexFieldData; import org.elasticsearch.index.fielddata.IndexOrdinalsFieldData; import org.elasticsearch.index.fielddata.SortedBinaryDocValues; @@ -58,7 +58,7 @@ public class BytesRefFieldComparatorSource extends IndexFieldData.XFieldComparat return SortField.Type.STRING; } - protected SortedBinaryDocValues getValues(AtomicReaderContext context) { + protected SortedBinaryDocValues getValues(LeafReaderContext context) { return indexFieldData.load(context).getBytesValues(); } @@ -74,14 +74,14 @@ public class BytesRefFieldComparatorSource extends IndexFieldData.XFieldComparat return new FieldComparator.TermOrdValComparator(numHits, null, sortMissingLast) { @Override - protected SortedDocValues getSortedDocValues(AtomicReaderContext context, String field) throws IOException { + protected SortedDocValues getSortedDocValues(LeafReaderContext context, String field) throws IOException { final RandomAccessOrds values = ((IndexOrdinalsFieldData) indexFieldData).load(context).getOrdinalsValues(); final SortedDocValues selectedValues; if (nested == null) { selectedValues = sortMode.select(values); } else { - final FixedBitSet rootDocs = nested.rootDocs(context); - final FixedBitSet innerDocs = nested.innerDocs(context); + final BitSet rootDocs = nested.rootDocs(context).bits(); + final BitSet innerDocs = nested.innerDocs(context).bits(); selectedValues = sortMode.select(values, rootDocs, innerDocs); } if (sortMissingFirst(missingValue) || sortMissingLast(missingValue)) { @@ -110,21 +110,21 @@ public class BytesRefFieldComparatorSource extends IndexFieldData.XFieldComparat return new FieldComparator.TermValComparator(numHits, null, sortMissingLast) { @Override - protected BinaryDocValues getBinaryDocValues(AtomicReaderContext context, String field) throws IOException { + protected BinaryDocValues getBinaryDocValues(LeafReaderContext context, String field) throws IOException { final SortedBinaryDocValues values = getValues(context); final BinaryDocValues selectedValues; if (nested == null) { selectedValues = sortMode.select(values, nonNullMissingBytes); } else { - final FixedBitSet rootDocs = 
nested.rootDocs(context); - final FixedBitSet innerDocs = nested.innerDocs(context); + final BitSet rootDocs = nested.rootDocs(context).bits(); + final BitSet innerDocs = nested.innerDocs(context).bits(); selectedValues = sortMode.select(values, nonNullMissingBytes, rootDocs, innerDocs, context.reader().maxDoc()); } return selectedValues; } @Override - protected Bits getDocsWithField(AtomicReaderContext context, String field) throws IOException { + protected Bits getDocsWithField(LeafReaderContext context, String field) throws IOException { return new Bits.MatchAllBits(context.reader().maxDoc()); } diff --git a/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/DoubleValuesComparatorSource.java b/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/DoubleValuesComparatorSource.java index 321dc5b53d1..bae774c1681 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/DoubleValuesComparatorSource.java +++ b/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/DoubleValuesComparatorSource.java @@ -19,12 +19,12 @@ package org.elasticsearch.index.fielddata.fieldcomparator; -import org.apache.lucene.index.AtomicReaderContext; -import org.apache.lucene.search.FieldCache.Doubles; +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.index.NumericDocValues; import org.apache.lucene.search.FieldComparator; import org.apache.lucene.search.Scorer; import org.apache.lucene.search.SortField; -import org.apache.lucene.util.FixedBitSet; +import org.apache.lucene.util.BitSet; import org.elasticsearch.common.Nullable; import org.elasticsearch.index.fielddata.IndexFieldData; import org.elasticsearch.index.fielddata.IndexNumericFieldData; @@ -56,7 +56,7 @@ public class DoubleValuesComparatorSource extends IndexFieldData.XFieldComparato return SortField.Type.DOUBLE; } - protected SortedNumericDoubleValues getValues(AtomicReaderContext context) { + protected SortedNumericDoubleValues getValues(LeafReaderContext context) { return indexFieldData.load(context).getDoubleValues(); } @@ -69,24 +69,19 @@ public class DoubleValuesComparatorSource extends IndexFieldData.XFieldComparato final double dMissingValue = (Double) missingObject(missingValue, reversed); // NOTE: it's important to pass null as a missing value in the constructor so that // the comparator doesn't check docsWithField since we replace missing values in select() - return new FieldComparator.DoubleComparator(numHits, null, null, null) { + return new FieldComparator.DoubleComparator(numHits, null, null) { @Override - protected Doubles getDoubleValues(AtomicReaderContext context, String field) throws IOException { + protected NumericDocValues getNumericDocValues(LeafReaderContext context, String field) throws IOException { final SortedNumericDoubleValues values = getValues(context); final NumericDoubleValues selectedValues; if (nested == null) { selectedValues = sortMode.select(values, dMissingValue); } else { - final FixedBitSet rootDocs = nested.rootDocs(context); - final FixedBitSet innerDocs = nested.innerDocs(context); + final BitSet rootDocs = nested.rootDocs(context).bits(); + final BitSet innerDocs = nested.innerDocs(context).bits(); selectedValues = sortMode.select(values, dMissingValue, rootDocs, innerDocs, context.reader().maxDoc()); } - return new Doubles() { - @Override - public double get(int docID) { - return selectedValues.get(docID); - } - }; + return selectedValues.getRawDoubleValues(); } @Override public void setScorer(Scorer scorer) { diff --git 
a/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/FloatValuesComparatorSource.java b/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/FloatValuesComparatorSource.java index e663ee5e30b..4fb0f552ad3 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/FloatValuesComparatorSource.java +++ b/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/FloatValuesComparatorSource.java @@ -18,11 +18,11 @@ */ package org.elasticsearch.index.fielddata.fieldcomparator; -import org.apache.lucene.index.AtomicReaderContext; -import org.apache.lucene.search.FieldCache.Floats; +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.index.NumericDocValues; import org.apache.lucene.search.FieldComparator; import org.apache.lucene.search.SortField; -import org.apache.lucene.util.FixedBitSet; +import org.apache.lucene.util.BitSet; import org.elasticsearch.common.Nullable; import org.elasticsearch.index.fielddata.IndexFieldData; import org.elasticsearch.index.fielddata.IndexNumericFieldData; @@ -61,24 +61,19 @@ public class FloatValuesComparatorSource extends IndexFieldData.XFieldComparator final float dMissingValue = (Float) missingObject(missingValue, reversed); // NOTE: it's important to pass null as a missing value in the constructor so that // the comparator doesn't check docsWithField since we replace missing values in select() - return new FieldComparator.FloatComparator(numHits, null, null, null) { + return new FieldComparator.FloatComparator(numHits, null, null) { @Override - protected Floats getFloatValues(AtomicReaderContext context, String field) throws IOException { + protected NumericDocValues getNumericDocValues(LeafReaderContext context, String field) throws IOException { final SortedNumericDoubleValues values = indexFieldData.load(context).getDoubleValues(); final NumericDoubleValues selectedValues; if (nested == null) { selectedValues = sortMode.select(values, dMissingValue); } else { - final FixedBitSet rootDocs = nested.rootDocs(context); - final FixedBitSet innerDocs = nested.innerDocs(context); + final BitSet rootDocs = nested.rootDocs(context).bits(); + final BitSet innerDocs = nested.innerDocs(context).bits(); selectedValues = sortMode.select(values, dMissingValue, rootDocs, innerDocs, context.reader().maxDoc()); } - return new Floats() { - @Override - public float get(int docID) { - return (float) selectedValues.get(docID); - } - }; + return selectedValues.getRawFloatValues(); } }; } diff --git a/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/LongValuesComparatorSource.java b/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/LongValuesComparatorSource.java index 86d649bd276..af0940d16d4 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/LongValuesComparatorSource.java +++ b/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/LongValuesComparatorSource.java @@ -18,13 +18,12 @@ */ package org.elasticsearch.index.fielddata.fieldcomparator; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.NumericDocValues; import org.apache.lucene.index.SortedNumericDocValues; -import org.apache.lucene.search.FieldCache.Longs; import org.apache.lucene.search.FieldComparator; import org.apache.lucene.search.SortField; -import org.apache.lucene.util.FixedBitSet; +import org.apache.lucene.util.BitSet; import org.elasticsearch.common.Nullable; import 
org.elasticsearch.index.fielddata.IndexFieldData; import org.elasticsearch.index.fielddata.IndexNumericFieldData; @@ -61,24 +60,19 @@ public class LongValuesComparatorSource extends IndexFieldData.XFieldComparatorS final Long dMissingValue = (Long) missingObject(missingValue, reversed); // NOTE: it's important to pass null as a missing value in the constructor so that // the comparator doesn't check docsWithField since we replace missing values in select() - return new FieldComparator.LongComparator(numHits, null, null, null) { + return new FieldComparator.LongComparator(numHits, null, null) { @Override - protected Longs getLongValues(AtomicReaderContext context, String field) throws IOException { + protected NumericDocValues getNumericDocValues(LeafReaderContext context, String field) throws IOException { final SortedNumericDocValues values = indexFieldData.load(context).getLongValues(); final NumericDocValues selectedValues; if (nested == null) { selectedValues = sortMode.select(values, dMissingValue); } else { - final FixedBitSet rootDocs = nested.rootDocs(context); - final FixedBitSet innerDocs = nested.innerDocs(context); + final BitSet rootDocs = nested.rootDocs(context).bits(); + final BitSet innerDocs = nested.innerDocs(context).bits(); selectedValues = sortMode.select(values, dMissingValue, rootDocs, innerDocs, context.reader().maxDoc()); } - return new Longs() { - @Override - public long get(int docID) { - return selectedValues.get(docID); - } - }; + return selectedValues; } }; diff --git a/src/main/java/org/elasticsearch/index/fielddata/ordinals/GlobalOrdinalsIndexFieldData.java b/src/main/java/org/elasticsearch/index/fielddata/ordinals/GlobalOrdinalsIndexFieldData.java index 1208055b898..b41a0f6263b 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/ordinals/GlobalOrdinalsIndexFieldData.java +++ b/src/main/java/org/elasticsearch/index/fielddata/ordinals/GlobalOrdinalsIndexFieldData.java @@ -18,7 +18,9 @@ */ package org.elasticsearch.index.fielddata.ordinals; -import org.apache.lucene.index.AtomicReaderContext; +import java.util.Collections; + +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.IndexReader; import org.apache.lucene.util.Accountable; import org.elasticsearch.common.Nullable; @@ -47,7 +49,7 @@ public abstract class GlobalOrdinalsIndexFieldData extends AbstractIndexComponen } @Override - public AtomicOrdinalsFieldData loadDirect(AtomicReaderContext context) throws Exception { + public AtomicOrdinalsFieldData loadDirect(LeafReaderContext context) throws Exception { return load(context); } @@ -91,4 +93,9 @@ public abstract class GlobalOrdinalsIndexFieldData extends AbstractIndexComponen return memorySizeInBytes; } + @Override + public Iterable getChildResources() { + // TODO: break down ram usage? 
+ return Collections.emptyList(); + } } diff --git a/src/main/java/org/elasticsearch/index/fielddata/ordinals/InternalGlobalOrdinalsIndexFieldData.java b/src/main/java/org/elasticsearch/index/fielddata/ordinals/InternalGlobalOrdinalsIndexFieldData.java index 1802cbf379e..b1c275f07f8 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/ordinals/InternalGlobalOrdinalsIndexFieldData.java +++ b/src/main/java/org/elasticsearch/index/fielddata/ordinals/InternalGlobalOrdinalsIndexFieldData.java @@ -18,9 +18,10 @@ */ package org.elasticsearch.index.fielddata.ordinals; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.MultiDocValues.OrdinalMap; import org.apache.lucene.index.RandomAccessOrds; +import org.apache.lucene.util.Accountable; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.Index; import org.elasticsearch.index.fielddata.AtomicOrdinalsFieldData; @@ -44,7 +45,7 @@ final class InternalGlobalOrdinalsIndexFieldData extends GlobalOrdinalsIndexFiel } @Override - public AtomicOrdinalsFieldData load(AtomicReaderContext context) { + public AtomicOrdinalsFieldData load(LeafReaderContext context) { return atomicReaders[context.ord]; } @@ -79,6 +80,11 @@ final class InternalGlobalOrdinalsIndexFieldData extends GlobalOrdinalsIndexFiel return afd.ramBytesUsed(); } + @Override + public Iterable getChildResources() { + return afd.getChildResources(); + } + @Override public void close() { } diff --git a/src/main/java/org/elasticsearch/index/fielddata/ordinals/MultiOrdinals.java b/src/main/java/org/elasticsearch/index/fielddata/ordinals/MultiOrdinals.java index 946f2500c80..2fa794c25da 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/ordinals/MultiOrdinals.java +++ b/src/main/java/org/elasticsearch/index/fielddata/ordinals/MultiOrdinals.java @@ -19,9 +19,14 @@ package org.elasticsearch.index.fielddata.ordinals; +import java.util.ArrayList; +import java.util.List; + import org.apache.lucene.index.DocValues; import org.apache.lucene.index.RandomAccessOrds; import org.apache.lucene.index.SortedDocValues; +import org.apache.lucene.util.Accountable; +import org.apache.lucene.util.Accountables; import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.LongsRef; import org.apache.lucene.util.packed.PackedInts; @@ -85,6 +90,14 @@ public class MultiOrdinals extends Ordinals { return endOffsets.ramBytesUsed() + ords.ramBytesUsed(); } + @Override + public Iterable getChildResources() { + List resources = new ArrayList<>(); + resources.add(Accountables.namedAccountable("offsets", endOffsets)); + resources.add(Accountables.namedAccountable("ordinals", ords)); + return resources; + } + @Override public RandomAccessOrds ordinals(ValuesHolder values) { if (multiValued) { diff --git a/src/main/java/org/elasticsearch/index/fielddata/ordinals/OrdinalsBuilder.java b/src/main/java/org/elasticsearch/index/fielddata/ordinals/OrdinalsBuilder.java index f01deea3602..7874313b5e9 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/ordinals/OrdinalsBuilder.java +++ b/src/main/java/org/elasticsearch/index/fielddata/ordinals/OrdinalsBuilder.java @@ -31,7 +31,6 @@ import org.elasticsearch.common.settings.Settings; import java.io.Closeable; import java.io.IOException; import java.util.Arrays; -import java.util.Comparator; /** * Simple class to build document ID <-> ordinal mapping. 
Note: Ordinals are @@ -379,10 +378,10 @@ public final class OrdinalsBuilder implements Closeable { } /** - * Builds a {@link FixedBitSet} where each documents bit is that that has one or more ordinals associated with it. + * Builds a {@link BitSet} with a bit set for each document that has one or more ordinals associated with it. * if every document has an ordinal associated with it this method returns null */ - public FixedBitSet buildDocsWithValuesSet() { + public BitSet buildDocsWithValuesSet() { if (numDocsWithValue == maxDoc) { return null; } @@ -479,11 +478,6 @@ public final class OrdinalsBuilder implements Closeable { } return ref; } - - @Override - public Comparator getComparator() { - return termsEnum.getComparator(); - } }; } diff --git a/src/main/java/org/elasticsearch/index/fielddata/ordinals/SinglePackedOrdinals.java b/src/main/java/org/elasticsearch/index/fielddata/ordinals/SinglePackedOrdinals.java index 959ad18e528..950820cfce5 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/ordinals/SinglePackedOrdinals.java +++ b/src/main/java/org/elasticsearch/index/fielddata/ordinals/SinglePackedOrdinals.java @@ -19,9 +19,13 @@ package org.elasticsearch.index.fielddata.ordinals; +import java.util.Collections; + import org.apache.lucene.index.DocValues; import org.apache.lucene.index.RandomAccessOrds; import org.apache.lucene.index.SortedDocValues; +import org.apache.lucene.util.Accountable; +import org.apache.lucene.util.Accountables; import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.RamUsageEstimator; import org.apache.lucene.util.packed.PackedInts; @@ -48,6 +52,11 @@ public class SinglePackedOrdinals extends Ordinals { return RamUsageEstimator.NUM_BYTES_OBJECT_REF + reader.ramBytesUsed(); } + @Override + public Iterable getChildResources() { + return Collections.singleton(Accountables.namedAccountable("reader", reader)); + } + @Override public RandomAccessOrds ordinals(ValuesHolder values) { return (RandomAccessOrds) DocValues.singleton(new Docs(this, values)); diff --git a/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractAtomicGeoPointFieldData.java b/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractAtomicGeoPointFieldData.java index 70d873f9991..df422630ad6 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractAtomicGeoPointFieldData.java +++ b/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractAtomicGeoPointFieldData.java @@ -18,8 +18,10 @@ */ package org.elasticsearch.index.fielddata.plain; -import org.elasticsearch.index.fielddata.*; +import java.util.Collections; + +import org.apache.lucene.util.Accountable; +import org.elasticsearch.index.fielddata.*; /** */ @@ -41,6 +43,11 @@ abstract class AbstractAtomicGeoPointFieldData implements AtomicGeoPointFieldDat public long ramBytesUsed() { return 0; } + + @Override + public Iterable getChildResources() { + return Collections.emptyList(); + } @Override public void close() { diff --git a/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractAtomicOrdinalsFieldData.java b/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractAtomicOrdinalsFieldData.java index 7ab5c52cbed..17afb204798 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractAtomicOrdinalsFieldData.java +++ b/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractAtomicOrdinalsFieldData.java @@ -19,6 +19,10 @@ package org.elasticsearch.index.fielddata.plain; +import org.apache.lucene.util.Accountable; + +import java.util.Collections; + import
org.apache.lucene.index.DocValues; import org.apache.lucene.index.RandomAccessOrds; import org.elasticsearch.index.fielddata.AtomicOrdinalsFieldData; @@ -48,6 +52,11 @@ public abstract class AbstractAtomicOrdinalsFieldData implements AtomicOrdinalsF public long ramBytesUsed() { return 0; } + + @Override + public Iterable getChildResources() { + return Collections.emptyList(); + } @Override public void close() { diff --git a/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractAtomicParentChildFieldData.java b/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractAtomicParentChildFieldData.java index ef9058c8542..f2152ddbcd1 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractAtomicParentChildFieldData.java +++ b/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractAtomicParentChildFieldData.java @@ -22,12 +22,14 @@ package org.elasticsearch.index.fielddata.plain; import com.google.common.collect.ImmutableSet; import org.apache.lucene.index.DocValues; import org.apache.lucene.index.SortedDocValues; +import org.apache.lucene.util.Accountable; import org.apache.lucene.util.ArrayUtil; import org.apache.lucene.util.BytesRef; import org.elasticsearch.index.fielddata.AtomicParentChildFieldData; import org.elasticsearch.index.fielddata.ScriptDocValues; import org.elasticsearch.index.fielddata.SortedBinaryDocValues; +import java.util.Collections; import java.util.Set; @@ -88,6 +90,11 @@ abstract class AbstractAtomicParentChildFieldData implements AtomicParentChildFi public long ramBytesUsed() { return 0; } + + @Override + public Iterable getChildResources() { + return Collections.emptyList(); + } @Override public void close() { diff --git a/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractIndexFieldData.java b/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractIndexFieldData.java index d45672c60bf..c2e7828c3f0 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractIndexFieldData.java +++ b/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractIndexFieldData.java @@ -19,7 +19,7 @@ package org.elasticsearch.index.fielddata.plain; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.IndexReader; import org.apache.lucene.index.Terms; import org.apache.lucene.index.TermsEnum; @@ -69,7 +69,7 @@ public abstract class AbstractIndexFieldData extends } @Override - public FD load(AtomicReaderContext context) { + public FD load(LeafReaderContext context) { try { FD fd = cache.load(context, this); return fd; diff --git a/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractIndexOrdinalsFieldData.java b/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractIndexOrdinalsFieldData.java index dd3aae20fcb..328b05a5b31 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractIndexOrdinalsFieldData.java +++ b/src/main/java/org/elasticsearch/index/fielddata/plain/AbstractIndexOrdinalsFieldData.java @@ -80,7 +80,7 @@ public abstract class AbstractIndexOrdinalsFieldData extends AbstractIndexFieldD return GlobalOrdinalsBuilder.build(indexReader, this, indexSettings, breakerService, logger); } - protected TermsEnum filter(Terms terms, AtomicReader reader) throws IOException { + protected TermsEnum filter(Terms terms, LeafReader reader) throws IOException { TermsEnum iterator = terms.iterator(null); if (iterator == null) { return null; @@ -105,7 +105,7 @@ public abstract class AbstractIndexOrdinalsFieldData 
extends AbstractIndexFieldD this.maxFreq = maxFreq; } - public static TermsEnum filter(TermsEnum toFilter, Terms terms, AtomicReader reader, Settings settings) throws IOException { + public static TermsEnum filter(TermsEnum toFilter, Terms terms, LeafReader reader, Settings settings) throws IOException { int docCount = terms.getDocCount(); if (docCount == -1) { docCount = reader.maxDoc(); @@ -143,7 +143,7 @@ public abstract class AbstractIndexOrdinalsFieldData extends AbstractIndexFieldD super(delegate, false); this.matcher = matcher; } - public static TermsEnum filter(TermsEnum iterator, Terms terms, AtomicReader reader, Settings regex) { + public static TermsEnum filter(TermsEnum iterator, Terms terms, LeafReader reader, Settings regex) { String pattern = regex.get("pattern"); if (pattern == null) { return iterator; diff --git a/src/main/java/org/elasticsearch/index/fielddata/plain/AtomicDoubleFieldData.java b/src/main/java/org/elasticsearch/index/fielddata/plain/AtomicDoubleFieldData.java index dab79cd0675..b9cf0e47064 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/plain/AtomicDoubleFieldData.java +++ b/src/main/java/org/elasticsearch/index/fielddata/plain/AtomicDoubleFieldData.java @@ -19,7 +19,10 @@ package org.elasticsearch.index.fielddata.plain; +import java.util.Collections; + import org.apache.lucene.index.SortedNumericDocValues; +import org.apache.lucene.util.Accountable; import org.elasticsearch.index.fielddata.*; @@ -61,6 +64,11 @@ abstract class AtomicDoubleFieldData implements AtomicNumericFieldData { public SortedNumericDoubleValues getDoubleValues() { return FieldData.emptySortedNumericDoubles(maxDoc); } + + @Override + public Iterable getChildResources() { + return Collections.emptyList(); + } }; } diff --git a/src/main/java/org/elasticsearch/index/fielddata/plain/AtomicLongFieldData.java b/src/main/java/org/elasticsearch/index/fielddata/plain/AtomicLongFieldData.java index d342558779c..ddb02ac6de0 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/plain/AtomicLongFieldData.java +++ b/src/main/java/org/elasticsearch/index/fielddata/plain/AtomicLongFieldData.java @@ -19,6 +19,9 @@ package org.elasticsearch.index.fielddata.plain; +import java.util.Collections; + +import org.apache.lucene.util.Accountable; import org.apache.lucene.index.DocValues; import org.apache.lucene.index.SortedNumericDocValues; import org.elasticsearch.index.fielddata.*; @@ -63,6 +66,11 @@ abstract class AtomicLongFieldData implements AtomicNumericFieldData { return DocValues.emptySortedNumeric(maxDoc); } + @Override + public Iterable getChildResources() { + return Collections.emptyList(); + } + }; } diff --git a/src/main/java/org/elasticsearch/index/fielddata/plain/BinaryDVAtomicFieldData.java b/src/main/java/org/elasticsearch/index/fielddata/plain/BinaryDVAtomicFieldData.java index b97a37e0a10..9a9d8722563 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/plain/BinaryDVAtomicFieldData.java +++ b/src/main/java/org/elasticsearch/index/fielddata/plain/BinaryDVAtomicFieldData.java @@ -19,23 +19,25 @@ package org.elasticsearch.index.fielddata.plain; -import org.apache.lucene.index.AtomicReader; +import org.apache.lucene.index.LeafReader; import org.apache.lucene.index.BinaryDocValues; import org.apache.lucene.index.DocValues; +import org.apache.lucene.util.Accountable; import org.apache.lucene.util.Bits; import org.elasticsearch.ElasticsearchIllegalStateException; import org.elasticsearch.index.fielddata.*; import 
org.elasticsearch.index.fielddata.ScriptDocValues.Strings; import java.io.IOException; +import java.util.Collections; /** {@link AtomicFieldData} impl on top of Lucene's binary doc values. */ public class BinaryDVAtomicFieldData implements AtomicFieldData { - private final AtomicReader reader; + private final LeafReader reader; private final String field; - public BinaryDVAtomicFieldData(AtomicReader reader, String field) { + public BinaryDVAtomicFieldData(LeafReader reader, String field) { this.reader = reader; this.field = field; } @@ -65,5 +67,10 @@ public class BinaryDVAtomicFieldData implements AtomicFieldData { public long ramBytesUsed() { return 0; // unknown } + + @Override + public Iterable getChildResources() { + return Collections.emptyList(); + } } diff --git a/src/main/java/org/elasticsearch/index/fielddata/plain/BinaryDVIndexFieldData.java b/src/main/java/org/elasticsearch/index/fielddata/plain/BinaryDVIndexFieldData.java index 16add8622e0..f731cd8eb29 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/plain/BinaryDVIndexFieldData.java +++ b/src/main/java/org/elasticsearch/index/fielddata/plain/BinaryDVIndexFieldData.java @@ -19,7 +19,7 @@ package org.elasticsearch.index.fielddata.plain; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.elasticsearch.index.Index; import org.elasticsearch.index.fielddata.FieldDataType; import org.elasticsearch.index.fielddata.IndexFieldData; @@ -35,12 +35,12 @@ public class BinaryDVIndexFieldData extends DocValuesIndexFieldData implements I } @Override - public BinaryDVAtomicFieldData load(AtomicReaderContext context) { + public BinaryDVAtomicFieldData load(LeafReaderContext context) { return new BinaryDVAtomicFieldData(context.reader(), fieldNames.indexName()); } @Override - public BinaryDVAtomicFieldData loadDirect(AtomicReaderContext context) throws Exception { + public BinaryDVAtomicFieldData loadDirect(LeafReaderContext context) throws Exception { return load(context); } diff --git a/src/main/java/org/elasticsearch/index/fielddata/plain/BinaryDVNumericIndexFieldData.java b/src/main/java/org/elasticsearch/index/fielddata/plain/BinaryDVNumericIndexFieldData.java index 082d35ca601..bfff3246e48 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/plain/BinaryDVNumericIndexFieldData.java +++ b/src/main/java/org/elasticsearch/index/fielddata/plain/BinaryDVNumericIndexFieldData.java @@ -20,11 +20,12 @@ package org.elasticsearch.index.fielddata.plain; import com.google.common.base.Preconditions; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.BinaryDocValues; import org.apache.lucene.index.DocValues; import org.apache.lucene.index.SortedNumericDocValues; import org.apache.lucene.store.ByteArrayDataInput; +import org.apache.lucene.util.Accountable; import org.apache.lucene.util.ArrayUtil; import org.apache.lucene.util.BytesRef; import org.elasticsearch.ElasticsearchIllegalArgumentException; @@ -40,6 +41,7 @@ import org.elasticsearch.index.mapper.FieldMapper.Names; import org.elasticsearch.search.MultiValueMode; import java.io.IOException; +import java.util.Collections; public class BinaryDVNumericIndexFieldData extends DocValuesIndexFieldData implements IndexNumericFieldData { @@ -64,7 +66,7 @@ public class BinaryDVNumericIndexFieldData extends DocValuesIndexFieldData imple } @Override - public AtomicNumericFieldData load(AtomicReaderContext context) { + public 
AtomicNumericFieldData load(LeafReaderContext context) { try { final BinaryDocValues values = DocValues.getBinary(context.reader(), fieldNames.indexName()); if (numericType.isFloatingPoint()) { @@ -81,6 +83,11 @@ public class BinaryDVNumericIndexFieldData extends DocValuesIndexFieldData imple throw new ElasticsearchIllegalArgumentException("" + numericType); } } + + @Override + public Iterable getChildResources() { + return Collections.emptyList(); + } }; } else { @@ -90,6 +97,11 @@ public class BinaryDVNumericIndexFieldData extends DocValuesIndexFieldData imple public SortedNumericDocValues getLongValues() { return new BinaryAsSortedNumericDocValues(values); } + + @Override + public Iterable getChildResources() { + return Collections.emptyList(); + } }; } @@ -99,7 +111,7 @@ public class BinaryDVNumericIndexFieldData extends DocValuesIndexFieldData imple } @Override - public AtomicNumericFieldData loadDirect(AtomicReaderContext context) throws Exception { + public AtomicNumericFieldData loadDirect(LeafReaderContext context) throws Exception { return load(context); } diff --git a/src/main/java/org/elasticsearch/index/fielddata/plain/BytesBinaryDVAtomicFieldData.java b/src/main/java/org/elasticsearch/index/fielddata/plain/BytesBinaryDVAtomicFieldData.java index 182f27225e2..1620a0446e9 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/plain/BytesBinaryDVAtomicFieldData.java +++ b/src/main/java/org/elasticsearch/index/fielddata/plain/BytesBinaryDVAtomicFieldData.java @@ -21,6 +21,7 @@ package org.elasticsearch.index.fielddata.plain; import org.apache.lucene.index.BinaryDocValues; import org.apache.lucene.store.ByteArrayDataInput; +import org.apache.lucene.util.Accountable; import org.apache.lucene.util.ArrayUtil; import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.BytesRefBuilder; @@ -30,6 +31,7 @@ import org.elasticsearch.index.fielddata.ScriptDocValues; import org.elasticsearch.index.fielddata.SortedBinaryDocValues; import java.util.Arrays; +import java.util.Collections; final class BytesBinaryDVAtomicFieldData implements AtomicFieldData { @@ -45,6 +47,11 @@ final class BytesBinaryDVAtomicFieldData implements AtomicFieldData { return 0; // not exposed by Lucene } + @Override + public Iterable getChildResources() { + return Collections.emptyList(); + } + @Override public SortedBinaryDocValues getBytesValues() { return new SortedBinaryDocValues() { diff --git a/src/main/java/org/elasticsearch/index/fielddata/plain/BytesBinaryDVIndexFieldData.java b/src/main/java/org/elasticsearch/index/fielddata/plain/BytesBinaryDVIndexFieldData.java index 3864e72b623..b2def219918 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/plain/BytesBinaryDVIndexFieldData.java +++ b/src/main/java/org/elasticsearch/index/fielddata/plain/BytesBinaryDVIndexFieldData.java @@ -19,7 +19,7 @@ package org.elasticsearch.index.fielddata.plain; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.DocValues; import org.elasticsearch.ElasticsearchIllegalArgumentException; import org.elasticsearch.ElasticsearchIllegalStateException; @@ -50,7 +50,7 @@ public class BytesBinaryDVIndexFieldData extends DocValuesIndexFieldData impleme } @Override - public BytesBinaryDVAtomicFieldData load(AtomicReaderContext context) { + public BytesBinaryDVAtomicFieldData load(LeafReaderContext context) { try { return new BytesBinaryDVAtomicFieldData(DocValues.getBinary(context.reader(), fieldNames.indexName())); } catch 
(IOException e) { @@ -59,7 +59,7 @@ public class BytesBinaryDVIndexFieldData extends DocValuesIndexFieldData impleme } @Override - public BytesBinaryDVAtomicFieldData loadDirect(AtomicReaderContext context) throws Exception { + public BytesBinaryDVAtomicFieldData loadDirect(LeafReaderContext context) throws Exception { return load(context); } diff --git a/src/main/java/org/elasticsearch/index/fielddata/plain/DisabledIndexFieldData.java b/src/main/java/org/elasticsearch/index/fielddata/plain/DisabledIndexFieldData.java index 8ad8c5cfc57..05282910d97 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/plain/DisabledIndexFieldData.java +++ b/src/main/java/org/elasticsearch/index/fielddata/plain/DisabledIndexFieldData.java @@ -19,7 +19,7 @@ package org.elasticsearch.index.fielddata.plain; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.elasticsearch.ElasticsearchIllegalStateException; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.Index; @@ -52,7 +52,7 @@ public final class DisabledIndexFieldData extends AbstractIndexFieldData getChildResources() { + List resources = new ArrayList<>(); + resources.add(Accountables.namedAccountable("ordinals", build)); + resources.add(Accountables.namedAccountable("values", finalValues)); + return Collections.unmodifiableList(resources); + } }; } else { - final FixedBitSet set = builder.buildDocsWithValuesSet(); + final BitSet set = builder.buildDocsWithValuesSet(); // there's sweet spot where due to low unique value count, using ordinals will consume less memory - long singleValuesArraySize = reader.maxDoc() * RamUsageEstimator.NUM_BYTES_DOUBLE + (set == null ? 0 : RamUsageEstimator.sizeOf(set.getBits()) + RamUsageEstimator.NUM_BYTES_INT); + long singleValuesArraySize = reader.maxDoc() * RamUsageEstimator.NUM_BYTES_DOUBLE + (set == null ? 
0 : set.ramBytesUsed()); long uniqueValuesArraySize = values.ramBytesUsed(); long ordinalsSize = build.ramBytesUsed(); if (uniqueValuesArraySize + ordinalsSize < singleValuesArraySize) { @@ -120,6 +132,14 @@ public class DoubleArrayIndexFieldData extends AbstractIndexFieldData getChildResources() { + List resources = new ArrayList<>(); + resources.add(Accountables.namedAccountable("ordinals", build)); + resources.add(Accountables.namedAccountable("values", finalValues)); + return Collections.unmodifiableList(resources); + } }; } @@ -141,6 +161,14 @@ public class DoubleArrayIndexFieldData extends AbstractIndexFieldData getChildResources() { + List resources = new ArrayList<>(); + resources.add(Accountables.namedAccountable("values", sValues)); + resources.add(Accountables.namedAccountable("missing bitset", set)); + return Collections.unmodifiableList(resources); + } }; success = true; @@ -197,7 +225,7 @@ public class DoubleArrayIndexFieldData extends AbstractIndexFieldData getChildResources() { + List resources = new ArrayList<>(); + resources.add(Accountables.namedAccountable("ordinals", ordinals)); + if (fst != null) { + resources.add(Accountables.namedAccountable("terms", fst)); + } + return Collections.unmodifiableList(resources); + } + @Override public RandomAccessOrds getOrdinalsValues() { return ordinals.ordinals(new ValuesHolder(fst)); diff --git a/src/main/java/org/elasticsearch/index/fielddata/plain/FSTBytesIndexFieldData.java b/src/main/java/org/elasticsearch/index/fielddata/plain/FSTBytesIndexFieldData.java index a5a2b84013c..7a545c53883 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/plain/FSTBytesIndexFieldData.java +++ b/src/main/java/org/elasticsearch/index/fielddata/plain/FSTBytesIndexFieldData.java @@ -58,8 +58,8 @@ public class FSTBytesIndexFieldData extends AbstractIndexOrdinalsFieldData { } @Override - public AtomicOrdinalsFieldData loadDirect(AtomicReaderContext context) throws Exception { - AtomicReader reader = context.reader(); + public AtomicOrdinalsFieldData loadDirect(LeafReaderContext context) throws Exception { + LeafReader reader = context.reader(); Terms terms = reader.terms(getFieldNames().indexName()); AtomicOrdinalsFieldData data = null; diff --git a/src/main/java/org/elasticsearch/index/fielddata/plain/FloatArrayIndexFieldData.java b/src/main/java/org/elasticsearch/index/fielddata/plain/FloatArrayIndexFieldData.java index 3c0030408fe..c3a1a087642 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/plain/FloatArrayIndexFieldData.java +++ b/src/main/java/org/elasticsearch/index/fielddata/plain/FloatArrayIndexFieldData.java @@ -18,6 +18,10 @@ */ package org.elasticsearch.index.fielddata.plain; +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; + import org.apache.lucene.index.*; import org.apache.lucene.util.*; import org.elasticsearch.common.Nullable; @@ -64,8 +68,8 @@ public class FloatArrayIndexFieldData extends AbstractIndexFieldData getChildResources() { + List resources = new ArrayList<>(); + resources.add(Accountables.namedAccountable("ordinals", build)); + resources.add(Accountables.namedAccountable("values", finalValues)); + return Collections.unmodifiableList(resources); + } }; } else { - final FixedBitSet set = builder.buildDocsWithValuesSet(); + final BitSet set = builder.buildDocsWithValuesSet(); // there's sweet spot where due to low unique value count, using ordinals will consume less memory - long singleValuesArraySize = reader.maxDoc() * RamUsageEstimator.NUM_BYTES_FLOAT + (set == null ? 
0 : RamUsageEstimator.sizeOf(set.getBits()) + RamUsageEstimator.NUM_BYTES_INT); + long singleValuesArraySize = reader.maxDoc() * RamUsageEstimator.NUM_BYTES_FLOAT + (set == null ? 0 : set.ramBytesUsed()); long uniqueValuesArraySize = values.ramBytesUsed(); long ordinalsSize = build.ramBytesUsed(); if (uniqueValuesArraySize + ordinalsSize < singleValuesArraySize) { @@ -118,6 +130,14 @@ public class FloatArrayIndexFieldData extends AbstractIndexFieldData getChildResources() { + List resources = new ArrayList<>(); + resources.add(Accountables.namedAccountable("ordinals", build)); + resources.add(Accountables.namedAccountable("values", finalValues)); + return Collections.unmodifiableList(resources); + } }; } @@ -139,6 +159,14 @@ public class FloatArrayIndexFieldData extends AbstractIndexFieldData getChildResources() { + List resources = new ArrayList<>(); + resources.add(Accountables.namedAccountable("values", sValues)); + resources.add(Accountables.namedAccountable("missing bitset", set)); + return Collections.unmodifiableList(resources); + } }; success = true; @@ -195,7 +223,7 @@ public class FloatArrayIndexFieldData extends AbstractIndexFieldData getChildResources() { + return Collections.emptyList(); + } @Override public void close() { diff --git a/src/main/java/org/elasticsearch/index/fielddata/plain/GeoPointBinaryDVIndexFieldData.java b/src/main/java/org/elasticsearch/index/fielddata/plain/GeoPointBinaryDVIndexFieldData.java index 7f775ab13a1..d308f98e3f9 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/plain/GeoPointBinaryDVIndexFieldData.java +++ b/src/main/java/org/elasticsearch/index/fielddata/plain/GeoPointBinaryDVIndexFieldData.java @@ -19,7 +19,7 @@ package org.elasticsearch.index.fielddata.plain; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.DocValues; import org.elasticsearch.ElasticsearchIllegalArgumentException; import org.elasticsearch.ElasticsearchIllegalStateException; @@ -48,7 +48,7 @@ public class GeoPointBinaryDVIndexFieldData extends DocValuesIndexFieldData impl } @Override - public AtomicGeoPointFieldData load(AtomicReaderContext context) { + public AtomicGeoPointFieldData load(LeafReaderContext context) { try { return new GeoPointBinaryDVAtomicFieldData(DocValues.getBinary(context.reader(), fieldNames.indexName())); } catch (IOException e) { @@ -57,7 +57,7 @@ public class GeoPointBinaryDVIndexFieldData extends DocValuesIndexFieldData impl } @Override - public AtomicGeoPointFieldData loadDirect(AtomicReaderContext context) throws Exception { + public AtomicGeoPointFieldData loadDirect(LeafReaderContext context) throws Exception { return load(context); } diff --git a/src/main/java/org/elasticsearch/index/fielddata/plain/GeoPointCompressedAtomicFieldData.java b/src/main/java/org/elasticsearch/index/fielddata/plain/GeoPointCompressedAtomicFieldData.java index 91911b1e5ec..90effc9eafa 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/plain/GeoPointCompressedAtomicFieldData.java +++ b/src/main/java/org/elasticsearch/index/fielddata/plain/GeoPointCompressedAtomicFieldData.java @@ -18,10 +18,16 @@ */ package org.elasticsearch.index.fielddata.plain; +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; + import org.apache.lucene.index.DocValues; import org.apache.lucene.index.RandomAccessOrds; import org.apache.lucene.index.SortedDocValues; -import org.apache.lucene.util.FixedBitSet; +import org.apache.lucene.util.Accountable; +import 
org.apache.lucene.util.Accountables; +import org.apache.lucene.util.BitSet; import org.apache.lucene.util.RamUsageEstimator; import org.apache.lucene.util.packed.PagedMutable; import org.elasticsearch.common.geo.GeoPoint; @@ -61,6 +67,14 @@ public abstract class GeoPointCompressedAtomicFieldData extends AbstractAtomicGe return RamUsageEstimator.NUM_BYTES_INT/*size*/ + lon.ramBytesUsed() + lat.ramBytesUsed(); } + @Override + public Iterable getChildResources() { + List resources = new ArrayList<>(); + resources.add(Accountables.namedAccountable("latitude", lat)); + resources.add(Accountables.namedAccountable("longitude", lon)); + return Collections.unmodifiableList(resources); + } + @Override public MultiGeoPointValues getGeoPointValues() { final RandomAccessOrds ords = ordinals.ordinals(); @@ -112,9 +126,9 @@ public abstract class GeoPointCompressedAtomicFieldData extends AbstractAtomicGe private final GeoPointFieldMapper.Encoding encoding; private final PagedMutable lon, lat; - private final FixedBitSet set; + private final BitSet set; - public Single(GeoPointFieldMapper.Encoding encoding, PagedMutable lon, PagedMutable lat, FixedBitSet set) { + public Single(GeoPointFieldMapper.Encoding encoding, PagedMutable lon, PagedMutable lat, BitSet set) { super(); this.encoding = encoding; this.lon = lon; @@ -126,6 +140,17 @@ public abstract class GeoPointCompressedAtomicFieldData extends AbstractAtomicGe public long ramBytesUsed() { return RamUsageEstimator.NUM_BYTES_INT/*size*/ + lon.ramBytesUsed() + lat.ramBytesUsed() + (set == null ? 0 : set.ramBytesUsed()); } + + @Override + public Iterable getChildResources() { + List resources = new ArrayList<>(); + resources.add(Accountables.namedAccountable("latitude", lat)); + resources.add(Accountables.namedAccountable("longitude", lon)); + if (set != null) { + resources.add(Accountables.namedAccountable("missing bitset", set)); + } + return Collections.unmodifiableList(resources); + } @Override public MultiGeoPointValues getGeoPointValues() { diff --git a/src/main/java/org/elasticsearch/index/fielddata/plain/GeoPointCompressedIndexFieldData.java b/src/main/java/org/elasticsearch/index/fielddata/plain/GeoPointCompressedIndexFieldData.java index c33c74c38ab..a37edb3038e 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/plain/GeoPointCompressedIndexFieldData.java +++ b/src/main/java/org/elasticsearch/index/fielddata/plain/GeoPointCompressedIndexFieldData.java @@ -18,11 +18,11 @@ */ package org.elasticsearch.index.fielddata.plain; -import org.apache.lucene.index.AtomicReader; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReader; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.RandomAccessOrds; import org.apache.lucene.index.Terms; -import org.apache.lucene.util.FixedBitSet; +import org.apache.lucene.util.BitSet; import org.apache.lucene.util.packed.PackedInts; import org.apache.lucene.util.packed.PagedMutable; import org.elasticsearch.common.breaker.CircuitBreaker; @@ -77,8 +77,8 @@ public class GeoPointCompressedIndexFieldData extends AbstractIndexGeoPointField } @Override - public AtomicGeoPointFieldData loadDirect(AtomicReaderContext context) throws Exception { - AtomicReader reader = context.reader(); + public AtomicGeoPointFieldData loadDirect(LeafReaderContext context) throws Exception { + LeafReader reader = context.reader(); Terms terms = reader.terms(getFieldNames().indexName()); AtomicGeoPointFieldData data = null; @@ -138,7 +138,7 @@ public class 
GeoPointCompressedIndexFieldData extends AbstractIndexGeoPointField sLon.set(i, missing); } } - FixedBitSet set = builder.buildDocsWithValuesSet(); + BitSet set = builder.buildDocsWithValuesSet(); data = new GeoPointCompressedAtomicFieldData.Single(encoding, sLon, sLat, set); } success = true; diff --git a/src/main/java/org/elasticsearch/index/fielddata/plain/GeoPointDoubleArrayAtomicFieldData.java b/src/main/java/org/elasticsearch/index/fielddata/plain/GeoPointDoubleArrayAtomicFieldData.java index 12faa328370..84e15c87cc6 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/plain/GeoPointDoubleArrayAtomicFieldData.java +++ b/src/main/java/org/elasticsearch/index/fielddata/plain/GeoPointDoubleArrayAtomicFieldData.java @@ -18,10 +18,16 @@ */ package org.elasticsearch.index.fielddata.plain; +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; + import org.apache.lucene.index.DocValues; import org.apache.lucene.index.RandomAccessOrds; import org.apache.lucene.index.SortedDocValues; -import org.apache.lucene.util.FixedBitSet; +import org.apache.lucene.util.Accountable; +import org.apache.lucene.util.Accountables; +import org.apache.lucene.util.BitSet; import org.apache.lucene.util.RamUsageEstimator; import org.elasticsearch.common.geo.GeoPoint; import org.elasticsearch.common.util.DoubleArray; @@ -56,6 +62,14 @@ public abstract class GeoPointDoubleArrayAtomicFieldData extends AbstractAtomicG public long ramBytesUsed() { return RamUsageEstimator.NUM_BYTES_INT/*size*/ + lon.ramBytesUsed() + lat.ramBytesUsed(); } + + @Override + public Iterable getChildResources() { + List resources = new ArrayList<>(); + resources.add(Accountables.namedAccountable("latitude", lat)); + resources.add(Accountables.namedAccountable("longitude", lon)); + return Collections.unmodifiableList(resources); + } @Override public MultiGeoPointValues getGeoPointValues() { @@ -105,9 +119,9 @@ public abstract class GeoPointDoubleArrayAtomicFieldData extends AbstractAtomicG public static class Single extends GeoPointDoubleArrayAtomicFieldData { private final DoubleArray lon, lat; - private final FixedBitSet set; + private final BitSet set; - public Single(DoubleArray lon, DoubleArray lat, FixedBitSet set) { + public Single(DoubleArray lon, DoubleArray lat, BitSet set) { this.lon = lon; this.lat = lat; this.set = set; @@ -117,6 +131,17 @@ public abstract class GeoPointDoubleArrayAtomicFieldData extends AbstractAtomicG public long ramBytesUsed() { return RamUsageEstimator.NUM_BYTES_INT/*size*/ + lon.ramBytesUsed() + lat.ramBytesUsed() + (set == null ? 
0 : set.ramBytesUsed()); } + + @Override + public Iterable getChildResources() { + List resources = new ArrayList<>(); + resources.add(Accountables.namedAccountable("latitude", lat)); + resources.add(Accountables.namedAccountable("longitude", lon)); + if (set != null) { + resources.add(Accountables.namedAccountable("missing bitset", set)); + } + return Collections.unmodifiableList(resources); + } @Override public MultiGeoPointValues getGeoPointValues() { diff --git a/src/main/java/org/elasticsearch/index/fielddata/plain/GeoPointDoubleArrayIndexFieldData.java b/src/main/java/org/elasticsearch/index/fielddata/plain/GeoPointDoubleArrayIndexFieldData.java index e1684d65d6d..7bfddbc044f 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/plain/GeoPointDoubleArrayIndexFieldData.java +++ b/src/main/java/org/elasticsearch/index/fielddata/plain/GeoPointDoubleArrayIndexFieldData.java @@ -18,11 +18,11 @@ */ package org.elasticsearch.index.fielddata.plain; -import org.apache.lucene.index.AtomicReader; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReader; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.RandomAccessOrds; import org.apache.lucene.index.Terms; -import org.apache.lucene.util.FixedBitSet; +import org.apache.lucene.util.BitSet; import org.elasticsearch.common.breaker.CircuitBreaker; import org.elasticsearch.common.geo.GeoPoint; import org.elasticsearch.common.settings.Settings; @@ -59,8 +59,8 @@ public class GeoPointDoubleArrayIndexFieldData extends AbstractIndexGeoPointFiel } @Override - public AtomicGeoPointFieldData loadDirect(AtomicReaderContext context) throws Exception { - AtomicReader reader = context.reader(); + public AtomicGeoPointFieldData loadDirect(LeafReaderContext context) throws Exception { + LeafReader reader = context.reader(); Terms terms = reader.terms(getFieldNames().indexName()); AtomicGeoPointFieldData data = null; @@ -103,7 +103,7 @@ public class GeoPointDoubleArrayIndexFieldData extends AbstractIndexGeoPointFiel sLon.set(i, lon.get(nativeOrdinal)); } } - FixedBitSet set = builder.buildDocsWithValuesSet(); + BitSet set = builder.buildDocsWithValuesSet(); data = new GeoPointDoubleArrayAtomicFieldData.Single(sLon, sLat, set); } else { data = new GeoPointDoubleArrayAtomicFieldData.WithOrdinals(lon, lat, build, reader.maxDoc()); diff --git a/src/main/java/org/elasticsearch/index/fielddata/plain/IndexIndexFieldData.java b/src/main/java/org/elasticsearch/index/fielddata/plain/IndexIndexFieldData.java index ab875c62ab4..06cb30e7c20 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/plain/IndexIndexFieldData.java +++ b/src/main/java/org/elasticsearch/index/fielddata/plain/IndexIndexFieldData.java @@ -19,7 +19,10 @@ package org.elasticsearch.index.fielddata.plain; +import java.util.Collections; + import org.apache.lucene.index.*; +import org.apache.lucene.util.Accountable; import org.apache.lucene.util.BytesRef; import org.elasticsearch.common.settings.ImmutableSettings; import org.elasticsearch.common.settings.Settings; @@ -54,6 +57,11 @@ public class IndexIndexFieldData extends AbstractIndexOrdinalsFieldData { return 0; } + @Override + public Iterable getChildResources() { + return Collections.emptyList(); + } + @Override public RandomAccessOrds getOrdinalsValues() { final BytesRef term = new BytesRef(index); @@ -99,12 +107,12 @@ public class IndexIndexFieldData extends AbstractIndexOrdinalsFieldData { } @Override - public final AtomicOrdinalsFieldData load(AtomicReaderContext 
context) { + public final AtomicOrdinalsFieldData load(LeafReaderContext context) { return atomicFieldData; } @Override - public AtomicOrdinalsFieldData loadDirect(AtomicReaderContext context) + public AtomicOrdinalsFieldData loadDirect(LeafReaderContext context) throws Exception { return atomicFieldData; } diff --git a/src/main/java/org/elasticsearch/index/fielddata/plain/NumericDVIndexFieldData.java b/src/main/java/org/elasticsearch/index/fielddata/plain/NumericDVIndexFieldData.java index 83988270899..17fbf4425cc 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/plain/NumericDVIndexFieldData.java +++ b/src/main/java/org/elasticsearch/index/fielddata/plain/NumericDVIndexFieldData.java @@ -20,6 +20,7 @@ package org.elasticsearch.index.fielddata.plain; import org.apache.lucene.index.*; +import org.apache.lucene.util.Accountable; import org.apache.lucene.util.Bits; import org.elasticsearch.ElasticsearchIllegalStateException; import org.elasticsearch.index.Index; @@ -31,6 +32,7 @@ import org.elasticsearch.index.mapper.FieldMapper.Names; import org.elasticsearch.search.MultiValueMode; import java.io.IOException; +import java.util.Collections; public class NumericDVIndexFieldData extends DocValuesIndexFieldData implements IndexNumericFieldData { @@ -39,8 +41,8 @@ public class NumericDVIndexFieldData extends DocValuesIndexFieldData implements } @Override - public AtomicLongFieldData load(AtomicReaderContext context) { - final AtomicReader reader = context.reader(); + public AtomicLongFieldData load(LeafReaderContext context) { + final LeafReader reader = context.reader(); final String field = fieldNames.indexName(); return new AtomicLongFieldData(0) { @Override @@ -53,12 +55,17 @@ public class NumericDVIndexFieldData extends DocValuesIndexFieldData implements throw new ElasticsearchIllegalStateException("Cannot load doc values", e); } } + + @Override + public Iterable getChildResources() { + return Collections.emptyList(); + } }; } @Override - public AtomicLongFieldData loadDirect(AtomicReaderContext context) throws Exception { + public AtomicLongFieldData loadDirect(LeafReaderContext context) throws Exception { return load(context); } diff --git a/src/main/java/org/elasticsearch/index/fielddata/plain/PackedArrayIndexFieldData.java b/src/main/java/org/elasticsearch/index/fielddata/plain/PackedArrayIndexFieldData.java index e53e772585e..768dfc051b6 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/plain/PackedArrayIndexFieldData.java +++ b/src/main/java/org/elasticsearch/index/fielddata/plain/PackedArrayIndexFieldData.java @@ -41,7 +41,10 @@ import org.elasticsearch.indices.breaker.CircuitBreakerService; import org.elasticsearch.search.MultiValueMode; import java.io.IOException; +import java.util.ArrayList; +import java.util.Collections; import java.util.EnumSet; +import java.util.List; /** * Stores numeric data into bit-packed arrays for better memory efficiency. 
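As a point of reference, here is a minimal standalone sketch (not taken from this patch; the range, counts and class name are hypothetical) of the bit-packing idea described in the class comment above, using Lucene's PackedInts utility: values are offset by the per-segment minimum and stored with only as many bits as the remaining range requires.

import org.apache.lucene.util.packed.PackedInts;

class PackedValuesSketch {
    public static void main(String[] args) {
        long minValue = 1000, maxValue = 1042;   // hypothetical per-segment value range
        // 6 bits per value here, instead of 64 for a raw long[]
        int bitsPerValue = PackedInts.bitsRequired(maxValue - minValue);
        PackedInts.Mutable packed =
                PackedInts.getMutable(1000000, bitsPerValue, PackedInts.DEFAULT);
        packed.set(0, 1017 - minValue);          // store only the delta from the minimum
        long decoded = minValue + packed.get(0); // add the minimum back on read
        System.out.println(bitsPerValue + " bits/value, doc 0 = " + decoded);
    }
}

Whether packing actually wins is a trade-off the hunks below compute explicitly, weighing the packed single-value layout against the ordinals and paged representations before choosing one.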
@@ -83,8 +86,8 @@ public class PackedArrayIndexFieldData extends AbstractIndexFieldData getChildResources() { + List resources = new ArrayList<>(); + resources.add(Accountables.namedAccountable("ordinals", build)); + resources.add(Accountables.namedAccountable("values", values)); + return Collections.unmodifiableList(resources); + } }; } else { - final FixedBitSet docsWithValues = builder.buildDocsWithValuesSet(); + final BitSet docsWithValues = builder.buildDocsWithValuesSet(); long minV, maxV; minV = maxV = 0; @@ -201,6 +211,16 @@ public class PackedArrayIndexFieldData extends AbstractIndexFieldData getChildResources() { + List resources = new ArrayList<>(); + resources.add(Accountables.namedAccountable("values", sValues)); + if (docsWithValues != null) { + resources.add(Accountables.namedAccountable("missing bitset", docsWithValues)); + } + return Collections.unmodifiableList(resources); + } }; break; @@ -216,8 +236,11 @@ public class PackedArrayIndexFieldData extends AbstractIndexFieldData getChildResources() { + List resources = new ArrayList<>(); + resources.add(Accountables.namedAccountable("values", pagedValues)); + if (docsWithValues != null) { + resources.add(Accountables.namedAccountable("missing bitset", docsWithValues)); + } + return Collections.unmodifiableList(resources); + } + }; break; case ORDINALS: @@ -235,6 +268,14 @@ public class PackedArrayIndexFieldData extends AbstractIndexFieldData getChildResources() { + List resources = new ArrayList<>(); + resources.add(Accountables.namedAccountable("ordinals", build)); + resources.add(Accountables.namedAccountable("values", values)); + return Collections.unmodifiableList(resources); + } }; break; @@ -259,7 +300,7 @@ public class PackedArrayIndexFieldData extends AbstractIndexFieldData getChildResources() { + List resources = new ArrayList<>(); + resources.add(Accountables.namedAccountable("ordinals", ordinals)); + resources.add(Accountables.namedAccountable("term bytes", bytes)); + resources.add(Accountables.namedAccountable("term offsets", termOrdToBytesOffset)); + return Collections.unmodifiableList(resources); + } + @Override public RandomAccessOrds getOrdinalsValues() { return ordinals.ordinals(new ValuesHolder(bytes, termOrdToBytesOffset)); diff --git a/src/main/java/org/elasticsearch/index/fielddata/plain/PagedBytesIndexFieldData.java b/src/main/java/org/elasticsearch/index/fielddata/plain/PagedBytesIndexFieldData.java index a2ea4c3b226..8c8d857e4e2 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/plain/PagedBytesIndexFieldData.java +++ b/src/main/java/org/elasticsearch/index/fielddata/plain/PagedBytesIndexFieldData.java @@ -58,8 +58,8 @@ public class PagedBytesIndexFieldData extends AbstractIndexOrdinalsFieldData { } @Override - public AtomicOrdinalsFieldData loadDirect(AtomicReaderContext context) throws Exception { - AtomicReader reader = context.reader(); + public AtomicOrdinalsFieldData loadDirect(LeafReaderContext context) throws Exception { + LeafReader reader = context.reader(); AtomicOrdinalsFieldData data = null; PagedBytesEstimator estimator = new PagedBytesEstimator(context, breakerService.getBreaker(CircuitBreaker.Name.FIELDDATA), getFieldNames().fullName()); @@ -125,12 +125,12 @@ public class PagedBytesIndexFieldData extends AbstractIndexOrdinalsFieldData { */ public class PagedBytesEstimator implements PerValueEstimator { - private final AtomicReaderContext context; + private final LeafReaderContext context; private final CircuitBreaker breaker; private final String fieldName; private long 
estimatedBytes; - PagedBytesEstimator(AtomicReaderContext context, CircuitBreaker breaker, String fieldName) { + PagedBytesEstimator(LeafReaderContext context, CircuitBreaker breaker, String fieldName) { this.breaker = breaker; this.context = context; this.fieldName = fieldName; @@ -156,14 +156,14 @@ public class PagedBytesIndexFieldData extends AbstractIndexOrdinalsFieldData { */ public long estimateStringFieldData() { try { - AtomicReader reader = context.reader(); + LeafReader reader = context.reader(); Terms terms = reader.terms(getFieldNames().indexName()); Fields fields = reader.fields(); final Terms fieldTerms = fields.terms(getFieldNames().indexName()); if (fieldTerms instanceof FieldReader) { - final Stats stats = ((FieldReader) fieldTerms).computeStats(); + final Stats stats = ((FieldReader) fieldTerms).getStats(); long totalTermBytes = stats.totalTermBytes; if (logger.isTraceEnabled()) { logger.trace("totalTermBytes: {}, terms.size(): {}, terms.getSumDocFreq(): {}", @@ -193,7 +193,7 @@ public class PagedBytesIndexFieldData extends AbstractIndexOrdinalsFieldData { FilterSettingFields.ACCEPTABLE_TRANSIENT_OVERHEAD_RATIO, OrdinalsBuilder.DEFAULT_ACCEPTABLE_OVERHEAD_RATIO); - AtomicReader reader = context.reader(); + LeafReader reader = context.reader(); // Check if one of the following is present: // - The OrdinalsBuilder overhead has been tweaked away from the default // - A field data filter is present diff --git a/src/main/java/org/elasticsearch/index/fielddata/plain/ParentChildAtomicFieldData.java b/src/main/java/org/elasticsearch/index/fielddata/plain/ParentChildAtomicFieldData.java index ed6f3a5ad64..f3900c7a819 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/plain/ParentChildAtomicFieldData.java +++ b/src/main/java/org/elasticsearch/index/fielddata/plain/ParentChildAtomicFieldData.java @@ -22,10 +22,12 @@ package org.elasticsearch.index.fielddata.plain; import com.carrotsearch.hppc.cursors.ObjectCursor; import org.apache.lucene.index.DocValues; import org.apache.lucene.index.SortedDocValues; +import org.apache.lucene.util.Accountable; import org.elasticsearch.common.collect.ImmutableOpenMap; import org.elasticsearch.index.fielddata.AtomicOrdinalsFieldData; import org.elasticsearch.search.MultiValueMode; +import java.util.Collections; import java.util.HashSet; import java.util.Set; @@ -50,6 +52,13 @@ public class ParentChildAtomicFieldData extends AbstractAtomicParentChildFieldDa return memorySizeInBytes; } + @Override + public Iterable getChildResources() { + // TODO: should we break down by type? + // the current 'map' does not impl java.util.Map so we cant use Accountables.namedAccountables... 
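// [editor's sketch] The PagedBytesEstimator hunk above (note computeStats() is
// now getStats() on FieldReader) feeds the fielddata circuit breaker: reserve
// an estimate before loading, then settle the reservation against the measured
// size. A hedged outline of that handshake, assuming the CircuitBreaker methods
// of this era; the Callable is a hypothetical stand-in for the real loading code:

import java.util.concurrent.Callable;
import org.elasticsearch.common.breaker.CircuitBreaker;

final class BreakerLoad {
    static long loadWithBreaker(CircuitBreaker breaker, String fieldName,
                                long estimate, Callable<Long> loadAndMeasure) throws Exception {
        breaker.addEstimateBytesAndMaybeBreak(estimate, fieldName); // may trip the breaker
        boolean success = false;
        try {
            long actual = loadAndMeasure.call();           // real ramBytesUsed() after loading
            breaker.addWithoutBreaking(actual - estimate); // settle estimate vs. reality
            success = true;
            return actual;
        } finally {
            if (success == false) {
                breaker.addWithoutBreaking(-estimate);     // roll back the reservation on failure
            }
        }
    }
}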
+ return Collections.emptyList(); + } + @Override public Set types() { final Set types = new HashSet<>(); diff --git a/src/main/java/org/elasticsearch/index/fielddata/plain/ParentChildIndexFieldData.java b/src/main/java/org/elasticsearch/index/fielddata/plain/ParentChildIndexFieldData.java index 1f55130fcf2..057cdf98740 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/plain/ParentChildIndexFieldData.java +++ b/src/main/java/org/elasticsearch/index/fielddata/plain/ParentChildIndexFieldData.java @@ -86,8 +86,8 @@ public class ParentChildIndexFieldData extends AbstractIndexFieldData entry : types.entrySet()) { final String parentType = entry.getKey(); final SortedDocValues[] values = entry.getValue(); - for (AtomicReaderContext context : indexReader.leaves()) { + for (LeafReaderContext context : indexReader.leaves()) { SortedDocValues vals = load(context).getOrdinalsValues(parentType); if (vals != null) { values[context.ord] = vals; @@ -367,6 +367,12 @@ public class ParentChildIndexFieldData extends AbstractIndexFieldData getChildResources() { + // TODO: is this really the best? + return Collections.emptyList(); + } + @Override public void close() { } @@ -385,7 +391,12 @@ public class ParentChildIndexFieldData extends AbstractIndexFieldData getChildResources() { + return Collections.emptyList(); } @Override @@ -399,13 +410,13 @@ public class ParentChildIndexFieldData extends AbstractIndexFieldData comparator; private final List states; private final IntArrayList stateSlots; private BytesRef current; - ParentChildIntersectTermsEnum(AtomicReader atomicReader, String... fields) throws IOException { + ParentChildIntersectTermsEnum(LeafReader atomicReader, String... fields) throws IOException { List fieldEnums = new ArrayList<>(); for (String field : fields) { Terms terms = atomicReader.terms(field); @@ -51,7 +50,6 @@ final class ParentChildIntersectTermsEnum extends TermsEnum { fieldEnums.add(terms.iterator(null)); } } - this.comparator = fieldEnums.get(0).getComparator(); states = new ArrayList<>(fieldEnums.size()); for (TermsEnum tEnum : fieldEnums) { states.add(new TermsEnumState(tEnum)); @@ -59,11 +57,6 @@ final class ParentChildIntersectTermsEnum extends TermsEnum { stateSlots = new IntArrayList(states.size()); } - @Override - public Comparator getComparator() { - return comparator; - } - @Override public BytesRef term() throws IOException { return current; diff --git a/src/main/java/org/elasticsearch/index/fielddata/plain/SortedNumericDVIndexFieldData.java b/src/main/java/org/elasticsearch/index/fielddata/plain/SortedNumericDVIndexFieldData.java index 30d204972e1..b95798a52c8 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/plain/SortedNumericDVIndexFieldData.java +++ b/src/main/java/org/elasticsearch/index/fielddata/plain/SortedNumericDVIndexFieldData.java @@ -21,6 +21,7 @@ package org.elasticsearch.index.fielddata.plain; import com.google.common.base.Preconditions; import org.apache.lucene.index.*; +import org.apache.lucene.util.Accountable; import org.apache.lucene.util.NumericUtils; import org.elasticsearch.ElasticsearchIllegalStateException; import org.elasticsearch.index.Index; @@ -33,9 +34,10 @@ import org.elasticsearch.index.mapper.FieldMapper.Names; import org.elasticsearch.search.MultiValueMode; import java.io.IOException; +import java.util.Collections; /** - * FieldData backed by {@link AtomicReader#getSortedNumericDocValues(String)} + * FieldData backed by {@link LeafReader#getSortedNumericDocValues(String)} * @see FieldInfo.DocValuesType#SORTED_NUMERIC 
*/ public class SortedNumericDVIndexFieldData extends DocValuesIndexFieldData implements IndexNumericFieldData { @@ -66,13 +68,13 @@ public class SortedNumericDVIndexFieldData extends DocValuesIndexFieldData imple } @Override - public AtomicNumericFieldData loadDirect(AtomicReaderContext context) throws Exception { + public AtomicNumericFieldData loadDirect(LeafReaderContext context) throws Exception { return load(context); } @Override - public AtomicNumericFieldData load(AtomicReaderContext context) { - final AtomicReader reader = context.reader(); + public AtomicNumericFieldData load(LeafReaderContext context) { + final LeafReader reader = context.reader(); final String field = fieldNames.indexName(); switch (numericType) { @@ -99,10 +101,10 @@ public class SortedNumericDVIndexFieldData extends DocValuesIndexFieldData imple * a Bits matching documents that have a real value (as opposed to missing). */ static final class SortedNumericLongFieldData extends AtomicLongFieldData { - final AtomicReader reader; + final LeafReader reader; final String field; - SortedNumericLongFieldData(AtomicReader reader, String field) { + SortedNumericLongFieldData(LeafReader reader, String field) { super(0L); this.reader = reader; this.field = field; @@ -116,6 +118,11 @@ public class SortedNumericDVIndexFieldData extends DocValuesIndexFieldData imple throw new ElasticsearchIllegalStateException("Cannot load doc values", e); } } + + @Override + public Iterable getChildResources() { + return Collections.emptyList(); + } } /** @@ -136,10 +143,10 @@ public class SortedNumericDVIndexFieldData extends DocValuesIndexFieldData imple * a Bits matching documents that have a real value (as opposed to missing). */ static final class SortedNumericFloatFieldData extends AtomicDoubleFieldData { - final AtomicReader reader; + final LeafReader reader; final String field; - SortedNumericFloatFieldData(AtomicReader reader, String field) { + SortedNumericFloatFieldData(LeafReader reader, String field) { super(0L); this.reader = reader; this.field = field; @@ -160,6 +167,11 @@ public class SortedNumericDVIndexFieldData extends DocValuesIndexFieldData imple throw new ElasticsearchIllegalStateException("Cannot load doc values", e); } } + + @Override + public Iterable getChildResources() { + return Collections.emptyList(); + } } /** @@ -222,10 +234,10 @@ public class SortedNumericDVIndexFieldData extends DocValuesIndexFieldData imple * a Bits matching documents that have a real value (as opposed to missing). 
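// [editor's sketch] Everything above now loads strictly per segment:
// AtomicReader(Context) became LeafReader(Context) in Lucene 5, and doc-values
// backed fielddata simply asks the leaf for its values. A minimal walk using
// only APIs that appear in this patch; DocValues.getSortedNumeric returns an
// empty instance, never null, when the field is missing:

import java.io.IOException;
import org.apache.lucene.index.DocValues;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.SortedNumericDocValues;

final class LeafWalk {
    static long countValues(IndexReader reader, String field) throws IOException {
        long total = 0;
        for (LeafReaderContext ctx : reader.leaves()) { // one context per segment
            LeafReader leaf = ctx.reader();
            SortedNumericDocValues values = DocValues.getSortedNumeric(leaf, field);
            for (int doc = 0; doc < leaf.maxDoc(); doc++) {
                values.setDocument(doc); // position on the per-segment docid
                total += values.count(); // number of values for this document
            }
        }
        return total;
    }
}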
*/ static final class SortedNumericDoubleFieldData extends AtomicDoubleFieldData { - final AtomicReader reader; + final LeafReader reader; final String field; - SortedNumericDoubleFieldData(AtomicReader reader, String field) { + SortedNumericDoubleFieldData(LeafReader reader, String field) { super(0L); this.reader = reader; this.field = field; @@ -240,5 +252,10 @@ public class SortedNumericDVIndexFieldData extends DocValuesIndexFieldData imple throw new ElasticsearchIllegalStateException("Cannot load doc values", e); } } + + @Override + public Iterable getChildResources() { + return Collections.emptyList(); + } } } diff --git a/src/main/java/org/elasticsearch/index/fielddata/plain/SortedSetDVBytesAtomicFieldData.java b/src/main/java/org/elasticsearch/index/fielddata/plain/SortedSetDVBytesAtomicFieldData.java index 65db7ebc2d1..873a0e90f32 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/plain/SortedSetDVBytesAtomicFieldData.java +++ b/src/main/java/org/elasticsearch/index/fielddata/plain/SortedSetDVBytesAtomicFieldData.java @@ -19,7 +19,9 @@ package org.elasticsearch.index.fielddata.plain; -import org.apache.lucene.index.AtomicReader; +import org.apache.lucene.util.Accountable; + +import org.apache.lucene.index.LeafReader; import org.apache.lucene.index.DocValues; import org.apache.lucene.index.RandomAccessOrds; import org.elasticsearch.ElasticsearchIllegalStateException; @@ -27,16 +29,17 @@ import org.elasticsearch.index.fielddata.AtomicFieldData; import org.elasticsearch.index.fielddata.FieldData; import java.io.IOException; +import java.util.Collections; /** * An {@link AtomicFieldData} implementation that uses Lucene {@link org.apache.lucene.index.SortedSetDocValues}. */ public final class SortedSetDVBytesAtomicFieldData extends AbstractAtomicOrdinalsFieldData { - private final AtomicReader reader; + private final LeafReader reader; private final String field; - SortedSetDVBytesAtomicFieldData(AtomicReader reader, String field) { + SortedSetDVBytesAtomicFieldData(LeafReader reader, String field) { this.reader = reader; this.field = field; } @@ -58,5 +61,10 @@ public final class SortedSetDVBytesAtomicFieldData extends AbstractAtomicOrdinal public long ramBytesUsed() { return 0; // unknown } + + @Override + public Iterable getChildResources() { + return Collections.emptyList(); + } } diff --git a/src/main/java/org/elasticsearch/index/fielddata/plain/SortedSetDVOrdinalsIndexFieldData.java b/src/main/java/org/elasticsearch/index/fielddata/plain/SortedSetDVOrdinalsIndexFieldData.java index bf615b746a4..91182f82a0c 100644 --- a/src/main/java/org/elasticsearch/index/fielddata/plain/SortedSetDVOrdinalsIndexFieldData.java +++ b/src/main/java/org/elasticsearch/index/fielddata/plain/SortedSetDVOrdinalsIndexFieldData.java @@ -19,7 +19,7 @@ package org.elasticsearch.index.fielddata.plain; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.IndexReader; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.common.settings.Settings; @@ -50,12 +50,12 @@ public class SortedSetDVOrdinalsIndexFieldData extends DocValuesIndexFieldData i } @Override - public AtomicOrdinalsFieldData load(AtomicReaderContext context) { + public AtomicOrdinalsFieldData load(LeafReaderContext context) { return new SortedSetDVBytesAtomicFieldData(context.reader(), fieldNames.indexName()); } @Override - public AtomicOrdinalsFieldData loadDirect(AtomicReaderContext context) throws Exception { + public 
AtomicOrdinalsFieldData loadDirect(LeafReaderContext context) throws Exception { return load(context); } diff --git a/src/main/java/org/elasticsearch/index/gateway/local/LocalIndexShardGateway.java b/src/main/java/org/elasticsearch/index/gateway/local/LocalIndexShardGateway.java index 21de194dbaa..24bc1eee3b3 100644 --- a/src/main/java/org/elasticsearch/index/gateway/local/LocalIndexShardGateway.java +++ b/src/main/java/org/elasticsearch/index/gateway/local/LocalIndexShardGateway.java @@ -150,7 +150,7 @@ public class LocalIndexShardGateway extends AbstractIndexShardComponent implemen // it exists on the directory, but shouldn't exist on the FS, its a leftover (possibly dangling) // its a "new index create" API, we have to do something, so better to clean it than use same data logger.trace("cleaning existing shard, shouldn't exists"); - IndexWriter writer = new IndexWriter(indexShard.store().directory(), new IndexWriterConfig(Lucene.VERSION, Lucene.STANDARD_ANALYZER).setOpenMode(IndexWriterConfig.OpenMode.CREATE)); + IndexWriter writer = new IndexWriter(indexShard.store().directory(), new IndexWriterConfig(Lucene.STANDARD_ANALYZER).setOpenMode(IndexWriterConfig.OpenMode.CREATE)); writer.close(); } } diff --git a/src/main/java/org/elasticsearch/index/mapper/DocumentMapper.java b/src/main/java/org/elasticsearch/index/mapper/DocumentMapper.java index dacad46fd67..4477afdfd22 100644 --- a/src/main/java/org/elasticsearch/index/mapper/DocumentMapper.java +++ b/src/main/java/org/elasticsearch/index/mapper/DocumentMapper.java @@ -25,11 +25,12 @@ import com.google.common.collect.Sets; import org.apache.lucene.analysis.Analyzer; import org.apache.lucene.document.Field; import org.apache.lucene.document.FieldType; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexableField; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.search.Filter; +import org.apache.lucene.util.BitDocIdSet; import org.apache.lucene.util.CloseableThreadLocal; -import org.apache.lucene.util.FixedBitSet; import org.elasticsearch.ElasticsearchGenerationException; import org.elasticsearch.ElasticsearchIllegalArgumentException; import org.elasticsearch.common.Booleans; @@ -46,7 +47,7 @@ import org.elasticsearch.common.text.Text; import org.elasticsearch.common.xcontent.*; import org.elasticsearch.common.xcontent.smile.SmileXContent; import org.elasticsearch.index.analysis.NamedAnalyzer; -import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilterCache; +import org.elasticsearch.index.cache.bitset.BitsetFilterCache; import org.elasticsearch.index.mapper.internal.*; import org.elasticsearch.index.mapper.object.ObjectMapper; import org.elasticsearch.index.mapper.object.RootObjectMapper; @@ -578,7 +579,7 @@ public class DocumentMapper implements ToXContent { for (ParseContext.Document doc : context.docs()) { encounteredFields.clear(); for (IndexableField field : doc) { - if (field.fieldType().indexed() && !field.fieldType().omitNorms()) { + if (field.fieldType().indexOptions() != IndexOptions.NONE && !field.fieldType().omitNorms()) { if (!encounteredFields.contains(field.name())) { ((Field) field).setBoost(context.docBoost() * field.boost()); encounteredFields.add(field.name()); @@ -598,15 +599,15 @@ public class DocumentMapper implements ToXContent { /** * Returns the best nested {@link ObjectMapper} instances that is in the scope of the specified nested docId. 
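// [editor's sketch] As the LocalIndexShardGateway hunk above shows, Lucene 5's
// IndexWriterConfig no longer takes a Version argument. A standalone version of
// that cleanup step; the no-arg StandardAnalyzer is an assumption here, standing
// in for Lucene.STANDARD_ANALYZER:

import java.io.IOException;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;

final class WipeShard {
    static void wipe(Directory dir) throws IOException {
        IndexWriterConfig conf = new IndexWriterConfig(new StandardAnalyzer())
                .setOpenMode(IndexWriterConfig.OpenMode.CREATE); // recreate from scratch
        new IndexWriter(dir, conf).close(); // commit an empty index and release the lock
    }
}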
*/ - public ObjectMapper findNestedObjectMapper(int nestedDocId, FixedBitSetFilterCache cache, AtomicReaderContext context) throws IOException { + public ObjectMapper findNestedObjectMapper(int nestedDocId, BitsetFilterCache cache, LeafReaderContext context) throws IOException { ObjectMapper nestedObjectMapper = null; for (ObjectMapper objectMapper : objectMappers().values()) { if (!objectMapper.nested().isNested()) { continue; } - FixedBitSet nestedTypeBitSet = cache.getFixedBitSetFilter(objectMapper.nestedTypeFilter()).getDocIdSet(context, null); - if (nestedTypeBitSet != null && nestedTypeBitSet.get(nestedDocId)) { + BitDocIdSet nestedTypeBitSet = cache.getBitDocIdSetFilter(objectMapper.nestedTypeFilter()).getDocIdSet(context); + if (nestedTypeBitSet != null && nestedTypeBitSet.bits().get(nestedDocId)) { if (nestedObjectMapper == null) { nestedObjectMapper = objectMapper; } else { diff --git a/src/main/java/org/elasticsearch/index/mapper/MapperService.java b/src/main/java/org/elasticsearch/index/mapper/MapperService.java index 12950b96a1a..4c3586cd40d 100755 --- a/src/main/java/org/elasticsearch/index/mapper/MapperService.java +++ b/src/main/java/org/elasticsearch/index/mapper/MapperService.java @@ -25,6 +25,7 @@ import com.google.common.base.Predicate; import com.google.common.collect.*; import org.apache.lucene.analysis.Analyzer; import org.apache.lucene.analysis.DelegatingAnalyzerWrapper; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.Term; import org.apache.lucene.queries.FilterClause; import org.apache.lucene.queries.TermFilter; @@ -518,7 +519,7 @@ public class MapperService extends AbstractIndexComponent { useTermsFilter = false; break; } - if (!docMapper.typeMapper().fieldType().indexed()) { + if (docMapper.typeMapper().fieldType().indexOptions() == IndexOptions.NONE) { useTermsFilter = false; break; } diff --git a/src/main/java/org/elasticsearch/index/mapper/ParseContext.java b/src/main/java/org/elasticsearch/index/mapper/ParseContext.java index 6a4b6ad4397..a72249e7cd2 100644 --- a/src/main/java/org/elasticsearch/index/mapper/ParseContext.java +++ b/src/main/java/org/elasticsearch/index/mapper/ParseContext.java @@ -24,6 +24,7 @@ import com.carrotsearch.hppc.ObjectObjectOpenHashMap; import com.google.common.collect.Lists; import org.apache.lucene.analysis.Analyzer; import org.apache.lucene.document.Field; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexableField; import org.apache.lucene.util.BytesRef; import org.elasticsearch.ElasticsearchIllegalArgumentException; @@ -761,7 +762,7 @@ public abstract class ParseContext { public abstract void version(Field version); public final boolean includeInAll(Boolean includeInAll, FieldMapper mapper) { - return includeInAll(includeInAll, mapper.fieldType().indexed()); + return includeInAll(includeInAll, mapper.fieldType().indexOptions() != IndexOptions.NONE); } /** diff --git a/src/main/java/org/elasticsearch/index/mapper/core/AbstractFieldMapper.java b/src/main/java/org/elasticsearch/index/mapper/core/AbstractFieldMapper.java index e7ba7054582..f32eedbef4c 100644 --- a/src/main/java/org/elasticsearch/index/mapper/core/AbstractFieldMapper.java +++ b/src/main/java/org/elasticsearch/index/mapper/core/AbstractFieldMapper.java @@ -27,7 +27,7 @@ import com.google.common.collect.ImmutableList; import org.apache.lucene.analysis.Analyzer; import org.apache.lucene.document.Field; import org.apache.lucene.document.FieldType; -import org.apache.lucene.index.FieldInfo.IndexOptions; 
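// [editor's sketch] The findNestedObjectMapper hunk above swaps the cached
// FixedBitSet for a BitDocIdSet handed out by the type-safe BitsetFilterCache;
// membership tests now go through its BitSet view, so the concrete bitset
// (fixed or sparse) stays an implementation detail. Minimal shape of the check:

import org.apache.lucene.util.BitDocIdSet;

final class NestedCheck {
    // true when docId belongs to the (possibly absent) nested-type bitset
    static boolean isNested(BitDocIdSet nestedTypeSet, int docId) {
        return nestedTypeSet != null && nestedTypeSet.bits().get(docId);
    }
}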
+import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.Term; import org.apache.lucene.queries.TermFilter; import org.apache.lucene.queries.TermsFilter; @@ -72,7 +72,6 @@ public abstract class AbstractFieldMapper implements FieldMapper { public static final boolean DOC_VALUES = false; static { - FIELD_TYPE.setIndexed(true); FIELD_TYPE.setTokenized(true); FIELD_TYPE.setStored(false); FIELD_TYPE.setStoreTermVectors(false); @@ -88,6 +87,7 @@ public abstract class AbstractFieldMapper implements FieldMapper { public abstract static class Builder extends Mapper.Builder { protected final FieldType fieldType; + private final IndexOptions defaultOptions; protected Boolean docValues; protected float boost = Defaults.BOOST; protected boolean omitNormsSet = false; @@ -108,14 +108,32 @@ public abstract class AbstractFieldMapper implements FieldMapper { protected Builder(String name, FieldType fieldType) { super(name); this.fieldType = fieldType; + this.defaultOptions = fieldType.indexOptions(); // we have to store it the fieldType is mutable multiFieldsBuilder = new MultiFields.Builder(); } public T index(boolean index) { - this.fieldType.setIndexed(index); + if (index) { + if (fieldType.indexOptions() == IndexOptions.NONE) { + /* + * the logic here is to reset to the default options only if we are not indexed ie. options are null + * if the fieldType has a non-null option we are all good it might have been set through a different + * call. + */ + final IndexOptions options = getDefaultIndexOption(); + assert options != IndexOptions.NONE : "default IndexOptions is NONE can't enable indexing"; + fieldType.setIndexOptions(options); + } + } else { + fieldType.setIndexOptions(IndexOptions.NONE); + } return builder; } + protected IndexOptions getDefaultIndexOption() { + return defaultOptions; + } + public T store(boolean store) { this.fieldType.setStored(store); return builder; @@ -292,13 +310,13 @@ public abstract class AbstractFieldMapper implements FieldMapper { this.fieldType.freeze(); // automatically set to keyword analyzer if its indexed and not analyzed - if (indexAnalyzer == null && !this.fieldType.tokenized() && this.fieldType.indexed()) { + if (indexAnalyzer == null && !this.fieldType.tokenized() && this.fieldType.indexOptions() != IndexOptions.NONE) { this.indexAnalyzer = Lucene.KEYWORD_ANALYZER; } else { this.indexAnalyzer = indexAnalyzer; } // automatically set to keyword analyzer if its indexed and not analyzed - if (searchAnalyzer == null && !this.fieldType.tokenized() && this.fieldType.indexed()) { + if (searchAnalyzer == null && !this.fieldType.tokenized() && this.fieldType.indexOptions() != IndexOptions.NONE) { this.searchAnalyzer = Lucene.KEYWORD_ANALYZER; } else { this.searchAnalyzer = searchAnalyzer; @@ -565,7 +583,9 @@ public abstract class AbstractFieldMapper implements FieldMapper { return; } AbstractFieldMapper fieldMergeWith = (AbstractFieldMapper) mergeWith; - if (this.fieldType().indexed() != fieldMergeWith.fieldType().indexed() || this.fieldType().tokenized() != fieldMergeWith.fieldType().tokenized()) { + boolean indexed = fieldType.indexOptions() != IndexOptions.NONE; + boolean mergeWithIndexed = fieldMergeWith.fieldType().indexOptions() != IndexOptions.NONE; + if (indexed != mergeWithIndexed || this.fieldType().tokenized() != fieldMergeWith.fieldType().tokenized()) { mergeContext.addConflict("mapper [" + names.fullName() + "] has different index values"); } if (this.fieldType().stored() != fieldMergeWith.fieldType().stored()) { @@ -676,9 +696,11 @@ public 
abstract class AbstractFieldMapper implements FieldMapper { } FieldType defaultFieldType = defaultFieldType(); - if (includeDefaults || fieldType.indexed() != defaultFieldType.indexed() || + boolean indexed = fieldType.indexOptions() != IndexOptions.NONE; + boolean defaultIndexed = defaultFieldType.indexOptions() != IndexOptions.NONE; + if (includeDefaults || indexed != defaultIndexed || fieldType.tokenized() != defaultFieldType.tokenized()) { - builder.field("index", indexTokenizeOptionToString(fieldType.indexed(), fieldType.tokenized())); + builder.field("index", indexTokenizeOptionToString(indexed, fieldType.tokenized())); } if (includeDefaults || fieldType.stored() != defaultFieldType.stored()) { builder.field("store", fieldType.stored()); @@ -699,7 +721,7 @@ public abstract class AbstractFieldMapper implements FieldMapper { } builder.endObject(); } - if (includeDefaults || fieldType.indexOptions() != defaultFieldType.indexOptions()) { + if (indexed && (includeDefaults || fieldType.indexOptions() != defaultFieldType.indexOptions())) { builder.field("index_options", indexOptionToString(fieldType.indexOptions())); } @@ -782,7 +804,7 @@ public abstract class AbstractFieldMapper implements FieldMapper { return TypeParsers.INDEX_OPTIONS_FREQS; case DOCS_AND_FREQS_AND_POSITIONS: return TypeParsers.INDEX_OPTIONS_POSITIONS; - case DOCS_ONLY: + case DOCS: return TypeParsers.INDEX_OPTIONS_DOCS; default: throw new ElasticsearchIllegalArgumentException("Unknown IndexOptions [" + indexOption + "]"); diff --git a/src/main/java/org/elasticsearch/index/mapper/core/BinaryFieldMapper.java b/src/main/java/org/elasticsearch/index/mapper/core/BinaryFieldMapper.java index f2cf9ce3998..152381a0c5f 100644 --- a/src/main/java/org/elasticsearch/index/mapper/core/BinaryFieldMapper.java +++ b/src/main/java/org/elasticsearch/index/mapper/core/BinaryFieldMapper.java @@ -22,7 +22,8 @@ package org.elasticsearch.index.mapper.core; import com.carrotsearch.hppc.ObjectArrayList; import org.apache.lucene.document.Field; import org.apache.lucene.document.FieldType; -import org.apache.lucene.index.FieldInfo; +import org.apache.lucene.index.DocValuesType; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.store.ByteArrayDataOutput; import org.apache.lucene.util.BytesRef; import org.elasticsearch.ElasticsearchException; @@ -65,7 +66,7 @@ public class BinaryFieldMapper extends AbstractFieldMapper { public static final FieldType FIELD_TYPE = new FieldType(AbstractFieldMapper.Defaults.FIELD_TYPE); static { - FIELD_TYPE.setIndexed(false); + FIELD_TYPE.setIndexOptions(IndexOptions.NONE); FIELD_TYPE.freeze(); } } @@ -259,7 +260,7 @@ public class BinaryFieldMapper extends AbstractFieldMapper { public static final FieldType TYPE = new FieldType(); static { - TYPE.setDocValueType(FieldInfo.DocValuesType.BINARY); + TYPE.setDocValueType(DocValuesType.BINARY); TYPE.freeze(); } diff --git a/src/main/java/org/elasticsearch/index/mapper/core/BooleanFieldMapper.java b/src/main/java/org/elasticsearch/index/mapper/core/BooleanFieldMapper.java index e244bac1cb9..1c15bddd6bf 100644 --- a/src/main/java/org/elasticsearch/index/mapper/core/BooleanFieldMapper.java +++ b/src/main/java/org/elasticsearch/index/mapper/core/BooleanFieldMapper.java @@ -21,7 +21,7 @@ package org.elasticsearch.index.mapper.core; import org.apache.lucene.document.Field; import org.apache.lucene.document.FieldType; -import org.apache.lucene.index.FieldInfo.IndexOptions; +import org.apache.lucene.index.IndexOptions; import 
org.apache.lucene.queries.TermFilter; import org.apache.lucene.search.Filter; import org.apache.lucene.util.BytesRef; @@ -60,7 +60,7 @@ public class BooleanFieldMapper extends AbstractFieldMapper { static { FIELD_TYPE.setOmitNorms(true); - FIELD_TYPE.setIndexOptions(IndexOptions.DOCS_ONLY); + FIELD_TYPE.setIndexOptions(IndexOptions.DOCS); FIELD_TYPE.setTokenized(false); FIELD_TYPE.freeze(); } @@ -205,7 +205,7 @@ public class BooleanFieldMapper extends AbstractFieldMapper { @Override protected void parseCreateField(ParseContext context, List fields) throws IOException { - if (!fieldType().indexed() && !fieldType().stored()) { + if (fieldType().indexOptions() == IndexOptions.NONE && !fieldType().stored()) { return; } diff --git a/src/main/java/org/elasticsearch/index/mapper/core/ByteFieldMapper.java b/src/main/java/org/elasticsearch/index/mapper/core/ByteFieldMapper.java index cc937b5251b..f500e5a386a 100644 --- a/src/main/java/org/elasticsearch/index/mapper/core/ByteFieldMapper.java +++ b/src/main/java/org/elasticsearch/index/mapper/core/ByteFieldMapper.java @@ -22,6 +22,7 @@ import org.apache.lucene.analysis.Analyzer; import org.apache.lucene.analysis.TokenStream; import org.apache.lucene.document.Field; import org.apache.lucene.document.FieldType; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.search.Filter; import org.apache.lucene.search.NumericRangeFilter; import org.apache.lucene.search.NumericRangeQuery; @@ -318,7 +319,7 @@ public class ByteFieldMapper extends NumberFieldMapper { } } } - if (fieldType.indexed() || fieldType.stored()) { + if (fieldType.indexOptions() != IndexOptions.NONE || fieldType.stored()) { CustomByteNumericField field = new CustomByteNumericField(this, value, fieldType); field.setBoost(boost); fields.add(field); @@ -376,7 +377,7 @@ public class ByteFieldMapper extends NumberFieldMapper { @Override public TokenStream tokenStream(Analyzer analyzer, TokenStream previous) { - if (fieldType().indexed()) { + if (fieldType().indexOptions() != IndexOptions.NONE) { return mapper.popCachedStream().setIntValue(number); } return null; diff --git a/src/main/java/org/elasticsearch/index/mapper/core/DateFieldMapper.java b/src/main/java/org/elasticsearch/index/mapper/core/DateFieldMapper.java index 775a3745534..fc319d37674 100644 --- a/src/main/java/org/elasticsearch/index/mapper/core/DateFieldMapper.java +++ b/src/main/java/org/elasticsearch/index/mapper/core/DateFieldMapper.java @@ -21,6 +21,7 @@ package org.elasticsearch.index.mapper.core; import org.apache.lucene.document.Field; import org.apache.lucene.document.FieldType; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.search.Filter; import org.apache.lucene.search.NumericRangeFilter; import org.apache.lucene.search.NumericRangeQuery; @@ -518,7 +519,7 @@ public class DateFieldMapper extends NumberFieldMapper { } if (value != null) { - if (fieldType.indexed() || fieldType.stored()) { + if (fieldType.indexOptions() != IndexOptions.NONE || fieldType.stored()) { CustomLongNumericField field = new CustomLongNumericField(this, value, fieldType); field.setBoost(boost); fields.add(field); diff --git a/src/main/java/org/elasticsearch/index/mapper/core/DoubleFieldMapper.java b/src/main/java/org/elasticsearch/index/mapper/core/DoubleFieldMapper.java index 503f1eb81b9..985ec982a33 100644 --- a/src/main/java/org/elasticsearch/index/mapper/core/DoubleFieldMapper.java +++ b/src/main/java/org/elasticsearch/index/mapper/core/DoubleFieldMapper.java @@ -24,7 +24,9 @@ import 
org.apache.lucene.analysis.Analyzer; import org.apache.lucene.analysis.TokenStream; import org.apache.lucene.document.Field; import org.apache.lucene.document.FieldType; +import org.apache.lucene.index.DocValuesType; import org.apache.lucene.index.FieldInfo; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.search.Filter; import org.apache.lucene.search.NumericRangeFilter; import org.apache.lucene.search.NumericRangeQuery; @@ -314,7 +316,7 @@ public class DoubleFieldMapper extends NumberFieldMapper { } } - if (fieldType.indexed() || fieldType.stored()) { + if (fieldType.indexOptions() != IndexOptions.NONE || fieldType.stored()) { CustomDoubleNumericField field = new CustomDoubleNumericField(this, value, fieldType); field.setBoost(boost); fields.add(field); @@ -383,7 +385,7 @@ public class DoubleFieldMapper extends NumberFieldMapper { @Override public TokenStream tokenStream(Analyzer analyzer, TokenStream previous) throws IOException { - if (fieldType().indexed()) { + if (fieldType().indexOptions() != IndexOptions.NONE) { return mapper.popCachedStream().setDoubleValue(number); } return null; @@ -399,7 +401,7 @@ public class DoubleFieldMapper extends NumberFieldMapper { public static final FieldType TYPE = new FieldType(); static { - TYPE.setDocValueType(FieldInfo.DocValuesType.BINARY); + TYPE.setDocValueType(DocValuesType.BINARY); TYPE.freeze(); } diff --git a/src/main/java/org/elasticsearch/index/mapper/core/FloatFieldMapper.java b/src/main/java/org/elasticsearch/index/mapper/core/FloatFieldMapper.java index dcd0e50b455..2accfb10e18 100644 --- a/src/main/java/org/elasticsearch/index/mapper/core/FloatFieldMapper.java +++ b/src/main/java/org/elasticsearch/index/mapper/core/FloatFieldMapper.java @@ -24,7 +24,8 @@ import org.apache.lucene.analysis.Analyzer; import org.apache.lucene.analysis.TokenStream; import org.apache.lucene.document.Field; import org.apache.lucene.document.FieldType; -import org.apache.lucene.index.FieldInfo; +import org.apache.lucene.index.DocValuesType; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.search.Filter; import org.apache.lucene.search.NumericRangeFilter; import org.apache.lucene.search.NumericRangeQuery; @@ -319,7 +320,7 @@ public class FloatFieldMapper extends NumberFieldMapper { } } - if (fieldType.indexed() || fieldType.stored()) { + if (fieldType.indexOptions() != IndexOptions.NONE || fieldType.stored()) { CustomFloatNumericField field = new CustomFloatNumericField(this, value, fieldType); field.setBoost(boost); fields.add(field); @@ -389,7 +390,7 @@ public class FloatFieldMapper extends NumberFieldMapper { @Override public TokenStream tokenStream(Analyzer analyzer, TokenStream previous) throws IOException { - if (fieldType().indexed()) { + if (fieldType().indexOptions() != IndexOptions.NONE) { return mapper.popCachedStream().setFloatValue(number); } return null; @@ -405,7 +406,7 @@ public class FloatFieldMapper extends NumberFieldMapper { public static final FieldType TYPE = new FieldType(); static { - TYPE.setDocValueType(FieldInfo.DocValuesType.BINARY); + TYPE.setDocValueType(DocValuesType.BINARY); TYPE.freeze(); } diff --git a/src/main/java/org/elasticsearch/index/mapper/core/IntegerFieldMapper.java b/src/main/java/org/elasticsearch/index/mapper/core/IntegerFieldMapper.java index dac33702909..4a664b4d5a2 100644 --- a/src/main/java/org/elasticsearch/index/mapper/core/IntegerFieldMapper.java +++ b/src/main/java/org/elasticsearch/index/mapper/core/IntegerFieldMapper.java @@ -23,6 +23,7 @@ import 
org.apache.lucene.analysis.Analyzer; import org.apache.lucene.analysis.TokenStream; import org.apache.lucene.document.Field; import org.apache.lucene.document.FieldType; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.search.Filter; import org.apache.lucene.search.NumericRangeFilter; import org.apache.lucene.search.NumericRangeQuery; @@ -317,7 +318,7 @@ public class IntegerFieldMapper extends NumberFieldMapper { } protected void addIntegerFields(ParseContext context, List fields, int value, float boost) { - if (fieldType.indexed() || fieldType.stored()) { + if (fieldType.indexOptions() != IndexOptions.NONE || fieldType.stored()) { CustomIntegerNumericField field = new CustomIntegerNumericField(this, value, fieldType); field.setBoost(boost); fields.add(field); @@ -380,7 +381,7 @@ public class IntegerFieldMapper extends NumberFieldMapper { @Override public TokenStream tokenStream(Analyzer analyzer, TokenStream previous) throws IOException { - if (fieldType().indexed()) { + if (fieldType().indexOptions() != IndexOptions.NONE) { return mapper.popCachedStream().setIntValue(number); } return null; diff --git a/src/main/java/org/elasticsearch/index/mapper/core/LongFieldMapper.java b/src/main/java/org/elasticsearch/index/mapper/core/LongFieldMapper.java index 65b15ff32f9..2ebc04ae7d2 100644 --- a/src/main/java/org/elasticsearch/index/mapper/core/LongFieldMapper.java +++ b/src/main/java/org/elasticsearch/index/mapper/core/LongFieldMapper.java @@ -23,6 +23,7 @@ import org.apache.lucene.analysis.Analyzer; import org.apache.lucene.analysis.TokenStream; import org.apache.lucene.document.Field; import org.apache.lucene.document.FieldType; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.search.Filter; import org.apache.lucene.search.NumericRangeFilter; import org.apache.lucene.search.NumericRangeQuery; @@ -303,7 +304,7 @@ public class LongFieldMapper extends NumberFieldMapper { } } } - if (fieldType.indexed() || fieldType.stored()) { + if (fieldType.indexOptions() != IndexOptions.NONE || fieldType.stored()) { CustomLongNumericField field = new CustomLongNumericField(this, value, fieldType); field.setBoost(boost); fields.add(field); @@ -361,7 +362,7 @@ public class LongFieldMapper extends NumberFieldMapper { @Override public TokenStream tokenStream(Analyzer analyzer, TokenStream previous) throws IOException { - if (fieldType().indexed()) { + if (fieldType().indexOptions() != IndexOptions.NONE) { return mapper.popCachedStream().setLongValue(number); } return null; diff --git a/src/main/java/org/elasticsearch/index/mapper/core/NumberFieldMapper.java b/src/main/java/org/elasticsearch/index/mapper/core/NumberFieldMapper.java index 743741cc605..8317141077c 100644 --- a/src/main/java/org/elasticsearch/index/mapper/core/NumberFieldMapper.java +++ b/src/main/java/org/elasticsearch/index/mapper/core/NumberFieldMapper.java @@ -28,8 +28,8 @@ import org.apache.lucene.analysis.TokenStream; import org.apache.lucene.document.Field; import org.apache.lucene.document.FieldType; import org.apache.lucene.document.SortedNumericDocValuesField; -import org.apache.lucene.index.FieldInfo; -import org.apache.lucene.index.FieldInfo.IndexOptions; +import org.apache.lucene.index.DocValuesType; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexableField; import org.apache.lucene.index.IndexableFieldType; import org.apache.lucene.search.Filter; @@ -75,7 +75,7 @@ public abstract class NumberFieldMapper extends AbstractFieldM static { 
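// [editor's sketch] FieldType.indexed() is gone in Lucene 5, which is why the
// mappers above and below all repeat the same IndexOptions test. The idiom,
// restated once as a hypothetical helper (not part of the patch):

import org.apache.lucene.document.FieldType;
import org.apache.lucene.index.IndexOptions;

final class IndexedCheck {
    // "is this field indexed at all?" under the Lucene 5 API; in the same
    // release DOCS_ONLY was renamed to DOCS
    static boolean isIndexed(FieldType fieldType) {
        return fieldType.indexOptions() != IndexOptions.NONE;
    }
}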
FIELD_TYPE.setTokenized(false); FIELD_TYPE.setOmitNorms(true); - FIELD_TYPE.setIndexOptions(IndexOptions.DOCS_ONLY); + FIELD_TYPE.setIndexOptions(IndexOptions.DOCS); FIELD_TYPE.setStoreTermVectors(false); FIELD_TYPE.freeze(); } @@ -433,7 +433,7 @@ public abstract class NumberFieldMapper extends AbstractFieldM public static final FieldType TYPE = new FieldType(); static { - TYPE.setDocValueType(FieldInfo.DocValuesType.BINARY); + TYPE.setDocValueType(DocValuesType.BINARY); TYPE.freeze(); } @@ -484,7 +484,7 @@ public abstract class NumberFieldMapper extends AbstractFieldM public static final FieldType TYPE = new FieldType(); static { - TYPE.setDocValueType(FieldInfo.DocValuesType.BINARY); + TYPE.setDocValueType(DocValuesType.BINARY); TYPE.freeze(); } diff --git a/src/main/java/org/elasticsearch/index/mapper/core/ShortFieldMapper.java b/src/main/java/org/elasticsearch/index/mapper/core/ShortFieldMapper.java index d257d655da3..195a5e8a3b4 100644 --- a/src/main/java/org/elasticsearch/index/mapper/core/ShortFieldMapper.java +++ b/src/main/java/org/elasticsearch/index/mapper/core/ShortFieldMapper.java @@ -23,6 +23,7 @@ import org.apache.lucene.analysis.Analyzer; import org.apache.lucene.analysis.TokenStream; import org.apache.lucene.document.Field; import org.apache.lucene.document.FieldType; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.search.Filter; import org.apache.lucene.search.NumericRangeFilter; import org.apache.lucene.search.NumericRangeQuery; @@ -319,7 +320,7 @@ public class ShortFieldMapper extends NumberFieldMapper { } } } - if (fieldType.indexed() || fieldType.stored()) { + if (fieldType.indexOptions() != IndexOptions.NONE || fieldType.stored()) { CustomShortNumericField field = new CustomShortNumericField(this, value, fieldType); field.setBoost(boost); fields.add(field); @@ -378,7 +379,7 @@ public class ShortFieldMapper extends NumberFieldMapper { @Override public TokenStream tokenStream(Analyzer analyzer, TokenStream previous) throws IOException { - if (fieldType().indexed()) { + if (fieldType().indexOptions() != IndexOptions.NONE) { return mapper.popCachedStream().setIntValue(number); } return null; diff --git a/src/main/java/org/elasticsearch/index/mapper/core/StringFieldMapper.java b/src/main/java/org/elasticsearch/index/mapper/core/StringFieldMapper.java index f0c1ad4b850..573a9b9e997 100644 --- a/src/main/java/org/elasticsearch/index/mapper/core/StringFieldMapper.java +++ b/src/main/java/org/elasticsearch/index/mapper/core/StringFieldMapper.java @@ -23,7 +23,7 @@ import org.apache.lucene.analysis.Analyzer; import org.apache.lucene.document.Field; import org.apache.lucene.document.FieldType; import org.apache.lucene.document.SortedSetDocValuesField; -import org.apache.lucene.index.FieldInfo.IndexOptions; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.search.Filter; import org.apache.lucene.util.BytesRef; import org.elasticsearch.ElasticsearchIllegalArgumentException; @@ -126,14 +126,14 @@ public class StringFieldMapper extends AbstractFieldMapper implements Al // we also change the values on the default field type so that toXContent emits what // differs from the defaults FieldType defaultFieldType = new FieldType(Defaults.FIELD_TYPE); - if (fieldType.indexed() && !fieldType.tokenized()) { + if (fieldType.indexOptions() != IndexOptions.NONE && !fieldType.tokenized()) { defaultFieldType.setOmitNorms(true); - defaultFieldType.setIndexOptions(IndexOptions.DOCS_ONLY); + defaultFieldType.setIndexOptions(IndexOptions.DOCS); if 
(!omitNormsSet && boost == Defaults.BOOST) { fieldType.setOmitNorms(true); } if (!indexOptionsSet) { - fieldType.setIndexOptions(IndexOptions.DOCS_ONLY); + fieldType.setIndexOptions(IndexOptions.DOCS); } } defaultFieldType.freeze(); @@ -195,7 +195,7 @@ public class StringFieldMapper extends AbstractFieldMapper implements Al private int ignoreAbove; private final FieldType defaultFieldType; - protected StringFieldMapper(Names names, float boost, FieldType fieldType,FieldType defaultFieldType, Boolean docValues, + protected StringFieldMapper(Names names, float boost, FieldType fieldType, FieldType defaultFieldType, Boolean docValues, String nullValue, NamedAnalyzer indexAnalyzer, NamedAnalyzer searchAnalyzer, NamedAnalyzer searchQuotedAnalyzer, int positionOffsetGap, int ignoreAbove, PostingsFormatProvider postingsFormat, DocValuesFormatProvider docValuesFormat, @@ -203,7 +203,7 @@ public class StringFieldMapper extends AbstractFieldMapper implements Al Settings indexSettings, MultiFields multiFields, CopyTo copyTo) { super(names, boost, fieldType, docValues, indexAnalyzer, searchAnalyzer, postingsFormat, docValuesFormat, similarity, normsLoading, fieldDataSettings, indexSettings, multiFields, copyTo); - if (fieldType.tokenized() && fieldType.indexed() && hasDocValues()) { + if (fieldType.tokenized() && fieldType.indexOptions() != IndexOptions.NONE && hasDocValues()) { throw new MapperParsingException("Field [" + names.fullName() + "] cannot be analyzed and have doc values"); } this.defaultFieldType = defaultFieldType; @@ -285,7 +285,7 @@ public class StringFieldMapper extends AbstractFieldMapper implements Al context.allEntries().addText(names.fullName(), valueAndBoost.value(), valueAndBoost.boost()); } - if (fieldType.indexed() || fieldType.stored()) { + if (fieldType.indexOptions() != IndexOptions.NONE || fieldType.stored()) { Field field = new Field(names.indexName(), valueAndBoost.value(), fieldType); field.setBoost(valueAndBoost.boost()); fields.add(field); diff --git a/src/main/java/org/elasticsearch/index/mapper/core/TokenCountFieldMapper.java b/src/main/java/org/elasticsearch/index/mapper/core/TokenCountFieldMapper.java index 7beff6ffee9..a43dc7daccd 100644 --- a/src/main/java/org/elasticsearch/index/mapper/core/TokenCountFieldMapper.java +++ b/src/main/java/org/elasticsearch/index/mapper/core/TokenCountFieldMapper.java @@ -23,6 +23,7 @@ import org.apache.lucene.analysis.TokenStream; import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute; import org.apache.lucene.document.Field; import org.apache.lucene.document.FieldType; +import org.apache.lucene.index.IndexOptions; import org.elasticsearch.common.Explicit; import org.elasticsearch.common.Strings; import org.elasticsearch.common.settings.Settings; @@ -131,7 +132,7 @@ public class TokenCountFieldMapper extends IntegerFieldMapper { return; } - if (fieldType.indexed() || fieldType.stored() || hasDocValues()) { + if (fieldType.indexOptions() != IndexOptions.NONE || fieldType.stored() || hasDocValues()) { int count; if (valueAndBoost.value() == null) { count = nullValue(); diff --git a/src/main/java/org/elasticsearch/index/mapper/core/TypeParsers.java b/src/main/java/org/elasticsearch/index/mapper/core/TypeParsers.java index fcbe033c2db..ccbffafe035 100644 --- a/src/main/java/org/elasticsearch/index/mapper/core/TypeParsers.java +++ b/src/main/java/org/elasticsearch/index/mapper/core/TypeParsers.java @@ -19,7 +19,7 @@ package org.elasticsearch.index.mapper.core; -import 
org.apache.lucene.index.FieldInfo.IndexOptions; +import org.apache.lucene.index.IndexOptions; import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.Version; import org.elasticsearch.common.Strings; @@ -199,9 +199,9 @@ public class TypeParsers { } } } else if (propName.equals("omit_term_freq_and_positions")) { - final IndexOptions op = nodeBooleanValue(propNode) ? IndexOptions.DOCS_ONLY : IndexOptions.DOCS_AND_FREQS_AND_POSITIONS; + final IndexOptions op = nodeBooleanValue(propNode) ? IndexOptions.DOCS : IndexOptions.DOCS_AND_FREQS_AND_POSITIONS; if (parserContext.indexVersionCreated().onOrAfter(Version.V_1_0_0_RC2)) { - throw new ElasticsearchParseException("'omit_term_freq_and_positions' is not supported anymore - use ['index_options' : '" + op.name() + "'] instead"); + throw new ElasticsearchParseException("'omit_term_freq_and_positions' is not supported anymore - use ['index_options' : 'docs'] instead"); } // deprecated option for BW compat builder.indexOptions(op); @@ -295,7 +295,7 @@ public class TypeParsers { } else if (INDEX_OPTIONS_FREQS.equalsIgnoreCase(value)) { return IndexOptions.DOCS_AND_FREQS; } else if (INDEX_OPTIONS_DOCS.equalsIgnoreCase(value)) { - return IndexOptions.DOCS_ONLY; + return IndexOptions.DOCS; } else { throw new ElasticsearchParseException("Failed to parse index option [" + value + "]"); } diff --git a/src/main/java/org/elasticsearch/index/mapper/geo/GeoPointFieldMapper.java b/src/main/java/org/elasticsearch/index/mapper/geo/GeoPointFieldMapper.java index fe7a39351bf..a0f98949c4a 100644 --- a/src/main/java/org/elasticsearch/index/mapper/geo/GeoPointFieldMapper.java +++ b/src/main/java/org/elasticsearch/index/mapper/geo/GeoPointFieldMapper.java @@ -24,8 +24,8 @@ import com.carrotsearch.hppc.cursors.ObjectCursor; import com.google.common.base.Objects; import org.apache.lucene.document.Field; import org.apache.lucene.document.FieldType; -import org.apache.lucene.index.FieldInfo; -import org.apache.lucene.index.FieldInfo.IndexOptions; +import org.apache.lucene.index.DocValuesType; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.util.BytesRef; import org.elasticsearch.ElasticsearchIllegalArgumentException; import org.elasticsearch.ElasticsearchIllegalStateException; @@ -98,10 +98,9 @@ public class GeoPointFieldMapper extends AbstractFieldMapper implement public static final FieldType FIELD_TYPE = new FieldType(StringFieldMapper.Defaults.FIELD_TYPE); static { - FIELD_TYPE.setIndexed(true); + FIELD_TYPE.setIndexOptions(IndexOptions.DOCS); FIELD_TYPE.setTokenized(false); FIELD_TYPE.setOmitNorms(true); - FIELD_TYPE.setIndexOptions(IndexOptions.DOCS_ONLY); FIELD_TYPE.freeze(); } } @@ -186,7 +185,7 @@ public class GeoPointFieldMapper extends AbstractFieldMapper implement } StringFieldMapper geohashMapper = null; if (enableGeoHash) { - geohashMapper = stringField(Names.GEOHASH).index(true).tokenized(false).includeInAll(false).omitNorms(true).indexOptions(IndexOptions.DOCS_ONLY).build(context); + geohashMapper = stringField(Names.GEOHASH).index(true).tokenized(false).includeInAll(false).omitNorms(true).indexOptions(IndexOptions.DOCS).build(context); } context.path().remove(); @@ -568,7 +567,7 @@ public class GeoPointFieldMapper extends AbstractFieldMapper implement } } - if (fieldType.indexed() || fieldType.stored()) { + if (fieldType.indexOptions() != IndexOptions.NONE || fieldType.stored()) { Field field = new Field(names.indexName(), Double.toString(point.lat()) + ',' + Double.toString(point.lon()), fieldType); 
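// [editor's sketch] The TypeParsers hunk above maps the mapping's string
// "index_options" values onto the renamed enum (DOCS_ONLY is now DOCS). The
// same three branches, restated as a standalone parser; the "offsets" case is
// omitted here, as it is in the visible hunk:

import org.apache.lucene.index.IndexOptions;
import org.elasticsearch.ElasticsearchParseException;

final class IndexOptionsParser {
    static IndexOptions parse(String value) {
        if ("positions".equalsIgnoreCase(value)) {
            return IndexOptions.DOCS_AND_FREQS_AND_POSITIONS; // docs + freqs + positions
        } else if ("freqs".equalsIgnoreCase(value)) {
            return IndexOptions.DOCS_AND_FREQS;               // docs + term frequencies
        } else if ("docs".equalsIgnoreCase(value)) {
            return IndexOptions.DOCS;                         // formerly DOCS_ONLY
        } else {
            throw new ElasticsearchParseException("Failed to parse index option [" + value + "]");
        }
    }
}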
context.doc().add(field); } @@ -716,7 +715,7 @@ public class GeoPointFieldMapper extends AbstractFieldMapper implement public static final FieldType TYPE = new FieldType(); static { - TYPE.setDocValueType(FieldInfo.DocValuesType.BINARY); + TYPE.setDocValueType(DocValuesType.BINARY); TYPE.freeze(); } diff --git a/src/main/java/org/elasticsearch/index/mapper/geo/GeoShapeFieldMapper.java b/src/main/java/org/elasticsearch/index/mapper/geo/GeoShapeFieldMapper.java index 8cb134d63a8..b1ca4a0c254 100644 --- a/src/main/java/org/elasticsearch/index/mapper/geo/GeoShapeFieldMapper.java +++ b/src/main/java/org/elasticsearch/index/mapper/geo/GeoShapeFieldMapper.java @@ -21,7 +21,7 @@ package org.elasticsearch.index.mapper.geo; import com.spatial4j.core.shape.Shape; import org.apache.lucene.document.Field; import org.apache.lucene.document.FieldType; -import org.apache.lucene.index.FieldInfo; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.spatial.prefix.PrefixTreeStrategy; import org.apache.lucene.spatial.prefix.RecursivePrefixTreeStrategy; import org.apache.lucene.spatial.prefix.TermQueryPrefixTreeStrategy; @@ -91,12 +91,11 @@ public class GeoShapeFieldMapper extends AbstractFieldMapper { public static final FieldType FIELD_TYPE = new FieldType(); static { - FIELD_TYPE.setIndexed(true); + FIELD_TYPE.setIndexOptions(IndexOptions.DOCS); FIELD_TYPE.setTokenized(false); FIELD_TYPE.setStored(false); FIELD_TYPE.setStoreTermVectors(false); FIELD_TYPE.setOmitNorms(true); - FIELD_TYPE.setIndexOptions(FieldInfo.IndexOptions.DOCS_ONLY); FIELD_TYPE.freeze(); } diff --git a/src/main/java/org/elasticsearch/index/mapper/internal/AllFieldMapper.java b/src/main/java/org/elasticsearch/index/mapper/internal/AllFieldMapper.java index a7043cf9801..cfd5df6e28f 100644 --- a/src/main/java/org/elasticsearch/index/mapper/internal/AllFieldMapper.java +++ b/src/main/java/org/elasticsearch/index/mapper/internal/AllFieldMapper.java @@ -22,7 +22,7 @@ package org.elasticsearch.index.mapper.internal; import org.apache.lucene.analysis.Analyzer; import org.apache.lucene.document.Field; import org.apache.lucene.document.FieldType; -import org.apache.lucene.index.FieldInfo.IndexOptions; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.Term; import org.apache.lucene.search.Query; import org.apache.lucene.search.TermQuery; @@ -41,7 +41,6 @@ import org.elasticsearch.index.codec.postingsformat.PostingsFormatProvider; import org.elasticsearch.index.fielddata.FieldDataType; import org.elasticsearch.index.mapper.*; import org.elasticsearch.index.mapper.core.AbstractFieldMapper; -import org.elasticsearch.index.mapper.object.RootObjectMapper; import org.elasticsearch.index.query.QueryParseContext; import org.elasticsearch.index.similarity.SimilarityLookupService; import org.elasticsearch.index.similarity.SimilarityProvider; @@ -80,7 +79,7 @@ public class AllFieldMapper extends AbstractFieldMapper implements Inter public static final FieldType FIELD_TYPE = new FieldType(); static { - FIELD_TYPE.setIndexed(true); + FIELD_TYPE.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS); FIELD_TYPE.setTokenized(true); FIELD_TYPE.freeze(); } @@ -107,7 +106,10 @@ public class AllFieldMapper extends AbstractFieldMapper implements Inter @Override public AllFieldMapper build(BuilderContext context) { // In case the mapping overrides these - fieldType.setIndexed(true); + // TODO: this should be an exception! 
it doesn't make sense to not index this field + if (fieldType.indexOptions() == IndexOptions.NONE) { + fieldType.setIndexOptions(Defaults.FIELD_TYPE.indexOptions()); + } fieldType.setTokenized(true); return new AllFieldMapper(name, fieldType, indexAnalyzer, searchAnalyzer, enabled, autoBoost, postingsProvider, docValuesProvider, similarity, normsLoading, fieldDataSettings, context.indexSettings()); diff --git a/src/main/java/org/elasticsearch/index/mapper/internal/BoostFieldMapper.java b/src/main/java/org/elasticsearch/index/mapper/internal/BoostFieldMapper.java index 58b1996695e..1963bd12fe6 100644 --- a/src/main/java/org/elasticsearch/index/mapper/internal/BoostFieldMapper.java +++ b/src/main/java/org/elasticsearch/index/mapper/internal/BoostFieldMapper.java @@ -21,6 +21,7 @@ package org.elasticsearch.index.mapper.internal; import org.apache.lucene.document.Field; import org.apache.lucene.document.FieldType; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.search.Filter; import org.apache.lucene.search.NumericRangeFilter; import org.apache.lucene.search.NumericRangeQuery; @@ -31,7 +32,6 @@ import org.apache.lucene.util.NumericUtils; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.Numbers; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.settings.ImmutableSettings; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.Fuzziness; import org.elasticsearch.common.xcontent.XContentBuilder; @@ -69,8 +69,8 @@ public class BoostFieldMapper extends NumberFieldMapper implements Intern public static final FieldType FIELD_TYPE = new FieldType(NumberFieldMapper.Defaults.FIELD_TYPE); static { - FIELD_TYPE.setIndexed(false); FIELD_TYPE.setStored(false); + FIELD_TYPE.setIndexOptions(IndexOptions.NONE); // not indexed } } @@ -88,6 +88,11 @@ public class BoostFieldMapper extends NumberFieldMapper implements Intern return this; } + // if we are indexed we use DOCS + protected IndexOptions getDefaultIndexOption() { + return IndexOptions.DOCS; + } + @Override public BoostFieldMapper build(BuilderContext context) { return new BoostFieldMapper(name, buildIndexName(context), @@ -282,10 +287,12 @@ public class BoostFieldMapper extends NumberFieldMapper implements Intern @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { boolean includeDefaults = params.paramAsBoolean("include_defaults", false); + boolean indexed = fieldType.indexOptions() != IndexOptions.NONE; + boolean indexedDefault = Defaults.FIELD_TYPE.indexOptions() != IndexOptions.NONE; // all are defaults, don't write it at all if (!includeDefaults && name().equals(Defaults.NAME) && nullValue == null && - fieldType.indexed() == Defaults.FIELD_TYPE.indexed() && + indexed == indexedDefault && fieldType.stored() == Defaults.FIELD_TYPE.stored() && customFieldDataSettings == null) { return builder; @@ -297,8 +304,8 @@ public class BoostFieldMapper extends NumberFieldMapper implements Intern if (includeDefaults || nullValue != null) { builder.field("null_value", nullValue); } - if (includeDefaults || fieldType.indexed() != Defaults.FIELD_TYPE.indexed()) { - builder.field("index", indexTokenizeOptionToString(fieldType.indexed(), fieldType.tokenized())); + if (includeDefaults || indexed != indexedDefault) { + builder.field("index", indexTokenizeOptionToString(indexed, fieldType.tokenized())); } if (includeDefaults || fieldType.stored() != Defaults.FIELD_TYPE.stored()) { builder.field("store",
fieldType.stored()); diff --git a/src/main/java/org/elasticsearch/index/mapper/internal/FieldNamesFieldMapper.java b/src/main/java/org/elasticsearch/index/mapper/internal/FieldNamesFieldMapper.java index c795e0c7dcc..2d9f0696887 100644 --- a/src/main/java/org/elasticsearch/index/mapper/internal/FieldNamesFieldMapper.java +++ b/src/main/java/org/elasticsearch/index/mapper/internal/FieldNamesFieldMapper.java @@ -23,7 +23,7 @@ import com.google.common.collect.UnmodifiableIterator; import org.apache.lucene.document.Field; import org.apache.lucene.document.FieldType; import org.apache.lucene.document.SortedSetDocValuesField; -import org.apache.lucene.index.FieldInfo.IndexOptions; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexableField; import org.apache.lucene.util.BytesRef; import org.elasticsearch.ElasticsearchIllegalArgumentException; @@ -65,17 +65,17 @@ public class FieldNamesFieldMapper extends AbstractFieldMapper implement public static final String INDEX_NAME = FieldNamesFieldMapper.NAME; public static final FieldType FIELD_TYPE = new FieldType(AbstractFieldMapper.Defaults.FIELD_TYPE); + // TODO: this field should be removed? public static final FieldType FIELD_TYPE_PRE_1_3_0; static { - FIELD_TYPE.setIndexed(true); + FIELD_TYPE.setIndexOptions(IndexOptions.DOCS); FIELD_TYPE.setTokenized(false); FIELD_TYPE.setStored(false); FIELD_TYPE.setOmitNorms(true); - FIELD_TYPE.setIndexOptions(IndexOptions.DOCS_ONLY); FIELD_TYPE.freeze(); FIELD_TYPE_PRE_1_3_0 = new FieldType(FIELD_TYPE); - FIELD_TYPE_PRE_1_3_0.setIndexed(false); + FIELD_TYPE_PRE_1_3_0.setIndexOptions(IndexOptions.NONE); FIELD_TYPE_PRE_1_3_0.freeze(); } } @@ -98,7 +98,7 @@ public class FieldNamesFieldMapper extends AbstractFieldMapper implement @Override public FieldNamesFieldMapper build(BuilderContext context) { if ((context.indexCreatedVersion() == null || context.indexCreatedVersion().before(Version.V_1_3_0)) && !indexIsExplicit) { - fieldType.setIndexed(false); + fieldType.setIndexOptions(IndexOptions.NONE); } return new FieldNamesFieldMapper(name, indexName, boost, fieldType, postingsProvider, docValuesProvider, fieldDataSettings, context.indexSettings()); } @@ -214,7 +214,7 @@ public class FieldNamesFieldMapper extends AbstractFieldMapper implement @Override protected void parseCreateField(ParseContext context, List fields) throws IOException { - if (!fieldType.indexed() && !fieldType.stored() && !hasDocValues()) { + if (fieldType.indexOptions() == IndexOptions.NONE && !fieldType.stored() && !hasDocValues()) { return; } for (ParseContext.Document document : context.docs()) { @@ -224,7 +224,7 @@ public class FieldNamesFieldMapper extends AbstractFieldMapper implement } for (String path : paths) { for (String fieldName : extractFieldNames(path)) { - if (fieldType.indexed() || fieldType.stored()) { + if (fieldType.indexOptions() != IndexOptions.NONE || fieldType.stored()) { document.add(new Field(names().indexName(), fieldName, fieldType)); } if (hasDocValues()) { diff --git a/src/main/java/org/elasticsearch/index/mapper/internal/IdFieldMapper.java b/src/main/java/org/elasticsearch/index/mapper/internal/IdFieldMapper.java index ffe9e6242b1..da236c3b894 100644 --- a/src/main/java/org/elasticsearch/index/mapper/internal/IdFieldMapper.java +++ b/src/main/java/org/elasticsearch/index/mapper/internal/IdFieldMapper.java @@ -23,7 +23,7 @@ import com.google.common.collect.Iterables; import org.apache.lucene.document.BinaryDocValuesField; import org.apache.lucene.document.Field; import 
org.apache.lucene.document.FieldType; -import org.apache.lucene.index.FieldInfo.IndexOptions; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.Term; import org.apache.lucene.queries.TermsFilter; import org.apache.lucene.search.*; @@ -71,10 +71,9 @@ public class IdFieldMapper extends AbstractFieldMapper implements Intern public static final FieldType FIELD_TYPE = new FieldType(AbstractFieldMapper.Defaults.FIELD_TYPE); static { - FIELD_TYPE.setIndexed(false); + FIELD_TYPE.setIndexOptions(IndexOptions.NONE); FIELD_TYPE.setStored(false); FIELD_TYPE.setOmitNorms(true); - FIELD_TYPE.setIndexOptions(IndexOptions.DOCS_ONLY); FIELD_TYPE.freeze(); } @@ -94,6 +93,10 @@ public class IdFieldMapper extends AbstractFieldMapper implements Intern this.path = path; return builder; } + // if we are indexed we use DOCS + protected IndexOptions getDefaultIndexOption() { + return IndexOptions.DOCS; + } @Override public IdFieldMapper build(BuilderContext context) { @@ -168,7 +171,7 @@ public class IdFieldMapper extends AbstractFieldMapper implements Intern @Override public Query termQuery(Object value, @Nullable QueryParseContext context) { - if (fieldType.indexed() || context == null) { + if (fieldType.indexOptions() != IndexOptions.NONE || context == null) { return super.termQuery(value, context); } // no need for constant score filter, since we don't cache the filter, and it always takes deletes into account @@ -177,7 +180,7 @@ public class IdFieldMapper extends AbstractFieldMapper implements Intern @Override public Filter termFilter(Object value, @Nullable QueryParseContext context) { - if (fieldType.indexed() || context == null) { + if (fieldType.indexOptions() != IndexOptions.NONE || context == null) { return super.termFilter(value, context); } return new TermsFilter(UidFieldMapper.NAME, Uid.createTypeUids(context.queryTypes(), value)); @@ -185,7 +188,7 @@ public class IdFieldMapper extends AbstractFieldMapper implements Intern @Override public Filter termsFilter(List values, @Nullable QueryParseContext context) { - if (fieldType.indexed() || context == null) { + if (fieldType.indexOptions() != IndexOptions.NONE || context == null) { return super.termsFilter(values, context); } return new TermsFilter(UidFieldMapper.NAME, Uid.createTypeUids(context.queryTypes(), values)); @@ -193,7 +196,7 @@ public class IdFieldMapper extends AbstractFieldMapper implements Intern @Override public Query prefixQuery(Object value, @Nullable MultiTermQuery.RewriteMethod method, @Nullable QueryParseContext context) { - if (fieldType.indexed() || context == null) { + if (fieldType.indexOptions() != IndexOptions.NONE || context == null) { return super.prefixQuery(value, method, context); } Collection queryTypes = context.queryTypes(); @@ -217,7 +220,7 @@ public class IdFieldMapper extends AbstractFieldMapper implements Intern @Override public Filter prefixFilter(Object value, @Nullable QueryParseContext context) { - if (fieldType.indexed() || context == null) { + if (fieldType.indexOptions() != IndexOptions.NONE || context == null) { return super.prefixFilter(value, context); } Collection queryTypes = context.queryTypes(); @@ -233,7 +236,7 @@ public class IdFieldMapper extends AbstractFieldMapper implements Intern @Override public Query regexpQuery(Object value, int flags, @Nullable MultiTermQuery.RewriteMethod method, @Nullable QueryParseContext context) { - if (fieldType.indexed() || context == null) { + if (fieldType.indexOptions() != IndexOptions.NONE || context == null) { return 
super.regexpQuery(value, flags, method, context); } Collection queryTypes = context.queryTypes(); @@ -256,7 +259,7 @@ public class IdFieldMapper extends AbstractFieldMapper implements Intern } public Filter regexpFilter(Object value, int flags, @Nullable QueryParseContext context) { - if (fieldType.indexed() || context == null) { + if (fieldType.indexOptions() != IndexOptions.NONE || context == null) { return super.regexpFilter(value, flags, context); } Collection queryTypes = context.queryTypes(); @@ -308,7 +311,7 @@ public class IdFieldMapper extends AbstractFieldMapper implements Intern context.id(id); } // else we are in the pre/post parse phase - if (fieldType.indexed() || fieldType.stored()) { + if (fieldType.indexOptions() != IndexOptions.NONE || fieldType.stored()) { fields.add(new Field(names.indexName(), context.id(), fieldType)); } if (hasDocValues()) { @@ -327,7 +330,7 @@ public class IdFieldMapper extends AbstractFieldMapper implements Intern // if all are defaults, no sense to write it at all if (!includeDefaults && fieldType.stored() == Defaults.FIELD_TYPE.stored() - && fieldType.indexed() == Defaults.FIELD_TYPE.indexed() + && fieldType.indexOptions() == Defaults.FIELD_TYPE.indexOptions() && path == Defaults.PATH && customFieldDataSettings == null && (postingsFormat == null || postingsFormat.name().equals(defaultPostingFormat())) @@ -338,8 +341,8 @@ public class IdFieldMapper extends AbstractFieldMapper implements Intern if (includeDefaults || fieldType.stored() != Defaults.FIELD_TYPE.stored()) { builder.field("store", fieldType.stored()); } - if (includeDefaults || fieldType.indexed() != Defaults.FIELD_TYPE.indexed()) { - builder.field("index", indexTokenizeOptionToString(fieldType.indexed(), fieldType.tokenized())); + if (includeDefaults || fieldType.indexOptions() != Defaults.FIELD_TYPE.indexOptions()) { + builder.field("index", indexTokenizeOptionToString(fieldType.indexOptions() != IndexOptions.NONE, fieldType.tokenized())); } if (includeDefaults || path != Defaults.PATH) { builder.field("path", path); diff --git a/src/main/java/org/elasticsearch/index/mapper/internal/IndexFieldMapper.java b/src/main/java/org/elasticsearch/index/mapper/internal/IndexFieldMapper.java index 3c809580f5f..38e1e97a3a7 100644 --- a/src/main/java/org/elasticsearch/index/mapper/internal/IndexFieldMapper.java +++ b/src/main/java/org/elasticsearch/index/mapper/internal/IndexFieldMapper.java @@ -22,7 +22,7 @@ package org.elasticsearch.index.mapper.internal; import org.apache.lucene.document.Document; import org.apache.lucene.document.Field; import org.apache.lucene.document.FieldType; -import org.apache.lucene.index.FieldInfo.IndexOptions; +import org.apache.lucene.index.IndexOptions; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.Strings; import org.elasticsearch.common.lucene.Lucene; @@ -58,11 +58,10 @@ public class IndexFieldMapper extends AbstractFieldMapper implements Int public static final FieldType FIELD_TYPE = new FieldType(AbstractFieldMapper.Defaults.FIELD_TYPE); static { - FIELD_TYPE.setIndexed(true); + FIELD_TYPE.setIndexOptions(IndexOptions.DOCS); FIELD_TYPE.setTokenized(false); FIELD_TYPE.setStored(false); FIELD_TYPE.setOmitNorms(true); - FIELD_TYPE.setIndexOptions(IndexOptions.DOCS_ONLY); FIELD_TYPE.freeze(); } diff --git a/src/main/java/org/elasticsearch/index/mapper/internal/ParentFieldMapper.java b/src/main/java/org/elasticsearch/index/mapper/internal/ParentFieldMapper.java index 9a01b3d945a..fa9d4df31a6 100644 --- 
a/src/main/java/org/elasticsearch/index/mapper/internal/ParentFieldMapper.java +++ b/src/main/java/org/elasticsearch/index/mapper/internal/ParentFieldMapper.java @@ -21,7 +21,7 @@ package org.elasticsearch.index.mapper.internal; import com.google.common.base.Objects; import org.apache.lucene.document.Field; import org.apache.lucene.document.FieldType; -import org.apache.lucene.index.FieldInfo.IndexOptions; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.Term; import org.apache.lucene.queries.TermFilter; import org.apache.lucene.queries.TermsFilter; @@ -42,7 +42,6 @@ import org.elasticsearch.index.fielddata.FieldDataType; import org.elasticsearch.index.mapper.*; import org.elasticsearch.index.mapper.core.AbstractFieldMapper; import org.elasticsearch.index.query.QueryParseContext; -import org.elasticsearch.index.settings.IndexSettings; import java.io.IOException; import java.util.ArrayList; @@ -70,11 +69,10 @@ public class ParentFieldMapper extends AbstractFieldMapper implements Inter public static final FieldType FIELD_TYPE = new FieldType(AbstractFieldMapper.Defaults.FIELD_TYPE); static { - FIELD_TYPE.setIndexed(true); + FIELD_TYPE.setIndexOptions(IndexOptions.DOCS); FIELD_TYPE.setTokenized(false); FIELD_TYPE.setStored(true); FIELD_TYPE.setOmitNorms(true); - FIELD_TYPE.setIndexOptions(IndexOptions.DOCS_ONLY); FIELD_TYPE.freeze(); } } diff --git a/src/main/java/org/elasticsearch/index/mapper/internal/RoutingFieldMapper.java b/src/main/java/org/elasticsearch/index/mapper/internal/RoutingFieldMapper.java index dfb50d79a47..25f61ebac8f 100644 --- a/src/main/java/org/elasticsearch/index/mapper/internal/RoutingFieldMapper.java +++ b/src/main/java/org/elasticsearch/index/mapper/internal/RoutingFieldMapper.java @@ -22,7 +22,7 @@ package org.elasticsearch.index.mapper.internal; import org.apache.lucene.document.Document; import org.apache.lucene.document.Field; import org.apache.lucene.document.FieldType; -import org.apache.lucene.index.FieldInfo.IndexOptions; +import org.apache.lucene.index.IndexOptions; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.Strings; import org.elasticsearch.common.lucene.Lucene; @@ -57,11 +57,10 @@ public class RoutingFieldMapper extends AbstractFieldMapper implements I public static final FieldType FIELD_TYPE = new FieldType(AbstractFieldMapper.Defaults.FIELD_TYPE); static { - FIELD_TYPE.setIndexed(true); + FIELD_TYPE.setIndexOptions(IndexOptions.DOCS); FIELD_TYPE.setTokenized(false); FIELD_TYPE.setStored(true); FIELD_TYPE.setOmitNorms(true); - FIELD_TYPE.setIndexOptions(IndexOptions.DOCS_ONLY); FIELD_TYPE.freeze(); } @@ -196,7 +195,7 @@ public class RoutingFieldMapper extends AbstractFieldMapper implements I if (context.sourceToParse().routing() != null) { String routing = context.sourceToParse().routing(); if (routing != null) { - if (!fieldType.indexed() && !fieldType.stored()) { + if (fieldType.indexOptions() == IndexOptions.NONE && !fieldType.stored()) { context.ignoredValue(names.indexName(), routing); return; } @@ -215,13 +214,15 @@ public class RoutingFieldMapper extends AbstractFieldMapper implements I boolean includeDefaults = params.paramAsBoolean("include_defaults", false); // if all are defaults, no sense to write it at all - if (!includeDefaults && fieldType.indexed() == Defaults.FIELD_TYPE.indexed() && + boolean indexed = fieldType.indexOptions() != IndexOptions.NONE; + boolean indexedDefault = Defaults.FIELD_TYPE.indexOptions() != IndexOptions.NONE; + if (!includeDefaults && indexed == 
indexedDefault && fieldType.stored() == Defaults.FIELD_TYPE.stored() && required == Defaults.REQUIRED && path == Defaults.PATH) { return builder; } builder.startObject(CONTENT_TYPE); - if (includeDefaults || fieldType.indexed() != Defaults.FIELD_TYPE.indexed()) { - builder.field("index", indexTokenizeOptionToString(fieldType.indexed(), fieldType.tokenized())); + if (includeDefaults || indexed != indexedDefault) { + builder.field("index", indexTokenizeOptionToString(indexed, fieldType.tokenized())); } if (includeDefaults || fieldType.stored() != Defaults.FIELD_TYPE.stored()) { builder.field("store", fieldType.stored()); diff --git a/src/main/java/org/elasticsearch/index/mapper/internal/SourceFieldMapper.java b/src/main/java/org/elasticsearch/index/mapper/internal/SourceFieldMapper.java index 0fca9ae8c58..756dc0514c5 100644 --- a/src/main/java/org/elasticsearch/index/mapper/internal/SourceFieldMapper.java +++ b/src/main/java/org/elasticsearch/index/mapper/internal/SourceFieldMapper.java @@ -23,7 +23,7 @@ import com.google.common.base.Objects; import org.apache.lucene.document.Field; import org.apache.lucene.document.FieldType; import org.apache.lucene.document.StoredField; -import org.apache.lucene.index.FieldInfo.IndexOptions; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.util.BytesRef; import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.common.Strings; @@ -73,10 +73,9 @@ public class SourceFieldMapper extends AbstractFieldMapper implements In public static final FieldType FIELD_TYPE = new FieldType(AbstractFieldMapper.Defaults.FIELD_TYPE); static { - FIELD_TYPE.setIndexed(false); + FIELD_TYPE.setIndexOptions(IndexOptions.NONE); // not indexed FIELD_TYPE.setStored(true); FIELD_TYPE.setOmitNorms(true); - FIELD_TYPE.setIndexOptions(IndexOptions.DOCS_ONLY); FIELD_TYPE.freeze(); } diff --git a/src/main/java/org/elasticsearch/index/mapper/internal/TTLFieldMapper.java b/src/main/java/org/elasticsearch/index/mapper/internal/TTLFieldMapper.java index d45ef1b2270..9af9480b90f 100644 --- a/src/main/java/org/elasticsearch/index/mapper/internal/TTLFieldMapper.java +++ b/src/main/java/org/elasticsearch/index/mapper/internal/TTLFieldMapper.java @@ -21,10 +21,10 @@ package org.elasticsearch.index.mapper.internal; import org.apache.lucene.document.Field; import org.apache.lucene.document.FieldType; +import org.apache.lucene.index.IndexOptions; import org.elasticsearch.common.Explicit; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.settings.ImmutableSettings; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.xcontent.XContentBuilder; @@ -35,7 +35,6 @@ import org.elasticsearch.index.codec.postingsformat.PostingsFormatProvider; import org.elasticsearch.index.mapper.*; import org.elasticsearch.index.mapper.core.LongFieldMapper; import org.elasticsearch.index.mapper.core.NumberFieldMapper; -import org.elasticsearch.index.query.GeoShapeFilterParser; import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; @@ -59,8 +58,8 @@ public class TTLFieldMapper extends LongFieldMapper implements InternalMapper, R public static final FieldType TTL_FIELD_TYPE = new FieldType(LongFieldMapper.Defaults.FIELD_TYPE); static { + TTL_FIELD_TYPE.setIndexOptions(IndexOptions.DOCS); TTL_FIELD_TYPE.setStored(true); - TTL_FIELD_TYPE.setIndexed(true); TTL_FIELD_TYPE.setTokenized(false); TTL_FIELD_TYPE.freeze(); } 
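A note on the recurring idiom in the mapper hunks above: Lucene 5.0 removes FieldType.setIndexed(boolean) and FieldType.indexed(); whether a field is indexed is now carried by the IndexOptions enum itself, with IndexOptions.NONE meaning "not indexed" and IndexOptions.DOCS replacing the old DOCS_ONLY. A minimal stand-alone sketch of the before/after shape follows; the class name is hypothetical and not part of this patch:

    import org.apache.lucene.document.FieldType;
    import org.apache.lucene.index.IndexOptions;

    public class IndexedFlagExample {
        public static final FieldType FIELD_TYPE = new FieldType();
        static {
            // 4.x: FIELD_TYPE.setIndexed(true); FIELD_TYPE.setIndexOptions(FieldInfo.IndexOptions.DOCS_ONLY);
            // 5.0: a single enum value; NONE means "not indexed", DOCS replaces DOCS_ONLY
            FIELD_TYPE.setIndexOptions(IndexOptions.DOCS);
            FIELD_TYPE.setTokenized(false);
            FIELD_TYPE.setOmitNorms(true);
            FIELD_TYPE.freeze();
        }

        // the old fieldType.indexed() check becomes a comparison against NONE
        public static boolean isIndexed(FieldType fieldType) {
            return fieldType.indexOptions() != IndexOptions.NONE;
        }
    }

This is why the pure-term metadata fields (_id, _routing, _type, _ttl) move to IndexOptions.DOCS, while the tokenized _all field keeps full postings via DOCS_AND_FREQS_AND_POSITIONS.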
diff --git a/src/main/java/org/elasticsearch/index/mapper/internal/TimestampFieldMapper.java b/src/main/java/org/elasticsearch/index/mapper/internal/TimestampFieldMapper.java index 23d13044d99..8c7effbe77b 100644 --- a/src/main/java/org/elasticsearch/index/mapper/internal/TimestampFieldMapper.java +++ b/src/main/java/org/elasticsearch/index/mapper/internal/TimestampFieldMapper.java @@ -22,6 +22,7 @@ package org.elasticsearch.index.mapper.internal; import org.apache.lucene.document.Field; import org.apache.lucene.document.FieldType; import org.apache.lucene.document.NumericDocValuesField; +import org.apache.lucene.index.IndexOptions; import org.elasticsearch.Version; import org.elasticsearch.common.Explicit; import org.elasticsearch.common.Nullable; @@ -58,12 +59,12 @@ public class TimestampFieldMapper extends DateFieldMapper implements InternalMap public static class Defaults extends DateFieldMapper.Defaults { public static final String NAME = "_timestamp"; + // TODO: this should be removed public static final FieldType PRE_20_FIELD_TYPE; public static final FieldType FIELD_TYPE = new FieldType(DateFieldMapper.Defaults.FIELD_TYPE); static { FIELD_TYPE.setStored(true); - FIELD_TYPE.setIndexed(true); FIELD_TYPE.setTokenized(false); FIELD_TYPE.freeze(); PRE_20_FIELD_TYPE = new FieldType(FIELD_TYPE); @@ -237,10 +238,10 @@ public class TimestampFieldMapper extends DateFieldMapper implements InternalMap protected void innerParseCreateField(ParseContext context, List fields) throws IOException { if (enabledState.enabled) { long timestamp = context.sourceToParse().timestamp(); - if (!fieldType.indexed() && !fieldType.stored() && !hasDocValues()) { + if (fieldType.indexOptions() == IndexOptions.NONE && !fieldType.stored() && !hasDocValues()) { context.ignoredValue(names.indexName(), String.valueOf(timestamp)); } - if (fieldType.indexed() || fieldType.stored()) { + if (fieldType.indexOptions() != IndexOptions.NONE || fieldType.stored()) { fields.add(new LongFieldMapper.CustomLongNumericField(this, timestamp, fieldType)); } if (hasDocValues()) { @@ -257,9 +258,11 @@ public class TimestampFieldMapper extends DateFieldMapper implements InternalMap @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { boolean includeDefaults = params.paramAsBoolean("include_defaults", false); + boolean indexed = fieldType.indexOptions() != IndexOptions.NONE; + boolean indexedDefault = Defaults.FIELD_TYPE.indexOptions() != IndexOptions.NONE; // if all are defaults, no sense to write it at all - if (!includeDefaults && fieldType.indexed() == Defaults.FIELD_TYPE.indexed() && customFieldDataSettings == null && + if (!includeDefaults && indexed == indexedDefault && customFieldDataSettings == null && fieldType.stored() == Defaults.FIELD_TYPE.stored() && enabledState == Defaults.ENABLED && path == Defaults.PATH && dateTimeFormatter.format().equals(Defaults.DATE_TIME_FORMATTER.format()) && Defaults.DEFAULT_TIMESTAMP.equals(defaultTimestamp)) { @@ -269,8 +272,8 @@ public class TimestampFieldMapper extends DateFieldMapper implements InternalMap if (includeDefaults || enabledState != Defaults.ENABLED) { builder.field("enabled", enabledState.enabled); } - if (includeDefaults || (fieldType.indexed() != Defaults.FIELD_TYPE.indexed()) || (fieldType.tokenized() != Defaults.FIELD_TYPE.tokenized())) { - builder.field("index", indexTokenizeOptionToString(fieldType.indexed(), fieldType.tokenized())); + if (includeDefaults || (indexed != indexedDefault) || (fieldType.tokenized() != 
Defaults.FIELD_TYPE.tokenized())) { + builder.field("index", indexTokenizeOptionToString(indexed, fieldType.tokenized())); } if (includeDefaults || fieldType.stored() != Defaults.FIELD_TYPE.stored()) { builder.field("store", fieldType.stored()); diff --git a/src/main/java/org/elasticsearch/index/mapper/internal/TypeFieldMapper.java b/src/main/java/org/elasticsearch/index/mapper/internal/TypeFieldMapper.java index b53f3f723ee..df393009c18 100644 --- a/src/main/java/org/elasticsearch/index/mapper/internal/TypeFieldMapper.java +++ b/src/main/java/org/elasticsearch/index/mapper/internal/TypeFieldMapper.java @@ -22,9 +22,10 @@ package org.elasticsearch.index.mapper.internal; import org.apache.lucene.document.Field; import org.apache.lucene.document.FieldType; import org.apache.lucene.document.SortedSetDocValuesField; -import org.apache.lucene.index.FieldInfo.IndexOptions; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.Term; import org.apache.lucene.queries.TermFilter; +import org.apache.lucene.search.ConstantScoreQuery; import org.apache.lucene.search.Filter; import org.apache.lucene.search.PrefixFilter; import org.apache.lucene.search.Query; @@ -32,14 +33,20 @@ import org.apache.lucene.util.BytesRef; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.lucene.BytesRefs; import org.elasticsearch.common.lucene.Lucene; -import org.elasticsearch.common.lucene.search.XConstantScoreQuery; import org.elasticsearch.common.settings.ImmutableSettings; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.index.codec.docvaluesformat.DocValuesFormatProvider; import org.elasticsearch.index.codec.postingsformat.PostingsFormatProvider; import org.elasticsearch.index.fielddata.FieldDataType; -import org.elasticsearch.index.mapper.*; +import org.elasticsearch.index.mapper.InternalMapper; +import org.elasticsearch.index.mapper.Mapper; +import org.elasticsearch.index.mapper.MapperParsingException; +import org.elasticsearch.index.mapper.MergeContext; +import org.elasticsearch.index.mapper.MergeMappingException; +import org.elasticsearch.index.mapper.ParseContext; +import org.elasticsearch.index.mapper.RootMapper; +import org.elasticsearch.index.mapper.Uid; import org.elasticsearch.index.mapper.core.AbstractFieldMapper; import org.elasticsearch.index.query.QueryParseContext; @@ -66,11 +73,10 @@ public class TypeFieldMapper extends AbstractFieldMapper implements Inte public static final FieldType FIELD_TYPE = new FieldType(AbstractFieldMapper.Defaults.FIELD_TYPE); static { - FIELD_TYPE.setIndexed(true); + FIELD_TYPE.setIndexOptions(IndexOptions.DOCS); FIELD_TYPE.setTokenized(false); FIELD_TYPE.setStored(false); FIELD_TYPE.setOmitNorms(true); - FIELD_TYPE.setIndexOptions(IndexOptions.DOCS_ONLY); FIELD_TYPE.freeze(); } } @@ -137,12 +143,12 @@ public class TypeFieldMapper extends AbstractFieldMapper implements Inte @Override public Query termQuery(Object value, @Nullable QueryParseContext context) { - return new XConstantScoreQuery(context.cacheFilter(termFilter(value, context), null)); + return new ConstantScoreQuery(context.cacheFilter(termFilter(value, context), null)); } @Override public Filter termFilter(Object value, @Nullable QueryParseContext context) { - if (!fieldType.indexed()) { + if (fieldType.indexOptions() == IndexOptions.NONE) { return new PrefixFilter(new Term(UidFieldMapper.NAME, Uid.typePrefixAsBytes(BytesRefs.toBytesRef(value)))); } return new 
TermFilter(names().createIndexNameTerm(BytesRefs.toBytesRef(value))); @@ -174,7 +180,7 @@ public class TypeFieldMapper extends AbstractFieldMapper implements Inte @Override protected void parseCreateField(ParseContext context, List fields) throws IOException { - if (!fieldType.indexed() && !fieldType.stored()) { + if (fieldType.indexOptions() == IndexOptions.NONE && !fieldType.stored()) { return; } fields.add(new Field(names.indexName(), context.type(), fieldType)); @@ -193,15 +199,17 @@ public class TypeFieldMapper extends AbstractFieldMapper implements Inte boolean includeDefaults = params.paramAsBoolean("include_defaults", false); // if all are defaults, no sense to write it at all - if (!includeDefaults && fieldType.stored() == Defaults.FIELD_TYPE.stored() && fieldType.indexed() == Defaults.FIELD_TYPE.indexed()) { + boolean indexed = fieldType.indexOptions() != IndexOptions.NONE; + boolean defaultIndexed = Defaults.FIELD_TYPE.indexOptions() != IndexOptions.NONE; + if (!includeDefaults && fieldType.stored() == Defaults.FIELD_TYPE.stored() && indexed == defaultIndexed) { return builder; } builder.startObject(CONTENT_TYPE); if (includeDefaults || fieldType.stored() != Defaults.FIELD_TYPE.stored()) { builder.field("store", fieldType.stored()); } - if (includeDefaults || fieldType.indexed() != Defaults.FIELD_TYPE.indexed()) { - builder.field("index", indexTokenizeOptionToString(fieldType.indexed(), fieldType.tokenized())); + if (includeDefaults || indexed != defaultIndexed) { + builder.field("index", indexTokenizeOptionToString(indexed, fieldType.tokenized())); } builder.endObject(); return builder; diff --git a/src/main/java/org/elasticsearch/index/mapper/internal/UidFieldMapper.java b/src/main/java/org/elasticsearch/index/mapper/internal/UidFieldMapper.java index 92f6beade66..ea87ebc6086 100644 --- a/src/main/java/org/elasticsearch/index/mapper/internal/UidFieldMapper.java +++ b/src/main/java/org/elasticsearch/index/mapper/internal/UidFieldMapper.java @@ -22,7 +22,7 @@ package org.elasticsearch.index.mapper.internal; import org.apache.lucene.document.BinaryDocValuesField; import org.apache.lucene.document.Field; import org.apache.lucene.document.FieldType; -import org.apache.lucene.index.FieldInfo; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexableField; import org.apache.lucene.index.Term; import org.apache.lucene.util.BytesRef; @@ -64,11 +64,10 @@ public class UidFieldMapper extends AbstractFieldMapper implements Internal public static final FieldType NESTED_FIELD_TYPE; static { - FIELD_TYPE.setIndexed(true); + FIELD_TYPE.setIndexOptions(IndexOptions.DOCS); FIELD_TYPE.setTokenized(false); FIELD_TYPE.setStored(true); FIELD_TYPE.setOmitNorms(true); - FIELD_TYPE.setIndexOptions(FieldInfo.IndexOptions.DOCS_ONLY); FIELD_TYPE.freeze(); NESTED_FIELD_TYPE = new FieldType(FIELD_TYPE); diff --git a/src/main/java/org/elasticsearch/index/mapper/ip/IpFieldMapper.java b/src/main/java/org/elasticsearch/index/mapper/ip/IpFieldMapper.java index 061571b452f..25d76304b2a 100644 --- a/src/main/java/org/elasticsearch/index/mapper/ip/IpFieldMapper.java +++ b/src/main/java/org/elasticsearch/index/mapper/ip/IpFieldMapper.java @@ -23,6 +23,7 @@ import com.google.common.net.InetAddresses; import org.apache.lucene.analysis.NumericTokenStream; import org.apache.lucene.document.Field; import org.apache.lucene.document.FieldType; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.search.Filter; import org.apache.lucene.search.NumericRangeFilter; import 
org.apache.lucene.search.NumericRangeQuery; @@ -54,7 +55,6 @@ import org.elasticsearch.index.search.NumericRangeFieldDataFilter; import org.elasticsearch.index.similarity.SimilarityProvider; import java.io.IOException; -import java.io.Reader; import java.util.List; import java.util.Map; import java.util.regex.Pattern; @@ -300,7 +300,7 @@ public class IpFieldMapper extends NumberFieldMapper { } final long value = ipToLong(ipAsString); - if (fieldType.indexed() || fieldType.stored()) { + if (fieldType.indexOptions() != IndexOptions.NONE || fieldType.stored()) { CustomLongNumericField field = new CustomLongNumericField(this, value, fieldType); field.setBoost(boost); fields.add(field); @@ -353,15 +353,15 @@ public class IpFieldMapper extends NumberFieldMapper { } @Override - protected NumericIpTokenizer createNumericTokenizer(Reader reader, char[] buffer) throws IOException { - return new NumericIpTokenizer(reader, precisionStep, buffer); + protected NumericIpTokenizer createNumericTokenizer(char[] buffer) throws IOException { + return new NumericIpTokenizer(precisionStep, buffer); } } public static class NumericIpTokenizer extends NumericTokenizer { - public NumericIpTokenizer(Reader reader, int precisionStep, char[] buffer) throws IOException { - super(reader, new NumericTokenStream(precisionStep), buffer, null); + public NumericIpTokenizer(int precisionStep, char[] buffer) throws IOException { + super(new NumericTokenStream(precisionStep), buffer, null); } @Override diff --git a/src/main/java/org/elasticsearch/index/merge/policy/ElasticsearchMergePolicy.java b/src/main/java/org/elasticsearch/index/merge/policy/ElasticsearchMergePolicy.java index efd822e2822..d9fb79a5b4b 100644 --- a/src/main/java/org/elasticsearch/index/merge/policy/ElasticsearchMergePolicy.java +++ b/src/main/java/org/elasticsearch/index/merge/policy/ElasticsearchMergePolicy.java @@ -22,8 +22,6 @@ package org.elasticsearch.index.merge.policy; import com.google.common.collect.ImmutableList; import com.google.common.collect.Lists; import org.apache.lucene.index.*; -import org.apache.lucene.index.FieldInfo.DocValuesType; -import org.apache.lucene.index.FieldInfo.IndexOptions; import org.apache.lucene.store.Directory; import org.apache.lucene.util.Bits; import org.apache.lucene.util.BytesRef; @@ -68,10 +66,10 @@ public final class ElasticsearchMergePolicy extends MergePolicy { } /** Return an "upgraded" view of the reader. */ - static AtomicReader filter(AtomicReader reader) throws IOException { + static LeafReader filter(LeafReader reader) throws IOException { final FieldInfos fieldInfos = reader.getFieldInfos(); final FieldInfo versionInfo = fieldInfos.fieldInfo(VersionFieldMapper.NAME); - if (versionInfo != null && versionInfo.hasDocValues()) { + if (versionInfo != null && versionInfo.getDocValuesType() != DocValuesType.NONE) { // the reader is a recent one, it has versions and they are stored // in a numeric doc values field return reader; } @@ -109,13 +107,21 @@ public final class ElasticsearchMergePolicy extends MergePolicy { for (FieldInfo fi : fieldInfos) { fieldNumber = Math.max(fieldNumber, fi.number + 1); } - newVersionInfo = new FieldInfo(VersionFieldMapper.NAME, false, fieldNumber, false, true, false, - IndexOptions.DOCS_ONLY, DocValuesType.NUMERIC, DocValuesType.NUMERIC, -1, Collections.emptyMap()); + // TODO: lots of things can go wrong here...
+ newVersionInfo = new FieldInfo(VersionFieldMapper.NAME, // field name + fieldNumber, // field number + false, // store term vectors + false, // omit norms + false, // store payloads + IndexOptions.NONE, // index options + DocValuesType.NUMERIC, // docvalues + -1, // docvalues generation + Collections.emptyMap() // attributes + ); } else { - newVersionInfo = new FieldInfo(VersionFieldMapper.NAME, versionInfo.isIndexed(), versionInfo.number, - versionInfo.hasVectors(), versionInfo.omitsNorms(), versionInfo.hasPayloads(), - versionInfo.getIndexOptions(), versionInfo.getDocValuesType(), versionInfo.getNormType(), versionInfo.getDocValuesGen(), versionInfo.attributes()); + newVersionInfo = versionInfo; } + newVersionInfo.checkConsistency(); // fail merge immediately if above code is wrong final ArrayList fieldInfoList = new ArrayList<>(); for (FieldInfo info : fieldInfos) { if (info != versionInfo) { @@ -130,7 +136,7 @@ public final class ElasticsearchMergePolicy extends MergePolicy { return versions.get(index); } }; - return new FilterAtomicReader(reader) { + return new FilterLeafReader(reader) { @Override public FieldInfos getFieldInfos() { return newFieldInfos; @@ -156,10 +162,10 @@ public final class ElasticsearchMergePolicy extends MergePolicy { } @Override - public List getMergeReaders() throws IOException { - final List readers = super.getMergeReaders(); - ImmutableList.Builder newReaders = ImmutableList.builder(); - for (AtomicReader reader : readers) { + public List getMergeReaders() throws IOException { + final List readers = super.getMergeReaders(); + ImmutableList.Builder newReaders = ImmutableList.builder(); + for (LeafReader reader : readers) { newReaders.add(filter(reader)); } return newReaders.build(); diff --git a/src/main/java/org/elasticsearch/index/percolator/PercolatorQueriesRegistry.java b/src/main/java/org/elasticsearch/index/percolator/PercolatorQueriesRegistry.java index 7f11e51e2f3..44a939451ba 100644 --- a/src/main/java/org/elasticsearch/index/percolator/PercolatorQueriesRegistry.java +++ b/src/main/java/org/elasticsearch/index/percolator/PercolatorQueriesRegistry.java @@ -21,13 +21,13 @@ package org.elasticsearch.index.percolator; import org.apache.lucene.index.Term; import org.apache.lucene.queries.TermFilter; +import org.apache.lucene.search.ConstantScoreQuery; import org.apache.lucene.search.Query; import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.CloseableThreadLocal; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.inject.Inject; -import org.elasticsearch.common.lucene.search.XConstantScoreQuery; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.util.concurrent.ConcurrentCollections; import org.elasticsearch.common.xcontent.XContentBuilder; @@ -263,7 +263,7 @@ public class PercolatorQueriesRegistry extends AbstractIndexShardComponent { shard.refresh(new Engine.Refresh("percolator_load_queries").force(true)); // Maybe add a mode load? This isn't really a write. 
We need write b/c state=post_recovery try (Engine.Searcher searcher = shard.acquireSearcher("percolator_load_queries", IndexShard.Mode.WRITE)) { - Query query = new XConstantScoreQuery( + Query query = new ConstantScoreQuery( indexCache.filter().cache( new TermFilter(new Term(TypeFieldMapper.NAME, PercolatorService.TYPE_NAME)) ) diff --git a/src/main/java/org/elasticsearch/index/percolator/QueriesLoaderCollector.java b/src/main/java/org/elasticsearch/index/percolator/QueriesLoaderCollector.java index 58a05baedeb..8805b5d71f6 100644 --- a/src/main/java/org/elasticsearch/index/percolator/QueriesLoaderCollector.java +++ b/src/main/java/org/elasticsearch/index/percolator/QueriesLoaderCollector.java @@ -19,11 +19,11 @@ package org.elasticsearch.index.percolator; import com.google.common.collect.Maps; -import org.apache.lucene.index.AtomicReader; -import org.apache.lucene.index.AtomicReaderContext; -import org.apache.lucene.search.Collector; +import org.apache.lucene.index.LeafReader; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.search.Query; import org.apache.lucene.search.Scorer; +import org.apache.lucene.search.SimpleCollector; import org.apache.lucene.util.BytesRef; import org.elasticsearch.common.logging.ESLogger; import org.elasticsearch.index.fielddata.IndexFieldData; @@ -39,7 +39,7 @@ import java.util.Map; /** */ -final class QueriesLoaderCollector extends Collector { +final class QueriesLoaderCollector extends SimpleCollector { private final Map queries = Maps.newHashMap(); private final JustSourceFieldsVisitor fieldsVisitor = new JustSourceFieldsVisitor(); @@ -48,7 +48,7 @@ final class QueriesLoaderCollector extends Collector { private final ESLogger logger; private SortedBinaryDocValues idValues; - private AtomicReader reader; + private LeafReader reader; QueriesLoaderCollector(PercolatorQueriesRegistry percolator, ESLogger logger, MapperService mapperService, IndexFieldDataService indexFieldDataService) { this.percolator = percolator; @@ -88,7 +88,7 @@ final class QueriesLoaderCollector extends Collector { } @Override - public void setNextReader(AtomicReaderContext context) throws IOException { + protected void doSetNextReader(LeafReaderContext context) throws IOException { reader = context.reader(); idValues = idFieldData.load(context).getBytesValues(); } diff --git a/src/main/java/org/elasticsearch/index/query/ConstantScoreQueryParser.java b/src/main/java/org/elasticsearch/index/query/ConstantScoreQueryParser.java index 6b60741becd..83cb3294bc2 100644 --- a/src/main/java/org/elasticsearch/index/query/ConstantScoreQueryParser.java +++ b/src/main/java/org/elasticsearch/index/query/ConstantScoreQueryParser.java @@ -24,7 +24,6 @@ import org.apache.lucene.search.Filter; import org.apache.lucene.search.Query; import org.elasticsearch.common.Strings; import org.elasticsearch.common.inject.Inject; -import org.elasticsearch.common.lucene.search.XConstantScoreQuery; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.cache.filter.support.CacheKeyFilter; @@ -99,7 +98,7 @@ public class ConstantScoreQueryParser implements QueryParser { filter = parseContext.cacheFilter(filter, cacheKey); } - Query query1 = new XConstantScoreQuery(filter); + Query query1 = new ConstantScoreQuery(filter); query1.setBoost(boost); return query1; } diff --git a/src/main/java/org/elasticsearch/index/query/ExistsFilterParser.java b/src/main/java/org/elasticsearch/index/query/ExistsFilterParser.java index 4bc7ea7e25d..ce021f795f0 100644 --- 
a/src/main/java/org/elasticsearch/index/query/ExistsFilterParser.java +++ b/src/main/java/org/elasticsearch/index/query/ExistsFilterParser.java @@ -19,6 +19,7 @@ package org.elasticsearch.index.query; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.search.BooleanClause; import org.apache.lucene.search.Filter; import org.apache.lucene.search.TermRangeFilter; @@ -105,7 +106,7 @@ public class ExistsFilterParser implements FilterParser { nonNullFieldMappers = smartNameFieldMappers; } Filter filter = null; - if (fieldNamesMapper!= null && fieldNamesMapper.mapper().fieldType().indexed()) { + if (fieldNamesMapper!= null && fieldNamesMapper.mapper().fieldType().indexOptions() != IndexOptions.NONE) { final String f; if (smartNameFieldMappers != null && smartNameFieldMappers.hasMapper()) { f = smartNameFieldMappers.mapper().names().indexName(); diff --git a/src/main/java/org/elasticsearch/index/query/FilteredQueryParser.java b/src/main/java/org/elasticsearch/index/query/FilteredQueryParser.java index dbfe928b591..228fb91d466 100644 --- a/src/main/java/org/elasticsearch/index/query/FilteredQueryParser.java +++ b/src/main/java/org/elasticsearch/index/query/FilteredQueryParser.java @@ -19,13 +19,19 @@ package org.elasticsearch.index.query; +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.search.ConstantScoreQuery; +import org.apache.lucene.search.DocIdSet; import org.apache.lucene.search.Filter; import org.apache.lucene.search.FilteredQuery; +import org.apache.lucene.search.FilteredQuery.FilterStrategy; import org.apache.lucene.search.Query; +import org.apache.lucene.search.Scorer; +import org.apache.lucene.search.Weight; +import org.apache.lucene.util.Bits; import org.elasticsearch.common.inject.Inject; +import org.elasticsearch.common.lucene.docset.DocIdSets; import org.elasticsearch.common.lucene.search.Queries; -import org.elasticsearch.common.lucene.search.XConstantScoreQuery; -import org.elasticsearch.common.lucene.search.XFilteredQuery; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.cache.filter.support.CacheKeyFilter; @@ -38,6 +44,69 @@ public class FilteredQueryParser implements QueryParser { public static final String NAME = "filtered"; + public static final FilterStrategy ALWAYS_RANDOM_ACCESS_FILTER_STRATEGY = new CustomRandomAccessFilterStrategy(0); + + public static final CustomRandomAccessFilterStrategy CUSTOM_FILTER_STRATEGY = new CustomRandomAccessFilterStrategy(); + + /** + * Extends {@link org.apache.lucene.search.FilteredQuery.RandomAccessFilterStrategy}. + *

    + * Adds a threshold value, which defaults to -1. When set to -1, it will check if the filter docSet is + * *not* a fast docSet, and if not, it will use {@link FilteredQuery#QUERY_FIRST_FILTER_STRATEGY} (since + * the assumption is that it's a "slow" filter that is better computed only on whatever matched the query). + *

    + * If the threshold value is 0, it always tries to pass "down" the filter as acceptDocs, and if the filter + * can't be represented as Bits (never really), then it uses {@link FilteredQuery#LEAP_FROG_QUERY_FIRST_STRATEGY}. + *

    + * If the above conditions are not met, then it reverts to the {@link FilteredQuery.RandomAccessFilterStrategy} logic, + * with the threshold used to control {@link #useRandomAccess(org.apache.lucene.util.Bits, int)}. */ + public static class CustomRandomAccessFilterStrategy extends FilteredQuery.RandomAccessFilterStrategy { + + private final int threshold; + + public CustomRandomAccessFilterStrategy() { + this.threshold = -1; + } + + public CustomRandomAccessFilterStrategy(int threshold) { + this.threshold = threshold; + } + + @Override + public Scorer filteredScorer(LeafReaderContext context, Weight weight, DocIdSet docIdSet) throws IOException { + // CHANGE: If threshold is 0, always pass down the accept docs, don't pay the price of calling nextDoc even... + if (threshold == 0) { + final Bits filterAcceptDocs = docIdSet.bits(); + if (filterAcceptDocs != null) { + return weight.scorer(context, filterAcceptDocs); + } else { + return FilteredQuery.LEAP_FROG_QUERY_FIRST_STRATEGY.filteredScorer(context, weight, docIdSet); + } + } + + // CHANGE: handle "default" value + if (threshold == -1) { + // default value: don't iterate over the filter, only apply it after the query if it's not a "fast" docIdSet + if (!DocIdSets.isFastIterator(docIdSet)) { + return FilteredQuery.QUERY_FIRST_FILTER_STRATEGY.filteredScorer(context, weight, docIdSet); + } + } + + return super.filteredScorer(context, weight, docIdSet); + } + + @Override + protected boolean useRandomAccess(Bits bits, long filterCost) { + int multiplier = threshold; + if (threshold == -1) { + // default + multiplier = 100; + } + return filterCost * multiplier > bits.length(); + } + } + @Inject public FilteredQueryParser() { } @@ -61,7 +130,7 @@ public class FilteredQueryParser implements QueryParser { String currentFieldName = null; XContentParser.Token token; - FilteredQuery.FilterStrategy filterStrategy = XFilteredQuery.CUSTOM_FILTER_STRATEGY; + FilteredQuery.FilterStrategy filterStrategy = CUSTOM_FILTER_STRATEGY; while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { if (token == XContentParser.Token.FIELD_NAME) { @@ -81,15 +150,15 @@ if ("query_first".equals(value) || "queryFirst".equals(value)) { filterStrategy = FilteredQuery.QUERY_FIRST_FILTER_STRATEGY; } else if ("random_access_always".equals(value) || "randomAccessAlways".equals(value)) { - filterStrategy = XFilteredQuery.ALWAYS_RANDOM_ACCESS_FILTER_STRATEGY; + filterStrategy = ALWAYS_RANDOM_ACCESS_FILTER_STRATEGY; } else if ("leap_frog".equals(value) || "leapFrog".equals(value)) { filterStrategy = FilteredQuery.LEAP_FROG_QUERY_FIRST_STRATEGY; } else if (value.startsWith("random_access_")) { int threshold = Integer.parseInt(value.substring("random_access_".length())); - filterStrategy = new XFilteredQuery.CustomRandomAccessFilterStrategy(threshold); + filterStrategy = new CustomRandomAccessFilterStrategy(threshold); } else if (value.startsWith("randomAccess")) { int threshold = Integer.parseInt(value.substring("randomAccess".length())); - filterStrategy = new XFilteredQuery.CustomRandomAccessFilterStrategy(threshold); + filterStrategy = new CustomRandomAccessFilterStrategy(threshold); } else if ("leap_frog_query_first".equals(value) || "leapFrogQueryFirst".equals(value)) { filterStrategy = FilteredQuery.LEAP_FROG_QUERY_FIRST_STRATEGY; } else if ("leap_frog_filter_first".equals(value) || "leapFrogFilterFirst".equals(value)) { @@ -138,12 +207,12 @@ public class FilteredQueryParser implements QueryParser { // if it's a
match_all query, use constant_score if (Queries.isConstantMatchAllQuery(query)) { - Query q = new XConstantScoreQuery(filter); + Query q = new ConstantScoreQuery(filter); q.setBoost(boost); return q; } - XFilteredQuery filteredQuery = new XFilteredQuery(query, filter, filterStrategy); + FilteredQuery filteredQuery = new FilteredQuery(query, filter, filterStrategy); filteredQuery.setBoost(boost); if (queryName != null) { parseContext.addNamedQuery(queryName, filteredQuery); diff --git a/src/main/java/org/elasticsearch/index/query/GeoShapeFilterParser.java b/src/main/java/org/elasticsearch/index/query/GeoShapeFilterParser.java index 4f4f2a1fdf2..73220a6d151 100644 --- a/src/main/java/org/elasticsearch/index/query/GeoShapeFilterParser.java +++ b/src/main/java/org/elasticsearch/index/query/GeoShapeFilterParser.java @@ -20,12 +20,16 @@ package org.elasticsearch.index.query; import com.spatial4j.core.shape.Shape; + +import org.apache.lucene.search.BooleanClause; import org.apache.lucene.search.Filter; import org.apache.lucene.spatial.prefix.PrefixTreeStrategy; +import org.apache.lucene.spatial.prefix.RecursivePrefixTreeStrategy; import org.elasticsearch.common.geo.ShapeRelation; import org.elasticsearch.common.geo.builders.ShapeBuilder; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.inject.internal.Nullable; +import org.elasticsearch.common.lucene.search.XBooleanFilter; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.cache.filter.support.CacheKeyFilter; import org.elasticsearch.index.mapper.FieldMapper; @@ -175,7 +179,20 @@ public class GeoShapeFilterParser implements FilterParser { if (strategyName != null) { strategy = shapeFieldMapper.resolveStrategy(strategyName); } - Filter filter = strategy.makeFilter(GeoShapeQueryParser.getArgs(shape, shapeRelation)); + + Filter filter; + if (strategy instanceof RecursivePrefixTreeStrategy && shapeRelation == ShapeRelation.DISJOINT) { + // this strategy doesn't support disjoint anymore: but it did before, including creating lucene fieldcache (!) 
+ // in this case, execute disjoint as exists && !intersects + XBooleanFilter bool = new XBooleanFilter(); + Filter exists = ExistsFilterParser.newFilter(parseContext, fieldName, null); + Filter intersects = strategy.makeFilter(GeoShapeQueryParser.getArgs(shape, ShapeRelation.INTERSECTS)); + bool.add(exists, BooleanClause.Occur.MUST); + bool.add(intersects, BooleanClause.Occur.MUST_NOT); + filter = bool; + } else { + filter = strategy.makeFilter(GeoShapeQueryParser.getArgs(shape, shapeRelation)); + } if (cache) { filter = parseContext.cacheFilter(filter, cacheKey); diff --git a/src/main/java/org/elasticsearch/index/query/GeoShapeQueryParser.java b/src/main/java/org/elasticsearch/index/query/GeoShapeQueryParser.java index 84f0dbb26f5..a7190443ed7 100644 --- a/src/main/java/org/elasticsearch/index/query/GeoShapeQueryParser.java +++ b/src/main/java/org/elasticsearch/index/query/GeoShapeQueryParser.java @@ -19,8 +19,12 @@ package org.elasticsearch.index.query; +import org.apache.lucene.search.BooleanClause; +import org.apache.lucene.search.ConstantScoreQuery; +import org.apache.lucene.search.Filter; import org.apache.lucene.search.Query; import org.apache.lucene.spatial.prefix.PrefixTreeStrategy; +import org.apache.lucene.spatial.prefix.RecursivePrefixTreeStrategy; import org.apache.lucene.spatial.query.SpatialArgs; import org.apache.lucene.spatial.query.SpatialOperation; import org.elasticsearch.ElasticsearchIllegalArgumentException; @@ -29,6 +33,7 @@ import org.elasticsearch.common.Strings; import org.elasticsearch.common.geo.ShapeRelation; import org.elasticsearch.common.geo.builders.ShapeBuilder; import org.elasticsearch.common.inject.Inject; +import org.elasticsearch.common.lucene.search.XBooleanFilter; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.mapper.FieldMapper; import org.elasticsearch.index.mapper.MapperService; @@ -152,7 +157,19 @@ public class GeoShapeQueryParser implements QueryParser { if (strategyName != null) { strategy = shapeFieldMapper.resolveStrategy(strategyName); } - Query query = strategy.makeQuery(getArgs(shape, shapeRelation)); + Query query; + if (strategy instanceof RecursivePrefixTreeStrategy && shapeRelation == ShapeRelation.DISJOINT) { + // this strategy doesn't support disjoint anymore: but it did before, including creating lucene fieldcache (!) 
+ // in this case, execute disjoint as exists && !intersects + XBooleanFilter bool = new XBooleanFilter(); + Filter exists = ExistsFilterParser.newFilter(parseContext, fieldName, null); + Filter intersects = strategy.makeFilter(getArgs(shape, ShapeRelation.INTERSECTS)); + bool.add(exists, BooleanClause.Occur.MUST); + bool.add(intersects, BooleanClause.Occur.MUST_NOT); + query = new ConstantScoreQuery(bool); + } else { + query = strategy.makeQuery(getArgs(shape, shapeRelation)); + } query.setBoost(boost); if (queryName != null) { parseContext.addNamedQuery(queryName, query); diff --git a/src/main/java/org/elasticsearch/index/query/HasChildFilterParser.java b/src/main/java/org/elasticsearch/index/query/HasChildFilterParser.java index 78e102d0b31..6b4d362d69a 100644 --- a/src/main/java/org/elasticsearch/index/query/HasChildFilterParser.java +++ b/src/main/java/org/elasticsearch/index/query/HasChildFilterParser.java @@ -19,12 +19,12 @@ package org.elasticsearch.index.query; import org.apache.lucene.search.Filter; +import org.apache.lucene.search.FilteredQuery; import org.apache.lucene.search.Query; +import org.apache.lucene.search.join.BitDocIdSetFilter; import org.elasticsearch.common.Strings; import org.elasticsearch.common.inject.Inject; -import org.elasticsearch.common.lucene.search.XFilteredQuery; import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilter; import org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData; import org.elasticsearch.index.mapper.DocumentMapper; import org.elasticsearch.index.mapper.internal.ParentFieldMapper; @@ -138,7 +138,7 @@ public class HasChildFilterParser implements FilterParser { String parentType = parentFieldMapper.type(); // wrap the query with type query - query = new XFilteredQuery(query, parseContext.cacheFilter(childDocMapper.typeFilter(), null)); + query = new FilteredQuery(query, parseContext.cacheFilter(childDocMapper.typeFilter(), null)); DocumentMapper parentDocMapper = parseContext.mapperService().documentMapper(parentType); if (parentDocMapper == null) { @@ -149,12 +149,12 @@ public class HasChildFilterParser implements FilterParser { throw new QueryParsingException(parseContext.index(), "[has_child] 'max_children' is less than 'min_children'"); } - FixedBitSetFilter nonNestedDocsFilter = null; + BitDocIdSetFilter nonNestedDocsFilter = null; if (parentDocMapper.hasNestedObjects()) { - nonNestedDocsFilter = parseContext.fixedBitSetFilter(NonNestedDocsFilter.INSTANCE); + nonNestedDocsFilter = parseContext.bitsetFilter(NonNestedDocsFilter.INSTANCE); } - FixedBitSetFilter parentFilter = parseContext.fixedBitSetFilter(parentDocMapper.typeFilter()); + BitDocIdSetFilter parentFilter = parseContext.bitsetFilter(parentDocMapper.typeFilter()); ParentChildIndexFieldData parentChildIndexFieldData = parseContext.getForField(parentFieldMapper); Query childrenQuery; diff --git a/src/main/java/org/elasticsearch/index/query/HasChildQueryParser.java b/src/main/java/org/elasticsearch/index/query/HasChildQueryParser.java index d7c8668453d..29bf5b12e60 100644 --- a/src/main/java/org/elasticsearch/index/query/HasChildQueryParser.java +++ b/src/main/java/org/elasticsearch/index/query/HasChildQueryParser.java @@ -19,12 +19,12 @@ package org.elasticsearch.index.query; +import org.apache.lucene.search.FilteredQuery; import org.apache.lucene.search.Query; +import org.apache.lucene.search.join.BitDocIdSetFilter; import org.elasticsearch.common.Strings; import 
org.elasticsearch.common.inject.Inject; -import org.elasticsearch.common.lucene.search.XFilteredQuery; import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilter; import org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData; import org.elasticsearch.index.mapper.DocumentMapper; import org.elasticsearch.index.mapper.internal.ParentFieldMapper; @@ -146,16 +146,16 @@ public class HasChildQueryParser implements QueryParser { throw new QueryParsingException(parseContext.index(), "[has_child] 'max_children' is less than 'min_children'"); } - FixedBitSetFilter nonNestedDocsFilter = null; + BitDocIdSetFilter nonNestedDocsFilter = null; if (parentDocMapper.hasNestedObjects()) { - nonNestedDocsFilter = parseContext.fixedBitSetFilter(NonNestedDocsFilter.INSTANCE); + nonNestedDocsFilter = parseContext.bitsetFilter(NonNestedDocsFilter.INSTANCE); } // wrap the query with type query - innerQuery = new XFilteredQuery(innerQuery, parseContext.cacheFilter(childDocMapper.typeFilter(), null)); + innerQuery = new FilteredQuery(innerQuery, parseContext.cacheFilter(childDocMapper.typeFilter(), null)); Query query; - FixedBitSetFilter parentFilter = parseContext.fixedBitSetFilter(parentDocMapper.typeFilter()); + BitDocIdSetFilter parentFilter = parseContext.bitsetFilter(parentDocMapper.typeFilter()); ParentChildIndexFieldData parentChildIndexFieldData = parseContext.getForField(parentFieldMapper); if (minChildren > 1 || maxChildren > 0 || scoreType != ScoreType.NONE) { query = new ChildrenQuery(parentChildIndexFieldData, parentType, childType, parentFilter, innerQuery, scoreType, minChildren, diff --git a/src/main/java/org/elasticsearch/index/query/HasParentQueryParser.java b/src/main/java/org/elasticsearch/index/query/HasParentQueryParser.java index 3ebca6270af..8c060326803 100644 --- a/src/main/java/org/elasticsearch/index/query/HasParentQueryParser.java +++ b/src/main/java/org/elasticsearch/index/query/HasParentQueryParser.java @@ -20,14 +20,14 @@ package org.elasticsearch.index.query; import org.apache.lucene.search.BooleanClause; import org.apache.lucene.search.Filter; +import org.apache.lucene.search.FilteredQuery; import org.apache.lucene.search.Query; +import org.apache.lucene.search.join.BitDocIdSetFilter; import org.elasticsearch.common.Strings; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.lucene.search.NotFilter; import org.elasticsearch.common.lucene.search.XBooleanFilter; -import org.elasticsearch.common.lucene.search.XFilteredQuery; import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilter; import org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData; import org.elasticsearch.index.mapper.DocumentMapper; import org.elasticsearch.index.mapper.internal.ParentFieldMapper; @@ -181,8 +181,8 @@ public class HasParentQueryParser implements QueryParser { } // wrap the query with type query - innerQuery = new XFilteredQuery(innerQuery, parseContext.cacheFilter(parentDocMapper.typeFilter(), null)); - FixedBitSetFilter childrenFilter = parseContext.fixedBitSetFilter(new NotFilter(parentFilter)); + innerQuery = new FilteredQuery(innerQuery, parseContext.cacheFilter(parentDocMapper.typeFilter(), null)); + BitDocIdSetFilter childrenFilter = parseContext.bitsetFilter(new NotFilter(parentFilter)); if (score) { return new ParentQuery(parentChildIndexFieldData, innerQuery, parentDocMapper.type(), childrenFilter); } else { diff 
--git a/src/main/java/org/elasticsearch/index/query/IndexQueryParserService.java b/src/main/java/org/elasticsearch/index/query/IndexQueryParserService.java index dc741b1c0bc..05e3d5f5c92 100644 --- a/src/main/java/org/elasticsearch/index/query/IndexQueryParserService.java +++ b/src/main/java/org/elasticsearch/index/query/IndexQueryParserService.java @@ -38,8 +38,8 @@ import org.elasticsearch.index.AbstractIndexComponent; import org.elasticsearch.index.Index; import org.elasticsearch.index.analysis.AnalysisService; import org.elasticsearch.index.cache.IndexCache; +import org.elasticsearch.index.cache.bitset.BitsetFilterCache; import org.elasticsearch.index.engine.IndexEngine; -import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilterCache; import org.elasticsearch.index.fielddata.IndexFieldDataService; import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.index.mapper.internal.AllFieldMapper; @@ -91,7 +91,7 @@ public class IndexQueryParserService extends AbstractIndexComponent { final IndexFieldDataService fieldDataService; - final FixedBitSetFilterCache fixedBitSetFilterCache; + final BitsetFilterCache bitsetFilterCache; final IndexEngine indexEngine; @@ -109,7 +109,7 @@ public class IndexQueryParserService extends AbstractIndexComponent { IndicesQueriesRegistry indicesQueriesRegistry, ScriptService scriptService, AnalysisService analysisService, MapperService mapperService, IndexCache indexCache, IndexFieldDataService fieldDataService, - IndexEngine indexEngine, FixedBitSetFilterCache fixedBitSetFilterCache, + IndexEngine indexEngine, BitsetFilterCache bitsetFilterCache, @Nullable SimilarityService similarityService, @Nullable Map namedQueryParsers, @Nullable Map namedFilterParsers) { @@ -121,7 +121,7 @@ public class IndexQueryParserService extends AbstractIndexComponent { this.indexCache = indexCache; this.fieldDataService = fieldDataService; this.indexEngine = indexEngine; - this.fixedBitSetFilterCache = fixedBitSetFilterCache; + this.bitsetFilterCache = bitsetFilterCache; this.defaultField = indexSettings.get(DEFAULT_FIELD, AllFieldMapper.NAME); this.queryStringLenient = indexSettings.getAsBoolean(QUERY_STRING_LENIENT, false); diff --git a/src/main/java/org/elasticsearch/index/query/MissingFilterParser.java b/src/main/java/org/elasticsearch/index/query/MissingFilterParser.java index d50d5aaaae2..9b8f48393a6 100644 --- a/src/main/java/org/elasticsearch/index/query/MissingFilterParser.java +++ b/src/main/java/org/elasticsearch/index/query/MissingFilterParser.java @@ -19,6 +19,7 @@ package org.elasticsearch.index.query; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.search.BooleanClause; import org.apache.lucene.search.Filter; import org.apache.lucene.search.TermRangeFilter; @@ -34,7 +35,6 @@ import org.elasticsearch.index.mapper.internal.FieldNamesFieldMapper; import java.io.IOException; import java.util.List; -import java.util.Set; import static org.elasticsearch.index.query.support.QueryParsers.wrapSmartNameFilter; @@ -126,7 +126,7 @@ public class MissingFilterParser implements FilterParser { nonNullFieldMappers = smartNameFieldMappers; } Filter filter = null; - if (fieldNamesMapper != null && fieldNamesMapper.mapper().fieldType().indexed()) { + if (fieldNamesMapper != null && fieldNamesMapper.mapper().fieldType().indexOptions() != IndexOptions.NONE) { final String f; if (smartNameFieldMappers != null && smartNameFieldMappers.hasMapper()) { f = smartNameFieldMappers.mapper().names().indexName(); diff --git 
a/src/main/java/org/elasticsearch/index/query/NestedFilterParser.java b/src/main/java/org/elasticsearch/index/query/NestedFilterParser.java index 557d371324b..6d5174d1a42 100644 --- a/src/main/java/org/elasticsearch/index/query/NestedFilterParser.java +++ b/src/main/java/org/elasticsearch/index/query/NestedFilterParser.java @@ -19,18 +19,18 @@ package org.elasticsearch.index.query; +import org.apache.lucene.search.ConstantScoreQuery; import org.apache.lucene.search.Filter; +import org.apache.lucene.search.FilteredQuery; import org.apache.lucene.search.Query; +import org.apache.lucene.search.join.BitDocIdSetFilter; import org.apache.lucene.search.join.ScoreMode; import org.apache.lucene.search.join.ToParentBlockJoinQuery; import org.elasticsearch.common.Strings; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.lucene.search.Queries; -import org.elasticsearch.common.lucene.search.XConstantScoreQuery; -import org.elasticsearch.common.lucene.search.XFilteredQuery; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.cache.filter.support.CacheKeyFilter; -import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilter; import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.index.mapper.object.ObjectMapper; import org.elasticsearch.index.search.nested.NonNestedDocsFilter; @@ -114,7 +114,7 @@ public class NestedFilterParser implements FilterParser { } if (filter != null) { - query = new XConstantScoreQuery(filter); + query = new ConstantScoreQuery(filter); } query.setBoost(boost); @@ -131,23 +131,22 @@ public class NestedFilterParser implements FilterParser { throw new QueryParsingException(parseContext.index(), "[nested] nested object under path [" + path + "] is not of nested type"); } - FixedBitSetFilter childFilter = parseContext.fixedBitSetFilter(objectMapper.nestedTypeFilter()); + BitDocIdSetFilter childFilter = parseContext.bitsetFilter(objectMapper.nestedTypeFilter()); usAsParentFilter.filter = childFilter; // wrap the child query to only work on the nested path type - query = new XFilteredQuery(query, childFilter); + query = new FilteredQuery(query, childFilter); - Filter parentFilter = currentParentFilterContext; + BitDocIdSetFilter parentFilter = currentParentFilterContext; if (parentFilter == null) { - parentFilter = NonNestedDocsFilter.INSTANCE; + parentFilter = parseContext.bitsetFilter(NonNestedDocsFilter.INSTANCE); // don't do special parent filtering, since we might have same nested mapping on two different types //if (mapper.hasDocMapper()) { // // filter based on the type... 
// parentFilter = mapper.docMapper().typeFilter(); //} + } else { + parentFilter = parseContext.bitsetFilter(parentFilter); } - // if the filter cache is disabled, then we still have a filter that is not cached while ToParentBlockJoinQuery - // expects FixedBitSet instances - parentFilter = parseContext.fixedBitSetFilter(parentFilter); Filter nestedFilter = Queries.wrap(new ToParentBlockJoinQuery(query, parentFilter, ScoreMode.None), parseContext); diff --git a/src/main/java/org/elasticsearch/index/query/NestedQueryParser.java b/src/main/java/org/elasticsearch/index/query/NestedQueryParser.java index ae67f96c383..b86b1b60936 100644 --- a/src/main/java/org/elasticsearch/index/query/NestedQueryParser.java +++ b/src/main/java/org/elasticsearch/index/query/NestedQueryParser.java @@ -19,19 +19,18 @@ package org.elasticsearch.index.query; -import org.apache.lucene.index.AtomicReaderContext; -import org.apache.lucene.search.DocIdSet; +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.search.ConstantScoreQuery; import org.apache.lucene.search.Filter; +import org.apache.lucene.search.FilteredQuery; import org.apache.lucene.search.Query; +import org.apache.lucene.search.join.BitDocIdSetFilter; import org.apache.lucene.search.join.ScoreMode; import org.apache.lucene.search.join.ToParentBlockJoinQuery; -import org.apache.lucene.util.Bits; +import org.apache.lucene.util.BitDocIdSet; import org.elasticsearch.common.Strings; import org.elasticsearch.common.inject.Inject; -import org.elasticsearch.common.lucene.search.XConstantScoreQuery; -import org.elasticsearch.common.lucene.search.XFilteredQuery; import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilter; import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.index.mapper.object.ObjectMapper; import org.elasticsearch.index.search.nested.NonNestedDocsFilter; @@ -123,7 +122,7 @@ public class NestedQueryParser implements QueryParser { } if (filter != null) { - query = new XConstantScoreQuery(filter); + query = new ConstantScoreQuery(filter); } MapperService.SmartNameObjectMapper mapper = parseContext.smartObjectMapper(path); @@ -138,21 +137,22 @@ public class NestedQueryParser implements QueryParser { throw new QueryParsingException(parseContext.index(), "[nested] nested object under path [" + path + "] is not of nested type"); } - FixedBitSetFilter childFilter = parseContext.fixedBitSetFilter(objectMapper.nestedTypeFilter()); + BitDocIdSetFilter childFilter = parseContext.bitsetFilter(objectMapper.nestedTypeFilter()); usAsParentFilter.filter = childFilter; // wrap the child query to only work on the nested path type - query = new XFilteredQuery(query, childFilter); + query = new FilteredQuery(query, childFilter); - Filter parentFilter = currentParentFilterContext; + BitDocIdSetFilter parentFilter = currentParentFilterContext; if (parentFilter == null) { - parentFilter = NonNestedDocsFilter.INSTANCE; + parentFilter = parseContext.bitsetFilter(NonNestedDocsFilter.INSTANCE); // don't do special parent filtering, since we might have same nested mapping on two different types //if (mapper.hasDocMapper()) { // // filter based on the type... 
// parentFilter = mapper.docMapper().typeFilter(); //} + } else { + parentFilter = parseContext.bitsetFilter(parentFilter); } - parentFilter = parseContext.fixedBitSetFilter(parentFilter); ToParentBlockJoinQuery joinQuery = new ToParentBlockJoinQuery(query, parentFilter, scoreMode); joinQuery.setBoost(boost); if (queryName != null) { @@ -167,9 +167,9 @@ public class NestedQueryParser implements QueryParser { static ThreadLocal parentFilterContext = new ThreadLocal<>(); - static class LateBindingParentFilter extends Filter { + static class LateBindingParentFilter extends BitDocIdSetFilter { - Filter filter; + BitDocIdSetFilter filter; @Override public int hashCode() { @@ -187,9 +187,8 @@ public class NestedQueryParser implements QueryParser { } @Override - public DocIdSet getDocIdSet(AtomicReaderContext ctx, Bits liveDocs) throws IOException { - //LUCENE 4 UPGRADE just passing on ctx and live docs here - return filter.getDocIdSet(ctx, liveDocs); + public BitDocIdSet getDocIdSet(LeafReaderContext ctx) throws IOException { + return filter.getDocIdSet(ctx); } } } diff --git a/src/main/java/org/elasticsearch/index/query/QueryParseContext.java b/src/main/java/org/elasticsearch/index/query/QueryParseContext.java index e7076cbce22..a94b86f485e 100644 --- a/src/main/java/org/elasticsearch/index/query/QueryParseContext.java +++ b/src/main/java/org/elasticsearch/index/query/QueryParseContext.java @@ -25,6 +25,7 @@ import org.apache.lucene.queryparser.classic.MapperQueryParser; import org.apache.lucene.queryparser.classic.QueryParserSettings; import org.apache.lucene.search.Filter; import org.apache.lucene.search.Query; +import org.apache.lucene.search.join.BitDocIdSetFilter; import org.apache.lucene.search.similarities.Similarity; import org.elasticsearch.Version; import org.elasticsearch.common.Nullable; @@ -37,7 +38,6 @@ import org.elasticsearch.index.analysis.AnalysisService; import org.elasticsearch.index.cache.filter.support.CacheKeyFilter; import org.elasticsearch.index.cache.query.parser.QueryParserCache; import org.elasticsearch.index.engine.IndexEngine; -import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilter; import org.elasticsearch.index.fielddata.IndexFieldData; import org.elasticsearch.index.mapper.FieldMapper; import org.elasticsearch.index.mapper.FieldMappers; @@ -183,8 +183,8 @@ public class QueryParseContext { return queryParser; } - public FixedBitSetFilter fixedBitSetFilter(Filter filter) { - return indexQueryParser.fixedBitSetFilterCache.getFixedBitSetFilter(filter); + public BitDocIdSetFilter bitsetFilter(Filter filter) { + return indexQueryParser.bitsetFilterCache.getBitDocIdSetFilter(filter); } public Filter cacheFilter(Filter filter, @Nullable CacheKeyFilter.Key cacheKey) { diff --git a/src/main/java/org/elasticsearch/index/query/ScriptFilterParser.java b/src/main/java/org/elasticsearch/index/query/ScriptFilterParser.java index ca7cff3f7f0..9d8e879660a 100644 --- a/src/main/java/org/elasticsearch/index/query/ScriptFilterParser.java +++ b/src/main/java/org/elasticsearch/index/query/ScriptFilterParser.java @@ -19,15 +19,18 @@ package org.elasticsearch.index.query; -import org.apache.lucene.index.AtomicReaderContext; +import java.io.IOException; +import java.util.Map; + +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.search.BitsFilteredDocIdSet; import org.apache.lucene.search.DocIdSet; +import org.apache.lucene.search.DocValuesDocIdSet; import org.apache.lucene.search.Filter; import org.apache.lucene.util.Bits; import 
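Both nested parsers above now keep the parent filter typed as BitDocIdSetFilter end to end, because in Lucene 5 ToParentBlockJoinQuery takes that type directly instead of any Filter that happens to produce a FixedBitSet. The call shape, sketched; the helper name and the Avg score mode are illustrative only:

    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.join.BitDocIdSetFilter;
    import org.apache.lucene.search.join.ScoreMode;
    import org.apache.lucene.search.join.ToParentBlockJoinQuery;

    static Query joinToParent(Query childQuery, BitDocIdSetFilter parentsFilter) {
        // parentsFilter marks the parent documents; matches from
        // childQuery are aggregated up to their enclosing parent
        return new ToParentBlockJoinQuery(childQuery, parentsFilter, ScoreMode.Avg);
    }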
org.elasticsearch.ElasticsearchIllegalArgumentException; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.inject.Inject; -import org.elasticsearch.common.lucene.docset.MatchDocIdSet; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.cache.filter.support.CacheKeyFilter; import org.elasticsearch.script.ScriptParameterParser; @@ -36,9 +39,6 @@ import org.elasticsearch.script.ScriptService; import org.elasticsearch.script.SearchScript; import org.elasticsearch.search.lookup.SearchLookup; -import java.io.IOException; -import java.util.Map; - import static com.google.common.collect.Maps.newHashMap; /** @@ -168,13 +168,13 @@ public class ScriptFilterParser implements FilterParser { } @Override - public DocIdSet getDocIdSet(AtomicReaderContext context, Bits acceptDocs) throws IOException { + public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptDocs) throws IOException { searchScript.setNextReader(context); // LUCENE 4 UPGRADE: we can simply wrap this here since it is not cacheable and if we are not top level we will get a null passed anyway return BitsFilteredDocIdSet.wrap(new ScriptDocSet(context.reader().maxDoc(), acceptDocs, searchScript), acceptDocs); } - static class ScriptDocSet extends MatchDocIdSet { + static class ScriptDocSet extends DocValuesDocIdSet { private final SearchScript searchScript; @@ -198,6 +198,11 @@ public class ScriptFilterParser implements FilterParser { } throw new ElasticsearchIllegalArgumentException("Can't handle type [" + val + "] in script filter"); } + + @Override + public long ramBytesUsed() { + return 0; + } } } } \ No newline at end of file diff --git a/src/main/java/org/elasticsearch/index/query/TopChildrenQueryParser.java b/src/main/java/org/elasticsearch/index/query/TopChildrenQueryParser.java index 986ac4a91c4..28143a4cd5c 100644 --- a/src/main/java/org/elasticsearch/index/query/TopChildrenQueryParser.java +++ b/src/main/java/org/elasticsearch/index/query/TopChildrenQueryParser.java @@ -18,12 +18,12 @@ */ package org.elasticsearch.index.query; +import org.apache.lucene.search.FilteredQuery; import org.apache.lucene.search.Query; +import org.apache.lucene.search.join.BitDocIdSetFilter; import org.elasticsearch.common.Strings; import org.elasticsearch.common.inject.Inject; -import org.elasticsearch.common.lucene.search.XFilteredQuery; import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilter; import org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData; import org.elasticsearch.index.mapper.DocumentMapper; import org.elasticsearch.index.mapper.internal.ParentFieldMapper; @@ -126,14 +126,14 @@ public class TopChildrenQueryParser implements QueryParser { } String parentType = childDocMapper.parentFieldMapper().type(); - FixedBitSetFilter nonNestedDocsFilter = null; + BitDocIdSetFilter nonNestedDocsFilter = null; if (childDocMapper.hasNestedObjects()) { - nonNestedDocsFilter = parseContext.fixedBitSetFilter(NonNestedDocsFilter.INSTANCE); + nonNestedDocsFilter = parseContext.bitsetFilter(NonNestedDocsFilter.INSTANCE); } innerQuery.setBoost(boost); // wrap the query with type query - innerQuery = new XFilteredQuery(innerQuery, parseContext.cacheFilter(childDocMapper.typeFilter(), null)); + innerQuery = new FilteredQuery(innerQuery, parseContext.cacheFilter(childDocMapper.typeFilter(), null)); ParentChildIndexFieldData parentChildIndexFieldData = parseContext.getForField(parentFieldMapper); TopChildrenQuery query 
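ScriptFilterParser's ScriptDocSet above moves from the removed MatchDocIdSet to Lucene's DocValuesDocIdSet and has to add ramBytesUsed(), since DocIdSet is Accountable in Lucene 5. A self-contained sketch of the same pattern; the filter and its even-docs predicate are invented for illustration:

    import java.io.IOException;

    import org.apache.lucene.index.LeafReaderContext;
    import org.apache.lucene.search.DocIdSet;
    import org.apache.lucene.search.DocValuesDocIdSet;
    import org.apache.lucene.search.Filter;
    import org.apache.lucene.util.Bits;

    class EvenDocsFilter extends Filter {

        @Override
        public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptDocs) throws IOException {
            return new DocValuesDocIdSet(context.reader().maxDoc(), acceptDocs) {
                @Override
                protected boolean matchDoc(int doc) {
                    return (doc & 1) == 0; // placeholder predicate
                }

                @Override
                public long ramBytesUsed() {
                    return 0; // nothing held on heap beyond the wrapper
                }
            };
        }

        @Override
        public String toString(String field) {
            return "EvenDocsFilter";
        }
    }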
= new TopChildrenQuery(parentChildIndexFieldData, innerQuery, childType, parentType, scoreType, factor, incrementalFactor, nonNestedDocsFilter); if (queryName != null) { diff --git a/src/main/java/org/elasticsearch/index/query/functionscore/DecayFunctionParser.java b/src/main/java/org/elasticsearch/index/query/functionscore/DecayFunctionParser.java index 8edea390856..23c154b5eed 100644 --- a/src/main/java/org/elasticsearch/index/query/functionscore/DecayFunctionParser.java +++ b/src/main/java/org/elasticsearch/index/query/functionscore/DecayFunctionParser.java @@ -19,7 +19,7 @@ package org.elasticsearch.index.query.functionscore; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.search.ComplexExplanation; import org.apache.lucene.search.Explanation; import org.elasticsearch.ElasticsearchIllegalArgumentException; @@ -295,7 +295,7 @@ public abstract class DecayFunctionParser implements ScoreFunctionParser { } @Override - public void setNextReader(AtomicReaderContext context) { + public void setNextReader(LeafReaderContext context) { geoPointValues = fieldData.load(context).getGeoPointValues(); } @@ -357,7 +357,7 @@ public abstract class DecayFunctionParser implements ScoreFunctionParser { this.origin = origin; } - public void setNextReader(AtomicReaderContext context) { + public void setNextReader(LeafReaderContext context) { this.doubleValues = this.fieldData.load(context).getDoubleValues(); } diff --git a/src/main/java/org/elasticsearch/index/query/functionscore/FunctionScoreQueryParser.java b/src/main/java/org/elasticsearch/index/query/functionscore/FunctionScoreQueryParser.java index 09cd5b05a0f..5f2e37e5632 100644 --- a/src/main/java/org/elasticsearch/index/query/functionscore/FunctionScoreQueryParser.java +++ b/src/main/java/org/elasticsearch/index/query/functionscore/FunctionScoreQueryParser.java @@ -21,6 +21,7 @@ package org.elasticsearch.index.query.functionscore; import com.google.common.collect.ImmutableMap; import com.google.common.collect.ImmutableMap.Builder; +import org.apache.lucene.search.ConstantScoreQuery; import org.apache.lucene.search.Filter; import org.apache.lucene.search.Query; import org.elasticsearch.ElasticsearchParseException; @@ -29,8 +30,11 @@ import org.elasticsearch.common.Strings; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.lucene.search.MatchAllDocsFilter; import org.elasticsearch.common.lucene.search.Queries; -import org.elasticsearch.common.lucene.search.XConstantScoreQuery; -import org.elasticsearch.common.lucene.search.function.*; +import org.elasticsearch.common.lucene.search.function.CombineFunction; +import org.elasticsearch.common.lucene.search.function.FiltersFunctionScoreQuery; +import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery; +import org.elasticsearch.common.lucene.search.function.ScoreFunction; +import org.elasticsearch.common.lucene.search.function.WeightFactorFunction; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.query.QueryParseContext; import org.elasticsearch.index.query.QueryParser; @@ -100,7 +104,7 @@ public class FunctionScoreQueryParser implements QueryParser { } else if ("query".equals(currentFieldName)) { query = parseContext.parseInnerQuery(); } else if ("filter".equals(currentFieldName)) { - query = new XConstantScoreQuery(parseContext.parseInnerFilter()); + query = new ConstantScoreQuery(parseContext.parseInnerFilter()); } else if 
("score_mode".equals(currentFieldName) || "scoreMode".equals(currentFieldName)) { scoreMode = parseScoreMode(parseContext, parser); } else if ("boost_mode".equals(currentFieldName) || "boostMode".equals(currentFieldName)) { diff --git a/src/main/java/org/elasticsearch/index/query/support/QueryParsers.java b/src/main/java/org/elasticsearch/index/query/support/QueryParsers.java index 830619c782e..6ddad62d30c 100644 --- a/src/main/java/org/elasticsearch/index/query/support/QueryParsers.java +++ b/src/main/java/org/elasticsearch/index/query/support/QueryParsers.java @@ -21,12 +21,12 @@ package org.elasticsearch.index.query.support; import com.google.common.collect.ImmutableList; import org.apache.lucene.search.Filter; +import org.apache.lucene.search.FilteredQuery; import org.apache.lucene.search.MultiTermQuery; import org.apache.lucene.search.Query; import org.elasticsearch.ElasticsearchIllegalArgumentException; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.lucene.search.AndFilter; -import org.elasticsearch.common.lucene.search.XFilteredQuery; import org.elasticsearch.index.mapper.DocumentMapper; import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.index.query.QueryParseContext; @@ -55,7 +55,7 @@ public final class QueryParsers { } public static MultiTermQuery.RewriteMethod parseRewriteMethod(@Nullable String rewriteMethod) { - return parseRewriteMethod(rewriteMethod, MultiTermQuery.CONSTANT_SCORE_AUTO_REWRITE_DEFAULT); + return parseRewriteMethod(rewriteMethod, MultiTermQuery.CONSTANT_SCORE_FILTER_REWRITE); } public static MultiTermQuery.RewriteMethod parseRewriteMethod(@Nullable String rewriteMethod, @Nullable MultiTermQuery.RewriteMethod defaultRewriteMethod) { @@ -63,7 +63,7 @@ public final class QueryParsers { return defaultRewriteMethod; } if ("constant_score_auto".equals(rewriteMethod) || "constant_score_auto".equals(rewriteMethod)) { - return MultiTermQuery.CONSTANT_SCORE_AUTO_REWRITE_DEFAULT; + return MultiTermQuery.CONSTANT_SCORE_FILTER_REWRITE; } if ("scoring_boolean".equals(rewriteMethod) || "scoringBoolean".equals(rewriteMethod)) { return MultiTermQuery.SCORING_BOOLEAN_QUERY_REWRITE; @@ -105,7 +105,7 @@ public final class QueryParsers { return query; } DocumentMapper docMapper = smartFieldMappers.docMapper(); - return new XFilteredQuery(query, parseContext.cacheFilter(docMapper.typeFilter(), null)); + return new FilteredQuery(query, parseContext.cacheFilter(docMapper.typeFilter(), null)); } public static Filter wrapSmartNameFilter(Filter filter, @Nullable MapperService.SmartNameFieldMappers smartFieldMappers, diff --git a/src/main/java/org/elasticsearch/index/query/support/XContentStructure.java b/src/main/java/org/elasticsearch/index/query/support/XContentStructure.java index 264fa2cd9fa..ac5de596b9f 100644 --- a/src/main/java/org/elasticsearch/index/query/support/XContentStructure.java +++ b/src/main/java/org/elasticsearch/index/query/support/XContentStructure.java @@ -19,11 +19,11 @@ package org.elasticsearch.index.query.support; +import org.apache.lucene.search.ConstantScoreQuery; import org.apache.lucene.search.Filter; import org.apache.lucene.search.Query; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.bytes.BytesReference; -import org.elasticsearch.common.lucene.search.XConstantScoreQuery; import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentHelper; import org.elasticsearch.common.xcontent.XContentParser; @@ -113,7 +113,7 @@ public abstract 
class XContentStructure { parseContext.parser(innerParser); try { Filter innerFilter = parseContext.parseInnerFilter(); - return new XConstantScoreQuery(innerFilter); + return new ConstantScoreQuery(innerFilter); } finally { parseContext.parser(old); QueryParseContext.setTypes(origTypes); @@ -174,7 +174,7 @@ public abstract class XContentStructure { String[] origTypes = QueryParseContext.setTypesWithPrevious(types); try { Filter innerFilter = parseContext1.parseInnerFilter(); - query = new XConstantScoreQuery(innerFilter); + query = new ConstantScoreQuery(innerFilter); queryParsed = true; } finally { QueryParseContext.setTypes(origTypes); diff --git a/src/main/java/org/elasticsearch/index/search/FieldDataTermsFilter.java b/src/main/java/org/elasticsearch/index/search/FieldDataTermsFilter.java index 67a42b4448e..206f1a5ba76 100644 --- a/src/main/java/org/elasticsearch/index/search/FieldDataTermsFilter.java +++ b/src/main/java/org/elasticsearch/index/search/FieldDataTermsFilter.java @@ -18,23 +18,24 @@ */ package org.elasticsearch.index.search; +import java.io.IOException; + import com.carrotsearch.hppc.DoubleOpenHashSet; import com.carrotsearch.hppc.LongOpenHashSet; import com.carrotsearch.hppc.ObjectOpenHashSet; -import org.apache.lucene.index.AtomicReaderContext; + +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.SortedNumericDocValues; import org.apache.lucene.search.DocIdSet; +import org.apache.lucene.search.DocValuesDocIdSet; import org.apache.lucene.search.Filter; import org.apache.lucene.util.Bits; import org.apache.lucene.util.BytesRef; -import org.elasticsearch.common.lucene.docset.MatchDocIdSet; import org.elasticsearch.index.fielddata.IndexFieldData; import org.elasticsearch.index.fielddata.IndexNumericFieldData; import org.elasticsearch.index.fielddata.SortedBinaryDocValues; import org.elasticsearch.index.fielddata.SortedNumericDoubleValues; -import java.io.IOException; - /** * Similar to a {@link org.apache.lucene.queries.TermsFilter} but pulls terms from the fielddata. 
*/ @@ -129,12 +130,12 @@ public abstract class FieldDataTermsFilter extends Filter { } @Override - public DocIdSet getDocIdSet(AtomicReaderContext context, Bits acceptDocs) throws IOException { + public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptDocs) throws IOException { // make sure there are terms to filter on if (terms == null || terms.isEmpty()) return null; final SortedBinaryDocValues values = fieldData.load(context).getBytesValues(); // load fielddata - return new MatchDocIdSet(context.reader().maxDoc(), acceptDocs) { + return new DocValuesDocIdSet(context.reader().maxDoc(), acceptDocs) { @Override protected boolean matchDoc(int doc) { values.setDocument(doc); @@ -147,6 +148,11 @@ public abstract class FieldDataTermsFilter extends Filter { return false; } + + @Override + public long ramBytesUsed() { + return 0; + } }; } } @@ -181,14 +187,14 @@ public abstract class FieldDataTermsFilter extends Filter { } @Override - public DocIdSet getDocIdSet(AtomicReaderContext context, Bits acceptDocs) throws IOException { + public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptDocs) throws IOException { // make sure there are terms to filter on if (terms == null || terms.isEmpty()) return null; IndexNumericFieldData numericFieldData = (IndexNumericFieldData) fieldData; if (!numericFieldData.getNumericType().isFloatingPoint()) { final SortedNumericDocValues values = numericFieldData.load(context).getLongValues(); // load fielddata - return new MatchDocIdSet(context.reader().maxDoc(), acceptDocs) { + return new DocValuesDocIdSet(context.reader().maxDoc(), acceptDocs) { @Override protected boolean matchDoc(int doc) { values.setDocument(doc); @@ -240,7 +246,7 @@ public abstract class FieldDataTermsFilter extends Filter { } @Override - public DocIdSet getDocIdSet(AtomicReaderContext context, Bits acceptDocs) throws IOException { + public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptDocs) throws IOException { // make sure there are terms to filter on if (terms == null || terms.isEmpty()) return null; @@ -248,7 +254,7 @@ public abstract class FieldDataTermsFilter extends Filter { IndexNumericFieldData indexNumericFieldData = (IndexNumericFieldData) fieldData; if (indexNumericFieldData.getNumericType().isFloatingPoint()) { final SortedNumericDoubleValues values = indexNumericFieldData.load(context).getDoubleValues(); // load fielddata - return new MatchDocIdSet(context.reader().maxDoc(), acceptDocs) { + return new DocValuesDocIdSet(context.reader().maxDoc(), acceptDocs) { @Override protected boolean matchDoc(int doc) { values.setDocument(doc); diff --git a/src/main/java/org/elasticsearch/index/search/NumericRangeFieldDataFilter.java b/src/main/java/org/elasticsearch/index/search/NumericRangeFieldDataFilter.java index dd21eede8f5..c52eb7899c3 100644 --- a/src/main/java/org/elasticsearch/index/search/NumericRangeFieldDataFilter.java +++ b/src/main/java/org/elasticsearch/index/search/NumericRangeFieldDataFilter.java @@ -19,18 +19,18 @@ package org.elasticsearch.index.search; -import org.apache.lucene.index.AtomicReaderContext; +import java.io.IOException; + +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.SortedNumericDocValues; import org.apache.lucene.search.DocIdSet; +import org.apache.lucene.search.DocValuesDocIdSet; import org.apache.lucene.search.Filter; import org.apache.lucene.util.Bits; import org.apache.lucene.util.NumericUtils; -import org.elasticsearch.common.lucene.docset.MatchDocIdSet; import 
org.elasticsearch.index.fielddata.IndexNumericFieldData; import org.elasticsearch.index.fielddata.SortedNumericDoubleValues; -import java.io.IOException; - /** * A numeric filter that can be much faster than {@link org.apache.lucene.search.NumericRangeFilter} at the * expense of loading numeric values of the field to memory using {@link org.elasticsearch.index.cache.field.data.FieldDataCache}. @@ -112,7 +112,7 @@ public abstract class NumericRangeFieldDataFilter extends Filter { public static NumericRangeFieldDataFilter newByteRange(IndexNumericFieldData indexFieldData, Byte lowerVal, Byte upperVal, boolean includeLower, boolean includeUpper) { return new NumericRangeFieldDataFilter(indexFieldData, lowerVal, upperVal, includeLower, includeUpper) { @Override - public DocIdSet getDocIdSet(AtomicReaderContext ctx, Bits acceptedDocs) throws IOException { + public DocIdSet getDocIdSet(LeafReaderContext ctx, Bits acceptedDocs) throws IOException { final byte inclusiveLowerPoint, inclusiveUpperPoint; if (lowerVal != null) { byte i = lowerVal.byteValue(); @@ -144,7 +144,7 @@ public abstract class NumericRangeFieldDataFilter extends Filter { public static NumericRangeFieldDataFilter newShortRange(IndexNumericFieldData indexFieldData, Short lowerVal, Short upperVal, boolean includeLower, boolean includeUpper) { return new NumericRangeFieldDataFilter(indexFieldData, lowerVal, upperVal, includeLower, includeUpper) { @Override - public DocIdSet getDocIdSet(AtomicReaderContext ctx, Bits acceptedDocs) throws IOException { + public DocIdSet getDocIdSet(LeafReaderContext ctx, Bits acceptedDocs) throws IOException { final short inclusiveLowerPoint, inclusiveUpperPoint; if (lowerVal != null) { short i = lowerVal.shortValue(); @@ -175,7 +175,7 @@ public abstract class NumericRangeFieldDataFilter extends Filter { public static NumericRangeFieldDataFilter newIntRange(IndexNumericFieldData indexFieldData, Integer lowerVal, Integer upperVal, boolean includeLower, boolean includeUpper) { return new NumericRangeFieldDataFilter(indexFieldData, lowerVal, upperVal, includeLower, includeUpper) { @Override - public DocIdSet getDocIdSet(AtomicReaderContext ctx, Bits acceptedDocs) throws IOException { + public DocIdSet getDocIdSet(LeafReaderContext ctx, Bits acceptedDocs) throws IOException { final int inclusiveLowerPoint, inclusiveUpperPoint; if (lowerVal != null) { int i = lowerVal.intValue(); @@ -206,7 +206,7 @@ public abstract class NumericRangeFieldDataFilter extends Filter { public static NumericRangeFieldDataFilter newLongRange(IndexNumericFieldData indexFieldData, Long lowerVal, Long upperVal, boolean includeLower, boolean includeUpper) { return new NumericRangeFieldDataFilter(indexFieldData, lowerVal, upperVal, includeLower, includeUpper) { @Override - public DocIdSet getDocIdSet(AtomicReaderContext ctx, Bits acceptedDocs) throws IOException { + public DocIdSet getDocIdSet(LeafReaderContext ctx, Bits acceptedDocs) throws IOException { final long inclusiveLowerPoint, inclusiveUpperPoint; if (lowerVal != null) { long i = lowerVal.longValue(); @@ -238,7 +238,7 @@ public abstract class NumericRangeFieldDataFilter extends Filter { public static NumericRangeFieldDataFilter newFloatRange(IndexNumericFieldData indexFieldData, Float lowerVal, Float upperVal, boolean includeLower, boolean includeUpper) { return new NumericRangeFieldDataFilter(indexFieldData, lowerVal, upperVal, includeLower, includeUpper) { @Override - public DocIdSet getDocIdSet(AtomicReaderContext ctx, Bits acceptedDocs) throws IOException { + public 
DocIdSet getDocIdSet(LeafReaderContext ctx, Bits acceptedDocs) throws IOException { // we transform the floating point numbers to sortable integers // using NumericUtils to easier find the next bigger/lower value final float inclusiveLowerPoint, inclusiveUpperPoint; @@ -273,7 +273,7 @@ public abstract class NumericRangeFieldDataFilter extends Filter { public static NumericRangeFieldDataFilter newDoubleRange(IndexNumericFieldData indexFieldData, Double lowerVal, Double upperVal, boolean includeLower, boolean includeUpper) { return new NumericRangeFieldDataFilter(indexFieldData, lowerVal, upperVal, includeLower, includeUpper) { @Override - public DocIdSet getDocIdSet(AtomicReaderContext ctx, Bits acceptedDocs) throws IOException { + public DocIdSet getDocIdSet(LeafReaderContext ctx, Bits acceptedDocs) throws IOException { // we transform the floating point numbers to sortable integers // using NumericUtils to easier find the next bigger/lower value final double inclusiveLowerPoint, inclusiveUpperPoint; @@ -305,7 +305,7 @@ public abstract class NumericRangeFieldDataFilter extends Filter { }; } - private static final class DoubleRangeMatchDocIdSet extends MatchDocIdSet { + private static final class DoubleRangeMatchDocIdSet extends DocValuesDocIdSet { private final SortedNumericDoubleValues values; private final double inclusiveLowerPoint; private final double inclusiveUpperPoint; @@ -332,7 +332,7 @@ public abstract class NumericRangeFieldDataFilter extends Filter { } - private static final class LongRangeMatchDocIdSet extends MatchDocIdSet { + private static final class LongRangeMatchDocIdSet extends DocValuesDocIdSet { private final SortedNumericDocValues values; private final long inclusiveLowerPoint; private final long inclusiveUpperPoint; diff --git a/src/main/java/org/elasticsearch/index/search/child/ChildrenConstantScoreQuery.java b/src/main/java/org/elasticsearch/index/search/child/ChildrenConstantScoreQuery.java index cc93b5a0392..45b5362dd72 100644 --- a/src/main/java/org/elasticsearch/index/search/child/ChildrenConstantScoreQuery.java +++ b/src/main/java/org/elasticsearch/index/search/child/ChildrenConstantScoreQuery.java @@ -19,18 +19,27 @@ package org.elasticsearch.index.search.child; -import org.apache.lucene.index.AtomicReaderContext; import org.apache.lucene.index.IndexReader; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.SortedDocValues; import org.apache.lucene.index.Term; -import org.apache.lucene.search.*; +import org.apache.lucene.search.BitsFilteredDocIdSet; +import org.apache.lucene.search.CollectionTerminatedException; +import org.apache.lucene.search.DocIdSet; +import org.apache.lucene.search.DocIdSetIterator; +import org.apache.lucene.search.Explanation; +import org.apache.lucene.search.Filter; +import org.apache.lucene.search.IndexSearcher; +import org.apache.lucene.search.Query; +import org.apache.lucene.search.Scorer; +import org.apache.lucene.search.Weight; +import org.apache.lucene.search.XFilteredDocIdSetIterator; +import org.apache.lucene.search.join.BitDocIdSetFilter; import org.apache.lucene.util.Bits; import org.apache.lucene.util.LongBitSet; import org.elasticsearch.common.lucene.docset.DocIdSets; -import org.elasticsearch.common.lucene.search.ApplyAcceptedDocsFilter; import org.elasticsearch.common.lucene.search.NoopCollector; import org.elasticsearch.common.lucene.search.Queries; -import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilter; import org.elasticsearch.index.fielddata.AtomicParentChildFieldData; 
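ChildrenConstantScoreQuery above (and ChildrenQuery, ParentConstantScoreQuery, and ParentQuery below) drop the ApplyAcceptedDocsFilter wrapper: in Lucene 5 every Filter receives the accepted docs in getDocIdSet, and BitsFilteredDocIdSet.wrap tolerates null for both the set and the bits. A sketch of the resulting pattern, with a hypothetical helper name:

    import java.io.IOException;

    import org.apache.lucene.index.LeafReaderContext;
    import org.apache.lucene.search.BitsFilteredDocIdSet;
    import org.apache.lucene.search.DocIdSet;
    import org.apache.lucene.search.Filter;
    import org.apache.lucene.util.Bits;

    static DocIdSet acceptedDocIdSet(Filter filter, LeafReaderContext context, Bits acceptDocs) throws IOException {
        // wrap() returns the set unchanged when either argument is null,
        // so callers no longer special-case deletions themselves
        return BitsFilteredDocIdSet.wrap(filter.getDocIdSet(context, acceptDocs), acceptDocs);
    }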
import org.elasticsearch.index.fielddata.IndexParentChildFieldData; import org.elasticsearch.search.internal.SearchContext; @@ -48,14 +57,14 @@ public class ChildrenConstantScoreQuery extends Query { private Query originalChildQuery; private final String parentType; private final String childType; - private final FixedBitSetFilter parentFilter; + private final BitDocIdSetFilter parentFilter; private final int shortCircuitParentDocSet; - private final FixedBitSetFilter nonNestedDocsFilter; + private final BitDocIdSetFilter nonNestedDocsFilter; private Query rewrittenChildQuery; private IndexReader rewriteIndexReader; - public ChildrenConstantScoreQuery(IndexParentChildFieldData parentChildIndexFieldData, Query childQuery, String parentType, String childType, FixedBitSetFilter parentFilter, int shortCircuitParentDocSet, FixedBitSetFilter nonNestedDocsFilter) { + public ChildrenConstantScoreQuery(IndexParentChildFieldData parentChildIndexFieldData, Query childQuery, String parentType, String childType, BitDocIdSetFilter parentFilter, int shortCircuitParentDocSet, BitDocIdSetFilter nonNestedDocsFilter) { this.parentChildIndexFieldData = parentChildIndexFieldData; this.parentFilter = parentFilter; this.parentType = parentType; @@ -98,7 +107,7 @@ public class ChildrenConstantScoreQuery extends Query { assert rewriteIndexReader == searcher.getIndexReader() : "not equal, rewriteIndexReader=" + rewriteIndexReader + " searcher.getIndexReader()=" + searcher.getIndexReader(); final long valueCount; - List leaves = searcher.getIndexReader().leaves(); + List leaves = searcher.getIndexReader().leaves(); if (globalIfd == null || leaves.isEmpty()) { return Queries.newMatchNoDocsQuery().createWeight(searcher); } else { @@ -182,7 +191,7 @@ public class ChildrenConstantScoreQuery extends Query { private float queryWeight; public ParentWeight(Filter parentFilter, IndexParentChildFieldData globalIfd, Filter shortCircuitFilter, ParentOrdCollector collector, long remaining) { - this.parentFilter = new ApplyAcceptedDocsFilter(parentFilter); + this.parentFilter = parentFilter; this.globalIfd = globalIfd; this.shortCircuitFilter = shortCircuitFilter; this.collector = collector; @@ -190,7 +199,7 @@ public class ChildrenConstantScoreQuery extends Query { } @Override - public Explanation explain(AtomicReaderContext context, int doc) throws IOException { + public Explanation explain(LeafReaderContext context, int doc) throws IOException { return new Explanation(getBoost(), "not implemented yet..."); } @@ -212,7 +221,7 @@ public class ChildrenConstantScoreQuery extends Query { } @Override - public Scorer scorer(AtomicReaderContext context, Bits acceptDocs) throws IOException { + public Scorer scorer(LeafReaderContext context, Bits acceptDocs) throws IOException { if (remaining == 0) { return null; } @@ -275,7 +284,7 @@ public class ChildrenConstantScoreQuery extends Query { } @Override - public void setNextReader(AtomicReaderContext context) throws IOException { + protected void doSetNextReader(LeafReaderContext context) throws IOException { values = indexFieldData.load(context).getOrdinalsValues(parentType); } @@ -285,7 +294,7 @@ public class ChildrenConstantScoreQuery extends Query { } - private final static class ParentOrdIterator extends FilteredDocIdSetIterator { + private final static class ParentOrdIterator extends XFilteredDocIdSetIterator { private final LongBitSet parentOrds; private final SortedDocValues ordinals; @@ -301,12 +310,7 @@ public class ChildrenConstantScoreQuery extends Query { @Override protected 
boolean match(int doc) { if (parentWeight.remaining == 0) { - try { - advance(DocIdSetIterator.NO_MORE_DOCS); - } catch (IOException e) { - throw new RuntimeException(e); - } - return false; + throw new CollectionTerminatedException(); } long parentOrd = ordinals.getOrd(doc); diff --git a/src/main/java/org/elasticsearch/index/search/child/ChildrenQuery.java b/src/main/java/org/elasticsearch/index/search/child/ChildrenQuery.java index ff6f132fb2b..5fe0995eee9 100644 --- a/src/main/java/org/elasticsearch/index/search/child/ChildrenQuery.java +++ b/src/main/java/org/elasticsearch/index/search/child/ChildrenQuery.java @@ -18,25 +18,34 @@ */ package org.elasticsearch.index.search.child; -import org.apache.lucene.index.AtomicReaderContext; import org.apache.lucene.index.IndexReader; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.SortedDocValues; import org.apache.lucene.index.Term; -import org.apache.lucene.search.*; +import org.apache.lucene.search.BitsFilteredDocIdSet; +import org.apache.lucene.search.CollectionTerminatedException; +import org.apache.lucene.search.DocIdSet; +import org.apache.lucene.search.DocIdSetIterator; +import org.apache.lucene.search.Explanation; +import org.apache.lucene.search.Filter; +import org.apache.lucene.search.IndexSearcher; +import org.apache.lucene.search.Query; +import org.apache.lucene.search.Scorer; +import org.apache.lucene.search.Weight; +import org.apache.lucene.search.XFilteredDocIdSetIterator; +import org.apache.lucene.search.join.BitDocIdSetFilter; import org.apache.lucene.util.Bits; import org.apache.lucene.util.ToStringUtils; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.common.lease.Releasable; import org.elasticsearch.common.lease.Releasables; import org.elasticsearch.common.lucene.docset.DocIdSets; -import org.elasticsearch.common.lucene.search.ApplyAcceptedDocsFilter; import org.elasticsearch.common.lucene.search.NoopCollector; import org.elasticsearch.common.lucene.search.Queries; import org.elasticsearch.common.util.BigArrays; import org.elasticsearch.common.util.FloatArray; import org.elasticsearch.common.util.IntArray; import org.elasticsearch.common.util.LongHash; -import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilter; import org.elasticsearch.index.fielddata.IndexParentChildFieldData; import org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData; import org.elasticsearch.search.internal.SearchContext; @@ -59,18 +68,18 @@ public class ChildrenQuery extends Query { protected final ParentChildIndexFieldData ifd; protected final String parentType; protected final String childType; - protected final FixedBitSetFilter parentFilter; + protected final BitDocIdSetFilter parentFilter; protected final ScoreType scoreType; protected Query originalChildQuery; protected final int minChildren; protected final int maxChildren; protected final int shortCircuitParentDocSet; - protected final FixedBitSetFilter nonNestedDocsFilter; + protected final BitDocIdSetFilter nonNestedDocsFilter; protected Query rewrittenChildQuery; protected IndexReader rewriteIndexReader; - public ChildrenQuery(ParentChildIndexFieldData ifd, String parentType, String childType, FixedBitSetFilter parentFilter, Query childQuery, ScoreType scoreType, int minChildren, int maxChildren, int shortCircuitParentDocSet, FixedBitSetFilter nonNestedDocsFilter) { + public ChildrenQuery(ParentChildIndexFieldData ifd, String parentType, String childType, BitDocIdSetFilter parentFilter, Query childQuery, 
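The ParentOrdIterator change above (and the matching CountParentOrdIterator change below) replaces the old "advance to NO_MORE_DOCS and swallow the IOException" trick with a thrown CollectionTerminatedException, the same signal Lucene's own collectors use for early termination. A sketch of the style against Lucene's stock FilteredDocIdSetIterator (the patch itself uses the XFilteredDocIdSetIterator fork); the budget logic is invented:

    import org.apache.lucene.search.CollectionTerminatedException;
    import org.apache.lucene.search.DocIdSetIterator;
    import org.apache.lucene.search.FilteredDocIdSetIterator;

    final class BudgetedIterator extends FilteredDocIdSetIterator {

        private int remaining; // hypothetical match budget

        BudgetedIterator(DocIdSetIterator inner, int budget) {
            super(inner);
            this.remaining = budget;
        }

        @Override
        protected boolean match(int doc) {
            if (remaining == 0) {
                // caught by the per-segment collection loop, which then
                // moves on to the next segment
                throw new CollectionTerminatedException();
            }
            remaining--;
            return true;
        }
    }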
ScoreType scoreType, int minChildren, int maxChildren, int shortCircuitParentDocSet, BitDocIdSetFilter nonNestedDocsFilter) { this.ifd = ifd; this.parentType = parentType; this.childType = childType; @@ -224,7 +233,7 @@ public class ChildrenQuery extends Query { parentFilter = ParentIdsFilter.createShortCircuitFilter(nonNestedDocsFilter, sc, parentType, collector.values, collector.parentIdxs, numFoundParents); } else { - parentFilter = new ApplyAcceptedDocsFilter(this.parentFilter); + parentFilter = this.parentFilter; } return new ParentWeight(rewrittenChildQuery.createWeight(searcher), parentFilter, numFoundParents, collector, minChildren, maxChildren); @@ -252,7 +261,7 @@ public class ChildrenQuery extends Query { } @Override - public Explanation explain(AtomicReaderContext context, int doc) throws IOException { + public Explanation explain(LeafReaderContext context, int doc) throws IOException { return new Explanation(getBoost(), "not implemented yet..."); } @@ -279,7 +288,7 @@ public class ChildrenQuery extends Query { } @Override - public Scorer scorer(AtomicReaderContext context, Bits acceptDocs) throws IOException { + public Scorer scorer(LeafReaderContext context, Bits acceptDocs) throws IOException { DocIdSet parentsSet = parentFilter.getDocIdSet(context, acceptDocs); if (DocIdSets.isEmpty(parentsSet) || remaining == 0) { return null; @@ -364,7 +373,7 @@ public class ChildrenQuery extends Query { } @Override - public void setNextReader(AtomicReaderContext context) throws IOException { + protected void doSetNextReader(LeafReaderContext context) throws IOException { values = globalIfd.load(context).getOrdinalsValues(parentType); } @@ -691,7 +700,7 @@ public class ChildrenQuery extends Query { } } - private final static class CountParentOrdIterator extends FilteredDocIdSetIterator { + private final static class CountParentOrdIterator extends XFilteredDocIdSetIterator { private final LongHash parentIds; protected final IntArray occurrences; @@ -713,12 +722,7 @@ public class ChildrenQuery extends Query { @Override protected boolean match(int doc) { if (parentWeight.remaining == 0) { - try { - advance(DocIdSetIterator.NO_MORE_DOCS); - } catch (IOException e) { - throw new RuntimeException(e); - } - return false; + throw new CollectionTerminatedException(); } final long parentOrd = ordinals.getOrd(doc); diff --git a/src/main/java/org/elasticsearch/index/search/child/CustomQueryWrappingFilter.java b/src/main/java/org/elasticsearch/index/search/child/CustomQueryWrappingFilter.java index 35d18e2cf07..547c57389e1 100644 --- a/src/main/java/org/elasticsearch/index/search/child/CustomQueryWrappingFilter.java +++ b/src/main/java/org/elasticsearch/index/search/child/CustomQueryWrappingFilter.java @@ -18,13 +18,12 @@ */ package org.elasticsearch.index.search.child; -import org.apache.lucene.index.AtomicReader; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReader; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.search.*; import org.apache.lucene.util.Bits; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.common.lease.Releasable; -import org.elasticsearch.common.lucene.docset.DocIdSets; import org.elasticsearch.common.lucene.search.NoCacheFilter; import org.elasticsearch.search.internal.SearchContext; import org.elasticsearch.search.internal.SearchContext.Lifetime; @@ -43,7 +42,7 @@ public class CustomQueryWrappingFilter extends NoCacheFilter implements Releasab private final Query query; private 
IndexSearcher searcher; - private IdentityHashMap docIdSets; + private IdentityHashMap docIdSets; /** Constructs a filter which only matches documents matching * query. @@ -60,7 +59,7 @@ public class CustomQueryWrappingFilter extends NoCacheFilter implements Releasab } @Override - public DocIdSet getDocIdSet(final AtomicReaderContext context, final Bits acceptDocs) throws IOException { + public DocIdSet getDocIdSet(final LeafReaderContext context, final Bits acceptDocs) throws IOException { final SearchContext searchContext = SearchContext.current(); if (docIdSets == null) { assert searcher == null; @@ -70,25 +69,27 @@ public class CustomQueryWrappingFilter extends NoCacheFilter implements Releasab searchContext.addReleasable(this, Lifetime.COLLECTION); final Weight weight = searcher.createNormalizedWeight(query); - for (final AtomicReaderContext leaf : searcher.getTopReaderContext().leaves()) { - final DocIdSet set = DocIdSets.toCacheable(leaf.reader(), new DocIdSet() { + for (final LeafReaderContext leaf : searcher.getTopReaderContext().leaves()) { + final DocIdSet set = new DocIdSet() { @Override public DocIdSetIterator iterator() throws IOException { return weight.scorer(leaf, null); } @Override public boolean isCacheable() { return false; } - }); + + @Override + public long ramBytesUsed() { + return 0; + } + }; docIdSets.put(leaf.reader(), set); } } else { assert searcher == SearchContext.current().searcher(); } final DocIdSet set = docIdSets.get(context.reader()); - if (set != null && acceptDocs != null) { - return BitsFilteredDocIdSet.wrap(set, acceptDocs); - } - return set; + return BitsFilteredDocIdSet.wrap(set, acceptDocs); } @Override diff --git a/src/main/java/org/elasticsearch/index/search/child/ParentConstantScoreQuery.java b/src/main/java/org/elasticsearch/index/search/child/ParentConstantScoreQuery.java index 6568972b040..1b3f87a10cb 100644 --- a/src/main/java/org/elasticsearch/index/search/child/ParentConstantScoreQuery.java +++ b/src/main/java/org/elasticsearch/index/search/child/ParentConstantScoreQuery.java @@ -18,18 +18,25 @@ */ package org.elasticsearch.index.search.child; -import org.apache.lucene.index.AtomicReaderContext; import org.apache.lucene.index.IndexReader; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.SortedDocValues; import org.apache.lucene.index.Term; -import org.apache.lucene.search.*; +import org.apache.lucene.search.DocIdSet; +import org.apache.lucene.search.DocIdSetIterator; +import org.apache.lucene.search.Explanation; +import org.apache.lucene.search.Filter; +import org.apache.lucene.search.FilteredDocIdSetIterator; +import org.apache.lucene.search.IndexSearcher; +import org.apache.lucene.search.Query; +import org.apache.lucene.search.Scorer; +import org.apache.lucene.search.Weight; +import org.apache.lucene.search.join.BitDocIdSetFilter; import org.apache.lucene.util.Bits; import org.apache.lucene.util.LongBitSet; import org.elasticsearch.common.lucene.docset.DocIdSets; -import org.elasticsearch.common.lucene.search.ApplyAcceptedDocsFilter; import org.elasticsearch.common.lucene.search.NoopCollector; import org.elasticsearch.common.lucene.search.Queries; -import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilter; import org.elasticsearch.index.fielddata.AtomicParentChildFieldData; import org.elasticsearch.index.fielddata.IndexParentChildFieldData; import org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData; @@ -46,12 +53,12 @@ public class ParentConstantScoreQuery extends Query { private 
final ParentChildIndexFieldData parentChildIndexFieldData; private Query originalParentQuery; private final String parentType; - private final FixedBitSetFilter childrenFilter; + private final BitDocIdSetFilter childrenFilter; private Query rewrittenParentQuery; private IndexReader rewriteIndexReader; - public ParentConstantScoreQuery(ParentChildIndexFieldData parentChildIndexFieldData, Query parentQuery, String parentType, FixedBitSetFilter childrenFilter) { + public ParentConstantScoreQuery(ParentChildIndexFieldData parentChildIndexFieldData, Query parentQuery, String parentType, BitDocIdSetFilter childrenFilter) { this.parentChildIndexFieldData = parentChildIndexFieldData; this.originalParentQuery = parentQuery; this.parentType = parentType; @@ -90,7 +97,7 @@ public class ParentConstantScoreQuery extends Query { assert rewriteIndexReader == searcher.getIndexReader() : "not equal, rewriteIndexReader=" + rewriteIndexReader + " searcher.getIndexReader()=" + searcher.getIndexReader(); final long maxOrd; - List leaves = searcher.getIndexReader().leaves(); + List leaves = searcher.getIndexReader().leaves(); if (globalIfd == null || leaves.isEmpty()) { return Queries.newMatchNoDocsQuery().createWeight(searcher); } else { @@ -162,12 +169,12 @@ public class ParentConstantScoreQuery extends Query { private ChildrenWeight(Filter childrenFilter, ParentOrdsCollector collector, IndexParentChildFieldData globalIfd) { this.globalIfd = globalIfd; - this.childrenFilter = new ApplyAcceptedDocsFilter(childrenFilter); + this.childrenFilter = childrenFilter; this.parentOrds = collector.parentOrds; } @Override - public Explanation explain(AtomicReaderContext context, int doc) throws IOException { + public Explanation explain(LeafReaderContext context, int doc) throws IOException { return new Explanation(getBoost(), "not implemented yet..."); } @@ -189,7 +196,7 @@ public class ParentConstantScoreQuery extends Query { } @Override - public Scorer scorer(AtomicReaderContext context, Bits acceptDocs) throws IOException { + public Scorer scorer(LeafReaderContext context, Bits acceptDocs) throws IOException { DocIdSet childrenDocIdSet = childrenFilter.getDocIdSet(context, acceptDocs); if (DocIdSets.isEmpty(childrenDocIdSet)) { return null; @@ -258,7 +265,7 @@ public class ParentConstantScoreQuery extends Query { } @Override - public void setNextReader(AtomicReaderContext readerContext) throws IOException { + public void doSetNextReader(LeafReaderContext readerContext) throws IOException { globalOrdinals = globalIfd.load(readerContext).getOrdinalsValues(parentType); } diff --git a/src/main/java/org/elasticsearch/index/search/child/ParentIdsFilter.java b/src/main/java/org/elasticsearch/index/search/child/ParentIdsFilter.java index 187e083f4c6..6333315ce3a 100644 --- a/src/main/java/org/elasticsearch/index/search/child/ParentIdsFilter.java +++ b/src/main/java/org/elasticsearch/index/search/child/ParentIdsFilter.java @@ -18,12 +18,25 @@ */ package org.elasticsearch.index.search.child; -import org.apache.lucene.index.*; +import org.apache.lucene.index.DocsEnum; +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.index.SortedDocValues; +import org.apache.lucene.index.Term; +import org.apache.lucene.index.Terms; +import org.apache.lucene.index.TermsEnum; import org.apache.lucene.queries.TermFilter; import org.apache.lucene.search.DocIdSet; import org.apache.lucene.search.DocIdSetIterator; import org.apache.lucene.search.Filter; -import org.apache.lucene.util.*; +import 
org.apache.lucene.search.join.BitDocIdSetFilter; +import org.apache.lucene.util.BitDocIdSet; +import org.apache.lucene.util.BitSet; +import org.apache.lucene.util.Bits; +import org.apache.lucene.util.BytesRef; +import org.apache.lucene.util.BytesRefBuilder; +import org.apache.lucene.util.FixedBitSet; +import org.apache.lucene.util.LongBitSet; +import org.apache.lucene.util.SparseFixedBitSet; import org.elasticsearch.common.lease.Releasables; import org.elasticsearch.common.lucene.search.AndFilter; import org.elasticsearch.common.util.BytesRefHash; @@ -44,7 +57,7 @@ import java.util.List; */ final class ParentIdsFilter extends Filter { - static Filter createShortCircuitFilter(Filter nonNestedDocsFilter, SearchContext searchContext, + static Filter createShortCircuitFilter(BitDocIdSetFilter nonNestedDocsFilter, SearchContext searchContext, String parentType, SortedDocValues globalValues, LongBitSet parentOrds, long numFoundParents) { if (numFoundParents == 1) { @@ -77,7 +90,7 @@ final class ParentIdsFilter extends Filter { } } - static Filter createShortCircuitFilter(Filter nonNestedDocsFilter, SearchContext searchContext, + static Filter createShortCircuitFilter(BitDocIdSetFilter nonNestedDocsFilter, SearchContext searchContext, String parentType, SortedDocValues globalValues, LongHash parentIdxs, long numFoundParents) { if (numFoundParents == 1) { @@ -111,17 +124,17 @@ final class ParentIdsFilter extends Filter { } private final BytesRef parentTypeBr; - private final Filter nonNestedDocsFilter; + private final BitDocIdSetFilter nonNestedDocsFilter; private final BytesRefHash parentIds; - private ParentIdsFilter(String parentType, Filter nonNestedDocsFilter, BytesRefHash parentIds) { + private ParentIdsFilter(String parentType, BitDocIdSetFilter nonNestedDocsFilter, BytesRefHash parentIds) { this.nonNestedDocsFilter = nonNestedDocsFilter; this.parentTypeBr = new BytesRef(parentType); this.parentIds = parentIds; } @Override - public DocIdSet getDocIdSet(AtomicReaderContext context, Bits acceptDocs) throws IOException { + public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptDocs) throws IOException { Terms terms = context.reader().terms(UidFieldMapper.NAME); if (terms == null) { return null; @@ -135,14 +148,14 @@ final class ParentIdsFilter extends Filter { acceptDocs = context.reader().getLiveDocs(); } - FixedBitSet nonNestedDocs = null; + BitSet nonNestedDocs = null; if (nonNestedDocsFilter != null) { - nonNestedDocs = (FixedBitSet) nonNestedDocsFilter.getDocIdSet(context, acceptDocs); + nonNestedDocs = nonNestedDocsFilter.getDocIdSet(context).bits(); } DocsEnum docsEnum = null; - FixedBitSet result = null; - long size = parentIds.size(); + BitSet result = null; + int size = (int) parentIds.size(); for (int i = 0; i < size; i++) { parentIds.get(i, idSpare); BytesRef uid = Uid.createUidAsBytes(parentTypeBr, idSpare, uidSpare); @@ -152,7 +165,15 @@ final class ParentIdsFilter extends Filter { if (result == null) { docId = docsEnum.nextDoc(); if (docId != DocIdSetIterator.NO_MORE_DOCS) { - result = new FixedBitSet(context.reader().maxDoc()); + // very rough heuristic that tries to get an idea of the number of documents + // in the set based on the number of parent ids that we didn't find in this segment + final int expectedCardinality = size / (i + 1); + // similar heuristic to BitDocIdSet.Builder + if (expectedCardinality >= (context.reader().maxDoc() >>> 10)) { + result = new FixedBitSet(context.reader().maxDoc()); + } else { + result = new 
SparseFixedBitSet(context.reader().maxDoc()); + } } else { continue; } @@ -162,13 +183,13 @@ final class ParentIdsFilter extends Filter { continue; } } - if (nonNestedDocs != null && !nonNestedDocs.get(docId)) { + if (nonNestedDocs != null) { docId = nonNestedDocs.nextSetBit(docId); } result.set(docId); assert docsEnum.advance(docId + 1) == DocIdSetIterator.NO_MORE_DOCS : "DocId " + docId + " should have been the last one but docId " + docsEnum.docID() + " exists."; } } - return result; + return result == null ? null : new BitDocIdSet(result); } } \ No newline at end of file diff --git a/src/main/java/org/elasticsearch/index/search/child/ParentQuery.java b/src/main/java/org/elasticsearch/index/search/child/ParentQuery.java index 5770efbdb8d..ea041629642 100644 --- a/src/main/java/org/elasticsearch/index/search/child/ParentQuery.java +++ b/src/main/java/org/elasticsearch/index/search/child/ParentQuery.java @@ -18,21 +18,31 @@ */ package org.elasticsearch.index.search.child; -import org.apache.lucene.index.*; -import org.apache.lucene.search.*; +import org.apache.lucene.index.IndexReader; +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.index.SortedDocValues; +import org.apache.lucene.index.SortedSetDocValues; +import org.apache.lucene.index.Term; +import org.apache.lucene.search.DocIdSet; +import org.apache.lucene.search.DocIdSetIterator; +import org.apache.lucene.search.Explanation; +import org.apache.lucene.search.Filter; +import org.apache.lucene.search.IndexSearcher; +import org.apache.lucene.search.Query; +import org.apache.lucene.search.Scorer; +import org.apache.lucene.search.Weight; +import org.apache.lucene.search.join.BitDocIdSetFilter; import org.apache.lucene.util.Bits; import org.apache.lucene.util.ToStringUtils; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.common.lease.Releasable; import org.elasticsearch.common.lease.Releasables; import org.elasticsearch.common.lucene.docset.DocIdSets; -import org.elasticsearch.common.lucene.search.ApplyAcceptedDocsFilter; import org.elasticsearch.common.lucene.search.NoopCollector; import org.elasticsearch.common.lucene.search.Queries; import org.elasticsearch.common.util.BigArrays; import org.elasticsearch.common.util.FloatArray; import org.elasticsearch.common.util.LongHash; -import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilter; import org.elasticsearch.index.fielddata.IndexParentChildFieldData; import org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData; import org.elasticsearch.search.internal.SearchContext; @@ -51,12 +61,12 @@ public class ParentQuery extends Query { private final ParentChildIndexFieldData parentChildIndexFieldData; private Query originalParentQuery; private final String parentType; - private final FixedBitSetFilter childrenFilter; + private final BitDocIdSetFilter childrenFilter; private Query rewrittenParentQuery; private IndexReader rewriteIndexReader; - public ParentQuery(ParentChildIndexFieldData parentChildIndexFieldData, Query parentQuery, String parentType, FixedBitSetFilter childrenFilter) { + public ParentQuery(ParentChildIndexFieldData parentChildIndexFieldData, Query parentQuery, String parentType, BitDocIdSetFilter childrenFilter) { this.parentChildIndexFieldData = parentChildIndexFieldData; this.originalParentQuery = parentQuery; this.parentType = parentType; @@ -200,7 +210,7 @@ public class ParentQuery extends Query { } @Override - public void setNextReader(AtomicReaderContext context) throws IOException { + protected void 
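The ParentIdsFilter hunk above is one of the places where the new compressed bit sets pay off. Restated as a standalone helper (assumed, simplified from the code above): estimate the cardinality, then pick a dense FixedBitSet at one bit per doc, or the new SparseFixedBitSet when hits are expected to be rare; maxDoc >>> 10, roughly 0.1% of the segment, is the same cutoff BitDocIdSet.Builder uses:

    import org.apache.lucene.util.BitSet;
    import org.apache.lucene.util.FixedBitSet;
    import org.apache.lucene.util.SparseFixedBitSet;

    static BitSet newBitSet(int maxDoc, int expectedCardinality) {
        if (expectedCardinality >= (maxDoc >>> 10)) {
            return new FixedBitSet(maxDoc);   // dense: one bit per doc
        }
        return new SparseFixedBitSet(maxDoc); // sparse: pay per set bit
    }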
doSetNextReader(LeafReaderContext context) throws IOException { values = globalIfd.load(context).getOrdinalsValues(parentType); } @@ -225,14 +235,14 @@ public class ParentQuery extends Query { private ChildWeight(Weight parentWeight, Filter childrenFilter, ParentOrdAndScoreCollector collector, IndexParentChildFieldData globalIfd) { this.parentWeight = parentWeight; - this.childrenFilter = new ApplyAcceptedDocsFilter(childrenFilter); + this.childrenFilter = childrenFilter; this.parentIdxs = collector.parentIdxs; this.scores = collector.scores; this.globalIfd = globalIfd; } @Override - public Explanation explain(AtomicReaderContext context, int doc) throws IOException { + public Explanation explain(LeafReaderContext context, int doc) throws IOException { return new Explanation(getBoost(), "not implemented yet..."); } @@ -253,7 +263,7 @@ public class ParentQuery extends Query { } @Override - public Scorer scorer(AtomicReaderContext context, Bits acceptDocs) throws IOException { + public Scorer scorer(LeafReaderContext context, Bits acceptDocs) throws IOException { DocIdSet childrenDocSet = childrenFilter.getDocIdSet(context, acceptDocs); if (DocIdSets.isEmpty(childrenDocSet)) { return null; diff --git a/src/main/java/org/elasticsearch/index/search/child/TopChildrenQuery.java b/src/main/java/org/elasticsearch/index/search/child/TopChildrenQuery.java index 21e4fca083e..048e8040b55 100644 --- a/src/main/java/org/elasticsearch/index/search/child/TopChildrenQuery.java +++ b/src/main/java/org/elasticsearch/index/search/child/TopChildrenQuery.java @@ -22,15 +22,12 @@ import com.carrotsearch.hppc.IntObjectOpenHashMap; import com.carrotsearch.hppc.ObjectObjectOpenHashMap; import org.apache.lucene.index.*; import org.apache.lucene.search.*; -import org.apache.lucene.util.Bits; -import org.apache.lucene.util.BytesRef; -import org.apache.lucene.util.FixedBitSet; -import org.apache.lucene.util.ToStringUtils; +import org.apache.lucene.util.*; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.ElasticsearchIllegalStateException; import org.elasticsearch.common.lease.Releasable; import org.elasticsearch.common.lucene.search.EmptyScorer; -import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilter; +import org.apache.lucene.search.join.BitDocIdSetFilter; import org.elasticsearch.index.fielddata.IndexParentChildFieldData; import org.elasticsearch.index.mapper.Uid; import org.elasticsearch.index.mapper.internal.UidFieldMapper; @@ -67,14 +64,14 @@ public class TopChildrenQuery extends Query { private final int factor; private final int incrementalFactor; private Query originalChildQuery; - private final FixedBitSetFilter nonNestedDocsFilter; + private final BitDocIdSetFilter nonNestedDocsFilter; // This field will hold the rewritten form of originalChildQuery, so that we can reuse it private Query rewrittenChildQuery; private IndexReader rewriteIndexReader; // Note, the query is expected to already be filtered to only child type docs - public TopChildrenQuery(IndexParentChildFieldData parentChildIndexFieldData, Query childQuery, String childType, String parentType, ScoreType scoreType, int factor, int incrementalFactor, FixedBitSetFilter nonNestedDocsFilter) { + public TopChildrenQuery(IndexParentChildFieldData parentChildIndexFieldData, Query childQuery, String childType, String parentType, ScoreType scoreType, int factor, int incrementalFactor, BitDocIdSetFilter nonNestedDocsFilter) { this.parentChildIndexFieldData = parentChildIndexFieldData; this.originalChildQuery = childQuery; 
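The ParentQuery hunk above illustrates the new per-segment collector contract: Lucene 5 removed Collector.setNextReader(AtomicReaderContext), and collectors now extend SimpleCollector, overriding the protected doSetNextReader(LeafReaderContext) hook and declaring whether they need scores. A minimal sketch of that shape (the class name and counting logic are illustrative, not part of this patch):

    import java.io.IOException;
    import org.apache.lucene.index.LeafReaderContext;
    import org.apache.lucene.search.SimpleCollector;

    public class CountingCollector extends SimpleCollector {
        private long count;

        @Override
        protected void doSetNextReader(LeafReaderContext context) throws IOException {
            // per-segment state is (re)loaded here; in ParentQuery this is where
            // the global ordinals for the parent type are pulled for the new leaf
        }

        @Override
        public void collect(int doc) throws IOException {
            count++; // doc ids passed to collect() are segment-relative
        }

        @Override
        public boolean needsScores() {
            return false;
        }
    }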
this.childType = childType; @@ -173,7 +170,7 @@ public class TopChildrenQuery extends Query { ObjectObjectOpenHashMap> parentDocsPerReader = new ObjectObjectOpenHashMap<>(context.searcher().getIndexReader().leaves().size()); child_hits: for (ScoreDoc scoreDoc : topDocs.scoreDocs) { int readerIndex = ReaderUtil.subIndex(scoreDoc.doc, context.searcher().getIndexReader().leaves()); - AtomicReaderContext subContext = context.searcher().getIndexReader().leaves().get(readerIndex); + LeafReaderContext subContext = context.searcher().getIndexReader().leaves().get(readerIndex); SortedDocValues parentValues = parentChildIndexFieldData.load(subContext).getOrdinalsValues(parentType); int subDoc = scoreDoc.doc - subContext.docBase; @@ -184,11 +181,14 @@ public class TopChildrenQuery extends Query { continue; } // now go over and find the parent doc Id and reader tuple - for (AtomicReaderContext atomicReaderContext : context.searcher().getIndexReader().leaves()) { - AtomicReader indexReader = atomicReaderContext.reader(); - FixedBitSet nonNestedDocs = null; + for (LeafReaderContext atomicReaderContext : context.searcher().getIndexReader().leaves()) { + LeafReader indexReader = atomicReaderContext.reader(); + BitSet nonNestedDocs = null; if (nonNestedDocsFilter != null) { - nonNestedDocs = (FixedBitSet) nonNestedDocsFilter.getDocIdSet(atomicReaderContext, indexReader.getLiveDocs()); + BitDocIdSet nonNestedDocIdSet = nonNestedDocsFilter.getDocIdSet(atomicReaderContext); + if (nonNestedDocIdSet != null) { + nonNestedDocs = nonNestedDocIdSet.bits(); + } } Terms terms = indexReader.terms(UidFieldMapper.NAME); @@ -323,7 +323,7 @@ public class TopChildrenQuery extends Query { } @Override - public Scorer scorer(AtomicReaderContext context, Bits acceptDocs) throws IOException { + public Scorer scorer(LeafReaderContext context, Bits acceptDocs) throws IOException { ParentDoc[] readerParentDocs = parentDocs.get(context.reader().getCoreCacheKey()); if (readerParentDocs != null) { if (scoreType == ScoreType.MIN) { @@ -366,7 +366,7 @@ public class TopChildrenQuery extends Query { } @Override - public Explanation explain(AtomicReaderContext context, int doc) throws IOException { + public Explanation explain(LeafReaderContext context, int doc) throws IOException { return new Explanation(getBoost(), "not implemented yet..."); } } diff --git a/src/main/java/org/elasticsearch/index/search/geo/GeoDistanceFilter.java b/src/main/java/org/elasticsearch/index/search/geo/GeoDistanceFilter.java index 1fdb6b4fad3..a39c766a177 100644 --- a/src/main/java/org/elasticsearch/index/search/geo/GeoDistanceFilter.java +++ b/src/main/java/org/elasticsearch/index/search/geo/GeoDistanceFilter.java @@ -19,8 +19,11 @@ package org.elasticsearch.index.search.geo; -import org.apache.lucene.index.AtomicReaderContext; +import java.io.IOException; + +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.search.DocIdSet; +import org.apache.lucene.search.DocValuesDocIdSet; import org.apache.lucene.search.Filter; import org.apache.lucene.util.Bits; import org.elasticsearch.ElasticsearchIllegalArgumentException; @@ -29,14 +32,11 @@ import org.elasticsearch.common.geo.GeoDistance; import org.elasticsearch.common.geo.GeoPoint; import org.elasticsearch.common.lucene.docset.AndDocIdSet; import org.elasticsearch.common.lucene.docset.DocIdSets; -import org.elasticsearch.common.lucene.docset.MatchDocIdSet; import org.elasticsearch.common.unit.DistanceUnit; import org.elasticsearch.index.fielddata.IndexGeoPointFieldData; import 
org.elasticsearch.index.fielddata.MultiGeoPointValues; import org.elasticsearch.index.mapper.geo.GeoPointFieldMapper; -import java.io.IOException; - /** */ public class GeoDistanceFilter extends Filter { @@ -103,10 +103,10 @@ public class GeoDistanceFilter extends Filter { } @Override - public DocIdSet getDocIdSet(AtomicReaderContext context, Bits acceptedDocs) throws IOException { + public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptedDocs) throws IOException { DocIdSet boundingBoxDocSet = null; if (boundingBoxFilter != null) { - boundingBoxDocSet = boundingBoxFilter.getDocIdSet(context, acceptedDocs); + boundingBoxDocSet = boundingBoxFilter.getDocIdSet(context, null); if (DocIdSets.isEmpty(boundingBoxDocSet)) { return null; } @@ -157,7 +157,7 @@ public class GeoDistanceFilter extends Filter { return result; } - public static class GeoDistanceDocSet extends MatchDocIdSet { + public static class GeoDistanceDocSet extends DocValuesDocIdSet { private final double distance; // in miles private final MultiGeoPointValues values; private final GeoDistance.FixedSourceDistance fixedSourceDistance; diff --git a/src/main/java/org/elasticsearch/index/search/geo/GeoDistanceRangeFilter.java b/src/main/java/org/elasticsearch/index/search/geo/GeoDistanceRangeFilter.java index cbe0a36f0ff..423ced6a849 100644 --- a/src/main/java/org/elasticsearch/index/search/geo/GeoDistanceRangeFilter.java +++ b/src/main/java/org/elasticsearch/index/search/geo/GeoDistanceRangeFilter.java @@ -19,8 +19,11 @@ package org.elasticsearch.index.search.geo; -import org.apache.lucene.index.AtomicReaderContext; +import java.io.IOException; + +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.search.DocIdSet; +import org.apache.lucene.search.DocValuesDocIdSet; import org.apache.lucene.search.Filter; import org.apache.lucene.util.Bits; import org.apache.lucene.util.NumericUtils; @@ -30,14 +33,11 @@ import org.elasticsearch.common.geo.GeoDistance; import org.elasticsearch.common.geo.GeoPoint; import org.elasticsearch.common.lucene.docset.AndDocIdSet; import org.elasticsearch.common.lucene.docset.DocIdSets; -import org.elasticsearch.common.lucene.docset.MatchDocIdSet; import org.elasticsearch.common.unit.DistanceUnit; import org.elasticsearch.index.fielddata.IndexGeoPointFieldData; import org.elasticsearch.index.fielddata.MultiGeoPointValues; import org.elasticsearch.index.mapper.geo.GeoPointFieldMapper; -import java.io.IOException; - /** * */ @@ -113,10 +113,10 @@ public class GeoDistanceRangeFilter extends Filter { } @Override - public DocIdSet getDocIdSet(AtomicReaderContext context, Bits acceptedDocs) throws IOException { + public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptedDocs) throws IOException { DocIdSet boundingBoxDocSet = null; if (boundingBoxFilter != null) { - boundingBoxDocSet = boundingBoxFilter.getDocIdSet(context, acceptedDocs); + boundingBoxDocSet = boundingBoxFilter.getDocIdSet(context, null); if (DocIdSets.isEmpty(boundingBoxDocSet)) { return null; } @@ -170,7 +170,7 @@ public class GeoDistanceRangeFilter extends Filter { return result; } - public static class GeoDistanceRangeDocSet extends MatchDocIdSet { + public static class GeoDistanceRangeDocSet extends DocValuesDocIdSet { private final MultiGeoPointValues values; private final GeoDistance.FixedSourceDistance fixedSourceDistance; diff --git a/src/main/java/org/elasticsearch/index/search/geo/GeoPolygonFilter.java b/src/main/java/org/elasticsearch/index/search/geo/GeoPolygonFilter.java index 
5d11e26a46c..d1da022acc7 100644 --- a/src/main/java/org/elasticsearch/index/search/geo/GeoPolygonFilter.java +++ b/src/main/java/org/elasticsearch/index/search/geo/GeoPolygonFilter.java @@ -19,19 +19,19 @@ package org.elasticsearch.index.search.geo; -import org.apache.lucene.index.AtomicReaderContext; +import java.io.IOException; +import java.util.Arrays; + +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.search.DocIdSet; +import org.apache.lucene.search.DocValuesDocIdSet; import org.apache.lucene.search.Filter; import org.apache.lucene.util.Bits; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.geo.GeoPoint; -import org.elasticsearch.common.lucene.docset.MatchDocIdSet; import org.elasticsearch.index.fielddata.IndexGeoPointFieldData; import org.elasticsearch.index.fielddata.MultiGeoPointValues; -import java.io.IOException; -import java.util.Arrays; - /** * */ @@ -55,7 +55,7 @@ public class GeoPolygonFilter extends Filter { } @Override - public DocIdSet getDocIdSet(AtomicReaderContext context, Bits acceptedDocs) throws IOException { + public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptedDocs) throws IOException { final MultiGeoPointValues values = indexFieldData.load(context).getGeoPointValues(); return new GeoPolygonDocIdSet(context.reader().maxDoc(), acceptedDocs, values, points); } @@ -68,7 +68,7 @@ public class GeoPolygonFilter extends Filter { return sb.toString(); } - public static class GeoPolygonDocIdSet extends MatchDocIdSet { + public static class GeoPolygonDocIdSet extends DocValuesDocIdSet { private final MultiGeoPointValues values; private final GeoPoint[] points; diff --git a/src/main/java/org/elasticsearch/index/search/geo/InMemoryGeoBoundingBoxFilter.java b/src/main/java/org/elasticsearch/index/search/geo/InMemoryGeoBoundingBoxFilter.java index b12f220092e..ef406e879dc 100644 --- a/src/main/java/org/elasticsearch/index/search/geo/InMemoryGeoBoundingBoxFilter.java +++ b/src/main/java/org/elasticsearch/index/search/geo/InMemoryGeoBoundingBoxFilter.java @@ -19,18 +19,18 @@ package org.elasticsearch.index.search.geo; -import org.apache.lucene.index.AtomicReaderContext; +import java.io.IOException; + +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.search.DocIdSet; +import org.apache.lucene.search.DocValuesDocIdSet; import org.apache.lucene.search.Filter; import org.apache.lucene.util.Bits; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.geo.GeoPoint; -import org.elasticsearch.common.lucene.docset.MatchDocIdSet; import org.elasticsearch.index.fielddata.IndexGeoPointFieldData; import org.elasticsearch.index.fielddata.MultiGeoPointValues; -import java.io.IOException; - /** * */ @@ -60,7 +60,7 @@ public class InMemoryGeoBoundingBoxFilter extends Filter { } @Override - public DocIdSet getDocIdSet(AtomicReaderContext context, Bits acceptedDocs) throws IOException { + public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptedDocs) throws IOException { final MultiGeoPointValues values = indexFieldData.load(context).getGeoPointValues(); //checks to see if bounding box crosses 180 degrees @@ -76,7 +76,7 @@ public class InMemoryGeoBoundingBoxFilter extends Filter { return "GeoBoundingBoxFilter(" + indexFieldData.getFieldNames().indexName() + ", " + topLeft + ", " + bottomRight + ")"; } - public static class Meridian180GeoBoundingBoxDocSet extends MatchDocIdSet { + public static class Meridian180GeoBoundingBoxDocSet extends DocValuesDocIdSet { private final 
MultiGeoPointValues values; private final GeoPoint topLeft; private final GeoPoint bottomRight; @@ -103,7 +103,7 @@ public class InMemoryGeoBoundingBoxFilter extends Filter { } } - public static class GeoBoundingBoxDocSet extends MatchDocIdSet { + public static class GeoBoundingBoxDocSet extends DocValuesDocIdSet { private final MultiGeoPointValues values; private final GeoPoint topLeft; private final GeoPoint bottomRight; diff --git a/src/main/java/org/elasticsearch/index/search/nested/IncludeNestedDocsQuery.java b/src/main/java/org/elasticsearch/index/search/nested/IncludeNestedDocsQuery.java index 32d449c3f79..293c9be94f4 100644 --- a/src/main/java/org/elasticsearch/index/search/nested/IncludeNestedDocsQuery.java +++ b/src/main/java/org/elasticsearch/index/search/nested/IncludeNestedDocsQuery.java @@ -19,13 +19,14 @@ package org.elasticsearch.index.search.nested; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.IndexReader; import org.apache.lucene.index.Term; import org.apache.lucene.search.*; +import org.apache.lucene.search.join.BitDocIdSetFilter; +import org.apache.lucene.util.BitSet; import org.apache.lucene.util.Bits; -import org.apache.lucene.util.FixedBitSet; -import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilter; +import org.apache.lucene.util.BitDocIdSet; import java.io.IOException; import java.util.Collection; @@ -39,7 +40,7 @@ import java.util.Set; */ public class IncludeNestedDocsQuery extends Query { - private final FixedBitSetFilter parentFilter; + private final BitDocIdSetFilter parentFilter; private final Query parentQuery; // If we are rewritten, this is the original childQuery we @@ -50,7 +51,7 @@ public class IncludeNestedDocsQuery extends Query { private final Query origParentQuery; - public IncludeNestedDocsQuery(Query parentQuery, FixedBitSetFilter parentFilter) { + public IncludeNestedDocsQuery(Query parentQuery, BitDocIdSetFilter parentFilter) { this.origParentQuery = parentQuery; this.parentQuery = parentQuery; this.parentFilter = parentFilter; @@ -80,9 +81,9 @@ public class IncludeNestedDocsQuery extends Query { private final Query parentQuery; private final Weight parentWeight; - private final FixedBitSetFilter parentsFilter; + private final BitDocIdSetFilter parentsFilter; - IncludeNestedDocsWeight(Query parentQuery, Weight parentWeight, FixedBitSetFilter parentsFilter) { + IncludeNestedDocsWeight(Query parentQuery, Weight parentWeight, BitDocIdSetFilter parentsFilter) { this.parentQuery = parentQuery; this.parentWeight = parentWeight; this.parentsFilter = parentsFilter; @@ -104,7 +105,7 @@ public class IncludeNestedDocsQuery extends Query { } @Override - public Scorer scorer(AtomicReaderContext context, Bits acceptDocs) throws IOException { + public Scorer scorer(LeafReaderContext context, Bits acceptDocs) throws IOException { final Scorer parentScorer = parentWeight.scorer(context, acceptDocs); // no matches @@ -112,7 +113,7 @@ public class IncludeNestedDocsQuery extends Query { return null; } - FixedBitSet parents = parentsFilter.getDocIdSet(context, acceptDocs); + BitDocIdSet parents = parentsFilter.getDocIdSet(context); if (parents == null) { // No matches return null; @@ -123,11 +124,11 @@ public class IncludeNestedDocsQuery extends Query { // No matches return null; } - return new IncludeNestedDocsScorer(this, parentScorer, (FixedBitSet) parents, firstParentDoc); + return new IncludeNestedDocsScorer(this, parentScorer, parents, firstParentDoc); } 
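The TopChildrenQuery and IncludeNestedDocsQuery hunks above show how callers consume the new BitDocIdSetFilter: getDocIdSet takes only the leaf context (the cached bitset is independent of deletions, so there is no acceptDocs parameter), may return null when a segment has no parents, and exposes its BitSet through bits(), whose prevSetBit bounds a nested block. A sketch under those assumptions (firstChildOf is a made-up helper name):

    import java.io.IOException;
    import org.apache.lucene.index.LeafReaderContext;
    import org.apache.lucene.search.join.BitDocIdSetFilter;
    import org.apache.lucene.util.BitDocIdSet;
    import org.apache.lucene.util.BitSet;

    final class NestedBlocks {
        // Children of a parent doc are the docs between the previous parent
        // (exclusive) and the parent itself (exclusive).
        static int firstChildOf(BitDocIdSetFilter parentsFilter, LeafReaderContext context,
                                int parentDoc) throws IOException {
            BitDocIdSet parents = parentsFilter.getDocIdSet(context);
            if (parents == null || parentDoc == 0) {
                return 0; // no parents in this segment, or first possible block
            }
            BitSet parentBits = parents.bits();
            int previousParent = parentBits.prevSetBit(parentDoc - 1);
            return previousParent + 1; // prevSetBit returns -1 if there is none
        }
    }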
@Override - public Explanation explain(AtomicReaderContext context, int doc) throws IOException { + public Explanation explain(LeafReaderContext context, int doc) throws IOException { return null; //Query is used internally and not by users, so explain can be empty } @@ -140,21 +141,21 @@ public class IncludeNestedDocsQuery extends Query { static class IncludeNestedDocsScorer extends Scorer { final Scorer parentScorer; - final FixedBitSet parentBits; + final BitSet parentBits; int currentChildPointer = -1; int currentParentPointer = -1; int currentDoc = -1; - IncludeNestedDocsScorer(Weight weight, Scorer parentScorer, FixedBitSet parentBits, int currentParentPointer) { + IncludeNestedDocsScorer(Weight weight, Scorer parentScorer, BitDocIdSet parentBits, int currentParentPointer) { super(weight); this.parentScorer = parentScorer; - this.parentBits = parentBits; + this.parentBits = parentBits.bits(); this.currentParentPointer = currentParentPointer; if (currentParentPointer == 0) { currentChildPointer = 0; } else { - this.currentChildPointer = parentBits.prevSetBit(currentParentPointer - 1); + this.currentChildPointer = this.parentBits.prevSetBit(currentParentPointer - 1); if (currentChildPointer == -1) { // no previous set parent, we delete from doc 0 currentChildPointer = 0; diff --git a/src/main/java/org/elasticsearch/index/search/nested/NestedDocsFilter.java b/src/main/java/org/elasticsearch/index/search/nested/NestedDocsFilter.java index d9d31fb0303..6594b4cb47b 100644 --- a/src/main/java/org/elasticsearch/index/search/nested/NestedDocsFilter.java +++ b/src/main/java/org/elasticsearch/index/search/nested/NestedDocsFilter.java @@ -19,7 +19,7 @@ package org.elasticsearch.index.search.nested; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.Term; import org.apache.lucene.search.DocIdSet; import org.apache.lucene.search.Filter; @@ -46,7 +46,7 @@ public class NestedDocsFilter extends Filter { } @Override - public DocIdSet getDocIdSet(AtomicReaderContext context, Bits acceptDocs) throws IOException { + public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptDocs) throws IOException { return filter.getDocIdSet(context, acceptDocs); } diff --git a/src/main/java/org/elasticsearch/index/search/nested/NonNestedDocsFilter.java b/src/main/java/org/elasticsearch/index/search/nested/NonNestedDocsFilter.java index 6de5f27910e..5d2bbc7973c 100644 --- a/src/main/java/org/elasticsearch/index/search/nested/NonNestedDocsFilter.java +++ b/src/main/java/org/elasticsearch/index/search/nested/NonNestedDocsFilter.java @@ -19,7 +19,7 @@ package org.elasticsearch.index.search.nested; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.search.DocIdSet; import org.apache.lucene.search.Filter; import org.apache.lucene.util.Bits; @@ -42,7 +42,7 @@ public class NonNestedDocsFilter extends Filter { } @Override - public DocIdSet getDocIdSet(AtomicReaderContext context, Bits acceptDocs) throws IOException { + public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptDocs) throws IOException { return filter.getDocIdSet(context, acceptDocs); } diff --git a/src/main/java/org/elasticsearch/index/service/IndexService.java b/src/main/java/org/elasticsearch/index/service/IndexService.java index 2ae2615bc0c..0e75857a5ef 100644 --- a/src/main/java/org/elasticsearch/index/service/IndexService.java +++ 
b/src/main/java/org/elasticsearch/index/service/IndexService.java @@ -28,8 +28,8 @@ import org.elasticsearch.index.IndexShardMissingException; import org.elasticsearch.index.aliases.IndexAliasesService; import org.elasticsearch.index.analysis.AnalysisService; import org.elasticsearch.index.cache.IndexCache; +import org.elasticsearch.index.cache.bitset.BitsetFilterCache; import org.elasticsearch.index.engine.IndexEngine; -import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilterCache; import org.elasticsearch.index.fielddata.IndexFieldDataService; import org.elasticsearch.index.gateway.IndexGateway; import org.elasticsearch.index.mapper.MapperService; @@ -52,7 +52,7 @@ public interface IndexService extends IndexComponent, Iterable { IndexFieldDataService fieldData(); - FixedBitSetFilterCache fixedBitSetFilterCache(); + BitsetFilterCache bitsetFilterCache(); IndexSettingsService settingsService(); diff --git a/src/main/java/org/elasticsearch/index/service/InternalIndexService.java b/src/main/java/org/elasticsearch/index/service/InternalIndexService.java index f0680f1e977..b4d71f618b4 100644 --- a/src/main/java/org/elasticsearch/index/service/InternalIndexService.java +++ b/src/main/java/org/elasticsearch/index/service/InternalIndexService.java @@ -33,9 +33,9 @@ import org.elasticsearch.index.*; import org.elasticsearch.index.aliases.IndexAliasesService; import org.elasticsearch.index.analysis.AnalysisService; import org.elasticsearch.index.cache.IndexCache; +import org.elasticsearch.index.cache.bitset.BitsetFilterCache; +import org.elasticsearch.index.cache.bitset.ShardBitsetFilterCacheModule; import org.elasticsearch.index.cache.filter.ShardFilterCacheModule; -import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilterCache; -import org.elasticsearch.index.cache.fixedbitset.ShardFixedBitSetFilterCacheModule; import org.elasticsearch.index.cache.query.ShardQueryCacheModule; import org.elasticsearch.index.deletionpolicy.DeletionPolicyModule; import org.elasticsearch.index.engine.Engine; @@ -117,7 +117,7 @@ public class InternalIndexService extends AbstractIndexComponent implements Inde private final IndexFieldDataService indexFieldData; - private final FixedBitSetFilterCache fixedBitSetFilterCache; + private final BitsetFilterCache bitsetFilterCache; private final IndexEngine indexEngine; @@ -138,7 +138,7 @@ public class InternalIndexService extends AbstractIndexComponent implements Inde AnalysisService analysisService, MapperService mapperService, IndexQueryParserService queryParserService, SimilarityService similarityService, IndexAliasesService aliasesService, IndexCache indexCache, IndexEngine indexEngine, IndexGateway indexGateway, IndexStore indexStore, IndexSettingsService settingsService, IndexFieldDataService indexFieldData, - FixedBitSetFilterCache fixedBitSetFilterCache) { + BitsetFilterCache bitSetFilterCache) { super(index, indexSettings); this.injector = injector; this.threadPool = threadPool; @@ -154,7 +154,7 @@ public class InternalIndexService extends AbstractIndexComponent implements Inde this.indexGateway = indexGateway; this.indexStore = indexStore; this.settingsService = settingsService; - this.fixedBitSetFilterCache = fixedBitSetFilterCache; + this.bitsetFilterCache = bitSetFilterCache; this.pluginsService = injector.getInstance(PluginsService.class); this.indicesLifecycle = (InternalIndicesLifecycle) injector.getInstance(IndicesLifecycle.class); @@ -162,7 +162,7 @@ public class InternalIndexService extends AbstractIndexComponent implements Inde // inject 
workarounds for cyclic dep indexCache.filter().setIndexService(this); indexFieldData.setIndexService(this); - fixedBitSetFilterCache.setIndexService(this); + bitSetFilterCache.setIndexService(this); } @Override @@ -230,8 +230,8 @@ public class InternalIndexService extends AbstractIndexComponent implements Inde } @Override - public FixedBitSetFilterCache fixedBitSetFilterCache() { - return fixedBitSetFilterCache; + public BitsetFilterCache bitsetFilterCache() { + return bitsetFilterCache; } @Override @@ -343,7 +343,7 @@ public class InternalIndexService extends AbstractIndexComponent implements Inde modules.add(new MergeSchedulerModule(indexSettings)); modules.add(new ShardFilterCacheModule()); modules.add(new ShardQueryCacheModule()); - modules.add(new ShardFixedBitSetFilterCacheModule()); + modules.add(new ShardBitsetFilterCacheModule()); modules.add(new ShardFieldDataModule()); modules.add(new TranslogModule(indexSettings)); modules.add(new EngineModule(indexSettings)); diff --git a/src/main/java/org/elasticsearch/index/settings/IndexDynamicSettingsModule.java b/src/main/java/org/elasticsearch/index/settings/IndexDynamicSettingsModule.java index 66debafd521..98dffa5fe99 100644 --- a/src/main/java/org/elasticsearch/index/settings/IndexDynamicSettingsModule.java +++ b/src/main/java/org/elasticsearch/index/settings/IndexDynamicSettingsModule.java @@ -90,7 +90,6 @@ public class IndexDynamicSettingsModule extends AbstractModule { indexDynamicSettings.addDynamicSetting(InternalEngine.INDEX_CODEC); indexDynamicSettings.addDynamicSetting(InternalEngine.INDEX_FAIL_ON_MERGE_FAILURE); indexDynamicSettings.addDynamicSetting(InternalEngine.INDEX_FAIL_ON_CORRUPTION); - indexDynamicSettings.addDynamicSetting(InternalEngine.INDEX_CHECKSUM_ON_MERGE, Validator.BOOLEAN); indexDynamicSettings.addDynamicSetting(ShardSlowLogIndexingService.INDEX_INDEXING_SLOWLOG_THRESHOLD_INDEX_WARN, Validator.TIME); indexDynamicSettings.addDynamicSetting(ShardSlowLogIndexingService.INDEX_INDEXING_SLOWLOG_THRESHOLD_INDEX_INFO, Validator.TIME); indexDynamicSettings.addDynamicSetting(ShardSlowLogIndexingService.INDEX_INDEXING_SLOWLOG_THRESHOLD_INDEX_DEBUG, Validator.TIME); diff --git a/src/main/java/org/elasticsearch/index/shard/ShardUtils.java b/src/main/java/org/elasticsearch/index/shard/ShardUtils.java index 5203b376ea2..59b2e878970 100644 --- a/src/main/java/org/elasticsearch/index/shard/ShardUtils.java +++ b/src/main/java/org/elasticsearch/index/shard/ShardUtils.java @@ -19,7 +19,7 @@ package org.elasticsearch.index.shard; -import org.apache.lucene.index.AtomicReader; +import org.apache.lucene.index.LeafReader; import org.apache.lucene.index.IndexReader; import org.apache.lucene.index.SegmentReader; import org.elasticsearch.common.Nullable; @@ -38,7 +38,7 @@ public class ShardUtils { * This will be the case in almost all cases, except for percolator currently. 
*/ @Nullable - public static ShardId extractShardId(AtomicReader reader) { + public static ShardId extractShardId(LeafReader reader) { return extractShardId(SegmentReaderUtils.segmentReaderOrNull(reader)); } diff --git a/src/main/java/org/elasticsearch/index/shard/service/IndexShard.java b/src/main/java/org/elasticsearch/index/shard/service/IndexShard.java index f61858b0f2b..d1a63819956 100644 --- a/src/main/java/org/elasticsearch/index/shard/service/IndexShard.java +++ b/src/main/java/org/elasticsearch/index/shard/service/IndexShard.java @@ -24,6 +24,7 @@ import org.elasticsearch.cluster.routing.ShardRouting; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.index.VersionType; +import org.elasticsearch.index.cache.bitset.ShardBitsetFilterCache; import org.elasticsearch.index.cache.filter.FilterCacheStats; import org.elasticsearch.index.cache.filter.ShardFilterCache; import org.elasticsearch.index.cache.id.IdCacheStats; @@ -46,7 +47,6 @@ import org.elasticsearch.index.mapper.SourceToParse; import org.elasticsearch.index.merge.MergeStats; import org.elasticsearch.index.percolator.PercolatorQueriesRegistry; import org.elasticsearch.index.percolator.stats.ShardPercolateService; -import org.elasticsearch.index.cache.fixedbitset.ShardFixedBitSetFilterCache; import org.elasticsearch.index.refresh.RefreshStats; import org.elasticsearch.index.search.stats.SearchStats; import org.elasticsearch.index.search.stats.ShardSearchService; @@ -130,7 +130,7 @@ public interface IndexShard extends IndexShardComponent { ShardSuggestService shardSuggestService(); - ShardFixedBitSetFilterCache shardFixedBitSetFilterCache(); + ShardBitsetFilterCache shardBitsetFilterCache(); MapperService mapperService(); diff --git a/src/main/java/org/elasticsearch/index/shard/service/InternalIndexShard.java b/src/main/java/org/elasticsearch/index/shard/service/InternalIndexShard.java index 2e7c2791d91..0cd7c3ff52f 100644 --- a/src/main/java/org/elasticsearch/index/shard/service/InternalIndexShard.java +++ b/src/main/java/org/elasticsearch/index/shard/service/InternalIndexShard.java @@ -23,7 +23,9 @@ import com.google.common.base.Charsets; import org.apache.lucene.codecs.PostingsFormat; import org.apache.lucene.index.CheckIndex; import org.apache.lucene.search.Filter; +import org.apache.lucene.search.FilteredQuery; import org.apache.lucene.search.Query; +import org.apache.lucene.search.join.BitDocIdSetFilter; import org.apache.lucene.store.AlreadyClosedException; import org.apache.lucene.util.ThreadInterruptedException; import org.elasticsearch.ElasticsearchException; @@ -39,22 +41,26 @@ import org.elasticsearch.common.collect.Tuple; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.io.stream.BytesStreamOutput; import org.elasticsearch.common.lucene.Lucene; -import org.elasticsearch.common.lucene.search.XFilteredQuery; import org.elasticsearch.common.metrics.MeanMetric; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.index.VersionType; import org.elasticsearch.index.aliases.IndexAliasesService; import org.elasticsearch.index.cache.IndexCache; +import org.elasticsearch.index.cache.bitset.ShardBitsetFilterCache; import org.elasticsearch.index.cache.filter.FilterCacheStats; import org.elasticsearch.index.cache.filter.ShardFilterCache; -import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilter; -import 
org.elasticsearch.index.cache.fixedbitset.ShardFixedBitSetFilterCache; import org.elasticsearch.index.cache.id.IdCacheStats; import org.elasticsearch.index.cache.query.ShardQueryCache; import org.elasticsearch.index.codec.CodecService; import org.elasticsearch.index.deletionpolicy.SnapshotIndexCommit; -import org.elasticsearch.index.engine.*; +import org.elasticsearch.index.engine.Engine; +import org.elasticsearch.index.engine.EngineClosedException; +import org.elasticsearch.index.engine.EngineException; +import org.elasticsearch.index.engine.IgnoreOnRecoveryEngineException; +import org.elasticsearch.index.engine.OptimizeFailedEngineException; +import org.elasticsearch.index.engine.RefreshFailedEngineException; +import org.elasticsearch.index.engine.SegmentsStats; import org.elasticsearch.index.fielddata.FieldDataStats; import org.elasticsearch.index.fielddata.IndexFieldDataService; import org.elasticsearch.index.fielddata.ShardFieldData; @@ -63,7 +69,11 @@ import org.elasticsearch.index.get.GetStats; import org.elasticsearch.index.get.ShardGetService; import org.elasticsearch.index.indexing.IndexingStats; import org.elasticsearch.index.indexing.ShardIndexingService; -import org.elasticsearch.index.mapper.*; +import org.elasticsearch.index.mapper.DocumentMapper; +import org.elasticsearch.index.mapper.MapperService; +import org.elasticsearch.index.mapper.ParsedDocument; +import org.elasticsearch.index.mapper.SourceToParse; +import org.elasticsearch.index.mapper.Uid; import org.elasticsearch.index.mapper.internal.ParentFieldMapper; import org.elasticsearch.index.merge.MergeStats; import org.elasticsearch.index.merge.scheduler.MergeSchedulerProvider; @@ -77,7 +87,18 @@ import org.elasticsearch.index.search.stats.ShardSearchService; import org.elasticsearch.index.service.IndexService; import org.elasticsearch.index.settings.IndexSettings; import org.elasticsearch.index.settings.IndexSettingsService; -import org.elasticsearch.index.shard.*; +import org.elasticsearch.index.shard.AbstractIndexShardComponent; +import org.elasticsearch.index.shard.DocsStats; +import org.elasticsearch.index.shard.IllegalIndexShardStateException; +import org.elasticsearch.index.shard.IndexShardClosedException; +import org.elasticsearch.index.shard.IndexShardException; +import org.elasticsearch.index.shard.IndexShardNotRecoveringException; +import org.elasticsearch.index.shard.IndexShardNotStartedException; +import org.elasticsearch.index.shard.IndexShardRecoveringException; +import org.elasticsearch.index.shard.IndexShardRelocatedException; +import org.elasticsearch.index.shard.IndexShardStartedException; +import org.elasticsearch.index.shard.IndexShardState; +import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.index.store.Store; import org.elasticsearch.index.store.StoreStats; import org.elasticsearch.index.suggest.stats.ShardSuggestService; @@ -132,7 +153,7 @@ public class InternalIndexShard extends AbstractIndexShardComponent implements I private final IndexFieldDataService indexFieldDataService; private final IndexService indexService; private final ShardSuggestService shardSuggestService; - private final ShardFixedBitSetFilterCache shardFixedBitSetFilterCache; + private final ShardBitsetFilterCache shardBitsetFilterCache; private final Object mutex = new Object(); private final String checkIndexOnStartup; @@ -158,7 +179,7 @@ public class InternalIndexShard extends AbstractIndexShardComponent implements I public InternalIndexShard(ShardId shardId, @IndexSettings Settings indexSettings, 
IndexSettingsService indexSettingsService, IndicesLifecycle indicesLifecycle, Store store, Engine engine, MergeSchedulerProvider mergeScheduler, Translog translog, ThreadPool threadPool, MapperService mapperService, IndexQueryParserService queryParserService, IndexCache indexCache, IndexAliasesService indexAliasesService, ShardIndexingService indexingService, ShardGetService getService, ShardSearchService searchService, ShardIndexWarmerService shardWarmerService, ShardFilterCache shardFilterCache, ShardFieldData shardFieldData, PercolatorQueriesRegistry percolatorQueriesRegistry, ShardPercolateService shardPercolateService, CodecService codecService, - ShardTermVectorService termVectorService, IndexFieldDataService indexFieldDataService, IndexService indexService, ShardSuggestService shardSuggestService, ShardQueryCache shardQueryCache, ShardFixedBitSetFilterCache shardFixedBitSetFilterCache) { + ShardTermVectorService termVectorService, IndexFieldDataService indexFieldDataService, IndexService indexService, ShardSuggestService shardSuggestService, ShardQueryCache shardQueryCache, ShardBitsetFilterCache shardBitsetFilterCache) { super(shardId, indexSettings); this.indicesLifecycle = (InternalIndicesLifecycle) indicesLifecycle; this.indexSettingsService = indexSettingsService; @@ -185,7 +206,7 @@ public class InternalIndexShard extends AbstractIndexShardComponent implements I this.indexService = indexService; this.codecService = codecService; this.shardSuggestService = shardSuggestService; - this.shardFixedBitSetFilterCache = shardFixedBitSetFilterCache; + this.shardBitsetFilterCache = shardBitsetFilterCache; state = IndexShardState.CREATED; this.refreshInterval = indexSettings.getAsTime(INDEX_REFRESH_INTERVAL, engine.defaultRefreshInterval()); @@ -234,8 +255,8 @@ public class InternalIndexShard extends AbstractIndexShardComponent implements I } @Override - public ShardFixedBitSetFilterCache shardFixedBitSetFilterCache() { - return shardFixedBitSetFilterCache; + public ShardBitsetFilterCache shardBitsetFilterCache() { + return shardBitsetFilterCache; } @Override @@ -467,7 +488,7 @@ public class InternalIndexShard extends AbstractIndexShardComponent implements I query = filterQueryIfNeeded(query, types); Filter aliasFilter = indexAliasesService.aliasFilter(filteringAliases); - FixedBitSetFilter parentFilter = mapperService.hasNested() ? indexCache.fixedBitSetFilterCache().getFixedBitSetFilter(NonNestedDocsFilter.INSTANCE) : null; + BitDocIdSetFilter parentFilter = mapperService.hasNested() ? 
indexCache.bitsetFilterCache().getBitDocIdSetFilter(NonNestedDocsFilter.INSTANCE) : null; return new Engine.DeleteByQuery(query, source, filteringAliases, aliasFilter, parentFilter, origin, startTime, types); } @@ -554,7 +575,7 @@ public class InternalIndexShard extends AbstractIndexShardComponent implements I @Override public SegmentsStats segmentStats() { SegmentsStats segmentsStats = engine.segmentsStats(); - segmentsStats.addFixedBitSetMemoryInBytes(shardFixedBitSetFilterCache.getMemorySizeInBytes()); + segmentsStats.addBitsetMemoryInBytes(shardBitsetFilterCache.getMemorySizeInBytes()); return segmentsStats; } @@ -912,7 +933,7 @@ public class InternalIndexShard extends AbstractIndexShardComponent implements I private Query filterQueryIfNeeded(Query query, String[] types) { Filter searchFilter = mapperService.searchFilter(types); if (searchFilter != null) { - query = new XFilteredQuery(query, indexCache.filter().cache(searchFilter)); + query = new FilteredQuery(query, indexCache.filter().cache(searchFilter)); } return query; } @@ -1065,7 +1086,7 @@ public class InternalIndexShard extends AbstractIndexShardComponent implements I if (logger.isDebugEnabled()) { logger.debug("fixing index, writing new segments file ..."); } - checkIndex.fixIndex(status); + checkIndex.exorciseIndex(status); if (logger.isDebugEnabled()) { logger.debug("index fixed, wrote new segments file \"{}\"", status.segmentsFileName); } diff --git a/src/main/java/org/elasticsearch/index/snapshots/blobstore/BlobStoreIndexShardRepository.java b/src/main/java/org/elasticsearch/index/snapshots/blobstore/BlobStoreIndexShardRepository.java index 38c340bcbb6..064cfb26052 100644 --- a/src/main/java/org/elasticsearch/index/snapshots/blobstore/BlobStoreIndexShardRepository.java +++ b/src/main/java/org/elasticsearch/index/snapshots/blobstore/BlobStoreIndexShardRepository.java @@ -23,6 +23,8 @@ import com.google.common.collect.ImmutableMap; import com.google.common.collect.Iterables; import com.google.common.collect.Lists; import org.apache.lucene.index.CorruptIndexException; +import org.apache.lucene.index.IndexFormatTooNewException; +import org.apache.lucene.index.IndexFormatTooOldException; import org.apache.lucene.store.IOContext; import org.apache.lucene.store.IndexInput; import org.apache.lucene.store.IndexOutput; @@ -569,9 +571,9 @@ public class BlobStoreIndexShardRepository extends AbstractComponent implements } private void failStoreIfCorrupted(Throwable t) { - if (t instanceof CorruptIndexException) { + if (t instanceof CorruptIndexException || t instanceof IndexFormatTooOldException || t instanceof IndexFormatTooNewException) { try { - store.markStoreCorrupted((CorruptIndexException) t); + store.markStoreCorrupted((IOException) t); } catch (IOException e) { logger.warn("store cannot be marked as corrupted", e); } @@ -718,7 +720,7 @@ public class BlobStoreIndexShardRepository extends AbstractComponent implements final Store.MetadataSnapshot recoveryTargetMetadata; try { recoveryTargetMetadata = store.getMetadataOrEmpty(); - } catch (CorruptIndexException e) { + } catch (CorruptIndexException | IndexFormatTooOldException | IndexFormatTooNewException e) { logger.warn("{} Can't read metadata from store", e, shardId); throw new IndexShardRestoreFailedException(shardId, "Can't restore corrupted shard", e); } @@ -853,7 +855,7 @@ public class BlobStoreIndexShardRepository extends AbstractComponent implements store.directory().sync(Collections.singleton(fileInfo.physicalName())); recoveryState.getIndex().addRecoveredFileCount(1); 
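Lucene 5 no longer derives IndexFormatTooOldException and IndexFormatTooNewException from CorruptIndexException, so the hunks above widen every corruption check to a multi-catch (per the commit message, too-old/too-new is still treated as corruption for now). All three types extend IOException, which is why markStoreCorrupted now accepts an IOException. A sketch of the pattern (readOrFail is a made-up wrapper around calls shown in the diff):

    import java.io.IOException;
    import org.apache.lucene.index.CorruptIndexException;
    import org.apache.lucene.index.IndexFormatTooNewException;
    import org.apache.lucene.index.IndexFormatTooOldException;
    import org.elasticsearch.index.store.Store;

    final class CorruptionChecks {
        static Store.MetadataSnapshot readOrFail(Store store) throws IOException {
            try {
                return store.getMetadataOrEmpty();
            } catch (CorruptIndexException | IndexFormatTooOldException | IndexFormatTooNewException e) {
                store.markStoreCorrupted(e); // all three extend IOException in Lucene 5
                throw e;
            }
        }
    }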
success = true; - } catch (CorruptIndexException ex) { + } catch (CorruptIndexException | IndexFormatTooOldException | IndexFormatTooNewException ex) { try { store.markStoreCorrupted(ex); } catch (IOException e) { diff --git a/src/main/java/org/elasticsearch/index/store/DirectoryService.java b/src/main/java/org/elasticsearch/index/store/DirectoryService.java index 54f996c26ea..3291377835e 100644 --- a/src/main/java/org/elasticsearch/index/store/DirectoryService.java +++ b/src/main/java/org/elasticsearch/index/store/DirectoryService.java @@ -30,8 +30,4 @@ public interface DirectoryService { Directory[] build() throws IOException; long throttleTimeInNanos(); - - void renameFile(Directory dir, String from, String to) throws IOException; - - void fullDelete(Directory dir) throws IOException; -} +} \ No newline at end of file diff --git a/src/main/java/org/elasticsearch/index/store/DirectoryUtils.java b/src/main/java/org/elasticsearch/index/store/DirectoryUtils.java index 3c568385125..4b123aee46d 100644 --- a/src/main/java/org/elasticsearch/index/store/DirectoryUtils.java +++ b/src/main/java/org/elasticsearch/index/store/DirectoryUtils.java @@ -19,7 +19,6 @@ package org.elasticsearch.index.store; -import org.apache.lucene.store.CompoundFileDirectory; import org.apache.lucene.store.Directory; import org.apache.lucene.store.FileSwitchDirectory; import org.apache.lucene.store.FilterDirectory; @@ -45,8 +44,6 @@ public final class DirectoryUtils { } if (current instanceof FilterDirectory) { current = ((FilterDirectory) current).getDelegate(); - } else if (current instanceof CompoundFileDirectory) { - current = ((CompoundFileDirectory) current).getDirectory(); } else { return null; } diff --git a/src/main/java/org/elasticsearch/index/store/DistributorDirectory.java b/src/main/java/org/elasticsearch/index/store/DistributorDirectory.java index 0d4a3096768..10f397fe0fe 100644 --- a/src/main/java/org/elasticsearch/index/store/DistributorDirectory.java +++ b/src/main/java/org/elasticsearch/index/store/DistributorDirectory.java @@ -84,15 +84,6 @@ public final class DistributorDirectory extends BaseDirectory { return nameDirMapping.keySet().toArray(new String[0]); } - @Override - public boolean fileExists(String name) throws IOException { - try { - return getDirectory(name).fileExists(name); - } catch (FileNotFoundException ex) { - return false; - } - } - @Override public void deleteFile(String name) throws IOException { getDirectory(name, true).deleteFile(name); @@ -117,6 +108,25 @@ public final class DistributorDirectory extends BaseDirectory { } } + @Override + public void renameFile(String source, String dest) throws IOException { + Directory directory = getDirectory(source); + if (nameDirMapping.putIfAbsent(dest, directory) != null) { + throw new IOException("Can't rename file from " + source + + " to: " + dest + ": target file already exists"); + } + boolean success = false; + try { + directory.renameFile(source, dest); + nameDirMapping.remove(source); + success = true; + } finally { + if (!success) { + nameDirMapping.remove(dest); + } + } + } + @Override public IndexInput openInput(String name, IOContext context) throws IOException { return getDirectory(name).openInput(name, context); @@ -174,32 +184,6 @@ public final class DistributorDirectory extends BaseDirectory { return distributor.toString(); } - /** - * Renames the given source file to the given target file unless the target already exists. - * - * @param directoryService the DirectoryService to use. - * @param from the source file name. 
- * @param to the target file name - * @throws IOException if the target file already exists. - */ - public void renameFile(DirectoryService directoryService, String from, String to) throws IOException { - Directory directory = getDirectory(from); - if (nameDirMapping.putIfAbsent(to, directory) != null) { - throw new IOException("Can't rename file from " + from - + " to: " + to + ": target file already exists"); - } - boolean success = false; - try { - directoryService.renameFile(directory, from, to); - nameDirMapping.remove(from); - success = true; - } finally { - if (!success) { - nameDirMapping.remove(to); - } - } - } - Distributor getDistributor() { return distributor; } @@ -210,34 +194,29 @@ public final class DistributorDirectory extends BaseDirectory { static boolean assertConsistency(ESLogger logger, DistributorDirectory dir) throws IOException { boolean consistent = true; StringBuilder builder = new StringBuilder(); - try { - Directory[] all = dir.distributor.all(); - for (Directory d : all) { - for (String file : d.listAll()) { - final Directory directory = dir.nameDirMapping.get(file); - if (directory == null) { - consistent = false; - builder.append("File ").append(file) - .append(" was not mapped to a directory but exists in one of the distributors directories") - .append(System.lineSeparator()); - } - if (directory != d) { - consistent = false; - builder.append("File ").append(file).append(" was mapped to a directory ").append(directory) - .append(" but exists in another distributor directory").append(d) - .append(System.lineSeparator()); - } - + Directory[] all = dir.distributor.all(); + for (Directory d : all) { + for (String file : d.listAll()) { + final Directory directory = dir.nameDirMapping.get(file); + if (directory == null) { + consistent = false; + builder.append("File ").append(file) + .append(" was not mapped to a directory but exists in one of the distributors directories") + .append(System.lineSeparator()); } + if (directory != d) { + consistent = false; + builder.append("File ").append(file).append(" was mapped to a directory ").append(directory) + .append(" but exists in another distributor directory").append(d) + .append(System.lineSeparator()); + } + } - if (consistent == false) { - logger.info(builder.toString()); - } - assert consistent: builder.toString(); - } catch (NoSuchDirectoryException ex) { - // that's fine - we can't check the directory since we might have already been wiped by a shutdown or - // a test cleanup ie the directory is not there anymore. 
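The replacement DistributorDirectory#renameFile above (the removed helper took a DirectoryService; the new override is part of the Directory API) reserves the destination in the name-to-directory map, performs the delegate rename, and rolls the reservation back if the rename fails. The same reserve/commit/rollback shape reduced to a plain map, with stand-in types:

    import java.io.IOException;
    import java.util.concurrent.ConcurrentHashMap;

    final class RenameReservation {
        // mapping, dir, source and dest stand in for nameDirMapping, the target
        // Directory and the file names used by renameFile above.
        static void rename(ConcurrentHashMap<String, Object> mapping, Object dir,
                           String source, String dest) throws IOException {
            if (mapping.putIfAbsent(dest, dir) != null) {
                throw new IOException("Can't rename file from " + source
                        + " to: " + dest + ": target file already exists");
            }
            boolean success = false;
            try {
                // the delegate directory.renameFile(source, dest) happens here
                mapping.remove(source);
                success = true;
            } finally {
                if (!success) {
                    mapping.remove(dest); // roll back the reservation
                }
            }
        }
    }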
} + if (consistent == false) { + logger.info(builder.toString()); + } + assert consistent: builder.toString(); return consistent; // return boolean so it can easily be used in asserts } @@ -319,4 +298,4 @@ public final class DistributorDirectory extends BaseDirectory { } } } -} \ No newline at end of file +} diff --git a/src/main/java/org/elasticsearch/index/store/Store.java b/src/main/java/org/elasticsearch/index/store/Store.java index 4dc577ec515..683a0ba28b8 100644 --- a/src/main/java/org/elasticsearch/index/store/Store.java +++ b/src/main/java/org/elasticsearch/index/store/Store.java @@ -23,9 +23,9 @@ import com.google.common.collect.ImmutableList; import com.google.common.collect.ImmutableMap; import com.google.common.collect.Iterables; import org.apache.lucene.codecs.CodecUtil; -import org.apache.lucene.codecs.lucene46.Lucene46SegmentInfoFormat; import org.apache.lucene.index.*; import org.apache.lucene.store.*; +import org.apache.lucene.util.ArrayUtil; import org.apache.lucene.util.BytesRefBuilder; import org.apache.lucene.util.IOUtils; import org.apache.lucene.util.Version; @@ -55,6 +55,7 @@ import java.io.*; import java.nio.file.NoSuchFileException; import java.util.*; import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.locks.ReentrantReadWriteLock; import java.util.zip.CRC32; import java.util.zip.Checksum; @@ -91,6 +92,8 @@ public class Store extends AbstractIndexShardComponent implements CloseableIndex private final DirectoryService directoryService; private final StoreDirectory directory; private final DistributorDirectory distributorDirectory; + private final ReentrantReadWriteLock metadataLock = new ReentrantReadWriteLock(); + private final AbstractRefCounted refCounter = new AbstractRefCounted("store") { @Override protected void closeInternal() { @@ -131,11 +134,11 @@ public class Store extends AbstractIndexShardComponent implements CloseableIndex return commit == null ? Lucene.readSegmentInfos(directory) : Lucene.readSegmentInfos(commit, directory); } catch (EOFException eof) { // TODO this should be caught by lucene - EOF is almost certainly an index corruption - throw new CorruptIndexException("Read past EOF while reading segment infos", eof); + throw new CorruptIndexException("Read past EOF while reading segment infos", "commit(" + commit + ")", eof); } catch (IOException exception) { throw exception; // IOExceptions like too many open files are not necessarily a corruption - just bubble it up } catch (Exception ex) { - throw new CorruptIndexException("Hit unexpected exception while reading segment infos", ex); + throw new CorruptIndexException("Hit unexpected exception while reading segment infos", "commit(" + commit + ")", ex); } } @@ -189,14 +192,66 @@ public class Store extends AbstractIndexShardComponent implements CloseableIndex public MetadataSnapshot getMetadata(IndexCommit commit) throws IOException { ensureOpen(); failIfCorrupted(); + metadataLock.readLock().lock(); try { return new MetadataSnapshot(commit, distributorDirectory, logger); - } catch (CorruptIndexException ex) { + } catch (CorruptIndexException | IndexFormatTooOldException | IndexFormatTooNewException ex) { markStoreCorrupted(ex); throw ex; + } finally { + metadataLock.readLock().unlock(); } } + +
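The renameFilesSafe hunk that follows renames recovered temp files the way a Lucene commit writes them: all per-segment files first, the segments_N commit point last, so a crash mid-rename can never leave a commit point that refers to missing files. Its comparator reduces to the following segments-last ordering (an illustrative reduction to plain strings, not code from the patch):

    import java.util.Arrays;
    import java.util.Comparator;
    import org.apache.lucene.index.IndexFileNames;

    final class CommitLastOrder {
        // { "segments_3", "_0.cfs", "_0.si" } sorts to { "_0.cfs", "_0.si", "segments_3" }
        static void sort(String[] targetNames) {
            Arrays.sort(targetNames, new Comparator<String>() {
                @Override
                public int compare(String left, String right) {
                    boolean leftIsCommit = left.startsWith(IndexFileNames.SEGMENTS);
                    boolean rightIsCommit = right.startsWith(IndexFileNames.SEGMENTS);
                    if (leftIsCommit != rightIsCommit) {
                        return leftIsCommit ? 1 : -1; // the commit point sorts last
                    }
                    return left.compareTo(right);
                }
            });
        }
    }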
/** + * Renames all the given files from the key of the map to the + * value of the map. All successfully renamed files are removed from the map in-place. + */ + public void renameFilesSafe(Map<String, String> tempFileMap) throws IOException { + // this works just like a lucene commit - we rename all temp files and once we successfully + // renamed all the segments we rename the commit to ensure we don't leave half baked commits behind. + final Map.Entry<String, String>[] entries = tempFileMap.entrySet().toArray(new Map.Entry[tempFileMap.size()]); + ArrayUtil.timSort(entries, new Comparator<Map.Entry<String, String>>() { + @Override + public int compare(Map.Entry<String, String> o1, Map.Entry<String, String> o2) { + String left = o1.getValue(); + String right = o2.getValue(); + if (left.startsWith(IndexFileNames.SEGMENTS) || right.startsWith(IndexFileNames.SEGMENTS)) { + if (left.startsWith(IndexFileNames.SEGMENTS) == false) { + return -1; + } else if (right.startsWith(IndexFileNames.SEGMENTS) == false) { + return 1; + } + } + return left.compareTo(right); + } + }); + metadataLock.writeLock().lock(); + // we make sure that nobody fetches the metadata while we do this rename operation here to ensure we don't + // get exceptions if files are still open. + try { + for (Map.Entry<String, String> entry : entries) { + String tempFile = entry.getKey(); + String origFile = entry.getValue(); + // first, go and delete the existing ones + try { + directory.deleteFile(origFile); + } catch (FileNotFoundException | NoSuchFileException e) { + } catch (Throwable ex) { + logger.debug("failed to delete file [{}]", ex, origFile); + } + // now, rename the files... and fail if it won't work + this.renameFile(tempFile, origFile); + final String remove = tempFileMap.remove(tempFile); + assert remove != null; + } + } finally { + metadataLock.writeLock().unlock(); + } + + } + /** * Deletes the content of a shard store. Be careful calling this!. */ @@ -225,7 +280,7 @@ public class Store extends AbstractIndexShardComponent implements CloseableIndex public void renameFile(String from, String to) throws IOException { ensureOpen(); - distributorDirectory.renameFile(directoryService, from, to); + distributorDirectory.renameFile(from, to); } /** @@ -304,7 +359,7 @@ public class Store extends AbstractIndexShardComponent implements CloseableIndex final Directory[] dirs = new Directory[indexLocations.length]; try { for (int i=0; i< indexLocations.length; i++) { - dirs[i] = new SimpleFSDirectory(indexLocations[i]); + dirs[i] = new SimpleFSDirectory(indexLocations[i].toPath()); } DistributorDirectory dir = new DistributorDirectory(dirs); failIfCorrupted(dir, new ShardId("", 1)); @@ -335,7 +390,7 @@ public class Store extends AbstractIndexShardComponent implements CloseableIndex logger.debug("create legacy output for {}", fileName); } else { assert metadata.writtenBy() != null; - assert metadata.writtenBy().onOrAfter(Version.LUCENE_48); + assert metadata.writtenBy().onOrAfter(Version.LUCENE_4_8_0); output = new VerifyingIndexOutput(metadata, output); } success = true; @@ -345,7 +400,6 @@ public class Store extends AbstractIndexShardComponent implements CloseableIndex } } return output; - } public static void verify(IndexOutput output) throws IOException { @@ -360,7 +414,7 @@ public class Store extends AbstractIndexShardComponent implements CloseableIndex return directory().openInput(filename, context); } assert metadata.writtenBy() != null; - assert metadata.writtenBy().onOrAfter(Version.LUCENE_48); + assert metadata.writtenBy().onOrAfter(Version.LUCENE_4_8_0); return new VerifyingIndexInput(directory().openInput(filename, context)); } @@ -371,7 +425,7 @@ public class Store extends AbstractIndexShardComponent implements CloseableIndex } public boolean
checkIntegrity(StoreFileMetaData md) { - if (md.writtenBy() != null && md.writtenBy().onOrAfter(Version.LUCENE_48)) { + if (md.writtenBy() != null && md.writtenBy().onOrAfter(Version.LUCENE_4_8_0)) { try (IndexInput input = directory().openInput(md.name(), IOContext.READONCE)) { CodecUtil.checksumEntireFile(input); } catch (IOException e) { @@ -416,7 +470,7 @@ public class Store extends AbstractIndexShardComponent implements CloseableIndex builder.append(System.lineSeparator()); builder.append(input.readString()); } - ex.add(new CorruptIndexException(builder.toString())); + ex.add(new CorruptIndexException(builder.toString(), "preexisting_corruption")); CodecUtil.checkFooter(input); } } @@ -520,7 +574,7 @@ public class Store extends AbstractIndexShardComponent implements CloseableIndex Map checksumMap = readLegacyChecksums(directory).v1(); try { final SegmentInfos segmentCommitInfos = Store.readSegmentsInfo(commit, directory); - Version maxVersion = Version.LUCENE_3_0; // we don't know which version was used to write so we take the max version. + Version maxVersion = Version.LUCENE_4_0; // we don't know which version was used to write so we take the max version. for (SegmentCommitInfo info : segmentCommitInfos) { final Version version = info.info.getVersion(); if (version != null && version.onOrAfter(maxVersion)) { @@ -529,7 +583,7 @@ public class Store extends AbstractIndexShardComponent implements CloseableIndex for (String file : info.files()) { String legacyChecksum = checksumMap.get(file); if (version.onOrAfter(Version.LUCENE_4_8) && legacyChecksum == null) { - checksumFromLuceneFile(directory, file, builder, logger, version, Lucene46SegmentInfoFormat.SI_EXTENSION.equals(IndexFileNames.getExtension(file))); + checksumFromLuceneFile(directory, file, builder, logger, version, SEGMENT_INFO_EXTENSION.equals(IndexFileNames.getExtension(file))); } else { builder.put(file, new StoreFileMetaData(file, directory.fileLength(file), legacyChecksum, null)); } @@ -542,7 +596,7 @@ public class Store extends AbstractIndexShardComponent implements CloseableIndex } else { builder.put(segmentsFile, new StoreFileMetaData(segmentsFile, directory.fileLength(segmentsFile), legacyChecksum, null)); } - } catch (CorruptIndexException ex) { + } catch (CorruptIndexException | IndexFormatTooOldException | IndexFormatTooNewException ex) { throw ex; } catch (Throwable ex) { try { @@ -550,7 +604,7 @@ public class Store extends AbstractIndexShardComponent implements CloseableIndex // in that case we might get only IAE or similar exceptions while we are really corrupt... // TODO we should check the checksum in lucene if we hit an exception Lucene.checkSegmentInfoIntegrity(directory); - } catch (CorruptIndexException cex) { + } catch (CorruptIndexException | IndexFormatTooOldException | IndexFormatTooNewException cex) { cex.addSuppressed(ex); throw cex; } catch (Throwable e) { @@ -591,7 +645,7 @@ public class Store extends AbstractIndexShardComponent implements CloseableIndex try { if (in.length() < CodecUtil.footerLength()) { // truncated files trigger IAE if we seek negative... 
these files are really corrupted though - throw new CorruptIndexException("Can't retrieve checksum from file: " + file + " file length must be >= " + CodecUtil.footerLength() + " but was: " + in.length()); + throw new CorruptIndexException("Can't retrieve checksum from file: " + file + " file length must be >= " + CodecUtil.footerLength() + " but was: " + in.length(), in); } if (readFileAsHash) { hashFile(fileHash, new InputStreamIndexInput(in, in.length()), in.length()); @@ -631,8 +685,10 @@ public class Store extends AbstractIndexShardComponent implements CloseableIndex return metadata; } - private static final String DEL_FILE_EXTENSION = "del"; // TODO think about how we can detect if this changes? + private static final String DEL_FILE_EXTENSION = "del"; // legacy delete file + private static final String LIV_FILE_EXTENSION = "liv"; // lucene 5 delete file private static final String FIELD_INFOS_FILE_EXTENSION = "fnm"; + private static final String SEGMENT_INFO_EXTENSION = "si"; /** * Returns a diff between the two snapshots that can be used for recovery. The given snapshot is treated as the @@ -673,13 +729,13 @@ public class Store extends AbstractIndexShardComponent implements CloseableIndex final List perCommitStoreFiles = new ArrayList<>(); for (StoreFileMetaData meta : this) { - if (IndexFileNames.SEGMENTS_GEN.equals(meta.name())) { + if (IndexFileNames.OLD_SEGMENTS_GEN.equals(meta.name())) { // legacy continue; // we don't need that file at all } final String segmentId = IndexFileNames.parseSegmentName(meta.name()); final String extension = IndexFileNames.getExtension(meta.name()); assert FIELD_INFOS_FILE_EXTENSION.equals(extension) == false || IndexFileNames.stripExtension(IndexFileNames.stripSegmentName(meta.name())).isEmpty() : "FieldInfos are generational but updateable DV are not supported in elasticsearch"; - if (IndexFileNames.SEGMENTS.equals(segmentId) || DEL_FILE_EXTENSION.equals(extension)) { + if (IndexFileNames.SEGMENTS.equals(segmentId) || DEL_FILE_EXTENSION.equals(extension) || LIV_FILE_EXTENSION.equals(extension)) { // only treat del files as per-commit files fnm files are generational but only for upgradable DV perCommitStoreFiles.add(meta); } else { @@ -715,8 +771,8 @@ public class Store extends AbstractIndexShardComponent implements CloseableIndex } } RecoveryDiff recoveryDiff = new RecoveryDiff(identical.build(), different.build(), missing.build()); - assert recoveryDiff.size() == this.metadata.size() - (metadata.containsKey(IndexFileNames.SEGMENTS_GEN) ? 1: 0) - : "some files are missing recoveryDiff size: [" + recoveryDiff.size() + "] metadata size: [" + this.metadata.size() + "] contains segments.gen: [" + metadata.containsKey(IndexFileNames.SEGMENTS_GEN) + "]" ; + assert recoveryDiff.size() == this.metadata.size() - (metadata.containsKey(IndexFileNames.OLD_SEGMENTS_GEN) ? 
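A note on the recoveryDiff bookkeeping around here: files are split into per-commit files (segments_N plus the .del/.liv live-docs files, which change with every commit) and per-segment files (everything else, written once and then immutable). A standalone sketch of that classification over hypothetical file names, simplified to a startsWith check on the segments prefix:

    import org.apache.lucene.index.IndexFileNames;

    public class PerCommitSketch {
        public static void main(String[] args) {
            for (String name : new String[]{"segments_4", "_0.liv", "_0_1.del", "_0.cfs", "_0.si"}) {
                String extension = IndexFileNames.getExtension(name);
                boolean perCommit = name.startsWith(IndexFileNames.SEGMENTS)
                        || "del".equals(extension) || "liv".equals(extension);
                System.out.println(name + " -> " + (perCommit ? "per-commit" : "per-segment"));
            }
        }
    }
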
1: 0) + : "some files are missing recoveryDiff size: [" + recoveryDiff.size() + "] metadata size: [" + this.metadata.size() + "] contains segments.gen: [" + metadata.containsKey(IndexFileNames.OLD_SEGMENTS_GEN) + "]" ; return recoveryDiff; } @@ -834,11 +890,6 @@ public class Store extends AbstractIndexShardComponent implements CloseableIndex checksumPosition = metadata.length() - 8; // the last 8 bytes are the checksum } - @Override - public void flush() throws IOException { - output.flush(); - } - @Override public void close() throws IOException { output.close(); @@ -854,11 +905,6 @@ public class Store extends AbstractIndexShardComponent implements CloseableIndex return output.getChecksum(); } - @Override - public long length() throws IOException { - return output.length(); - } - /** * Verifies the checksum and compares the written length with the expected file length. This method should be * called after all data has been written to this output. */ @@ -869,7 +915,7 @@ public class Store extends AbstractIndexShardComponent implements CloseableIndex } throw new CorruptIndexException("verification failed (hardware problem?) : expected=" + metadata.checksum() + " actual=" + actualChecksum + " writtenLength=" + writtenBytes + " expectedLength=" + metadata.length() + - " (resource=" + metadata.toString() + ")"); + " (resource=" + metadata.toString() + ")", "VerifyingIndexOutput(" + metadata.name() + ")"); } @Override @@ -885,7 +931,7 @@ public class Store extends AbstractIndexShardComponent implements CloseableIndex if (!metadata.checksum().equals(actualChecksum)) { throw new CorruptIndexException("checksum failed (hardware problem?) : expected=" + metadata.checksum() + " actual=" + actualChecksum + - " (resource=" + metadata.toString() + ")"); + " (resource=" + metadata.toString() + ")", "VerifyingIndexOutput(" + metadata.name() + ")"); } } @@ -1033,7 +1079,7 @@ public class Store extends AbstractIndexShardComponent implements CloseableIndex return; } throw new CorruptIndexException("verification failed : calculated=" + Store.digestToString(getChecksum()) + - " stored=" + Store.digestToString(storedChecksum)); + " stored=" + Store.digestToString(storedChecksum), this); } } @@ -1052,7 +1098,7 @@ public class Store extends AbstractIndexShardComponent implements CloseableIndex * Marks this store as corrupted. This method writes a corrupted_${uuid} file containing the given exception * message. If a store contains a corrupted_${uuid} file {@link #isMarkedCorrupted()} will return true.
*/ - public void markStoreCorrupted(CorruptIndexException exception) throws IOException { + public void markStoreCorrupted(IOException exception) throws IOException { ensureOpen(); if (!isMarkedCorrupted()) { String uuid = CORRUPTED + Strings.randomBase64UUID(); diff --git a/src/main/java/org/elasticsearch/index/store/distributor/AbstractDistributor.java b/src/main/java/org/elasticsearch/index/store/distributor/AbstractDistributor.java index 98ef6d33fd6..36eae00e88b 100644 --- a/src/main/java/org/elasticsearch/index/store/distributor/AbstractDistributor.java +++ b/src/main/java/org/elasticsearch/index/store/distributor/AbstractDistributor.java @@ -25,6 +25,9 @@ import org.elasticsearch.index.store.DirectoryUtils; import org.elasticsearch.index.store.DirectoryService; import java.io.IOException; +import java.nio.file.FileStore; +import java.nio.file.Files; +import java.nio.file.Path; import java.util.Arrays; public abstract class AbstractDistributor implements Distributor { @@ -45,7 +48,7 @@ public abstract class AbstractDistributor implements Distributor { } @Override - public Directory any() { + public Directory any() throws IOException { if (delegates.length == 1) { return delegates[0]; } else { @@ -54,10 +57,10 @@ public abstract class AbstractDistributor implements Distributor { } @SuppressWarnings("unchecked") - protected long getUsableSpace(Directory directory) { + protected long getUsableSpace(Directory directory) throws IOException { final FSDirectory leaf = DirectoryUtils.getLeaf(directory, FSDirectory.class); if (leaf != null) { - return leaf.getDirectory().getUsableSpace(); + return Files.getFileStore(leaf.getDirectory()).getUsableSpace(); } else { return 0; } @@ -68,7 +71,7 @@ public abstract class AbstractDistributor implements Distributor { return name() + Arrays.toString(delegates); } - protected abstract Directory doAny(); + protected abstract Directory doAny() throws IOException; protected abstract String name(); diff --git a/src/main/java/org/elasticsearch/index/store/distributor/Distributor.java b/src/main/java/org/elasticsearch/index/store/distributor/Distributor.java index e6094791747..a7ccae48532 100644 --- a/src/main/java/org/elasticsearch/index/store/distributor/Distributor.java +++ b/src/main/java/org/elasticsearch/index/store/distributor/Distributor.java @@ -21,6 +21,8 @@ package org.elasticsearch.index.store.distributor; import org.apache.lucene.store.Directory; +import java.io.IOException; + /** * Keeps track of available directories and selects a directory * based on some distribution strategy @@ -40,5 +42,5 @@ public interface Distributor { /** * Selects one of the directories based on distribution strategy */ - Directory any(); + Directory any() throws IOException; } diff --git a/src/main/java/org/elasticsearch/index/store/distributor/LeastUsedDistributor.java b/src/main/java/org/elasticsearch/index/store/distributor/LeastUsedDistributor.java index 8470dce0076..35123e61ab3 100644 --- a/src/main/java/org/elasticsearch/index/store/distributor/LeastUsedDistributor.java +++ b/src/main/java/org/elasticsearch/index/store/distributor/LeastUsedDistributor.java @@ -37,7 +37,7 @@ public class LeastUsedDistributor extends AbstractDistributor { } @Override - public Directory doAny() { + public Directory doAny() throws IOException { Directory directory = null; long size = Long.MIN_VALUE; int sameSize = 0; diff --git a/src/main/java/org/elasticsearch/index/store/distributor/RandomWeightedDistributor.java 
b/src/main/java/org/elasticsearch/index/store/distributor/RandomWeightedDistributor.java index 7a8b222f78f..d42c2fc7c1b 100644 --- a/src/main/java/org/elasticsearch/index/store/distributor/RandomWeightedDistributor.java +++ b/src/main/java/org/elasticsearch/index/store/distributor/RandomWeightedDistributor.java @@ -38,7 +38,7 @@ public class RandomWeightedDistributor extends AbstractDistributor { } @Override - public Directory doAny() { + public Directory doAny() throws IOException { long[] usableSpace = new long[delegates.length]; long size = 0; diff --git a/src/main/java/org/elasticsearch/index/store/fs/DefaultFsDirectoryService.java b/src/main/java/org/elasticsearch/index/store/fs/DefaultFsDirectoryService.java index c7ee4c12db9..f81b8c4a010 100644 --- a/src/main/java/org/elasticsearch/index/store/fs/DefaultFsDirectoryService.java +++ b/src/main/java/org/elasticsearch/index/store/fs/DefaultFsDirectoryService.java @@ -49,6 +49,6 @@ public class DefaultFsDirectoryService extends FsDirectoryService { @Override protected Directory newFSDirectory(File location, LockFactory lockFactory) throws IOException { - return new FileSwitchDirectory(PRIMARY_EXTENSIONS, new MMapDirectory(location, lockFactory), new NIOFSDirectory(location, lockFactory), true); + return new FileSwitchDirectory(PRIMARY_EXTENSIONS, new MMapDirectory(location.toPath(), lockFactory), new NIOFSDirectory(location.toPath(), lockFactory), true); } } diff --git a/src/main/java/org/elasticsearch/index/store/fs/FsDirectoryService.java b/src/main/java/org/elasticsearch/index/store/fs/FsDirectoryService.java index 342915f2d14..45cee0743e3 100644 --- a/src/main/java/org/elasticsearch/index/store/fs/FsDirectoryService.java +++ b/src/main/java/org/elasticsearch/index/store/fs/FsDirectoryService.java @@ -72,54 +72,6 @@ public abstract class FsDirectoryService extends AbstractIndexShardComponent imp return lockFactory; } - @Override - public final void renameFile(Directory dir, String from, String to) throws IOException { - final FSDirectory fsDirectory = DirectoryUtils.getLeaf(dir, FSDirectory.class); - if (fsDirectory == null) { - throw new ElasticsearchIllegalArgumentException("Can not rename file on non-filesystem based directory "); - } - File directory = fsDirectory.getDirectory(); - File old = new File(directory, from); - File nu = new File(directory, to); - if (nu.exists()) - if (!nu.delete()) - throw new IOException("Cannot delete " + nu); - - if (!old.exists()) { - throw new FileNotFoundException("Can't rename from [" + from + "] to [" + to + "], from does not exists"); - } - - boolean renamed = false; - for (int i = 0; i < 3; i++) { - if (old.renameTo(nu)) { - renamed = true; - break; - } - try { - Thread.sleep(100); - } catch (InterruptedException e) { - throw new InterruptedIOException(e.getMessage()); - } - } - if (!renamed) { - throw new IOException("Failed to rename, from [" + from + "], to [" + to + "]"); - } - } - - @Override - public final void fullDelete(Directory dir) throws IOException { - final FSDirectory fsDirectory = DirectoryUtils.getLeaf(dir, FSDirectory.class); - if (fsDirectory == null) { - throw new ElasticsearchIllegalArgumentException("Can not fully delete on non-filesystem based directory"); - } - FileSystemUtils.deleteRecursively(fsDirectory.getDirectory()); - // if we are the last ones, delete also the actual index - String[] list = fsDirectory.getDirectory().getParentFile().list(); - if (list == null || list.length == 0) { - FileSystemUtils.deleteRecursively(fsDirectory.getDirectory().getParentFile()); 
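Regarding the getUsableSpace rewrite in AbstractDistributor above: java.io.File.getUsableSpace() has no direct equivalent on Path, so the code now asks NIO.2 for the FileStore backing the directory and queries that. The same call in isolation (the path is hypothetical):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class UsableSpaceSketch {
        public static void main(String[] args) throws IOException {
            Path location = Paths.get("/tmp"); // hypothetical data path
            long usableBytes = Files.getFileStore(location).getUsableSpace();
            System.out.println("usable bytes on store: " + usableBytes);
        }
    }
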
- } - } - @Override public Directory[] build() throws IOException { File[] locations = indexStore.shardIndexLocations(shardId); diff --git a/src/main/java/org/elasticsearch/index/store/fs/MmapFsDirectoryService.java b/src/main/java/org/elasticsearch/index/store/fs/MmapFsDirectoryService.java index e4dcc0e4b07..8efffcdf78b 100644 --- a/src/main/java/org/elasticsearch/index/store/fs/MmapFsDirectoryService.java +++ b/src/main/java/org/elasticsearch/index/store/fs/MmapFsDirectoryService.java @@ -42,6 +42,6 @@ public class MmapFsDirectoryService extends FsDirectoryService { @Override protected Directory newFSDirectory(File location, LockFactory lockFactory) throws IOException { - return new MMapDirectory(location, buildLockFactory()); + return new MMapDirectory(location.toPath(), buildLockFactory()); } } diff --git a/src/main/java/org/elasticsearch/index/store/fs/NioFsDirectoryService.java b/src/main/java/org/elasticsearch/index/store/fs/NioFsDirectoryService.java index 389e418b275..2aba16beb21 100644 --- a/src/main/java/org/elasticsearch/index/store/fs/NioFsDirectoryService.java +++ b/src/main/java/org/elasticsearch/index/store/fs/NioFsDirectoryService.java @@ -42,6 +42,6 @@ public class NioFsDirectoryService extends FsDirectoryService { @Override protected Directory newFSDirectory(File location, LockFactory lockFactory) throws IOException { - return new NIOFSDirectory(location, lockFactory); + return new NIOFSDirectory(location.toPath(), lockFactory); } } diff --git a/src/main/java/org/elasticsearch/index/store/fs/SimpleFsDirectoryService.java b/src/main/java/org/elasticsearch/index/store/fs/SimpleFsDirectoryService.java index 1574aa6cd7b..71e0763a69f 100644 --- a/src/main/java/org/elasticsearch/index/store/fs/SimpleFsDirectoryService.java +++ b/src/main/java/org/elasticsearch/index/store/fs/SimpleFsDirectoryService.java @@ -42,6 +42,6 @@ public class SimpleFsDirectoryService extends FsDirectoryService { @Override protected Directory newFSDirectory(File location, LockFactory lockFactory) throws IOException { - return new SimpleFSDirectory(location, lockFactory); + return new SimpleFSDirectory(location.toPath(), lockFactory); } } diff --git a/src/main/java/org/elasticsearch/index/store/ram/RamDirectoryService.java b/src/main/java/org/elasticsearch/index/store/ram/RamDirectoryService.java index 8191c3e3986..2eb77e1943e 100644 --- a/src/main/java/org/elasticsearch/index/store/ram/RamDirectoryService.java +++ b/src/main/java/org/elasticsearch/index/store/ram/RamDirectoryService.java @@ -52,17 +52,6 @@ public final class RamDirectoryService extends AbstractIndexShardComponent imple return new Directory[]{new CustomRAMDirectory()}; } - @Override - public void renameFile(Directory dir, String from, String to) throws IOException { - CustomRAMDirectory leaf = DirectoryUtils.getLeaf(dir, CustomRAMDirectory.class); - assert leaf != null; - leaf.renameTo(from, to); - } - - @Override - public void fullDelete(Directory dir) { - } - static class CustomRAMDirectory extends RAMDirectory { public synchronized void renameTo(String from, String to) throws IOException { diff --git a/src/main/java/org/elasticsearch/index/termvectors/ShardTermVectorService.java b/src/main/java/org/elasticsearch/index/termvectors/ShardTermVectorService.java index 4960ec0aad2..9993edb4e1f 100644 --- a/src/main/java/org/elasticsearch/index/termvectors/ShardTermVectorService.java +++ b/src/main/java/org/elasticsearch/index/termvectors/ShardTermVectorService.java @@ -159,7 +159,7 @@ public class ShardTermVectorService extends 
AbstractIndexShardComponent { return false; } // and must be indexed - if (!field.fieldType().indexed()) { + if (field.fieldType().indexOptions() == IndexOptions.NONE) { return false; } return true; @@ -301,7 +301,7 @@ public class ShardTermVectorService extends AbstractIndexShardComponent { return parallelFields; } - // Poached from Lucene ParallelAtomicReader + // Poached from Lucene ParallelLeafReader private static final class ParallelFields extends Fields { final Map fields = new TreeMap<>(); diff --git a/src/main/java/org/elasticsearch/index/translog/Translog.java b/src/main/java/org/elasticsearch/index/translog/Translog.java index 18143d6b42a..58daea80a0b 100644 --- a/src/main/java/org/elasticsearch/index/translog/Translog.java +++ b/src/main/java/org/elasticsearch/index/translog/Translog.java @@ -41,6 +41,7 @@ import org.elasticsearch.index.engine.Engine; import org.elasticsearch.index.shard.IndexShardComponent; import java.io.IOException; +import java.util.Collections; /** @@ -153,6 +154,11 @@ public interface Translog extends IndexShardComponent, CloseableIndexComponent, return RamUsageEstimator.NUM_BYTES_OBJECT_HEADER + 2*RamUsageEstimator.NUM_BYTES_LONG + RamUsageEstimator.NUM_BYTES_INT; } + @Override + public Iterable getChildResources() { + return Collections.emptyList(); + } + @Override public String toString() { return "[id: " + translogId + ", location: " + translogLocation + ", size: " + size + "]"; diff --git a/src/main/java/org/elasticsearch/index/translog/TranslogStreams.java b/src/main/java/org/elasticsearch/index/translog/TranslogStreams.java index 8169ae39103..ff014dc6d9b 100644 --- a/src/main/java/org/elasticsearch/index/translog/TranslogStreams.java +++ b/src/main/java/org/elasticsearch/index/translog/TranslogStreams.java @@ -21,6 +21,8 @@ package org.elasticsearch.index.translog; import org.apache.lucene.codecs.CodecUtil; import org.apache.lucene.index.CorruptIndexException; +import org.apache.lucene.index.IndexFormatTooNewException; +import org.apache.lucene.index.IndexFormatTooOldException; import org.apache.lucene.store.InputStreamDataInput; import org.elasticsearch.common.io.stream.InputStreamStreamInput; import org.elasticsearch.common.io.stream.StreamInput; @@ -141,7 +143,7 @@ public class TranslogStreams { } else { throw new TranslogCorruptedException("Invalid first byte in translog file, got: " + Long.toHexString(b1) + ", expected 0x00 or 0x3f"); } - } catch (CorruptIndexException e) { + } catch (CorruptIndexException | IndexFormatTooOldException | IndexFormatTooNewException e) { throw new TranslogCorruptedException("Translog header corrupted", e); } } diff --git a/src/main/java/org/elasticsearch/index/translog/fs/FsTranslog.java b/src/main/java/org/elasticsearch/index/translog/fs/FsTranslog.java index d15e23cf846..1ad6f1635a6 100644 --- a/src/main/java/org/elasticsearch/index/translog/fs/FsTranslog.java +++ b/src/main/java/org/elasticsearch/index/translog/fs/FsTranslog.java @@ -19,6 +19,7 @@ package org.elasticsearch.index.translog.fs; +import org.apache.lucene.util.Accountable; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.bytes.ReleasableBytesReference; @@ -40,6 +41,7 @@ import org.elasticsearch.index.translog.*; import java.io.File; import java.io.IOException; import java.nio.channels.ClosedChannelException; +import java.util.Collections; import java.util.concurrent.ThreadLocalRandom; import java.util.concurrent.locks.ReadWriteLock; import 
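On the indexed() checks rewritten here (and again in SingleDocumentPercolatorIndex below): Lucene 5 folds the separate "indexed" flag into IndexOptions, with IndexOptions.NONE meaning not indexed at all. The equivalent predicate as a sketch (the helper name is hypothetical):

    import org.apache.lucene.index.IndexOptions;
    import org.apache.lucene.index.IndexableField;

    final class FieldChecks {
        // replaces the removed field.fieldType().indexed() accessor
        static boolean isIndexed(IndexableField field) {
            return field.fieldType().indexOptions() != IndexOptions.NONE;
        }
    }
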
java.util.concurrent.locks.ReentrantReadWriteLock; @@ -184,6 +186,11 @@ public class FsTranslog extends AbstractIndexShardComponent implements Translog return 0; } + @Override + public Iterable getChildResources() { + return Collections.emptyList(); + } + @Override public long translogSizeInBytes() { FsTranslogFile current1 = this.current; diff --git a/src/main/java/org/elasticsearch/indices/analysis/IndicesAnalysisService.java b/src/main/java/org/elasticsearch/indices/analysis/IndicesAnalysisService.java index 12849c1bf2e..bcd735cfafe 100644 --- a/src/main/java/org/elasticsearch/indices/analysis/IndicesAnalysisService.java +++ b/src/main/java/org/elasticsearch/indices/analysis/IndicesAnalysisService.java @@ -20,45 +20,13 @@ package org.elasticsearch.indices.analysis; import org.apache.lucene.analysis.Analyzer; -import org.apache.lucene.analysis.TokenStream; -import org.apache.lucene.analysis.Tokenizer; -import org.apache.lucene.analysis.ar.ArabicNormalizationFilter; -import org.apache.lucene.analysis.ar.ArabicStemFilter; -import org.apache.lucene.analysis.br.BrazilianStemFilter; -import org.apache.lucene.analysis.charfilter.HTMLStripCharFilter; -import org.apache.lucene.analysis.commongrams.CommonGramsFilter; -import org.apache.lucene.analysis.core.*; -import org.apache.lucene.analysis.cz.CzechStemFilter; -import org.apache.lucene.analysis.de.GermanStemFilter; -import org.apache.lucene.analysis.en.KStemFilter; -import org.apache.lucene.analysis.en.PorterStemFilter; -import org.apache.lucene.analysis.fa.PersianNormalizationFilter; -import org.apache.lucene.analysis.fr.FrenchAnalyzer; -import org.apache.lucene.analysis.fr.FrenchStemFilter; -import org.apache.lucene.analysis.miscellaneous.*; -import org.apache.lucene.analysis.ngram.EdgeNGramTokenFilter; -import org.apache.lucene.analysis.ngram.EdgeNGramTokenizer; -import org.apache.lucene.analysis.ngram.NGramTokenFilter; -import org.apache.lucene.analysis.ngram.NGramTokenizer; -import org.apache.lucene.analysis.nl.DutchStemFilter; -import org.apache.lucene.analysis.path.PathHierarchyTokenizer; -import org.apache.lucene.analysis.pattern.PatternTokenizer; -import org.apache.lucene.analysis.payloads.TypeAsPayloadTokenFilter; -import org.apache.lucene.analysis.reverse.ReverseStringFilter; -import org.apache.lucene.analysis.snowball.SnowballFilter; -import org.apache.lucene.analysis.standard.*; -import org.apache.lucene.analysis.util.CharArraySet; -import org.apache.lucene.analysis.util.ElisionFilter; import org.elasticsearch.Version; import org.elasticsearch.common.component.AbstractComponent; import org.elasticsearch.common.inject.Inject; -import org.elasticsearch.common.lucene.Lucene; -import org.elasticsearch.common.regex.Regex; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.util.concurrent.ConcurrentCollections; import org.elasticsearch.index.analysis.*; -import java.io.Reader; import java.util.Locale; import java.util.Map; diff --git a/src/main/java/org/elasticsearch/indices/analysis/PreBuiltAnalyzers.java b/src/main/java/org/elasticsearch/indices/analysis/PreBuiltAnalyzers.java index 5cf7ecf0682..7aeafcf3ee7 100644 --- a/src/main/java/org/elasticsearch/indices/analysis/PreBuiltAnalyzers.java +++ b/src/main/java/org/elasticsearch/indices/analysis/PreBuiltAnalyzers.java @@ -25,7 +25,6 @@ import org.apache.lucene.analysis.br.BrazilianAnalyzer; import org.apache.lucene.analysis.ca.CatalanAnalyzer; import org.apache.lucene.analysis.cjk.CJKAnalyzer; import org.apache.lucene.analysis.ckb.SoraniAnalyzer; -import 
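The getChildResources() overrides added here implement the detailed memory-usage API mentioned in the commit message: every Accountable may expose child Accountables, forming a tree of RAM usage. A consumer-side sketch that walks such a tree (the printing format is arbitrary, and the for-each works whether the snapshot declares the return type as Iterable or a Collection):

    import org.apache.lucene.util.Accountable;

    final class RamTreePrinter {
        static void print(Accountable resource, int depth) {
            StringBuilder indent = new StringBuilder();
            for (int i = 0; i < depth; i++) {
                indent.append("  ");
            }
            System.out.println(indent + resource.toString() + ": " + resource.ramBytesUsed() + " bytes");
            // leaves, like the translog locations above, simply return an empty list
            for (Accountable child : resource.getChildResources()) {
                print(child, depth + 1);
            }
        }
    }
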
org.apache.lucene.analysis.cn.ChineseAnalyzer; import org.apache.lucene.analysis.core.KeywordAnalyzer; import org.apache.lucene.analysis.core.SimpleAnalyzer; import org.apache.lucene.analysis.core.StopAnalyzer; @@ -48,13 +47,11 @@ import org.apache.lucene.analysis.hy.ArmenianAnalyzer; import org.apache.lucene.analysis.id.IndonesianAnalyzer; import org.apache.lucene.analysis.it.ItalianAnalyzer; import org.apache.lucene.analysis.lv.LatvianAnalyzer; -import org.apache.lucene.analysis.miscellaneous.PatternAnalyzer; import org.apache.lucene.analysis.nl.DutchAnalyzer; import org.apache.lucene.analysis.no.NorwegianAnalyzer; import org.apache.lucene.analysis.pt.PortugueseAnalyzer; import org.apache.lucene.analysis.ro.RomanianAnalyzer; import org.apache.lucene.analysis.ru.RussianAnalyzer; -import org.apache.lucene.analysis.snowball.SnowballAnalyzer; import org.apache.lucene.analysis.standard.ClassicAnalyzer; import org.apache.lucene.analysis.standard.StandardAnalyzer; import org.apache.lucene.analysis.sv.SwedishAnalyzer; @@ -63,7 +60,9 @@ import org.apache.lucene.analysis.tr.TurkishAnalyzer; import org.apache.lucene.analysis.util.CharArraySet; import org.elasticsearch.Version; import org.elasticsearch.common.regex.Regex; +import org.elasticsearch.index.analysis.PatternAnalyzer; import org.elasticsearch.index.analysis.StandardHtmlStripAnalyzer; +import org.elasticsearch.index.analysis.SnowballAnalyzer; import org.elasticsearch.indices.analysis.PreBuiltCacheFactory.CachingStrategy; import java.util.Locale; @@ -76,10 +75,14 @@ public enum PreBuiltAnalyzers { STANDARD(CachingStrategy.ELASTICSEARCH) { // we don't do stopwords anymore from 1.0Beta on @Override protected Analyzer create(Version version) { + final Analyzer a; if (version.onOrAfter(Version.V_1_0_0_Beta1)) { - return new StandardAnalyzer(version.luceneVersion, CharArraySet.EMPTY_SET); + a = new StandardAnalyzer(CharArraySet.EMPTY_SET); + } else { + a = new StandardAnalyzer(); } - return new StandardAnalyzer(version.luceneVersion); + a.setVersion(version.luceneVersion); + return a; } }, @@ -102,35 +105,45 @@ public enum PreBuiltAnalyzers { STOP { @Override protected Analyzer create(Version version) { - return new StopAnalyzer(version.luceneVersion); + Analyzer a = new StopAnalyzer(); + a.setVersion(version.luceneVersion); + return a; } }, WHITESPACE { @Override protected Analyzer create(Version version) { - return new WhitespaceAnalyzer(version.luceneVersion); + Analyzer a = new WhitespaceAnalyzer(); + a.setVersion(version.luceneVersion); + return a; } }, SIMPLE { @Override protected Analyzer create(Version version) { - return new SimpleAnalyzer(version.luceneVersion); + Analyzer a = new SimpleAnalyzer(); + a.setVersion(version.luceneVersion); + return a; } }, CLASSIC { @Override protected Analyzer create(Version version) { - return new ClassicAnalyzer(version.luceneVersion); + Analyzer a = new ClassicAnalyzer(); + a.setVersion(version.luceneVersion); + return a; } }, SNOWBALL { @Override protected Analyzer create(Version version) { - return new SnowballAnalyzer(version.luceneVersion, "English", StopAnalyzer.ENGLISH_STOP_WORDS_SET); + Analyzer analyzer = new SnowballAnalyzer("English", StopAnalyzer.ENGLISH_STOP_WORDS_SET); + analyzer.setVersion(version.luceneVersion); + return analyzer; } }, @@ -138,250 +151,320 @@ public enum PreBuiltAnalyzers { @Override protected Analyzer create(Version version) { if (version.onOrAfter(Version.V_1_0_0_RC1)) { - return new PatternAnalyzer(version.luceneVersion, Regex.compile("\\W+" 
/*PatternAnalyzer.NON_WORD_PATTERN*/, null), true, CharArraySet.EMPTY_SET); + return new PatternAnalyzer(Regex.compile("\\W+" /*PatternAnalyzer.NON_WORD_PATTERN*/, null), true, CharArraySet.EMPTY_SET); } - return new PatternAnalyzer(version.luceneVersion, Regex.compile("\\W+" /*PatternAnalyzer.NON_WORD_PATTERN*/, null), true, StopAnalyzer.ENGLISH_STOP_WORDS_SET); + return new PatternAnalyzer(Regex.compile("\\W+" /*PatternAnalyzer.NON_WORD_PATTERN*/, null), true, StopAnalyzer.ENGLISH_STOP_WORDS_SET); } }, STANDARD_HTML_STRIP(CachingStrategy.ELASTICSEARCH) { @Override protected Analyzer create(Version version) { + final Analyzer analyzer; if (version.onOrAfter(Version.V_1_0_0_RC1)) { - return new StandardHtmlStripAnalyzer(version.luceneVersion, CharArraySet.EMPTY_SET); + analyzer = new StandardHtmlStripAnalyzer(CharArraySet.EMPTY_SET); + } else { + analyzer = new StandardHtmlStripAnalyzer(); } - return new StandardHtmlStripAnalyzer(version.luceneVersion); + analyzer.setVersion(version.luceneVersion); + return analyzer; } }, ARABIC { @Override protected Analyzer create(Version version) { - return new ArabicAnalyzer(version.luceneVersion); + Analyzer a = new ArabicAnalyzer(); + a.setVersion(version.luceneVersion); + return a; } }, ARMENIAN { @Override protected Analyzer create(Version version) { - return new ArmenianAnalyzer(version.luceneVersion); + Analyzer a = new ArmenianAnalyzer(); + a.setVersion(version.luceneVersion); + return a; } }, BASQUE { @Override protected Analyzer create(Version version) { - return new BasqueAnalyzer(version.luceneVersion); + Analyzer a = new BasqueAnalyzer(); + a.setVersion(version.luceneVersion); + return a; } }, BRAZILIAN { @Override protected Analyzer create(Version version) { - return new BrazilianAnalyzer(version.luceneVersion); + Analyzer a = new BrazilianAnalyzer(); + a.setVersion(version.luceneVersion); + return a; } }, BULGARIAN { @Override protected Analyzer create(Version version) { - return new BulgarianAnalyzer(version.luceneVersion); + Analyzer a = new BulgarianAnalyzer(); + a.setVersion(version.luceneVersion); + return a; } }, CATALAN { @Override protected Analyzer create(Version version) { - return new CatalanAnalyzer(version.luceneVersion); + Analyzer a = new CatalanAnalyzer(); + a.setVersion(version.luceneVersion); + return a; } }, CHINESE(CachingStrategy.ONE) { @Override protected Analyzer create(Version version) { - return new ChineseAnalyzer(); + Analyzer a = new StandardAnalyzer(); + a.setVersion(version.luceneVersion); + return a; } }, CJK { @Override protected Analyzer create(Version version) { - return new CJKAnalyzer(version.luceneVersion); + Analyzer a = new CJKAnalyzer(); + a.setVersion(version.luceneVersion); + return a; } }, CZECH { @Override protected Analyzer create(Version version) { - return new CzechAnalyzer(version.luceneVersion); + Analyzer a = new CzechAnalyzer(); + a.setVersion(version.luceneVersion); + return a; } }, DUTCH { @Override protected Analyzer create(Version version) { - return new DutchAnalyzer(version.luceneVersion); + Analyzer a = new DutchAnalyzer(); + a.setVersion(version.luceneVersion); + return a; } }, DANISH { @Override protected Analyzer create(Version version) { - return new DanishAnalyzer(version.luceneVersion); + Analyzer a = new DanishAnalyzer(); + a.setVersion(version.luceneVersion); + return a; } }, ENGLISH { @Override protected Analyzer create(Version version) { - return new EnglishAnalyzer(version.luceneVersion); + Analyzer a = new EnglishAnalyzer(); + a.setVersion(version.luceneVersion); + 
return a; } }, FINNISH { @Override protected Analyzer create(Version version) { - return new FinnishAnalyzer(version.luceneVersion); + Analyzer a = new FinnishAnalyzer(); + a.setVersion(version.luceneVersion); + return a; } }, FRENCH { @Override protected Analyzer create(Version version) { - return new FrenchAnalyzer(version.luceneVersion); + Analyzer a = new FrenchAnalyzer(); + a.setVersion(version.luceneVersion); + return a; } }, GALICIAN { @Override protected Analyzer create(Version version) { - return new GalicianAnalyzer(version.luceneVersion); + Analyzer a = new GalicianAnalyzer(); + a.setVersion(version.luceneVersion); + return a; } }, GERMAN { @Override protected Analyzer create(Version version) { - return new GermanAnalyzer(version.luceneVersion); + Analyzer a = new GermanAnalyzer(); + a.setVersion(version.luceneVersion); + return a; } }, GREEK { @Override protected Analyzer create(Version version) { - return new GreekAnalyzer(version.luceneVersion); + Analyzer a = new GreekAnalyzer(); + a.setVersion(version.luceneVersion); + return a; } }, HINDI { @Override protected Analyzer create(Version version) { - return new HindiAnalyzer(version.luceneVersion); + Analyzer a = new HindiAnalyzer(); + a.setVersion(version.luceneVersion); + return a; } }, HUNGARIAN { @Override protected Analyzer create(Version version) { - return new HungarianAnalyzer(version.luceneVersion); + Analyzer a = new HungarianAnalyzer(); + a.setVersion(version.luceneVersion); + return a; } }, INDONESIAN { @Override protected Analyzer create(Version version) { - return new IndonesianAnalyzer(version.luceneVersion); + Analyzer a = new IndonesianAnalyzer(); + a.setVersion(version.luceneVersion); + return a; } }, IRISH { @Override protected Analyzer create(Version version) { - return new IrishAnalyzer(version.luceneVersion); + Analyzer a = new IrishAnalyzer(); + a.setVersion(version.luceneVersion); + return a; } }, ITALIAN { @Override protected Analyzer create(Version version) { - return new ItalianAnalyzer(version.luceneVersion); + Analyzer a = new ItalianAnalyzer(); + a.setVersion(version.luceneVersion); + return a; } }, LATVIAN { @Override protected Analyzer create(Version version) { - return new LatvianAnalyzer(version.luceneVersion); + Analyzer a = new LatvianAnalyzer(); + a.setVersion(version.luceneVersion); + return a; } }, NORWEGIAN { @Override protected Analyzer create(Version version) { - return new NorwegianAnalyzer(version.luceneVersion); + Analyzer a = new NorwegianAnalyzer(); + a.setVersion(version.luceneVersion); + return a; } }, PERSIAN { @Override protected Analyzer create(Version version) { - return new PersianAnalyzer(version.luceneVersion); + Analyzer a = new PersianAnalyzer(); + a.setVersion(version.luceneVersion); + return a; } }, PORTUGUESE { @Override protected Analyzer create(Version version) { - return new PortugueseAnalyzer(version.luceneVersion); + Analyzer a = new PortugueseAnalyzer(); + a.setVersion(version.luceneVersion); + return a; } }, ROMANIAN { @Override protected Analyzer create(Version version) { - return new RomanianAnalyzer(version.luceneVersion); + Analyzer a = new RomanianAnalyzer(); + a.setVersion(version.luceneVersion); + return a; } }, RUSSIAN { @Override protected Analyzer create(Version version) { - return new RussianAnalyzer(version.luceneVersion); + Analyzer a = new RussianAnalyzer(); + a.setVersion(version.luceneVersion); + return a; } }, SORANI { @Override protected Analyzer create(Version version) { - return new SoraniAnalyzer(version.luceneVersion); + Analyzer a = new 
SoraniAnalyzer(); + a.setVersion(version.luceneVersion); + return a; } }, SPANISH { @Override protected Analyzer create(Version version) { - return new SpanishAnalyzer(version.luceneVersion); + Analyzer a = new SpanishAnalyzer(); + a.setVersion(version.luceneVersion); + return a; } }, SWEDISH { @Override protected Analyzer create(Version version) { - return new SwedishAnalyzer(version.luceneVersion); + Analyzer a = new SwedishAnalyzer(); + a.setVersion(version.luceneVersion); + return a; } }, TURKISH { @Override protected Analyzer create(Version version) { - return new TurkishAnalyzer(version.luceneVersion); + Analyzer a = new TurkishAnalyzer(); + a.setVersion(version.luceneVersion); + return a; } }, THAI { @Override protected Analyzer create(Version version) { - return new ThaiAnalyzer(version.luceneVersion); + Analyzer a = new ThaiAnalyzer(); + a.setVersion(version.luceneVersion); + return a; } }; diff --git a/src/main/java/org/elasticsearch/indices/analysis/PreBuiltTokenFilters.java b/src/main/java/org/elasticsearch/indices/analysis/PreBuiltTokenFilters.java index 2b374584c0e..d31e151b43a 100644 --- a/src/main/java/org/elasticsearch/indices/analysis/PreBuiltTokenFilters.java +++ b/src/main/java/org/elasticsearch/indices/analysis/PreBuiltTokenFilters.java @@ -27,6 +27,7 @@ import org.apache.lucene.analysis.cjk.CJKWidthFilter; import org.apache.lucene.analysis.ckb.SoraniNormalizationFilter; import org.apache.lucene.analysis.commongrams.CommonGramsFilter; import org.apache.lucene.analysis.core.LowerCaseFilter; +import org.apache.lucene.analysis.core.Lucene43StopFilter; import org.apache.lucene.analysis.core.StopAnalyzer; import org.apache.lucene.analysis.core.StopFilter; import org.apache.lucene.analysis.core.UpperCaseFilter; @@ -37,13 +38,13 @@ import org.apache.lucene.analysis.en.KStemFilter; import org.apache.lucene.analysis.en.PorterStemFilter; import org.apache.lucene.analysis.fa.PersianNormalizationFilter; import org.apache.lucene.analysis.fr.FrenchAnalyzer; -import org.apache.lucene.analysis.fr.FrenchStemFilter; import org.apache.lucene.analysis.hi.HindiNormalizationFilter; import org.apache.lucene.analysis.in.IndicNormalizationFilter; import org.apache.lucene.analysis.miscellaneous.*; import org.apache.lucene.analysis.ngram.EdgeNGramTokenFilter; +import org.apache.lucene.analysis.ngram.Lucene43EdgeNGramTokenFilter; +import org.apache.lucene.analysis.ngram.Lucene43NGramTokenFilter; import org.apache.lucene.analysis.ngram.NGramTokenFilter; -import org.apache.lucene.analysis.nl.DutchStemFilter; import org.apache.lucene.analysis.payloads.DelimitedPayloadTokenFilter; import org.apache.lucene.analysis.payloads.TypeAsPayloadTokenFilter; import org.apache.lucene.analysis.reverse.ReverseStringFilter; @@ -58,6 +59,8 @@ import org.elasticsearch.Version; import org.elasticsearch.index.analysis.*; import org.elasticsearch.index.analysis.LimitTokenCountFilterFactory; import org.elasticsearch.indices.analysis.PreBuiltCacheFactory.CachingStrategy; +import org.tartarus.snowball.ext.FrenchStemmer; +import org.tartarus.snowball.ext.DutchStemmer; import java.util.Locale; @@ -70,7 +73,7 @@ public enum PreBuiltTokenFilters { @Override public TokenStream create(TokenStream tokenStream, Version version) { if (version.luceneVersion.onOrAfter(org.apache.lucene.util.Version.LUCENE_4_8)) { - return new WordDelimiterFilter(version.luceneVersion, tokenStream, + return new WordDelimiterFilter(tokenStream, WordDelimiterFilter.GENERATE_WORD_PARTS | WordDelimiterFilter.GENERATE_NUMBER_PARTS | 
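Every analyzer rewrite in this enum follows one mechanical pattern: Lucene 5 analyzers no longer accept a Version constructor argument, and version-dependent behavior is instead requested after construction via Analyzer.setVersion. The pattern in isolation (FrenchAnalyzer is just a representative pick):

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.fr.FrenchAnalyzer;
    import org.apache.lucene.util.Version;

    final class AnalyzerVersionSketch {
        static Analyzer create(Version luceneVersion) {
            Analyzer a = new FrenchAnalyzer(); // no Version parameter anymore
            a.setVersion(luceneVersion);       // preserve old behavior for old indices
            return a;
        }
    }
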
WordDelimiterFilter.SPLIT_ON_CASE_CHANGE | @@ -92,21 +95,33 @@ public enum PreBuiltTokenFilters { STOP(CachingStrategy.LUCENE) { @Override public TokenStream create(TokenStream tokenStream, Version version) { - return new StopFilter(version.luceneVersion, tokenStream, StopAnalyzer.ENGLISH_STOP_WORDS_SET); + if (version.luceneVersion.onOrAfter(org.apache.lucene.util.Version.LUCENE_4_4_0)) { + return new StopFilter(tokenStream, StopAnalyzer.ENGLISH_STOP_WORDS_SET); + } else { + @SuppressWarnings("deprecation") + final TokenStream filter = new Lucene43StopFilter(true, tokenStream, StopAnalyzer.ENGLISH_STOP_WORDS_SET); + return filter; + } } }, TRIM(CachingStrategy.LUCENE) { @Override public TokenStream create(TokenStream tokenStream, Version version) { - return new TrimFilter(version.luceneVersion, tokenStream); + if (version.luceneVersion.onOrAfter(org.apache.lucene.util.Version.LUCENE_4_4_0)) { + return new TrimFilter(tokenStream); + } else { + @SuppressWarnings("deprecation") + final TokenStream filter = new Lucene43TrimFilter(tokenStream, true); + return filter; + } } }, REVERSE(CachingStrategy.LUCENE) { @Override public TokenStream create(TokenStream tokenStream, Version version) { - return new ReverseStringFilter(version.luceneVersion, tokenStream); + return new ReverseStringFilter(tokenStream); } }, @@ -120,28 +135,34 @@ public enum PreBuiltTokenFilters { LENGTH(CachingStrategy.LUCENE) { @Override public TokenStream create(TokenStream tokenStream, Version version) { - return new LengthFilter(version.luceneVersion, tokenStream, 0, Integer.MAX_VALUE); + if (version.luceneVersion.onOrAfter(org.apache.lucene.util.Version.LUCENE_4_4_0)) { + return new LengthFilter(tokenStream, 0, Integer.MAX_VALUE); + } else { + @SuppressWarnings("deprecation") + final TokenStream filter = new Lucene43LengthFilter(true, tokenStream, 0, Integer.MAX_VALUE); + return filter; + } } }, COMMON_GRAMS(CachingStrategy.LUCENE) { @Override public TokenStream create(TokenStream tokenStream, Version version) { - return new CommonGramsFilter(version.luceneVersion, tokenStream, CharArraySet.EMPTY_SET); + return new CommonGramsFilter(tokenStream, CharArraySet.EMPTY_SET); } }, LOWERCASE(CachingStrategy.LUCENE) { @Override public TokenStream create(TokenStream tokenStream, Version version) { - return new LowerCaseFilter(version.luceneVersion, tokenStream); + return new LowerCaseFilter(tokenStream); } }, UPPERCASE(CachingStrategy.LUCENE) { @Override public TokenStream create(TokenStream tokenStream, Version version) { - return new UpperCaseFilter(version.luceneVersion, tokenStream); + return new UpperCaseFilter(tokenStream); } }, @@ -162,7 +183,7 @@ public enum PreBuiltTokenFilters { STANDARD(CachingStrategy.LUCENE) { @Override public TokenStream create(TokenStream tokenStream, Version version) { - return new StandardFilter(version.luceneVersion, tokenStream); + return new StandardFilter(tokenStream); } }, @@ -176,14 +197,26 @@ public enum PreBuiltTokenFilters { NGRAM(CachingStrategy.LUCENE) { @Override public TokenStream create(TokenStream tokenStream, Version version) { - return new NGramTokenFilter(version.luceneVersion, tokenStream); + if (version.luceneVersion.onOrAfter(org.apache.lucene.util.Version.LUCENE_4_4_0)) { + return new NGramTokenFilter(tokenStream); + } else { + @SuppressWarnings("deprecation") + final TokenStream filter = new Lucene43NGramTokenFilter(tokenStream); + return filter; + } } }, EDGE_NGRAM(CachingStrategy.LUCENE) { @Override public TokenStream create(TokenStream tokenStream, Version version) { - return
new EdgeNGramTokenFilter(version.luceneVersion, tokenStream, EdgeNGramTokenFilter.DEFAULT_MIN_GRAM_SIZE, EdgeNGramTokenFilter.DEFAULT_MAX_GRAM_SIZE); + if (version.luceneVersion.onOrAfter(org.apache.lucene.util.Version.LUCENE_4_4_0)) { + return new EdgeNGramTokenFilter(tokenStream, EdgeNGramTokenFilter.DEFAULT_MIN_GRAM_SIZE, EdgeNGramTokenFilter.DEFAULT_MAX_GRAM_SIZE); + } else { + @SuppressWarnings("deprecation") + final TokenStream filter = new Lucene43EdgeNGramTokenFilter(tokenStream, EdgeNGramTokenFilter.DEFAULT_MIN_GRAM_SIZE, EdgeNGramTokenFilter.DEFAULT_MAX_GRAM_SIZE); + return filter; + } } }, @@ -247,14 +280,14 @@ public enum PreBuiltTokenFilters { DUTCH_STEM(CachingStrategy.ONE) { @Override public TokenStream create(TokenStream tokenStream, Version version) { - return new DutchStemFilter(tokenStream); + return new SnowballFilter(tokenStream, new DutchStemmer()); } }, FRENCH_STEM(CachingStrategy.ONE) { @Override public TokenStream create(TokenStream tokenStream, Version version) { - return new FrenchStemFilter(tokenStream); + return new SnowballFilter(tokenStream, new FrenchStemmer()); } }, diff --git a/src/main/java/org/elasticsearch/indices/analysis/PreBuiltTokenizers.java b/src/main/java/org/elasticsearch/indices/analysis/PreBuiltTokenizers.java index d749ef9708d..88af1fbb5b5 100644 --- a/src/main/java/org/elasticsearch/indices/analysis/PreBuiltTokenizers.java +++ b/src/main/java/org/elasticsearch/indices/analysis/PreBuiltTokenizers.java @@ -25,18 +25,21 @@ import org.apache.lucene.analysis.core.LowerCaseTokenizer; import org.apache.lucene.analysis.core.WhitespaceTokenizer; import org.apache.lucene.analysis.ngram.EdgeNGramTokenizer; import org.apache.lucene.analysis.ngram.NGramTokenizer; +import org.apache.lucene.analysis.ngram.Lucene43EdgeNGramTokenizer; +import org.apache.lucene.analysis.ngram.Lucene43NGramTokenizer; import org.apache.lucene.analysis.path.PathHierarchyTokenizer; import org.apache.lucene.analysis.pattern.PatternTokenizer; import org.apache.lucene.analysis.standard.ClassicTokenizer; import org.apache.lucene.analysis.standard.StandardTokenizer; import org.apache.lucene.analysis.standard.UAX29URLEmailTokenizer; +import org.apache.lucene.analysis.standard.std40.StandardTokenizer40; +import org.apache.lucene.analysis.standard.std40.UAX29URLEmailTokenizer40; import org.apache.lucene.analysis.th.ThaiTokenizer; import org.elasticsearch.Version; import org.elasticsearch.common.regex.Regex; import org.elasticsearch.index.analysis.TokenizerFactory; import org.elasticsearch.indices.analysis.PreBuiltCacheFactory.CachingStrategy; -import java.io.Reader; import java.util.Locale; /** @@ -46,91 +49,113 @@ public enum PreBuiltTokenizers { STANDARD(CachingStrategy.LUCENE) { @Override - protected Tokenizer create(Reader reader, Version version) { - return new StandardTokenizer(version.luceneVersion, reader); + protected Tokenizer create(Version version) { + if (version.luceneVersion.onOrAfter(org.apache.lucene.util.Version.LUCENE_4_7_0)) { + return new StandardTokenizer(); + } else { + return new StandardTokenizer40(); + } } }, CLASSIC(CachingStrategy.LUCENE) { @Override - protected Tokenizer create(Reader reader, Version version) { - return new ClassicTokenizer(version.luceneVersion, reader); + protected Tokenizer create(Version version) { + return new ClassicTokenizer(); } }, UAX_URL_EMAIL(CachingStrategy.LUCENE) { @Override - protected Tokenizer create(Reader reader, Version version) { - return new UAX29URLEmailTokenizer(version.luceneVersion, reader); + protected Tokenizer
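The PreBuiltTokenizers hunks here and below show the companion change for tokenizers: Lucene 5 tokenizer constructors no longer take a Reader, so the factory method create() loses its Reader parameter and callers attach input through setReader afterwards. In isolation (the input string is arbitrary):

    import java.io.IOException;
    import java.io.StringReader;
    import org.apache.lucene.analysis.Tokenizer;
    import org.apache.lucene.analysis.core.WhitespaceTokenizer;

    final class TokenizerReaderSketch {
        static Tokenizer tokenize(String text) throws IOException {
            Tokenizer tokenizer = new WhitespaceTokenizer(); // Lucene 5: no Reader argument
            tokenizer.setReader(new StringReader(text));     // input is supplied separately
            return tokenizer;
        }
    }
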
create(Version version) { + if (version.luceneVersion.onOrAfter(org.apache.lucene.util.Version.LUCENE_4_7_0)) { + return new UAX29URLEmailTokenizer(); + } else { + return new UAX29URLEmailTokenizer40(); + } } }, PATH_HIERARCHY(CachingStrategy.ONE) { @Override - protected Tokenizer create(Reader reader, Version version) { - return new PathHierarchyTokenizer(reader); + protected Tokenizer create(Version version) { + return new PathHierarchyTokenizer(); } }, KEYWORD(CachingStrategy.ONE) { @Override - protected Tokenizer create(Reader reader, Version version) { - return new KeywordTokenizer(reader); + protected Tokenizer create(Version version) { + return new KeywordTokenizer(); } }, LETTER(CachingStrategy.LUCENE) { @Override - protected Tokenizer create(Reader reader, Version version) { - return new LetterTokenizer(version.luceneVersion, reader); + protected Tokenizer create(Version version) { + return new LetterTokenizer(); } }, LOWERCASE(CachingStrategy.LUCENE) { @Override - protected Tokenizer create(Reader reader, Version version) { - return new LowerCaseTokenizer(version.luceneVersion, reader); + protected Tokenizer create(Version version) { + return new LowerCaseTokenizer(); } }, WHITESPACE(CachingStrategy.LUCENE) { @Override - protected Tokenizer create(Reader reader, Version version) { - return new WhitespaceTokenizer(version.luceneVersion, reader); + protected Tokenizer create(Version version) { + return new WhitespaceTokenizer(); } }, NGRAM(CachingStrategy.LUCENE) { @Override - protected Tokenizer create(Reader reader, Version version) { - return new NGramTokenizer(version.luceneVersion, reader); + protected Tokenizer create(Version version) { + // see NGramTokenizerFactory for an explanation of this logic: + // 4.4 patch was used before 4.4 was released + if (version.onOrAfter(org.elasticsearch.Version.V_0_90_2) && + version.luceneVersion.onOrAfter(org.apache.lucene.util.Version.LUCENE_4_3)) { + return new NGramTokenizer(); + } else { + return new Lucene43NGramTokenizer(); + } } }, EDGE_NGRAM(CachingStrategy.LUCENE) { @Override - protected Tokenizer create(Reader reader, Version version) { - return new EdgeNGramTokenizer(version.luceneVersion, reader, EdgeNGramTokenizer.DEFAULT_MIN_GRAM_SIZE, EdgeNGramTokenizer.DEFAULT_MAX_GRAM_SIZE); + protected Tokenizer create(Version version) { + // see EdgeNGramTokenizerFactory for an explanation of this logic: + // 4.4 patch was used before 4.4 was released + if (version.onOrAfter(org.elasticsearch.Version.V_0_90_2) && + version.luceneVersion.onOrAfter(org.apache.lucene.util.Version.LUCENE_4_3)) { + return new EdgeNGramTokenizer(EdgeNGramTokenizer.DEFAULT_MIN_GRAM_SIZE, EdgeNGramTokenizer.DEFAULT_MAX_GRAM_SIZE); + } else { + return new Lucene43EdgeNGramTokenizer(EdgeNGramTokenizer.DEFAULT_MIN_GRAM_SIZE, EdgeNGramTokenizer.DEFAULT_MAX_GRAM_SIZE); + } } }, PATTERN(CachingStrategy.ONE) { @Override - protected Tokenizer create(Reader reader, Version version) { - return new PatternTokenizer(reader, Regex.compile("\\W+", null), -1); + protected Tokenizer create(Version version) { + return new PatternTokenizer(Regex.compile("\\W+", null), -1); } }, THAI(CachingStrategy.ONE) { @Override - protected Tokenizer create(Reader reader, Version version) { - return new ThaiTokenizer(reader); + protected Tokenizer create(Version version) { + return new ThaiTokenizer(); } } ; - abstract protected Tokenizer create(Reader reader, Version version); + abstract protected Tokenizer create(Version version); protected final PreBuiltCacheFactory.PreBuiltCache cache; @@ 
-151,8 +176,8 @@ public enum PreBuiltTokenizers { } @Override - public Tokenizer create(Reader reader) { - return valueOf(finalName).create(reader, version); + public Tokenizer create() { + return valueOf(finalName).create(version); } }; cache.put(version, tokenizerFactory); diff --git a/src/main/java/org/elasticsearch/indices/cache/query/IndicesQueryCache.java b/src/main/java/org/elasticsearch/indices/cache/query/IndicesQueryCache.java index dec054202f0..18f20be3fc8 100644 --- a/src/main/java/org/elasticsearch/indices/cache/query/IndicesQueryCache.java +++ b/src/main/java/org/elasticsearch/indices/cache/query/IndicesQueryCache.java @@ -54,6 +54,7 @@ import org.elasticsearch.search.query.QuerySearchResultProvider; import org.elasticsearch.threadpool.ThreadPool; import java.io.IOException; +import java.util.Collections; import java.util.Iterator; import java.util.Set; import java.util.concurrent.Callable; @@ -282,6 +283,12 @@ public class IndicesQueryCache extends AbstractComponent implements RemovalListe return RamUsageEstimator.NUM_BYTES_OBJECT_REF + RamUsageEstimator.NUM_BYTES_LONG + value.length(); } + @Override + public Iterable getChildResources() { + // TODO: more detailed ram usage? + return Collections.emptyList(); + } + @Override public boolean equals(Object o) { if (this == o) return true; diff --git a/src/main/java/org/elasticsearch/indices/fielddata/cache/IndicesFieldDataCache.java b/src/main/java/org/elasticsearch/indices/fielddata/cache/IndicesFieldDataCache.java index 4e901019b07..8a4443f179a 100644 --- a/src/main/java/org/elasticsearch/indices/fielddata/cache/IndicesFieldDataCache.java +++ b/src/main/java/org/elasticsearch/indices/fielddata/cache/IndicesFieldDataCache.java @@ -20,7 +20,7 @@ package org.elasticsearch.indices.fielddata.cache; import com.google.common.cache.*; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.IndexReader; import org.apache.lucene.index.SegmentReader; import org.apache.lucene.util.Accountable; @@ -163,7 +163,7 @@ public class IndicesFieldDataCache extends AbstractComponent implements RemovalL } @Override - public > FD load(final AtomicReaderContext context, final IFD indexFieldData) throws Exception { + public > FD load(final LeafReaderContext context, final IFD indexFieldData) throws Exception { final Key key = new Key(this, context.reader().getCoreCacheKey()); //noinspection unchecked final Accountable accountable = cache.get(key, new Callable() { diff --git a/src/main/java/org/elasticsearch/indices/recovery/RecoverySource.java b/src/main/java/org/elasticsearch/indices/recovery/RecoverySource.java index 36d9fd8e549..0eaf96b8af7 100644 --- a/src/main/java/org/elasticsearch/indices/recovery/RecoverySource.java +++ b/src/main/java/org/elasticsearch/indices/recovery/RecoverySource.java @@ -148,7 +148,7 @@ public class RecoverySource extends AbstractComponent { final StoreFileMetaData md = recoverySourceMetadata.get(name); if (md == null) { logger.info("Snapshot differs from actual index for file: {} meta: {}", name, recoverySourceMetadata.asMap()); - throw new CorruptIndexException("Snapshot differs from actual index - maybe index was removed metadata has " + recoverySourceMetadata.asMap().size() + " files"); + throw new CorruptIndexException("Snapshot differs from actual index - maybe index was removed metadata has " + recoverySourceMetadata.asMap().size() + " files", name); } } final Store.RecoveryDiff diff = recoverySourceMetadata.recoveryDiff(new 
Store.MetadataSnapshot(request.existingFiles())); @@ -182,7 +182,7 @@ public class RecoverySource extends AbstractComponent { final CountDownLatch latch = new CountDownLatch(response.phase1FileNames.size()); final CopyOnWriteArrayList exceptions = new CopyOnWriteArrayList<>(); - final AtomicReference corruptedEngine = new AtomicReference<>(); + final AtomicReference corruptedEngine = new AtomicReference<>(); int fileIndex = 0; for (final String name : response.phase1FileNames) { ThreadPoolExecutor pool; @@ -226,8 +226,8 @@ public class RecoverySource extends AbstractComponent { TransportRequestOptions.options().withCompress(shouldCompressRequest).withType(TransportRequestOptions.Type.RECOVERY).withTimeout(internalActionTimeout), EmptyTransportResponseHandler.INSTANCE_SAME).txGet(); } } catch (Throwable e) { - final CorruptIndexException corruptIndexException; - if ((corruptIndexException = ExceptionsHelper.unwrap(e, CorruptIndexException.class)) != null) { + final Throwable corruptIndexException; + if ((corruptIndexException = ExceptionsHelper.unwrapCorruption(e)) != null) { if (store.checkIntegrity(md) == false) { // we are corrupted on the primary -- fail! logger.warn("{} Corrupted file detected {} checksum mismatch", shard.shardId(), md); if (corruptedEngine.compareAndSet(null, corruptIndexException) == false) { diff --git a/src/main/java/org/elasticsearch/indices/recovery/RecoveryStatus.java b/src/main/java/org/elasticsearch/indices/recovery/RecoveryStatus.java index d9187f84ec3..93fd1ac9462 100644 --- a/src/main/java/org/elasticsearch/indices/recovery/RecoveryStatus.java +++ b/src/main/java/org/elasticsearch/indices/recovery/RecoveryStatus.java @@ -19,7 +19,6 @@ package org.elasticsearch.indices.recovery; -import org.apache.lucene.store.Directory; import org.apache.lucene.store.IOContext; import org.apache.lucene.store.IndexOutput; import org.apache.lucene.util.IOUtils; @@ -34,13 +33,10 @@ import org.elasticsearch.index.shard.service.InternalIndexShard; import org.elasticsearch.index.store.Store; import org.elasticsearch.index.store.StoreFileMetaData; -import java.io.FileNotFoundException; import java.io.IOException; -import java.nio.file.NoSuchFileException; import java.util.Iterator; import java.util.Map; import java.util.Map.Entry; -import java.util.Set; import java.util.concurrent.ConcurrentMap; import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicLong; @@ -91,7 +87,7 @@ public class RecoveryStatus extends AbstractRefCounted { store.incRef(); } - private final Set tempFileNames = ConcurrentCollections.newConcurrentSet(); + private final Map tempFileNames = ConcurrentCollections.newConcurrentMap(); public long recoveryId() { return recoveryId; @@ -147,23 +143,7 @@ public class RecoveryStatus extends AbstractRefCounted { /** renames all temporary files to their true name, potentially overriding existing files */ public void renameAllTempFiles() throws IOException { ensureRefCount(); - Iterator tempFileIterator = tempFileNames.iterator(); - final Directory directory = store.directory(); - while (tempFileIterator.hasNext()) { - String tempFile = tempFileIterator.next(); - String origFile = originalNameForTempFile(tempFile); - // first, go and delete the existing ones - try { - directory.deleteFile(origFile); - } catch (FileNotFoundException | NoSuchFileException e) { - } catch (Throwable ex) { - logger.debug("failed to delete file [{}]", ex, origFile); - } - // now, rename the files... 
and fail it it won't work - store.renameFile(tempFile, origFile); - // upon success, remove the temp file - tempFileIterator.remove(); - } + store.renameFilesSafe(tempFileNames); } /** cancel the recovery. calling this method will clean temporary files and release the store @@ -218,7 +198,7 @@ public class RecoveryStatus extends AbstractRefCounted { /** return true if the given file is a temporary file name issued by this recovery */ private boolean isTempFile(String filename) { - return tempFileNames.contains(filename); + return tempFileNames.containsKey(filename); } public IndexOutput getOpenIndexOutput(String key) { @@ -251,7 +231,7 @@ public class RecoveryStatus extends AbstractRefCounted { ensureRefCount(); String tempFileName = getTempNameForFile(fileName); // add first, before it's created - tempFileNames.add(tempFileName); + tempFileNames.put(tempFileName, fileName); IndexOutput indexOutput = store.createVerifyingOutput(tempFileName, metaData, IOContext.DEFAULT); openIndexOutputs.put(fileName, indexOutput); return indexOutput; @@ -268,7 +248,7 @@ public class RecoveryStatus extends AbstractRefCounted { iterator.remove(); } // trash temporary files - for (String file : tempFileNames) { + for (String file : tempFileNames.keySet()) { logger.trace("cleaning temporary file [{}]", file); store.deleteQuiet(file); } diff --git a/src/main/java/org/elasticsearch/indices/ttl/IndicesTTLService.java b/src/main/java/org/elasticsearch/indices/ttl/IndicesTTLService.java index 72256d05dfc..2a4df54b154 100644 --- a/src/main/java/org/elasticsearch/indices/ttl/IndicesTTLService.java +++ b/src/main/java/org/elasticsearch/indices/ttl/IndicesTTLService.java @@ -19,11 +19,11 @@ package org.elasticsearch.indices.ttl; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.Term; -import org.apache.lucene.search.Collector; import org.apache.lucene.search.Query; import org.apache.lucene.search.Scorer; +import org.apache.lucene.search.SimpleCollector; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.bulk.BulkRequest; @@ -237,8 +237,8 @@ public class IndicesTTLService extends AbstractLifecycleComponent docsToPurge = new ArrayList<>(); public ExpiredDocsCollector() { @@ -263,7 +263,7 @@ public class IndicesTTLService extends AbstractLifecycleComponent idFieldData; final IndexSearcher searcher; @@ -62,6 +62,8 @@ abstract class QueryCollector extends Collector { final List aggregatorCollector; + List&lt;LeafCollector&gt; aggregatorLeafCollectors; + QueryCollector(ESLogger logger, PercolateContext context, boolean isNestedDoc) { this.logger = logger; this.queries = context.percolateQueries(); @@ -93,27 +95,29 @@ abstract class QueryCollector extends Collector { aggregationContext.setNextReader(context.searcher().getIndexReader().getContext()); } aggregatorCollector = aggCollectorBuilder.build(); + aggregatorLeafCollectors = new ArrayList<>(aggregatorCollector.size()); } public void postMatch(int doc) throws IOException { - for (Collector collector : aggregatorCollector) { + for (LeafCollector collector : aggregatorLeafCollectors) { collector.collect(doc); } } @Override public void setScorer(Scorer scorer) throws IOException { - for (Collector collector : aggregatorCollector) { + for (LeafCollector collector : aggregatorLeafCollectors) { collector.setScorer(scorer); } } @Override - public void setNextReader(AtomicReaderContext context) throws IOException { + public void
@Override - public void setNextReader(AtomicReaderContext context) throws IOException { + public void doSetNextReader(LeafReaderContext context) throws IOException { // we use the UID because id might not be indexed values = idFieldData.load(context).getBytesValues(); + aggregatorLeafCollectors.clear(); for (Collector collector : aggregatorCollector) { - collector.setNextReader(context); + aggregatorLeafCollectors.add(collector.getLeafCollector(context)); } } @@ -254,9 +258,10 @@ abstract class QueryCollector extends Collector { } @Override - public void setNextReader(AtomicReaderContext context) throws IOException { - super.setNextReader(context); - topDocsCollector.setNextReader(context); + public void doSetNextReader(LeafReaderContext context) throws IOException { + super.doSetNextReader(context); + LeafCollector leafCollector = topDocsCollector.getLeafCollector(context); + assert leafCollector == topDocsCollector : "TopDocsCollector returns itself as leaf collector"; } @Override diff --git a/src/main/java/org/elasticsearch/percolator/SingleDocumentPercolatorIndex.java b/src/main/java/org/elasticsearch/percolator/SingleDocumentPercolatorIndex.java index e4a43ce102b..57412483d66 100644 --- a/src/main/java/org/elasticsearch/percolator/SingleDocumentPercolatorIndex.java +++ b/src/main/java/org/elasticsearch/percolator/SingleDocumentPercolatorIndex.java @@ -21,6 +21,7 @@ package org.elasticsearch.percolator; import org.apache.lucene.analysis.TokenStream; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexReader; import org.apache.lucene.index.IndexableField; import org.apache.lucene.index.memory.MemoryIndex; @@ -49,7 +50,7 @@ class SingleDocumentPercolatorIndex implements PercolatorIndex { public void prepare(PercolateContext context, ParsedDocument parsedDocument) { MemoryIndex memoryIndex = cache.get(); for (IndexableField field : parsedDocument.rootDoc().getFields()) { - if (!field.fieldType().indexed() && field.name().equals(UidFieldMapper.NAME)) { + if (field.fieldType().indexOptions() == IndexOptions.NONE && field.name().equals(UidFieldMapper.NAME)) { continue; } try { diff --git a/src/main/java/org/elasticsearch/rest/action/cat/RestIndicesAction.java b/src/main/java/org/elasticsearch/rest/action/cat/RestIndicesAction.java index d2fb78c5c28..546975867e5 100644 --- a/src/main/java/org/elasticsearch/rest/action/cat/RestIndicesAction.java +++ b/src/main/java/org/elasticsearch/rest/action/cat/RestIndicesAction.java @@ -464,8 +464,8 @@ public class RestIndicesAction extends AbstractCatAction { table.addCell(indexStats == null ? null : indexStats.getTotal().getSegments().getVersionMapMemory()); table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSegments().getVersionMapMemory()); - table.addCell(indexStats == null ? null : indexStats.getTotal().getSegments().getFixedBitSetMemory()); - table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSegments().getFixedBitSetMemory()); + table.addCell(indexStats == null ? null : indexStats.getTotal().getSegments().getBitsetMemory()); + table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSegments().getBitsetMemory()); table.addCell(indexStats == null ? null : indexStats.getTotal().getWarmer().current()); table.addCell(indexStats == null ? 
null : indexStats.getPrimaries().getWarmer().current()); diff --git a/src/main/java/org/elasticsearch/rest/action/cat/RestNodesAction.java b/src/main/java/org/elasticsearch/rest/action/cat/RestNodesAction.java index 50028897925..daa193bd954 100644 --- a/src/main/java/org/elasticsearch/rest/action/cat/RestNodesAction.java +++ b/src/main/java/org/elasticsearch/rest/action/cat/RestNodesAction.java @@ -296,7 +296,7 @@ public class RestNodesAction extends AbstractCatAction { table.addCell(stats == null ? null : stats.getIndices().getSegments().getIndexWriterMemory()); table.addCell(stats == null ? null : stats.getIndices().getSegments().getIndexWriterMaxMemory()); table.addCell(stats == null ? null : stats.getIndices().getSegments().getVersionMapMemory()); - table.addCell(stats == null ? null : stats.getIndices().getSegments().getFixedBitSetMemory()); + table.addCell(stats == null ? null : stats.getIndices().getSegments().getBitsetMemory()); table.addCell(stats == null ? null : stats.getIndices().getSuggest().getCurrent()); table.addCell(stats == null ? null : stats.getIndices().getSuggest().getTime()); diff --git a/src/main/java/org/elasticsearch/rest/action/cat/RestShardsAction.java b/src/main/java/org/elasticsearch/rest/action/cat/RestShardsAction.java index 8a094c969d8..d0bdba767bd 100644 --- a/src/main/java/org/elasticsearch/rest/action/cat/RestShardsAction.java +++ b/src/main/java/org/elasticsearch/rest/action/cat/RestShardsAction.java @@ -246,7 +246,7 @@ public class RestShardsAction extends AbstractCatAction { table.addCell(shardStats == null ? null : shardStats.getSegments().getIndexWriterMemory()); table.addCell(shardStats == null ? null : shardStats.getSegments().getIndexWriterMaxMemory()); table.addCell(shardStats == null ? null : shardStats.getSegments().getVersionMapMemory()); - table.addCell(shardStats == null ? null : shardStats.getSegments().getFixedBitSetMemory()); + table.addCell(shardStats == null ? null : shardStats.getSegments().getBitsetMemory()); table.addCell(shardStats == null ? null : shardStats.getWarmer().current()); table.addCell(shardStats == null ? 
null : shardStats.getWarmer().total()); diff --git a/src/main/java/org/elasticsearch/script/AbstractSearchScript.java b/src/main/java/org/elasticsearch/script/AbstractSearchScript.java index 662531141fa..3e48d864eff 100644 --- a/src/main/java/org/elasticsearch/script/AbstractSearchScript.java +++ b/src/main/java/org/elasticsearch/script/AbstractSearchScript.java @@ -19,7 +19,7 @@ package org.elasticsearch.script; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.search.Scorer; import org.elasticsearch.index.fielddata.ScriptDocValues; import org.elasticsearch.search.lookup.*; @@ -99,7 +99,7 @@ public abstract class AbstractSearchScript extends AbstractExecutableScript impl } @Override - public void setNextReader(AtomicReaderContext context) { + public void setNextReader(LeafReaderContext context) { lookup.setNextReader(context); } diff --git a/src/main/java/org/elasticsearch/script/expression/ExpressionScript.java b/src/main/java/org/elasticsearch/script/expression/ExpressionScript.java index 2c39fb93706..103126ab558 100644 --- a/src/main/java/org/elasticsearch/script/expression/ExpressionScript.java +++ b/src/main/java/org/elasticsearch/script/expression/ExpressionScript.java @@ -22,7 +22,7 @@ package org.elasticsearch.script.expression; import org.apache.lucene.expressions.Bindings; import org.apache.lucene.expressions.Expression; import org.apache.lucene.expressions.SimpleBindings; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.queries.function.FunctionValues; import org.apache.lucene.queries.function.ValueSource; import org.apache.lucene.search.Scorer; @@ -80,7 +80,7 @@ class ExpressionScript implements SearchScript { } @Override - public void setNextReader(AtomicReaderContext leaf) { + public void setNextReader(LeafReaderContext leaf) { try { values = source.getValues(context, leaf); } catch (IOException e) { diff --git a/src/main/java/org/elasticsearch/script/expression/FieldDataValueSource.java b/src/main/java/org/elasticsearch/script/expression/FieldDataValueSource.java index a8c455bfe78..16e3d35bb61 100644 --- a/src/main/java/org/elasticsearch/script/expression/FieldDataValueSource.java +++ b/src/main/java/org/elasticsearch/script/expression/FieldDataValueSource.java @@ -20,7 +20,7 @@ package org.elasticsearch.script.expression; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.queries.function.FunctionValues; import org.apache.lucene.queries.function.ValueSource; import org.elasticsearch.index.fielddata.AtomicFieldData; @@ -42,7 +42,7 @@ class FieldDataValueSource extends ValueSource { } @Override - public FunctionValues getValues(Map context, AtomicReaderContext leaf) throws IOException { + public FunctionValues getValues(Map context, LeafReaderContext leaf) throws IOException { AtomicFieldData leafData = fieldData.load(leaf); assert(leafData instanceof AtomicNumericFieldData); return new FieldDataFunctionValues(this, (AtomicNumericFieldData)leafData); diff --git a/src/main/java/org/elasticsearch/script/expression/ReplaceableConstValueSource.java b/src/main/java/org/elasticsearch/script/expression/ReplaceableConstValueSource.java index d4168453b0f..95eb5e4ab1d 100644 --- a/src/main/java/org/elasticsearch/script/expression/ReplaceableConstValueSource.java +++ b/src/main/java/org/elasticsearch/script/expression/ReplaceableConstValueSource.java @@ 
-19,7 +19,7 @@ package org.elasticsearch.script.expression; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.queries.function.FunctionValues; import org.apache.lucene.queries.function.ValueSource; import org.apache.lucene.queries.function.docvalues.DoubleDocValues; @@ -44,7 +44,7 @@ class ReplaceableConstValueSource extends ValueSource { } @Override - public FunctionValues getValues(Map map, AtomicReaderContext atomicReaderContext) throws IOException { + public FunctionValues getValues(Map map, LeafReaderContext atomicReaderContext) throws IOException { return fv; } diff --git a/src/main/java/org/elasticsearch/script/groovy/GroovyScriptEngineService.java b/src/main/java/org/elasticsearch/script/groovy/GroovyScriptEngineService.java index bbaf772dd85..f5bd1a3100c 100644 --- a/src/main/java/org/elasticsearch/script/groovy/GroovyScriptEngineService.java +++ b/src/main/java/org/elasticsearch/script/groovy/GroovyScriptEngineService.java @@ -22,7 +22,7 @@ package org.elasticsearch.script.groovy; import groovy.lang.Binding; import groovy.lang.GroovyClassLoader; import groovy.lang.Script; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.search.Scorer; import org.codehaus.groovy.ast.ClassCodeExpressionTransformer; import org.codehaus.groovy.ast.ClassNode; @@ -216,7 +216,7 @@ public class GroovyScriptEngineService extends AbstractComponent implements Scri } @Override - public void setNextReader(AtomicReaderContext context) { + public void setNextReader(LeafReaderContext context) { if (lookup != null) { lookup.setNextReader(context); } diff --git a/src/main/java/org/elasticsearch/search/MultiValueMode.java b/src/main/java/org/elasticsearch/search/MultiValueMode.java index ff1e5bcab42..58b87989def 100644 --- a/src/main/java/org/elasticsearch/search/MultiValueMode.java +++ b/src/main/java/org/elasticsearch/search/MultiValueMode.java @@ -22,9 +22,9 @@ package org.elasticsearch.search; import org.apache.lucene.index.*; import org.apache.lucene.util.Bits; +import org.apache.lucene.util.BitSet; import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.BytesRefBuilder; -import org.apache.lucene.util.FixedBitSet; import org.elasticsearch.ElasticsearchIllegalArgumentException; import org.elasticsearch.index.fielddata.FieldData; import org.elasticsearch.index.fielddata.NumericDoubleValues; @@ -438,7 +438,8 @@ public enum MultiValueMode { * * NOTE: Calling the returned instance on docs that are not root docs is illegal */ - public NumericDocValues select(final SortedNumericDocValues values, final long missingValue, final FixedBitSet rootDocs, final FixedBitSet innerDocs, int maxDoc) { + // TODO: technically innerDocs need not be BitSet: only needs advance() ? + public NumericDocValues select(final SortedNumericDocValues values, final long missingValue, final BitSet rootDocs, final BitSet innerDocs, int maxDoc) { if (rootDocs == null || innerDocs == null) { return select(DocValues.emptySortedNumeric(maxDoc), missingValue); }
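
These select(..., rootDocs, innerDocs, maxDoc) overloads now take the generic BitSet instead of FixedBitSet, so sparse implementations can back them too. What they compute can be pictured with a hand-rolled variant of the MAX case (illustrative only; real callers get rootDocs/innerDocs from the bitset cache, and nextSetBit requires an in-range start index, so prevRootDoc + 1 is assumed to be a valid doc id):

    import org.apache.lucene.index.SortedNumericDocValues;
    import org.apache.lucene.search.DocIdSetIterator;
    import org.apache.lucene.util.BitSet;

    final class NestedMax {
        /**
         * Max over the inner docs that belong to rootDoc, i.e. the docs in the
         * open interval (prevRootDoc, rootDoc) of the block-join layout.
         */
        static long maxForRoot(SortedNumericDocValues values, BitSet innerDocs,
                               int prevRootDoc, int rootDoc, long missingValue) {
            long max = missingValue;
            for (int doc = innerDocs.nextSetBit(prevRootDoc + 1);
                    doc < rootDoc; // NO_MORE_DOCS is Integer.MAX_VALUE, so this also terminates
                    doc = doc + 1 < rootDoc ? innerDocs.nextSetBit(doc + 1) : DocIdSetIterator.NO_MORE_DOCS) {
                values.setDocument(doc);
                for (int i = 0; i < values.count(); i++) {
                    max = Math.max(max, values.valueAt(i));
                }
            }
            return max;
        }
    }

As the TODO notes, only advance()-style forward iteration over innerDocs is actually needed, which is why the concrete bitset type no longer matters here.
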
@@ -530,7 +531,8 @@ public enum MultiValueMode { * * NOTE: Calling the returned instance on docs that are not root docs is illegal */ - public NumericDoubleValues select(final SortedNumericDoubleValues values, final double missingValue, final FixedBitSet rootDocs, final FixedBitSet innerDocs, int maxDoc) { + // TODO: technically innerDocs need not be BitSet: only needs advance() ? + public NumericDoubleValues select(final SortedNumericDoubleValues values, final double missingValue, final BitSet rootDocs, final BitSet innerDocs, int maxDoc) { if (rootDocs == null || innerDocs == null) { return select(FieldData.emptySortedNumericDoubles(maxDoc), missingValue); } @@ -613,7 +615,8 @@ public enum MultiValueMode { * * NOTE: Calling the returned instance on docs that are not root docs is illegal */ - public BinaryDocValues select(final SortedBinaryDocValues values, final BytesRef missingValue, final FixedBitSet rootDocs, final FixedBitSet innerDocs, int maxDoc) { + // TODO: technically innerDocs need not be BitSet: only needs advance() ? + public BinaryDocValues select(final SortedBinaryDocValues values, final BytesRef missingValue, final BitSet rootDocs, final BitSet innerDocs, int maxDoc) { if (rootDocs == null || innerDocs == null) { return select(FieldData.emptySortedBinary(maxDoc), missingValue); } @@ -706,7 +709,8 @@ public enum MultiValueMode { * * NOTE: Calling the returned instance on docs that are not root docs is illegal */ - public SortedDocValues select(final RandomAccessOrds values, final FixedBitSet rootDocs, final FixedBitSet innerDocs) { + // TODO: technically innerDocs need not be BitSet: only needs advance() ? + public SortedDocValues select(final RandomAccessOrds values, final BitSet rootDocs, final BitSet innerDocs) { if (rootDocs == null || innerDocs == null) { return select((RandomAccessOrds) DocValues.emptySortedSet()); } diff --git a/src/main/java/org/elasticsearch/search/SearchService.java b/src/main/java/org/elasticsearch/search/SearchService.java index a47611278d4..76cb56af9e7 100644 --- a/src/main/java/org/elasticsearch/search/SearchService.java +++ b/src/main/java/org/elasticsearch/search/SearchService.java @@ -24,7 +24,9 @@ import com.carrotsearch.hppc.ObjectSet; import com.carrotsearch.hppc.cursors.ObjectCursor; import com.google.common.base.Charsets; import com.google.common.collect.ImmutableMap; -import org.apache.lucene.index.AtomicReaderContext; + +import org.apache.lucene.index.IndexOptions; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.NumericDocValues; import org.apache.lucene.search.TopDocs; import org.elasticsearch.ElasticsearchException; @@ -762,7 +764,7 @@ public class SearchService extends AbstractLifecycleComponent<SearchService> { for (DocumentMapper docMapper : mapperService.docMappers(false)) { for (FieldMapper fieldMapper : docMapper.mappers()) { final String indexName = fieldMapper.names().indexName(); - if (fieldMapper.fieldType().indexed() && !fieldMapper.fieldType().omitNorms() && fieldMapper.normsLoading(defaultLoading) == Loading.EAGER) { + if (fieldMapper.fieldType().indexOptions() != IndexOptions.NONE && !fieldMapper.fieldType().omitNorms() && fieldMapper.normsLoading(defaultLoading) == Loading.EAGER) { warmUp.add(indexName); } } @@ -777,7 +779,7 @@ public class SearchService extends AbstractLifecycleComponent<SearchService> { for (Iterator<ObjectCursor<String>> it = warmUp.iterator(); it.hasNext(); ) { final String indexName = it.next().value; final long start = System.nanoTime(); - for (final AtomicReaderContext ctx : context.searcher().reader().leaves()) { + for (final LeafReaderContext ctx : context.searcher().reader().leaves()) { final NumericDocValues values = ctx.reader().getNormValues(indexName); if (values != null) { values.get(0); @@ -835,7 +837,7 @@ public class SearchService extends AbstractLifecycleComponent<SearchService> { final IndexFieldDataService indexFieldDataService = indexShard.indexFieldDataService(); final Executor 
executor = threadPool.executor(executor()); final CountDownLatch latch = new CountDownLatch(context.searcher().reader().leaves().size() * warmUp.size()); - for (final AtomicReaderContext ctx : context.searcher().reader().leaves()) { + for (final LeafReaderContext ctx : context.searcher().reader().leaves()) { for (final FieldMapper fieldMapper : warmUp.values()) { executor.execute(new Runnable() { diff --git a/src/main/java/org/elasticsearch/search/aggregations/AggregationPhase.java b/src/main/java/org/elasticsearch/search/aggregations/AggregationPhase.java index 5d87a773a78..a32d33d2035 100644 --- a/src/main/java/org/elasticsearch/search/aggregations/AggregationPhase.java +++ b/src/main/java/org/elasticsearch/search/aggregations/AggregationPhase.java @@ -19,16 +19,17 @@ package org.elasticsearch.search.aggregations; import com.google.common.collect.ImmutableMap; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.search.ConstantScoreQuery; import org.apache.lucene.search.Filter; +import org.apache.lucene.search.FilteredQuery; import org.apache.lucene.search.Query; import org.apache.lucene.search.Scorer; +import org.apache.lucene.search.SimpleCollector; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.lucene.search.Queries; import org.elasticsearch.common.lucene.search.XCollector; -import org.elasticsearch.common.lucene.search.XConstantScoreQuery; -import org.elasticsearch.common.lucene.search.XFilteredQuery; import org.elasticsearch.search.SearchParseElement; import org.elasticsearch.search.SearchPhase; import org.elasticsearch.search.aggregations.bucket.global.GlobalAggregator; @@ -116,10 +117,10 @@ public class AggregationPhase implements SearchPhase { // optimize the global collector based execution if (!globals.isEmpty()) { AggregationsCollector collector = new AggregationsCollector(globals, context.aggregations().aggregationContext()); - Query query = new XConstantScoreQuery(Queries.MATCH_ALL_FILTER); + Query query = new ConstantScoreQuery(Queries.MATCH_ALL_FILTER); Filter searchFilter = context.searchFilter(context.types()); if (searchFilter != null) { - query = new XFilteredQuery(query, searchFilter); + query = new FilteredQuery(query, searchFilter); } try { context.searcher().search(query, collector); @@ -140,7 +141,7 @@ public class AggregationPhase implements SearchPhase { } - public static class AggregationsCollector extends XCollector { + public static class AggregationsCollector extends SimpleCollector implements XCollector { private final AggregationContext aggregationContext; private final Aggregator[] collectors; @@ -163,7 +164,7 @@ public class AggregationPhase implements SearchPhase { } @Override - public void setNextReader(AtomicReaderContext context) throws IOException { + public void doSetNextReader(LeafReaderContext context) throws IOException { aggregationContext.setNextReader(context); } diff --git a/src/main/java/org/elasticsearch/search/aggregations/Aggregator.java b/src/main/java/org/elasticsearch/search/aggregations/Aggregator.java index e1783a6a5d4..637458c525a 100644 --- a/src/main/java/org/elasticsearch/search/aggregations/Aggregator.java +++ b/src/main/java/org/elasticsearch/search/aggregations/Aggregator.java @@ -20,7 +20,7 @@ package org.elasticsearch.search.aggregations; import com.google.common.base.Predicate; import com.google.common.collect.Iterables; -import 
org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.search.Scorer; import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.common.ParseField; @@ -204,7 +204,7 @@ public abstract class Aggregator extends BucketCollector implements Releasable { "preCollection not called on new Aggregator before use", null); } @Override - public void setNextReader(AtomicReaderContext reader) { + public void setNextReader(LeafReaderContext reader) { badState(); } diff --git a/src/main/java/org/elasticsearch/search/aggregations/AggregatorFactories.java b/src/main/java/org/elasticsearch/search/aggregations/AggregatorFactories.java index 84106fd92b8..560a9c9f67a 100644 --- a/src/main/java/org/elasticsearch/search/aggregations/AggregatorFactories.java +++ b/src/main/java/org/elasticsearch/search/aggregations/AggregatorFactories.java @@ -18,7 +18,7 @@ */ package org.elasticsearch.search.aggregations; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.elasticsearch.ElasticsearchIllegalArgumentException; import org.elasticsearch.ElasticsearchIllegalStateException; import org.elasticsearch.common.lease.Releasables; @@ -115,7 +115,7 @@ public class AggregatorFactories { } @Override - public void setNextReader(AtomicReaderContext reader) { + public void setNextReader(LeafReaderContext reader) { } @Override diff --git a/src/main/java/org/elasticsearch/search/aggregations/BucketCollector.java b/src/main/java/org/elasticsearch/search/aggregations/BucketCollector.java index 445285f1b2d..98aa1d799b0 100644 --- a/src/main/java/org/elasticsearch/search/aggregations/BucketCollector.java +++ b/src/main/java/org/elasticsearch/search/aggregations/BucketCollector.java @@ -20,7 +20,7 @@ package org.elasticsearch.search.aggregations; import com.google.common.collect.Iterables; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.elasticsearch.common.lucene.ReaderContextAware; import org.elasticsearch.search.aggregations.Aggregator.BucketAggregationMode; @@ -49,7 +49,7 @@ public abstract class BucketCollector implements ReaderContextAware { // no-op } @Override - public void setNextReader(AtomicReaderContext reader) { + public void setNextReader(LeafReaderContext reader) { // no-op } @Override @@ -83,7 +83,7 @@ public abstract class BucketCollector implements ReaderContextAware { } @Override - public void setNextReader(AtomicReaderContext reader) { + public void setNextReader(LeafReaderContext reader) { for (BucketCollector collector : collectors) { collector.setNextReader(reader); } diff --git a/src/main/java/org/elasticsearch/search/aggregations/FilteringBucketCollector.java b/src/main/java/org/elasticsearch/search/aggregations/FilteringBucketCollector.java index 67f5624cc25..fa2a5ccae3c 100644 --- a/src/main/java/org/elasticsearch/search/aggregations/FilteringBucketCollector.java +++ b/src/main/java/org/elasticsearch/search/aggregations/FilteringBucketCollector.java @@ -19,7 +19,7 @@ package org.elasticsearch.search.aggregations; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.ElasticsearchIllegalArgumentException; import org.elasticsearch.common.lease.Releasable; @@ -52,7 +52,7 @@ public class FilteringBucketCollector extends BucketCollector implements Releasa } @Override - public final 
void setNextReader(AtomicReaderContext reader) { + public final void setNextReader(LeafReaderContext reader) { delegate.setNextReader(reader); } diff --git a/src/main/java/org/elasticsearch/search/aggregations/NonCollectingAggregator.java b/src/main/java/org/elasticsearch/search/aggregations/NonCollectingAggregator.java index 2aeb9fd46f3..be47cb0329b 100644 --- a/src/main/java/org/elasticsearch/search/aggregations/NonCollectingAggregator.java +++ b/src/main/java/org/elasticsearch/search/aggregations/NonCollectingAggregator.java @@ -19,7 +19,7 @@ package org.elasticsearch.search.aggregations; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.elasticsearch.search.aggregations.support.AggregationContext; import java.io.IOException; @@ -40,7 +40,7 @@ public abstract class NonCollectingAggregator extends Aggregator { } @Override - public final void setNextReader(AtomicReaderContext reader) { + public final void setNextReader(LeafReaderContext reader) { fail(); } diff --git a/src/main/java/org/elasticsearch/search/aggregations/RecordingPerReaderBucketCollector.java b/src/main/java/org/elasticsearch/search/aggregations/RecordingPerReaderBucketCollector.java index 36252cb3777..711819e5073 100644 --- a/src/main/java/org/elasticsearch/search/aggregations/RecordingPerReaderBucketCollector.java +++ b/src/main/java/org/elasticsearch/search/aggregations/RecordingPerReaderBucketCollector.java @@ -19,7 +19,7 @@ package org.elasticsearch.search.aggregations; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.util.packed.PackedInts; import org.apache.lucene.util.packed.PackedLongValues; import org.elasticsearch.ElasticsearchException; @@ -41,12 +41,12 @@ public class RecordingPerReaderBucketCollector extends RecordingBucketCollector private boolean recordingComplete; static class PerSegmentCollects { - AtomicReaderContext readerContext; + LeafReaderContext readerContext; PackedLongValues.Builder docs; PackedLongValues.Builder buckets; int lastDocId = 0; - PerSegmentCollects(AtomicReaderContext readerContext) { + PerSegmentCollects(LeafReaderContext readerContext) { this.readerContext = readerContext; } @@ -111,7 +111,7 @@ public class RecordingPerReaderBucketCollector extends RecordingBucketCollector } @Override - public void setNextReader(AtomicReaderContext reader) { + public void setNextReader(LeafReaderContext reader) { if(recordingComplete){ // The way registration works for listening on reader changes we have the potential to be called > once // TODO fixup the aggs framework so setNextReader calls are delegated to child aggs and not reliant on diff --git a/src/main/java/org/elasticsearch/search/aggregations/bucket/DeferringBucketCollector.java b/src/main/java/org/elasticsearch/search/aggregations/bucket/DeferringBucketCollector.java index 7c958054ae6..01ede852d8a 100644 --- a/src/main/java/org/elasticsearch/search/aggregations/bucket/DeferringBucketCollector.java +++ b/src/main/java/org/elasticsearch/search/aggregations/bucket/DeferringBucketCollector.java @@ -19,7 +19,7 @@ package org.elasticsearch.search.aggregations.bucket; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.common.lease.Releasable; import org.elasticsearch.common.lease.Releasables; @@ -56,7 +56,7 @@ public class DeferringBucketCollector extends BucketCollector 
implements Releasa } @Override - public void setNextReader(AtomicReaderContext reader) { + public void setNextReader(LeafReaderContext reader) { recording.setNextReader(reader); } @@ -82,7 +82,7 @@ public class DeferringBucketCollector extends BucketCollector implements Releasa BucketCollector subs = new BucketCollector() { @Override - public void setNextReader(AtomicReaderContext reader) { + public void setNextReader(LeafReaderContext reader) { // Need to set AggregationContext otherwise ValueSources in aggs // don't read any values context.setNextReader(reader); diff --git a/src/main/java/org/elasticsearch/search/aggregations/bucket/children/ParentToChildrenAggregator.java b/src/main/java/org/elasticsearch/search/aggregations/bucket/children/ParentToChildrenAggregator.java index 4bfe28bc812..3e3ccf18df2 100644 --- a/src/main/java/org/elasticsearch/search/aggregations/bucket/children/ParentToChildrenAggregator.java +++ b/src/main/java/org/elasticsearch/search/aggregations/bucket/children/ParentToChildrenAggregator.java @@ -18,21 +18,20 @@ */ package org.elasticsearch.search.aggregations.bucket.children; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.SortedDocValues; import org.apache.lucene.search.DocIdSet; import org.apache.lucene.search.DocIdSetIterator; import org.apache.lucene.search.Filter; +import org.apache.lucene.search.join.BitDocIdSetFilter; import org.apache.lucene.util.Bits; import org.elasticsearch.ElasticsearchIllegalStateException; import org.elasticsearch.ExceptionsHelper; import org.elasticsearch.common.lease.Releasables; import org.elasticsearch.common.lucene.ReaderContextAware; import org.elasticsearch.common.lucene.docset.DocIdSets; -import org.elasticsearch.common.lucene.search.ApplyAcceptedDocsFilter; import org.elasticsearch.common.util.LongArray; import org.elasticsearch.common.util.LongObjectPagedHashMap; -import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilter; import org.elasticsearch.index.search.child.ConstantScorer; import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactories; @@ -55,7 +54,7 @@ public class ParentToChildrenAggregator extends SingleBucketAggregator implement private final String parentType; private final Filter childFilter; - private final FixedBitSetFilter parentFilter; + private final BitDocIdSetFilter parentFilter; private final ValuesSource.Bytes.WithOrdinals.ParentChild valuesSource; // Maybe use PagedGrowableWriter? This will be less wasteful than LongArray, but then we don't have the reuse feature of BigArrays. @@ -68,7 +67,7 @@ public class ParentToChildrenAggregator extends SingleBucketAggregator implement private final LongObjectPagedHashMap parentOrdToOtherBuckets; private boolean multipleBucketsPerParentOrd = false; - private List<AtomicReaderContext> replay = new ArrayList<>(); + private List<LeafReaderContext> replay = new ArrayList<>(); private SortedDocValues globalOrdinals; private Bits parentDocs; @@ -80,8 +79,8 @@ public class ParentToChildrenAggregator extends SingleBucketAggregator implement // The child filter doesn't rely on random access it just used to iterate over all docs with a specific type, // so use the filter cache instead. When the filter cache is smarter with what filter impl to pick we can benefit // from it here - this.childFilter = new ApplyAcceptedDocsFilter(aggregationContext.searchContext().filterCache().cache(childFilter)); - this.parentFilter = aggregationContext.searchContext().fixedBitSetFilterCache().getFixedBitSetFilter(parentFilter); + this.childFilter = aggregationContext.searchContext().filterCache().cache(childFilter); + this.parentFilter = aggregationContext.searchContext().bitsetFilterCache().getBitDocIdSetFilter(parentFilter);
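
getBitDocIdSetFilter replaces the old FixedBitSetFilter cache and yields a type-safe BitDocIdSet per segment, behind which the concrete bitset (FixedBitSet, SparseFixedBitSet, ...) is an implementation detail. A sketch of how such a filter is consumed, mirroring the null/empty handling in these aggregators; the cache wiring is assumed, and ES's DocIdSets.isEmpty is approximated by a null check:

    import java.io.IOException;
    import org.apache.lucene.index.LeafReaderContext;
    import org.apache.lucene.search.join.BitDocIdSetFilter;
    import org.apache.lucene.util.BitDocIdSet;
    import org.apache.lucene.util.BitSet;

    final class ParentBitsResolver {
        private final BitDocIdSetFilter parentFilter; // e.g. obtained from a bitset filter cache

        ParentBitsResolver(BitDocIdSetFilter parentFilter) {
            this.parentFilter = parentFilter;
        }

        /** Returns the parent documents of this segment, or null when the segment has none. */
        BitSet parentBits(LeafReaderContext leaf) throws IOException {
            BitDocIdSet parentSet = parentFilter.getDocIdSet(leaf);
            if (parentSet == null) {
                return null; // no parents in this segment
            }
            // random-access view; the concrete BitSet subclass is hidden from the caller
            return parentSet.bits();
        }
    }
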
this.parentOrdToBuckets = aggregationContext.bigArrays().newLongArray(maxOrd, false); this.parentOrdToBuckets.fill(0, maxOrd, -1); this.parentOrdToOtherBuckets = new LongObjectPagedHashMap<>(aggregationContext.bigArrays()); @@ -121,7 +120,7 @@ public class ParentToChildrenAggregator extends SingleBucketAggregator implement } @Override - public void setNextReader(AtomicReaderContext reader) { + public void setNextReader(LeafReaderContext reader) { if (replay == null) { return; } @@ -146,10 +145,10 @@ public class ParentToChildrenAggregator extends SingleBucketAggregator implement @Override protected void doPostCollection() throws IOException { - List<AtomicReaderContext> replay = this.replay; + List<LeafReaderContext> replay = this.replay; this.replay = null; - for (AtomicReaderContext atomicReaderContext : replay) { + for (LeafReaderContext atomicReaderContext : replay) { context.setNextReader(atomicReaderContext); SortedDocValues globalOrdinals = valuesSource.globalOrdinalsValues(parentType); diff --git a/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FilterAggregator.java b/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FilterAggregator.java index 9a35510f280..6bd9f28b0c9 100644 --- a/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FilterAggregator.java +++ b/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FilterAggregator.java @@ -18,7 +18,7 @@ */ package org.elasticsearch.search.aggregations.bucket.filter; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.search.Filter; import org.apache.lucene.util.Bits; import org.elasticsearch.common.lucene.docset.DocIdSets; @@ -49,7 +49,7 @@ public class FilterAggregator extends SingleBucketAggregator { } @Override - public void setNextReader(AtomicReaderContext reader) { + public void setNextReader(LeafReaderContext reader) { try { bits = DocIdSets.toSafeBits(reader.reader(), filter.getDocIdSet(reader, reader.reader().getLiveDocs())); } catch (IOException ioe) { diff --git a/src/main/java/org/elasticsearch/search/aggregations/bucket/filters/FiltersAggregator.java b/src/main/java/org/elasticsearch/search/aggregations/bucket/filters/FiltersAggregator.java index b0291f4e52d..98561771830 100644 --- a/src/main/java/org/elasticsearch/search/aggregations/bucket/filters/FiltersAggregator.java +++ b/src/main/java/org/elasticsearch/search/aggregations/bucket/filters/FiltersAggregator.java @@ -20,7 +20,7 @@ package org.elasticsearch.search.aggregations.bucket.filters; import com.google.common.collect.Lists; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.search.Filter; import org.apache.lucene.util.Bits; import org.elasticsearch.common.lucene.docset.DocIdSets; @@ -67,7 +67,7 @@ public class FiltersAggregator extends BucketsAggregator { } @Override - public void setNextReader(AtomicReaderContext reader) { + public void 
setNextReader(LeafReaderContext reader) { try { for (int i = 0; i < filters.length; i++) { bits[i] = DocIdSets.toSafeBits(reader.reader(), filters[i].filter.getDocIdSet(reader, reader.reader().getLiveDocs())); diff --git a/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoHashGridAggregator.java b/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoHashGridAggregator.java index 31a8ccc0bd6..84c2f6c28d3 100644 --- a/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoHashGridAggregator.java +++ b/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoHashGridAggregator.java @@ -18,7 +18,7 @@ */ package org.elasticsearch.search.aggregations.bucket.geogrid; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.SortedNumericDocValues; import org.elasticsearch.common.lease.Releasables; import org.elasticsearch.common.util.LongHash; @@ -64,7 +64,7 @@ public class GeoHashGridAggregator extends BucketsAggregator { } @Override - public void setNextReader(AtomicReaderContext reader) { + public void setNextReader(LeafReaderContext reader) { values = valuesSource.longValues(); } diff --git a/src/main/java/org/elasticsearch/search/aggregations/bucket/global/GlobalAggregator.java b/src/main/java/org/elasticsearch/search/aggregations/bucket/global/GlobalAggregator.java index ad5f7db7e68..d6ec410d167 100644 --- a/src/main/java/org/elasticsearch/search/aggregations/bucket/global/GlobalAggregator.java +++ b/src/main/java/org/elasticsearch/search/aggregations/bucket/global/GlobalAggregator.java @@ -18,7 +18,7 @@ */ package org.elasticsearch.search.aggregations.bucket.global; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.elasticsearch.search.aggregations.*; import org.elasticsearch.search.aggregations.bucket.SingleBucketAggregator; import org.elasticsearch.search.aggregations.support.AggregationContext; @@ -36,7 +36,7 @@ public class GlobalAggregator extends SingleBucketAggregator { } @Override - public void setNextReader(AtomicReaderContext reader) { + public void setNextReader(LeafReaderContext reader) { } @Override diff --git a/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/HistogramAggregator.java b/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/HistogramAggregator.java index a7de3f50536..03bb3150408 100644 --- a/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/HistogramAggregator.java +++ b/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/HistogramAggregator.java @@ -18,7 +18,7 @@ */ package org.elasticsearch.search.aggregations.bucket.histogram; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.SortedNumericDocValues; import org.apache.lucene.util.CollectionUtil; import org.elasticsearch.common.inject.internal.Nullable; @@ -81,7 +81,7 @@ public class HistogramAggregator extends BucketsAggregator { } @Override - public void setNextReader(AtomicReaderContext reader) { + public void setNextReader(LeafReaderContext reader) { values = valuesSource.longValues(); } diff --git a/src/main/java/org/elasticsearch/search/aggregations/bucket/missing/MissingAggregator.java b/src/main/java/org/elasticsearch/search/aggregations/bucket/missing/MissingAggregator.java index e598630da2f..41bea16e717 100644 --- 
a/src/main/java/org/elasticsearch/search/aggregations/bucket/missing/MissingAggregator.java +++ b/src/main/java/org/elasticsearch/search/aggregations/bucket/missing/MissingAggregator.java @@ -18,7 +18,7 @@ */ package org.elasticsearch.search.aggregations.bucket.missing; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.util.Bits; import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactories; @@ -47,7 +47,7 @@ public class MissingAggregator extends SingleBucketAggregator { } @Override - public void setNextReader(AtomicReaderContext reader) { + public void setNextReader(LeafReaderContext reader) { if (valuesSource != null) { docsWithValue = valuesSource.docsWithValue(reader.reader().maxDoc()); } else { diff --git a/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregator.java b/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregator.java index ef3208447c0..cfa0aac56b6 100644 --- a/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregator.java +++ b/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregator.java @@ -18,14 +18,14 @@ */ package org.elasticsearch.search.aggregations.bucket.nested; -import org.apache.lucene.index.AtomicReaderContext; -import org.apache.lucene.search.DocIdSet; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.search.Filter; +import org.apache.lucene.search.join.BitDocIdSetFilter; +import org.apache.lucene.util.BitDocIdSet; +import org.apache.lucene.util.BitSet; import org.apache.lucene.util.Bits; -import org.apache.lucene.util.FixedBitSet; import org.elasticsearch.common.lucene.ReaderContextAware; import org.elasticsearch.common.lucene.docset.DocIdSets; -import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilter; import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.index.mapper.object.ObjectMapper; import org.elasticsearch.index.search.nested.NonNestedDocsFilter; @@ -44,11 +44,11 @@ public class NestedAggregator extends SingleBucketAggregator implements ReaderCo private final String nestedPath; private final Aggregator parentAggregator; - private FixedBitSetFilter parentFilter; - private final FixedBitSetFilter childFilter; + private BitDocIdSetFilter parentFilter; + private final BitDocIdSetFilter childFilter; private Bits childDocs; - private FixedBitSet parentDocs; + private BitSet parentDocs; public NestedAggregator(String name, AggregatorFactories factories, String nestedPath, AggregationContext aggregationContext, Aggregator parentAggregator, Map metaData) { super(name, factories, aggregationContext, parentAggregator, metaData); @@ -66,11 +66,11 @@ public class NestedAggregator extends SingleBucketAggregator implements ReaderCo throw new AggregationExecutionException("[nested] nested path [" + nestedPath + "] is not nested"); } - childFilter = aggregationContext.searchContext().fixedBitSetFilterCache().getFixedBitSetFilter(objectMapper.nestedTypeFilter()); + childFilter = aggregationContext.searchContext().bitsetFilterCache().getBitDocIdSetFilter(objectMapper.nestedTypeFilter()); } @Override - public void setNextReader(AtomicReaderContext reader) { + public void setNextReader(LeafReaderContext reader) { if (parentFilter == null) { // The aggs are instantiated in reverse, first the most inner nested aggs and lastly the top level aggs // So at the time a nested 
'nested' aggs is parsed its closest parent nested aggs hasn't been constructed. @@ -80,17 +80,23 @@ public class NestedAggregator extends SingleBucketAggregator implements ReaderCo if (parentFilterNotCached == null) { parentFilterNotCached = NonNestedDocsFilter.INSTANCE; } - parentFilter = SearchContext.current().fixedBitSetFilterCache().getFixedBitSetFilter(parentFilterNotCached); + parentFilter = SearchContext.current().bitsetFilterCache().getBitDocIdSetFilter(parentFilterNotCached); } try { - DocIdSet docIdSet = parentFilter.getDocIdSet(reader, null); - // In ES if parent is deleted, then also the children are deleted. Therefore acceptedDocs can also null here. - childDocs = DocIdSets.toSafeBits(reader.reader(), childFilter.getDocIdSet(reader, null)); - if (DocIdSets.isEmpty(docIdSet)) { + BitDocIdSet parentSet = parentFilter.getDocIdSet(reader); + if (DocIdSets.isEmpty(parentSet)) { parentDocs = null; + childDocs = null; } else { - parentDocs = (FixedBitSet) docIdSet; + parentDocs = parentSet.bits(); + // In ES if parent is deleted, then also the children are deleted. Therefore acceptedDocs can also null here. + BitDocIdSet childSet = childFilter.getDocIdSet(reader); + if (DocIdSets.isEmpty(childSet)) { + childDocs = new Bits.MatchAllBits(reader.reader().maxDoc()); + } else { + childDocs = childSet.bits(); + } } } catch (IOException ioe) { throw new AggregationExecutionException("Failed to aggregate [" + name + "]", ioe); diff --git a/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/ReverseNestedAggregator.java b/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/ReverseNestedAggregator.java index 0d210433c46..4940ca6cd93 100644 --- a/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/ReverseNestedAggregator.java +++ b/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/ReverseNestedAggregator.java @@ -19,13 +19,13 @@ package org.elasticsearch.search.aggregations.bucket.nested; import com.carrotsearch.hppc.LongIntOpenHashMap; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.search.DocIdSet; import org.apache.lucene.search.DocIdSetIterator; import org.apache.lucene.search.Filter; +import org.apache.lucene.search.join.BitDocIdSetFilter; import org.elasticsearch.common.lucene.ReaderContextAware; import org.elasticsearch.common.lucene.docset.DocIdSets; -import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilter; import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.index.mapper.object.ObjectMapper; import org.elasticsearch.index.search.nested.NonNestedDocsFilter; @@ -43,7 +43,7 @@ import java.util.Map; */ public class ReverseNestedAggregator extends SingleBucketAggregator implements ReaderContextAware { - private final FixedBitSetFilter parentFilter; + private final BitDocIdSetFilter parentFilter; private DocIdSetIterator parentDocs; // TODO: Add LongIntPagedHashMap? 
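
Both nested aggregators lean on the block-join layout: within a segment, every nested (child) document is stored immediately before its parent, so the parent that owns a child doc is simply the next set bit in the parent bitset at or after the child's doc id. A small illustrative helper (names assumed; childDoc must be a valid doc id in the segment):

    import org.apache.lucene.search.DocIdSetIterator;
    import org.apache.lucene.util.BitSet;

    final class NestedDocs {
        /** Returns the parent doc id owning childDoc, or -1 if there is none. */
        static int parentOf(BitSet parentDocs, int childDoc) {
            int parent = parentDocs.nextSetBit(childDoc);
            return parent == DocIdSetIterator.NO_MORE_DOCS ? -1 : parent;
        }
    }
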
@@ -58,7 +58,7 @@ public class ReverseNestedAggregator extends SingleBucketAggregator implements R throw new SearchParseException(context.searchContext(), "Reverse nested aggregation [" + name + "] can only be used inside a [nested] aggregation"); } if (nestedPath == null) { - parentFilter = SearchContext.current().fixedBitSetFilterCache().getFixedBitSetFilter(NonNestedDocsFilter.INSTANCE); + parentFilter = SearchContext.current().bitsetFilterCache().getBitDocIdSetFilter(NonNestedDocsFilter.INSTANCE); } else { MapperService.SmartNameObjectMapper mapper = SearchContext.current().smartNameObjectMapper(nestedPath); if (mapper == null) { @@ -71,14 +71,14 @@ public class ReverseNestedAggregator extends SingleBucketAggregator implements R if (!objectMapper.nested().isNested()) { throw new AggregationExecutionException("[reverse_nested] nested path [" + nestedPath + "] is not nested"); } - parentFilter = SearchContext.current().fixedBitSetFilterCache().getFixedBitSetFilter(objectMapper.nestedTypeFilter()); + parentFilter = SearchContext.current().bitsetFilterCache().getBitDocIdSetFilter(objectMapper.nestedTypeFilter()); } bucketOrdToLastCollectedParentDoc = new LongIntOpenHashMap(32); aggregationContext.ensureScoreDocsInOrder(); } @Override - public void setNextReader(AtomicReaderContext reader) { + public void setNextReader(LeafReaderContext reader) { bucketOrdToLastCollectedParentDoc.clear(); try { // In ES if parent is deleted, then also the children are deleted, so the child docs this agg receives diff --git a/src/main/java/org/elasticsearch/search/aggregations/bucket/range/RangeAggregator.java b/src/main/java/org/elasticsearch/search/aggregations/bucket/range/RangeAggregator.java index c99143ae9db..335e736d952 100644 --- a/src/main/java/org/elasticsearch/search/aggregations/bucket/range/RangeAggregator.java +++ b/src/main/java/org/elasticsearch/search/aggregations/bucket/range/RangeAggregator.java @@ -19,7 +19,7 @@ package org.elasticsearch.search.aggregations.bucket.range; import com.google.common.collect.Lists; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.util.InPlaceMergeSorter; import org.elasticsearch.common.Nullable; import org.elasticsearch.index.fielddata.SortedNumericDoubleValues; @@ -128,7 +128,7 @@ public class RangeAggregator extends BucketsAggregator { } @Override - public void setNextReader(AtomicReaderContext reader) { + public void setNextReader(LeafReaderContext reader) { values = valuesSource.doubleValues(); } diff --git a/src/main/java/org/elasticsearch/search/aggregations/bucket/range/geodistance/GeoDistanceParser.java b/src/main/java/org/elasticsearch/search/aggregations/bucket/range/geodistance/GeoDistanceParser.java index c62471a6ce4..d05c1162768 100644 --- a/src/main/java/org/elasticsearch/search/aggregations/bucket/range/geodistance/GeoDistanceParser.java +++ b/src/main/java/org/elasticsearch/search/aggregations/bucket/range/geodistance/GeoDistanceParser.java @@ -18,7 +18,7 @@ */ package org.elasticsearch.search.aggregations.bucket.range.geodistance; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.SortedNumericDocValues; import org.elasticsearch.common.ParseField; import org.elasticsearch.common.geo.GeoDistance; @@ -205,7 +205,7 @@ public class GeoDistanceParser implements Aggregator.Parser { } @Override - public void setNextReader(AtomicReaderContext reader) { + public void 
setNextReader(LeafReaderContext reader) { final MultiGeoPointValues geoValues = source.geoPointValues(); final FixedSourceDistance distance = distanceType.fixedSourceDistance(origin.getLat(), origin.getLon(), unit); distanceValues = GeoDistance.distanceValues(geoValues, distance); diff --git a/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/GlobalOrdinalsStringTermsAggregator.java b/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/GlobalOrdinalsStringTermsAggregator.java index e3286e35f1f..30b58c13f57 100644 --- a/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/GlobalOrdinalsStringTermsAggregator.java +++ b/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/GlobalOrdinalsStringTermsAggregator.java @@ -19,7 +19,7 @@ package org.elasticsearch.search.aggregations.bucket.terms; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.DocValues; import org.apache.lucene.index.RandomAccessOrds; import org.apache.lucene.index.SortedDocValues; @@ -116,7 +116,7 @@ public class GlobalOrdinalsStringTermsAggregator extends AbstractStringTermsAggr } @Override - public void setNextReader(AtomicReaderContext reader) { + public void setNextReader(LeafReaderContext reader) { globalOrds = valuesSource.globalOrdinalsValues(); if (acceptedGlobalOrdinals != null) { globalOrds = new FilteredOrdinals(globalOrds, acceptedGlobalOrdinals); @@ -373,7 +373,7 @@ public class GlobalOrdinalsStringTermsAggregator extends AbstractStringTermsAggr } @Override - public void setNextReader(AtomicReaderContext reader) { + public void setNextReader(LeafReaderContext reader) { if (segmentOrds != null) { mapSegmentCountsToGlobalCounts(); } diff --git a/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/LongTermsAggregator.java b/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/LongTermsAggregator.java index c63919f9e40..f727bce5e96 100644 --- a/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/LongTermsAggregator.java +++ b/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/LongTermsAggregator.java @@ -18,7 +18,7 @@ */ package org.elasticsearch.search.aggregations.bucket.terms; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.SortedNumericDocValues; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.lease.Releasables; @@ -73,7 +73,7 @@ public class LongTermsAggregator extends TermsAggregator { } @Override - public void setNextReader(AtomicReaderContext reader) { + public void setNextReader(LeafReaderContext reader) { values = getValues(valuesSource); } @@ -108,7 +108,7 @@ public class LongTermsAggregator extends TermsAggregator { if (bucketCountThresholds.getMinDocCount() == 0 && (order != InternalOrder.COUNT_DESC || bucketOrds.size() < bucketCountThresholds.getRequiredSize())) { // we need to fill-in the blanks - for (AtomicReaderContext ctx : context.searchContext().searcher().getTopReaderContext().leaves()) { + for (LeafReaderContext ctx : context.searchContext().searcher().getTopReaderContext().leaves()) { context.setNextReader(ctx); final SortedNumericDocValues values = getValues(valuesSource); for (int docId = 0; docId < ctx.reader().maxDoc(); ++docId) { diff --git a/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/StringTermsAggregator.java 
b/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/StringTermsAggregator.java index ae8f6087ad3..175ce4773e6 100644 --- a/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/StringTermsAggregator.java +++ b/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/StringTermsAggregator.java @@ -18,7 +18,7 @@ */ package org.elasticsearch.search.aggregations.bucket.terms; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.BytesRefBuilder; import org.elasticsearch.common.lease.Releasables; @@ -65,7 +65,7 @@ public class StringTermsAggregator extends AbstractStringTermsAggregator { } @Override - public void setNextReader(AtomicReaderContext reader) { + public void setNextReader(LeafReaderContext reader) { values = valuesSource.bytesValues(); } @@ -102,7 +102,7 @@ public class StringTermsAggregator extends AbstractStringTermsAggregator { if (bucketCountThresholds.getMinDocCount() == 0 && (order != InternalOrder.COUNT_DESC || bucketOrds.size() < bucketCountThresholds.getRequiredSize())) { // we need to fill-in the blanks - for (AtomicReaderContext ctx : context.searchContext().searcher().getTopReaderContext().leaves()) { + for (LeafReaderContext ctx : context.searchContext().searcher().getTopReaderContext().leaves()) { context.setNextReader(ctx); final SortedBinaryDocValues values = valuesSource.bytesValues(); // brute force diff --git a/src/main/java/org/elasticsearch/search/aggregations/metrics/avg/AvgAggregator.java b/src/main/java/org/elasticsearch/search/aggregations/metrics/avg/AvgAggregator.java index 1e9744348c1..e2bbf9c90fc 100644 --- a/src/main/java/org/elasticsearch/search/aggregations/metrics/avg/AvgAggregator.java +++ b/src/main/java/org/elasticsearch/search/aggregations/metrics/avg/AvgAggregator.java @@ -18,7 +18,7 @@ */ package org.elasticsearch.search.aggregations.metrics.avg; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.elasticsearch.common.lease.Releasables; import org.elasticsearch.common.util.DoubleArray; import org.elasticsearch.common.util.LongArray; @@ -61,7 +61,7 @@ public class AvgAggregator extends NumericMetricsAggregator.SingleValue { } @Override - public void setNextReader(AtomicReaderContext reader) { + public void setNextReader(LeafReaderContext reader) { values = valuesSource.doubleValues(); } diff --git a/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityAggregator.java b/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityAggregator.java index d72806d8d13..169687690bd 100644 --- a/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityAggregator.java +++ b/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityAggregator.java @@ -21,9 +21,10 @@ package org.elasticsearch.search.aggregations.metrics.cardinality; import com.carrotsearch.hppc.hash.MurmurHash3; import com.google.common.base.Preconditions; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.RandomAccessOrds; import org.apache.lucene.index.SortedNumericDocValues; +import org.apache.lucene.search.DocIdSetIterator; import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.FixedBitSet; import org.apache.lucene.util.RamUsageEstimator; @@ -70,12 +71,12 @@ public class 
CardinalityAggregator extends NumericMetricsAggregator.SingleValue } @Override - public void setNextReader(AtomicReaderContext reader) { + public void setNextReader(LeafReaderContext reader) { postCollectLastCollector(); collector = createCollector(reader); } - private Collector createCollector(AtomicReaderContext reader) { + private Collector createCollector(LeafReaderContext reader) { // if rehash is false then the value source is either already hashed, or the user explicitly // requested not to hash the values (perhaps they already hashed the values themselves before indexing the doc) @@ -274,7 +275,7 @@ public class CardinalityAggregator extends NumericMetricsAggregator.SingleValue final org.elasticsearch.common.hash.MurmurHash3.Hash128 hash = new org.elasticsearch.common.hash.MurmurHash3.Hash128(); try (LongArray hashes = bigArrays.newLongArray(maxOrd, false)) { - for (int ord = allVisitedOrds.nextSetBit(0); ord != -1; ord = ord + 1 < maxOrd ? allVisitedOrds.nextSetBit(ord + 1) : -1) { + for (int ord = allVisitedOrds.nextSetBit(0); ord < DocIdSetIterator.NO_MORE_DOCS; ord = ord + 1 < maxOrd ? allVisitedOrds.nextSetBit(ord + 1) : DocIdSetIterator.NO_MORE_DOCS) { final BytesRef value = values.lookupOrd(ord); org.elasticsearch.common.hash.MurmurHash3.hash128(value.bytes, value.offset, value.length, 0, hash); hashes.set(ord, hash.h1); @@ -283,7 +284,7 @@ public class CardinalityAggregator extends NumericMetricsAggregator.SingleValue for (long bucket = visitedOrds.size() - 1; bucket >= 0; --bucket) { final FixedBitSet bits = visitedOrds.get(bucket); if (bits != null) { - for (int ord = bits.nextSetBit(0); ord != -1; ord = ord + 1 < maxOrd ? bits.nextSetBit(ord + 1) : -1) { + for (int ord = bits.nextSetBit(0); ord < DocIdSetIterator.NO_MORE_DOCS; ord = ord + 1 < maxOrd ? bits.nextSetBit(ord + 1) : DocIdSetIterator.NO_MORE_DOCS) { counts.collect(bucket, hashes.get(ord)); } }
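
The loop rewrites above adapt to a Lucene 5 behavior change: FixedBitSet.nextSetBit now returns DocIdSetIterator.NO_MORE_DOCS instead of -1 once the bits are exhausted. The idiom in isolation (an illustrative helper; nextSetBit requires an in-range start index, hence the bounds checks):

    import org.apache.lucene.search.DocIdSetIterator;
    import org.apache.lucene.util.FixedBitSet;

    final class BitsetLoop {
        /** Counts the set bits by walking them the Lucene 5 way. */
        static long countSetBits(FixedBitSet bits) {
            if (bits.length() == 0) {
                return 0;
            }
            long count = 0;
            for (int i = bits.nextSetBit(0);
                    i != DocIdSetIterator.NO_MORE_DOCS;
                    i = i + 1 < bits.length() ? bits.nextSetBit(i + 1) : DocIdSetIterator.NO_MORE_DOCS) {
                count++;
            }
            return count;
        }
    }
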
*/ + static class OpenBitSet { + LongBitSet impl = new LongBitSet(64); + + boolean get(long bit) { + if (bit < impl.length()) { + return impl.get(bit); + } else { + return false; + } + } + + void ensureCapacity(long bit) { + impl = LongBitSet.ensureCapacity(impl, bit); + } + + void set(long bit) { + ensureCapacity(bit); + impl.set(bit); + } + + void clear(long bit) { + ensureCapacity(bit); + impl.clear(bit); + } + } } diff --git a/src/main/java/org/elasticsearch/search/aggregations/metrics/geobounds/GeoBoundsAggregator.java b/src/main/java/org/elasticsearch/search/aggregations/metrics/geobounds/GeoBoundsAggregator.java index 05d09655f5a..4cd169d1c25 100644 --- a/src/main/java/org/elasticsearch/search/aggregations/metrics/geobounds/GeoBoundsAggregator.java +++ b/src/main/java/org/elasticsearch/search/aggregations/metrics/geobounds/GeoBoundsAggregator.java @@ -19,7 +19,7 @@ package org.elasticsearch.search.aggregations.metrics.geobounds; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.elasticsearch.common.geo.GeoPoint; import org.elasticsearch.common.lease.Releasables; import org.elasticsearch.common.util.DoubleArray; @@ -75,7 +75,7 @@ public final class GeoBoundsAggregator extends MetricsAggregator { } @Override - public void setNextReader(AtomicReaderContext reader) { + public void setNextReader(LeafReaderContext reader) { this.values = this.valuesSource.geoPointValues(); } diff --git a/src/main/java/org/elasticsearch/search/aggregations/metrics/max/MaxAggregator.java b/src/main/java/org/elasticsearch/search/aggregations/metrics/max/MaxAggregator.java index e159faa9eaf..d073c47f050 100644 --- a/src/main/java/org/elasticsearch/search/aggregations/metrics/max/MaxAggregator.java +++ b/src/main/java/org/elasticsearch/search/aggregations/metrics/max/MaxAggregator.java @@ -18,7 +18,7 @@ */ package org.elasticsearch.search.aggregations.metrics.max; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.elasticsearch.common.lease.Releasables; import org.elasticsearch.common.util.DoubleArray; import org.elasticsearch.index.fielddata.NumericDoubleValues; @@ -61,7 +61,7 @@ public class MaxAggregator extends NumericMetricsAggregator.SingleValue { } @Override - public void setNextReader(AtomicReaderContext reader) { + public void setNextReader(LeafReaderContext reader) { final SortedNumericDoubleValues values = valuesSource.doubleValues(); this.values = MultiValueMode.MAX.select(values, Double.NEGATIVE_INFINITY); } diff --git a/src/main/java/org/elasticsearch/search/aggregations/metrics/min/MinAggregator.java b/src/main/java/org/elasticsearch/search/aggregations/metrics/min/MinAggregator.java index 527bd8676bc..59128e24215 100644 --- a/src/main/java/org/elasticsearch/search/aggregations/metrics/min/MinAggregator.java +++ b/src/main/java/org/elasticsearch/search/aggregations/metrics/min/MinAggregator.java @@ -18,7 +18,7 @@ */ package org.elasticsearch.search.aggregations.metrics.min; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.elasticsearch.common.lease.Releasables; import org.elasticsearch.common.util.DoubleArray; import org.elasticsearch.index.fielddata.NumericDoubleValues; @@ -61,7 +61,7 @@ public class MinAggregator extends NumericMetricsAggregator.SingleValue { } @Override - public void setNextReader(AtomicReaderContext reader) { + public void setNextReader(LeafReaderContext reader) { final 
SortedNumericDoubleValues values = valuesSource.doubleValues(); this.values = MultiValueMode.MIN.select(values, Double.POSITIVE_INFINITY); } diff --git a/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/AbstractPercentilesAggregator.java b/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/AbstractPercentilesAggregator.java index 9b161f20d64..6cae6568d52 100644 --- a/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/AbstractPercentilesAggregator.java +++ b/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/AbstractPercentilesAggregator.java @@ -19,7 +19,7 @@ package org.elasticsearch.search.aggregations.metrics.percentiles; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.elasticsearch.common.lease.Releasables; import org.elasticsearch.common.util.ArrayUtils; import org.elasticsearch.common.util.ObjectArray; @@ -62,7 +62,7 @@ public abstract class AbstractPercentilesAggregator extends NumericMetricsAggreg } @Override - public void setNextReader(AtomicReaderContext reader) { + public void setNextReader(LeafReaderContext reader) { values = valuesSource.doubleValues(); } diff --git a/src/main/java/org/elasticsearch/search/aggregations/metrics/scripted/ScriptedMetricAggregator.java b/src/main/java/org/elasticsearch/search/aggregations/metrics/scripted/ScriptedMetricAggregator.java index 8fd13547f4e..5e914864fda 100644 --- a/src/main/java/org/elasticsearch/search/aggregations/metrics/scripted/ScriptedMetricAggregator.java +++ b/src/main/java/org/elasticsearch/search/aggregations/metrics/scripted/ScriptedMetricAggregator.java @@ -19,7 +19,7 @@ package org.elasticsearch.search.aggregations.metrics.scripted; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.elasticsearch.script.ExecutableScript; import org.elasticsearch.script.ScriptService; import org.elasticsearch.script.ScriptService.ScriptType; @@ -86,7 +86,7 @@ public class ScriptedMetricAggregator extends MetricsAggregator { } @Override - public void setNextReader(AtomicReaderContext reader) { + public void setNextReader(LeafReaderContext reader) { mapScript.setNextReader(reader); } diff --git a/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/StatsAggegator.java b/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/StatsAggegator.java index 589d6041f3e..7bce48e92a0 100644 --- a/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/StatsAggegator.java +++ b/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/StatsAggegator.java @@ -18,7 +18,7 @@ */ package org.elasticsearch.search.aggregations.metrics.stats; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.elasticsearch.ElasticsearchIllegalArgumentException; import org.elasticsearch.common.lease.Releasables; import org.elasticsearch.common.util.BigArrays; @@ -69,7 +69,7 @@ public class StatsAggegator extends NumericMetricsAggregator.MultiValue { } @Override - public void setNextReader(AtomicReaderContext reader) { + public void setNextReader(LeafReaderContext reader) { values = valuesSource.doubleValues(); } diff --git a/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/extended/ExtendedStatsAggregator.java b/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/extended/ExtendedStatsAggregator.java index db6cdae2354..df531c869b2 
100644 --- a/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/extended/ExtendedStatsAggregator.java +++ b/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/extended/ExtendedStatsAggregator.java @@ -18,7 +18,7 @@ */ package org.elasticsearch.search.aggregations.metrics.stats.extended; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.elasticsearch.ElasticsearchIllegalArgumentException; import org.elasticsearch.common.lease.Releasables; import org.elasticsearch.common.util.BigArrays; @@ -71,7 +71,7 @@ public class ExtendedStatsAggregator extends NumericMetricsAggregator.MultiValue } @Override - public void setNextReader(AtomicReaderContext reader) { + public void setNextReader(LeafReaderContext reader) { values = valuesSource.doubleValues(); } diff --git a/src/main/java/org/elasticsearch/search/aggregations/metrics/sum/SumAggregator.java b/src/main/java/org/elasticsearch/search/aggregations/metrics/sum/SumAggregator.java index 10ab482d3dd..5b0d6268b69 100644 --- a/src/main/java/org/elasticsearch/search/aggregations/metrics/sum/SumAggregator.java +++ b/src/main/java/org/elasticsearch/search/aggregations/metrics/sum/SumAggregator.java @@ -18,7 +18,7 @@ */ package org.elasticsearch.search.aggregations.metrics.sum; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.elasticsearch.common.lease.Releasables; import org.elasticsearch.common.util.DoubleArray; import org.elasticsearch.index.fielddata.SortedNumericDoubleValues; @@ -58,7 +58,7 @@ public class SumAggregator extends NumericMetricsAggregator.SingleValue { } @Override - public void setNextReader(AtomicReaderContext reader) { + public void setNextReader(LeafReaderContext reader) { values = valuesSource.doubleValues(); } diff --git a/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregator.java b/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregator.java index 6a598d2cc2b..e904b8b90ee 100644 --- a/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregator.java +++ b/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregator.java @@ -19,7 +19,7 @@ package org.elasticsearch.search.aggregations.metrics.tophits; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.search.*; import org.elasticsearch.ExceptionsHelper; import org.elasticsearch.common.lease.Releasables; @@ -41,12 +41,22 @@ import java.util.Map; */ public class TopHitsAggregator extends MetricsAggregator implements ScorerAware { + /** Simple wrapper around a top-level collector and the current leaf collector. 
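+ * Since the Lucene 5 collector API is per-segment, the top-level collector no
+ * longer has setNextReader(); it hands out one LeafCollector per
+ * LeafReaderContext via getLeafCollector(ctx), and setScorer()/collect() live
+ * on the leaf. A minimal sketch of the pattern used below (names illustrative):
+ * <pre>{@code
+ * LeafCollector leaf = topLevelCollector.getLeafCollector(ctx);
+ * leaf.setScorer(scorer);      // scorers are handed out per segment too
+ * leaf.collect(segmentDocId);  // doc ids are segment-relative
+ * TopDocs top = topLevelCollector.topDocs();
+ * }</pre>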
*/ + private static class TopDocsAndLeafCollector { + final TopDocsCollector topLevelCollector; + LeafCollector leafCollector; + + TopDocsAndLeafCollector(TopDocsCollector topLevelCollector) { + this.topLevelCollector = topLevelCollector; + } + } + private final FetchPhase fetchPhase; private final TopHitsContext topHitsContext; - private final LongObjectPagedHashMap topDocsCollectors; + private final LongObjectPagedHashMap topDocsCollectors; private Scorer currentScorer; - private AtomicReaderContext currentContext; + private LeafReaderContext currentContext; public TopHitsAggregator(FetchPhase fetchPhase, TopHitsContext topHitsContext, String name, long estimatedBucketsCount, AggregationContext context, Aggregator parent, Map metaData) { super(name, estimatedBucketsCount, context, parent, metaData); @@ -63,11 +73,11 @@ public class TopHitsAggregator extends MetricsAggregator implements ScorerAware @Override public InternalAggregation buildAggregation(long owningBucketOrdinal) { - TopDocsCollector topDocsCollector = topDocsCollectors.get(owningBucketOrdinal); + TopDocsAndLeafCollector topDocsCollector = topDocsCollectors.get(owningBucketOrdinal); if (topDocsCollector == null) { return buildEmptyAggregation(); } else { - TopDocs topDocs = topDocsCollector.topDocs(); + TopDocs topDocs = topDocsCollector.topLevelCollector.topDocs(); if (topDocs.totalHits == 0) { return buildEmptyAggregation(); } @@ -102,26 +112,25 @@ public class TopHitsAggregator extends MetricsAggregator implements ScorerAware @Override public void collect(int docId, long bucketOrdinal) throws IOException { - TopDocsCollector topDocsCollector = topDocsCollectors.get(bucketOrdinal); - if (topDocsCollector == null) { + TopDocsAndLeafCollector collectors = topDocsCollectors.get(bucketOrdinal); + if (collectors == null) { Sort sort = topHitsContext.sort(); int topN = topHitsContext.from() + topHitsContext.size(); - topDocsCollectors.put( - bucketOrdinal, - topDocsCollector = sort != null ? TopFieldCollector.create(sort, topN, true, topHitsContext.trackScores(), topHitsContext.trackScores(), false) : TopScoreDocCollector.create(topN, false) - ); - topDocsCollector.setNextReader(currentContext); - topDocsCollector.setScorer(currentScorer); + TopDocsCollector topLevelCollector = sort != null ? 
TopFieldCollector.create(sort, topN, true, topHitsContext.trackScores(), topHitsContext.trackScores(), false) : TopScoreDocCollector.create(topN, false); + collectors = new TopDocsAndLeafCollector(topLevelCollector); + collectors.leafCollector = collectors.topLevelCollector.getLeafCollector(currentContext); + collectors.leafCollector.setScorer(currentScorer); + topDocsCollectors.put(bucketOrdinal, collectors); } - topDocsCollector.collect(docId); + collectors.leafCollector.collect(docId); } @Override - public void setNextReader(AtomicReaderContext context) { + public void setNextReader(LeafReaderContext context) { this.currentContext = context; - for (LongObjectPagedHashMap.Cursor cursor : topDocsCollectors) { + for (LongObjectPagedHashMap.Cursor cursor : topDocsCollectors) { try { - cursor.value.setNextReader(context); + cursor.value.leafCollector = cursor.value.topLevelCollector.getLeafCollector(currentContext); } catch (IOException e) { throw ExceptionsHelper.convertToElastic(e); } @@ -131,9 +140,9 @@ public class TopHitsAggregator extends MetricsAggregator implements ScorerAware @Override public void setScorer(Scorer scorer) { this.currentScorer = scorer; - for (LongObjectPagedHashMap.Cursor cursor : topDocsCollectors) { + for (LongObjectPagedHashMap.Cursor cursor : topDocsCollectors) { try { - cursor.value.setScorer(scorer); + cursor.value.leafCollector.setScorer(scorer); } catch (IOException e) { throw ExceptionsHelper.convertToElastic(e); } diff --git a/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsContext.java b/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsContext.java index 747e9b838d2..88f974208fc 100644 --- a/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsContext.java +++ b/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsContext.java @@ -29,8 +29,8 @@ import org.elasticsearch.action.search.SearchType; import org.elasticsearch.cache.recycler.PageCacheRecycler; import org.elasticsearch.common.util.BigArrays; import org.elasticsearch.index.analysis.AnalysisService; +import org.elasticsearch.index.cache.bitset.BitsetFilterCache; import org.elasticsearch.index.cache.filter.FilterCache; -import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilterCache; import org.elasticsearch.index.fielddata.IndexFieldDataService; import org.elasticsearch.index.mapper.FieldMapper; import org.elasticsearch.index.mapper.FieldMappers; @@ -317,8 +317,8 @@ public class TopHitsContext extends SearchContext { } @Override - public FixedBitSetFilterCache fixedBitSetFilterCache() { - return context.fixedBitSetFilterCache(); + public BitsetFilterCache bitsetFilterCache() { + return context.bitsetFilterCache(); } @Override diff --git a/src/main/java/org/elasticsearch/search/aggregations/metrics/valuecount/ValueCountAggregator.java b/src/main/java/org/elasticsearch/search/aggregations/metrics/valuecount/ValueCountAggregator.java index d29c56d6eab..04956871232 100644 --- a/src/main/java/org/elasticsearch/search/aggregations/metrics/valuecount/ValueCountAggregator.java +++ b/src/main/java/org/elasticsearch/search/aggregations/metrics/valuecount/ValueCountAggregator.java @@ -18,7 +18,7 @@ */ package org.elasticsearch.search.aggregations.metrics.valuecount; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.elasticsearch.common.lease.Releasables; import org.elasticsearch.common.util.LongArray; import 
org.elasticsearch.index.fielddata.SortedBinaryDocValues; @@ -63,7 +63,7 @@ public class ValueCountAggregator extends NumericMetricsAggregator.SingleValue { } @Override - public void setNextReader(AtomicReaderContext reader) { + public void setNextReader(LeafReaderContext reader) { values = valuesSource.bytesValues(); } diff --git a/src/main/java/org/elasticsearch/search/aggregations/support/AggregationContext.java b/src/main/java/org/elasticsearch/search/aggregations/support/AggregationContext.java index 154e120bbe4..d64edd5d31d 100644 --- a/src/main/java/org/elasticsearch/search/aggregations/support/AggregationContext.java +++ b/src/main/java/org/elasticsearch/search/aggregations/support/AggregationContext.java @@ -19,7 +19,7 @@ package org.elasticsearch.search.aggregations.support; import com.carrotsearch.hppc.ObjectObjectOpenHashMap; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.IndexReaderContext; import org.apache.lucene.search.Scorer; import org.apache.lucene.util.ArrayUtil; @@ -54,7 +54,7 @@ public class AggregationContext implements ReaderContextAware, ScorerAware { private List topReaderAwares = new ArrayList(); private List scorerAwares = new ArrayList<>(); - private AtomicReaderContext reader; + private LeafReaderContext reader; private Scorer scorer; private boolean scoreDocsInOrder = false; @@ -74,7 +74,7 @@ public class AggregationContext implements ReaderContextAware, ScorerAware { return searchContext.bigArrays(); } - public AtomicReaderContext currentReader() { + public LeafReaderContext currentReader() { return reader; } @@ -88,7 +88,7 @@ public class AggregationContext implements ReaderContextAware, ScorerAware { } } - public void setNextReader(AtomicReaderContext reader) { + public void setNextReader(LeafReaderContext reader) { this.reader = reader; for (ReaderContextAware aware : readerAwares) { aware.setNextReader(reader); diff --git a/src/main/java/org/elasticsearch/search/aggregations/support/ValuesSource.java b/src/main/java/org/elasticsearch/search/aggregations/support/ValuesSource.java index f4b47c28799..5b1c7ca3375 100644 --- a/src/main/java/org/elasticsearch/search/aggregations/support/ValuesSource.java +++ b/src/main/java/org/elasticsearch/search/aggregations/support/ValuesSource.java @@ -84,7 +84,7 @@ public abstract class ValuesSource { public static MetaData load(IndexFieldData indexFieldData, SearchContext context) { MetaData metaData = new MetaData(); metaData.uniqueness = Uniqueness.UNIQUE; - for (AtomicReaderContext readerContext : context.searcher().getTopReaderContext().leaves()) { + for (LeafReaderContext readerContext : context.searcher().getTopReaderContext().leaves()) { AtomicFieldData fieldData = indexFieldData.load(readerContext); if (fieldData instanceof AtomicOrdinalsFieldData) { AtomicOrdinalsFieldData fd = (AtomicOrdinalsFieldData) fieldData; @@ -224,7 +224,7 @@ public abstract class ValuesSource { } @Override - public void setNextReader(AtomicReaderContext reader) { + public void setNextReader(LeafReaderContext reader) { atomicFieldData = indexFieldData.load(reader); if (bytesValues != null) { bytesValues = atomicFieldData.getBytesValues(); @@ -281,7 +281,7 @@ public abstract class ValuesSource { if (indexReader.leaves().isEmpty()) { return maxOrd = 0; } else { - AtomicReaderContext atomicReaderContext = indexReader.leaves().get(0); + LeafReaderContext atomicReaderContext = indexReader.leaves().get(0); IndexOrdinalsFieldData globalFieldData = 
indexFieldData.loadGlobal(indexReader); AtomicOrdinalsFieldData afd = globalFieldData.load(atomicReaderContext); RandomAccessOrds values = afd.getOrdinalsValues(); @@ -307,7 +307,7 @@ public abstract class ValuesSource { } @Override - public void setNextReader(AtomicReaderContext reader) { + public void setNextReader(LeafReaderContext reader) { atomicFieldData = globalFieldData.load(reader); } @@ -329,7 +329,7 @@ public abstract class ValuesSource { if (indexReader.leaves().isEmpty()) { return maxOrd = 0; } else { - AtomicReaderContext atomicReaderContext = indexReader.leaves().get(0); + LeafReaderContext atomicReaderContext = indexReader.leaves().get(0); IndexParentChildFieldData globalFieldData = indexFieldData.loadGlobal(indexReader); AtomicParentChildFieldData afd = globalFieldData.load(atomicReaderContext); SortedDocValues values = afd.getOrdinalsValues(type); @@ -366,7 +366,7 @@ public abstract class ValuesSource { } @Override - public void setNextReader(AtomicReaderContext reader) { + public void setNextReader(LeafReaderContext reader) { atomicFieldData = indexFieldData.load(reader); if (bytesValues != null) { bytesValues = atomicFieldData.getBytesValues(); @@ -546,7 +546,7 @@ public abstract class ValuesSource { } @Override - public void setNextReader(AtomicReaderContext reader) { + public void setNextReader(LeafReaderContext reader) { atomicFieldData = indexFieldData.load(reader); if (bytesValues != null) { bytesValues = atomicFieldData.getBytesValues(); @@ -699,7 +699,7 @@ public abstract class ValuesSource { } @Override - public void setNextReader(AtomicReaderContext reader) { + public void setNextReader(LeafReaderContext reader) { atomicFieldData = indexFieldData.load(reader); if (bytesValues != null) { bytesValues = atomicFieldData.getBytesValues(); diff --git a/src/main/java/org/elasticsearch/search/dfs/CachedDfSource.java b/src/main/java/org/elasticsearch/search/dfs/CachedDfSource.java index 70795a4b440..3b93dcec58b 100644 --- a/src/main/java/org/elasticsearch/search/dfs/CachedDfSource.java +++ b/src/main/java/org/elasticsearch/search/dfs/CachedDfSource.java @@ -93,7 +93,7 @@ public class CachedDfSource extends IndexSearcher { } @Override - protected void search(List leaves, Weight weight, Collector collector) throws IOException { + protected void search(List leaves, Weight weight, Collector collector) throws IOException { throw new UnsupportedOperationException(); } @@ -103,7 +103,7 @@ public class CachedDfSource extends IndexSearcher { } @Override - protected TopDocs search(List leaves, Weight weight, ScoreDoc after, int nDocs) throws IOException { + protected TopDocs search(List leaves, Weight weight, ScoreDoc after, int nDocs) throws IOException { throw new UnsupportedOperationException(); } @@ -118,7 +118,7 @@ public class CachedDfSource extends IndexSearcher { } @Override - protected TopFieldDocs search(List leaves, Weight weight, FieldDoc after, int nDocs, Sort sort, boolean fillFields, boolean doDocScores, boolean doMaxScore) throws IOException { + protected TopFieldDocs search(List leaves, Weight weight, FieldDoc after, int nDocs, Sort sort, boolean fillFields, boolean doDocScores, boolean doMaxScore) throws IOException { throw new UnsupportedOperationException(); } diff --git a/src/main/java/org/elasticsearch/search/fetch/FetchPhase.java b/src/main/java/org/elasticsearch/search/fetch/FetchPhase.java index 0beedf208f7..78f8762e177 100644 --- a/src/main/java/org/elasticsearch/search/fetch/FetchPhase.java +++ 
b/src/main/java/org/elasticsearch/search/fetch/FetchPhase.java @@ -20,10 +20,12 @@ package org.elasticsearch.search.fetch; import com.google.common.collect.ImmutableMap; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.ReaderUtil; +import org.apache.lucene.search.DocIdSetIterator; import org.apache.lucene.search.Filter; -import org.apache.lucene.util.FixedBitSet; +import org.apache.lucene.util.BitDocIdSet; +import org.apache.lucene.util.BitSet; import org.elasticsearch.ElasticsearchIllegalArgumentException; import org.elasticsearch.ExceptionsHelper; import org.elasticsearch.common.bytes.BytesReference; @@ -58,7 +60,12 @@ import org.elasticsearch.search.internal.InternalSearchHits; import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; -import java.util.*; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Set; import static com.google.common.collect.Lists.newArrayList; import static org.elasticsearch.common.xcontent.XContentFactory.contentBuilder; @@ -159,7 +166,7 @@ public class FetchPhase implements SearchPhase { for (int index = 0; index < context.docIdsToLoadSize(); index++) { int docId = context.docIdsToLoad()[context.docIdsToLoadFrom() + index]; int readerIndex = ReaderUtil.subIndex(docId, context.searcher().getIndexReader().leaves()); - AtomicReaderContext subReaderContext = context.searcher().getIndexReader().leaves().get(readerIndex); + LeafReaderContext subReaderContext = context.searcher().getIndexReader().leaves().get(readerIndex); int subDocId = docId - subReaderContext.docBase; final InternalSearchHit searchHit; @@ -192,17 +199,18 @@ public class FetchPhase implements SearchPhase { context.fetchResult().hits(new InternalSearchHits(hits, context.queryResult().topDocs().totalHits, context.queryResult().topDocs().getMaxScore())); } - private int findRootDocumentIfNested(SearchContext context, AtomicReaderContext subReaderContext, int subDocId) throws IOException { + private int findRootDocumentIfNested(SearchContext context, LeafReaderContext subReaderContext, int subDocId) throws IOException { if (context.mapperService().hasNested()) { - FixedBitSet nonNested = context.fixedBitSetFilterCache().getFixedBitSetFilter(NonNestedDocsFilter.INSTANCE).getDocIdSet(subReaderContext, null); - if (!nonNested.get(subDocId)) { - return nonNested.nextSetBit(subDocId); + BitDocIdSet nonNested = context.bitsetFilterCache().getBitDocIdSetFilter(NonNestedDocsFilter.INSTANCE).getDocIdSet(subReaderContext); + BitSet bits = nonNested.bits(); + if (!bits.get(subDocId)) { + return bits.nextSetBit(subDocId); } } return -1; } - private InternalSearchHit createSearchHit(SearchContext context, FieldsVisitor fieldsVisitor, int docId, int subDocId, List extractFieldNames, AtomicReaderContext subReaderContext) { + private InternalSearchHit createSearchHit(SearchContext context, FieldsVisitor fieldsVisitor, int docId, int subDocId, List extractFieldNames, LeafReaderContext subReaderContext) { loadStoredFields(context, subReaderContext, fieldsVisitor, subDocId); fieldsVisitor.postProcess(context.mapperService()); @@ -252,7 +260,7 @@ public class FetchPhase implements SearchPhase { return searchHit; } - private InternalSearchHit createNestedSearchHit(SearchContext context, int nestedTopDocId, int nestedSubDocId, int rootSubDocId, List extractFieldNames, boolean loadAllStored, Set fieldNames, 
AtomicReaderContext subReaderContext) throws IOException { + private InternalSearchHit createNestedSearchHit(SearchContext context, int nestedTopDocId, int nestedSubDocId, int rootSubDocId, List extractFieldNames, boolean loadAllStored, Set fieldNames, LeafReaderContext subReaderContext) throws IOException { final FieldsVisitor rootFieldsVisitor; if (context.sourceRequested() || extractFieldNames != null) { rootFieldsVisitor = new UidAndSourceFieldsVisitor(); @@ -267,7 +275,7 @@ public class FetchPhase implements SearchPhase { context.lookup().setNextReader(subReaderContext); context.lookup().setNextDocId(nestedSubDocId); - ObjectMapper nestedObjectMapper = documentMapper.findNestedObjectMapper(nestedSubDocId, context.fixedBitSetFilterCache(), subReaderContext); + ObjectMapper nestedObjectMapper = documentMapper.findNestedObjectMapper(nestedSubDocId, context.bitsetFilterCache(), subReaderContext); assert nestedObjectMapper != null; InternalSearchHit.InternalNestedIdentity nestedIdentity = getInternalNestedIdentity(context, nestedSubDocId, subReaderContext, documentMapper, nestedObjectMapper); @@ -315,7 +323,7 @@ public class FetchPhase implements SearchPhase { return searchHit; } - private Map getSearchFields(SearchContext context, int nestedSubDocId, boolean loadAllStored, Set fieldNames, AtomicReaderContext subReaderContext) { + private Map getSearchFields(SearchContext context, int nestedSubDocId, boolean loadAllStored, Set fieldNames, LeafReaderContext subReaderContext) { Map searchFields = null; if (context.hasFieldNames() && !context.fieldNames().isEmpty()) { FieldsVisitor nestedFieldsVisitor = null; @@ -339,7 +347,7 @@ public class FetchPhase implements SearchPhase { return searchFields; } - private InternalSearchHit.InternalNestedIdentity getInternalNestedIdentity(SearchContext context, int nestedSubDocId, AtomicReaderContext subReaderContext, DocumentMapper documentMapper, ObjectMapper nestedObjectMapper) throws IOException { + private InternalSearchHit.InternalNestedIdentity getInternalNestedIdentity(SearchContext context, int nestedSubDocId, LeafReaderContext subReaderContext, DocumentMapper documentMapper, ObjectMapper nestedObjectMapper) throws IOException { int currentParent = nestedSubDocId; ObjectMapper nestedParentObjectMapper; InternalSearchHit.InternalNestedIdentity nestedIdentity = null; @@ -355,11 +363,13 @@ public class FetchPhase implements SearchPhase { parentFilter = NonNestedDocsFilter.INSTANCE; } - FixedBitSet parentBitSet = context.fixedBitSetFilterCache().getFixedBitSetFilter(parentFilter).getDocIdSet(subReaderContext, null); + BitDocIdSet parentBitSet = context.bitsetFilterCache().getBitDocIdSetFilter(parentFilter).getDocIdSet(subReaderContext); + BitSet parentBits = parentBitSet.bits(); int offset = 0; - FixedBitSet nestedDocsBitSet = context.fixedBitSetFilterCache().getFixedBitSetFilter(nestedObjectMapper.nestedTypeFilter()).getDocIdSet(subReaderContext, null); - int nextParent = parentBitSet.nextSetBit(currentParent); - for (int docId = nestedDocsBitSet.nextSetBit(currentParent + 1); docId < nextParent && docId != -1; docId = nestedDocsBitSet.nextSetBit(docId + 1)) { + BitDocIdSet nestedDocsBitSet = context.bitsetFilterCache().getBitDocIdSetFilter(nestedObjectMapper.nestedTypeFilter()).getDocIdSet(subReaderContext); + BitSet nestedBits = nestedDocsBitSet.bits(); + int nextParent = parentBits.nextSetBit(currentParent); + for (int docId = nestedBits.nextSetBit(currentParent + 1); docId < nextParent && docId != DocIdSetIterator.NO_MORE_DOCS; docId = 
nestedBits.nextSetBit(docId + 1)) { offset++; } currentParent = nextParent; @@ -369,7 +379,7 @@ public class FetchPhase implements SearchPhase { return nestedIdentity; } - private void loadStoredFields(SearchContext searchContext, AtomicReaderContext readerContext, FieldsVisitor fieldVisitor, int docId) { + private void loadStoredFields(SearchContext searchContext, LeafReaderContext readerContext, FieldsVisitor fieldVisitor, int docId) { fieldVisitor.reset(); try { readerContext.reader().document(docId, fieldVisitor); diff --git a/src/main/java/org/elasticsearch/search/fetch/FetchSubPhase.java b/src/main/java/org/elasticsearch/search/fetch/FetchSubPhase.java index 6e6400a1cd9..cbb8615da7f 100644 --- a/src/main/java/org/elasticsearch/search/fetch/FetchSubPhase.java +++ b/src/main/java/org/elasticsearch/search/fetch/FetchSubPhase.java @@ -19,8 +19,8 @@ package org.elasticsearch.search.fetch; import com.google.common.collect.Maps; -import org.apache.lucene.index.AtomicReader; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReader; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.IndexReader; import org.apache.lucene.search.IndexSearcher; import org.elasticsearch.ElasticsearchException; @@ -39,12 +39,12 @@ public interface FetchSubPhase { public static class HitContext { private InternalSearchHit hit; private IndexReader topLevelReader; - private AtomicReaderContext readerContext; + private LeafReaderContext readerContext; private int docId; private Map cache; private IndexSearcher atomicIndexSearcher; - public void reset(InternalSearchHit hit, AtomicReaderContext context, int docId, IndexReader topLevelReader) { + public void reset(InternalSearchHit hit, LeafReaderContext context, int docId, IndexReader topLevelReader) { this.hit = hit; this.readerContext = context; this.docId = docId; @@ -56,11 +56,11 @@ public interface FetchSubPhase { return hit; } - public AtomicReader reader() { + public LeafReader reader() { return readerContext.reader(); } - public AtomicReaderContext readerContext() { + public LeafReaderContext readerContext() { return readerContext; } diff --git a/src/main/java/org/elasticsearch/search/highlight/CustomQueryScorer.java b/src/main/java/org/elasticsearch/search/highlight/CustomQueryScorer.java index 513953519d1..8f12dd0f9b4 100644 --- a/src/main/java/org/elasticsearch/search/highlight/CustomQueryScorer.java +++ b/src/main/java/org/elasticsearch/search/highlight/CustomQueryScorer.java @@ -21,11 +21,11 @@ package org.elasticsearch.search.highlight; import org.apache.lucene.index.IndexReader; import org.apache.lucene.queries.BlendedTermQuery; +import org.apache.lucene.search.FilteredQuery; import org.apache.lucene.search.Query; import org.apache.lucene.search.highlight.QueryScorer; import org.apache.lucene.search.highlight.WeightedSpanTerm; import org.apache.lucene.search.highlight.WeightedSpanTermExtractor; -import org.elasticsearch.common.lucene.search.XFilteredQuery; import org.elasticsearch.common.lucene.search.function.FiltersFunctionScoreQuery; import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery; @@ -84,8 +84,8 @@ public final class CustomQueryScorer extends QueryScorer { } else if (query instanceof FiltersFunctionScoreQuery) { query = ((FiltersFunctionScoreQuery) query).getSubQuery(); extract(query, terms); - } else if (query instanceof XFilteredQuery) { - query = ((XFilteredQuery) query).getQuery(); + } else if (query instanceof FilteredQuery) { + query = 
((FilteredQuery) query).getQuery(); extract(query, terms); } else if (query instanceof BlendedTermQuery) { extractWeightedTerms(terms, query); diff --git a/src/main/java/org/elasticsearch/search/highlight/HighlightPhase.java b/src/main/java/org/elasticsearch/search/highlight/HighlightPhase.java index e97b0cb3727..8944dfa730d 100644 --- a/src/main/java/org/elasticsearch/search/highlight/HighlightPhase.java +++ b/src/main/java/org/elasticsearch/search/highlight/HighlightPhase.java @@ -21,8 +21,7 @@ package org.elasticsearch.search.highlight; import com.google.common.collect.ImmutableList; import com.google.common.collect.ImmutableMap; -import com.google.common.collect.ImmutableSet; -import org.apache.lucene.index.FieldInfo; +import org.apache.lucene.index.IndexOptions; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.ElasticsearchIllegalArgumentException; import org.elasticsearch.common.component.AbstractComponent; @@ -40,7 +39,6 @@ import org.elasticsearch.search.internal.SearchContext; import java.util.List; import java.util.Map; -import java.util.Set; import static com.google.common.collect.Maps.newHashMap; @@ -106,7 +104,7 @@ public class HighlightPhase extends AbstractComponent implements FetchSubPhase { boolean useFastVectorHighlighter = fieldMapper.fieldType().storeTermVectors() && fieldMapper.fieldType().storeTermVectorOffsets() && fieldMapper.fieldType().storeTermVectorPositions(); if (useFastVectorHighlighter) { highlighterType = "fvh"; - } else if (fieldMapper.fieldType().indexOptions() == FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS) { + } else if (fieldMapper.fieldType().indexOptions() == IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS) { highlighterType = "postings"; } else { highlighterType = "plain"; diff --git a/src/main/java/org/elasticsearch/search/highlight/PostingsHighlighter.java b/src/main/java/org/elasticsearch/search/highlight/PostingsHighlighter.java index 509ed52e62a..d1dd67a74e6 100644 --- a/src/main/java/org/elasticsearch/search/highlight/PostingsHighlighter.java +++ b/src/main/java/org/elasticsearch/search/highlight/PostingsHighlighter.java @@ -20,10 +20,17 @@ package org.elasticsearch.search.highlight; import com.google.common.collect.Lists; import com.google.common.collect.Maps; -import org.apache.lucene.index.FieldInfo; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexReader; import org.apache.lucene.index.Term; -import org.apache.lucene.search.*; +import org.apache.lucene.search.BooleanClause; +import org.apache.lucene.search.BooleanQuery; +import org.apache.lucene.search.ConstantScoreQuery; +import org.apache.lucene.search.FilteredQuery; +import org.apache.lucene.search.MultiTermQuery; +import org.apache.lucene.search.Query; +import org.apache.lucene.search.ScoringRewrite; +import org.apache.lucene.search.TopTermsRewrite; import org.apache.lucene.search.highlight.Encoder; import org.apache.lucene.search.postingshighlight.CustomPassageFormatter; import org.apache.lucene.search.postingshighlight.CustomPostingsHighlighter; @@ -35,7 +42,6 @@ import org.apache.lucene.util.UnicodeUtil; import org.elasticsearch.ElasticsearchIllegalArgumentException; import org.elasticsearch.common.Strings; import org.elasticsearch.common.collect.Tuple; -import org.elasticsearch.common.lucene.search.XFilteredQuery; import org.elasticsearch.common.text.StringText; import org.elasticsearch.index.mapper.FieldMapper; import org.elasticsearch.search.fetch.FetchPhaseExecutionException; @@ -44,7 +50,13 @@ 
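// note: the hunk below only expands the java.util.* wildcard into explicit
// imports; the functional change in PostingsHighlighter comes further down,
// where the XFilteredQuery branch of overrideMultiTermRewriteMethod is
// dropped, leaving plain FilteredQuery as the only wrapper to unwrap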
import org.elasticsearch.search.internal.SearchContext; import java.io.IOException; import java.text.BreakIterator; -import java.util.*; +import java.util.ArrayList; +import java.util.Comparator; +import java.util.List; +import java.util.Locale; +import java.util.Map; +import java.util.SortedSet; +import java.util.TreeSet; public class PostingsHighlighter implements Highlighter { @@ -60,7 +72,7 @@ public class PostingsHighlighter implements Highlighter { FieldMapper fieldMapper = highlighterContext.mapper; SearchContextHighlight.Field field = highlighterContext.field; - if (fieldMapper.fieldType().indexOptions() != FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS) { + if (fieldMapper.fieldType().indexOptions() != IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS) { throw new ElasticsearchIllegalArgumentException("the field [" + highlighterContext.fieldName + "] should be indexed with positions and offsets in the postings list to be used with postings highlighter"); } @@ -194,10 +206,6 @@ public class PostingsHighlighter implements Highlighter { } } - if (query instanceof XFilteredQuery) { - overrideMultiTermRewriteMethod(((XFilteredQuery) query).getQuery(), modifiedMultiTermQueries); - } - if (query instanceof FilteredQuery) { overrideMultiTermRewriteMethod(((FilteredQuery) query).getQuery(), modifiedMultiTermQueries); } diff --git a/src/main/java/org/elasticsearch/search/highlight/vectorhighlight/FragmentBuilderHelper.java b/src/main/java/org/elasticsearch/search/highlight/vectorhighlight/FragmentBuilderHelper.java index 153004517e9..2baa99e7758 100644 --- a/src/main/java/org/elasticsearch/search/highlight/vectorhighlight/FragmentBuilderHelper.java +++ b/src/main/java/org/elasticsearch/search/highlight/vectorhighlight/FragmentBuilderHelper.java @@ -84,7 +84,7 @@ public final class FragmentBuilderHelper { final CustomAnalyzer a = (CustomAnalyzer) analyzer; if (a.tokenizerFactory() instanceof EdgeNGramTokenizerFactory || (a.tokenizerFactory() instanceof NGramTokenizerFactory - && !((NGramTokenizerFactory)a.tokenizerFactory()).version().onOrAfter(Version.LUCENE_42))) { + && !((NGramTokenizerFactory)a.tokenizerFactory()).version().onOrAfter(Version.LUCENE_4_2))) { // ngram tokenizer is broken before 4.2 return true; } @@ -95,7 +95,7 @@ public final class FragmentBuilderHelper { return true; } if (tokenFilterFactory instanceof NGramTokenFilterFactory - && !((NGramTokenFilterFactory)tokenFilterFactory).version().onOrAfter(Version.LUCENE_42)) { + && !((NGramTokenFilterFactory)tokenFilterFactory).version().onOrAfter(Version.LUCENE_4_2)) { // ngram token filter is broken before 4.2 return true; } diff --git a/src/main/java/org/elasticsearch/search/highlight/vectorhighlight/SourceScoreOrderFragmentsBuilder.java b/src/main/java/org/elasticsearch/search/highlight/vectorhighlight/SourceScoreOrderFragmentsBuilder.java index cd648ff9bdf..2fdcb2f627e 100644 --- a/src/main/java/org/elasticsearch/search/highlight/vectorhighlight/SourceScoreOrderFragmentsBuilder.java +++ b/src/main/java/org/elasticsearch/search/highlight/vectorhighlight/SourceScoreOrderFragmentsBuilder.java @@ -20,7 +20,7 @@ package org.elasticsearch.search.highlight.vectorhighlight; import org.apache.lucene.document.Field; import org.apache.lucene.document.TextField; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.IndexReader; import org.apache.lucene.search.highlight.Encoder; import 
org.apache.lucene.search.vectorhighlight.BoundaryScanner; @@ -57,7 +57,7 @@ public class SourceScoreOrderFragmentsBuilder extends ScoreOrderFragmentsBuilder protected Field[] getFields(IndexReader reader, int docId, String fieldName) throws IOException { // we know it's a low level reader with a matching docId, since that's how we call the highlighter SearchLookup lookup = searchContext.lookup(); - lookup.setNextReader((AtomicReaderContext) reader.getContext()); + lookup.setNextReader((LeafReaderContext) reader.getContext()); lookup.setNextDocId(docId); List values = lookup.source().extractRawValues(hitContext.getSourcePath(mapper.names().sourcePath())); diff --git a/src/main/java/org/elasticsearch/search/highlight/vectorhighlight/SourceSimpleFragmentsBuilder.java b/src/main/java/org/elasticsearch/search/highlight/vectorhighlight/SourceSimpleFragmentsBuilder.java index c8621a91c0a..80dec82620a 100644 --- a/src/main/java/org/elasticsearch/search/highlight/vectorhighlight/SourceSimpleFragmentsBuilder.java +++ b/src/main/java/org/elasticsearch/search/highlight/vectorhighlight/SourceSimpleFragmentsBuilder.java @@ -20,7 +20,7 @@ package org.elasticsearch.search.highlight.vectorhighlight; import org.apache.lucene.document.Field; import org.apache.lucene.document.TextField; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.IndexReader; import org.apache.lucene.search.vectorhighlight.BoundaryScanner; import org.elasticsearch.index.mapper.FieldMapper; @@ -53,7 +53,7 @@ public class SourceSimpleFragmentsBuilder extends SimpleFragmentsBuilder protected Field[] getFields(IndexReader reader, int docId, String fieldName) throws IOException { // we know it's a low level reader with a matching docId, since that's how we call the highlighter SearchLookup lookup = searchContext.lookup(); - lookup.setNextReader((AtomicReaderContext) reader.getContext()); + lookup.setNextReader((LeafReaderContext) reader.getContext()); lookup.setNextDocId(docId); List values = lookup.source().extractRawValues(hitContext.getSourcePath(mapper.names().sourcePath())); diff --git a/src/main/java/org/elasticsearch/search/internal/ContextIndexSearcher.java b/src/main/java/org/elasticsearch/search/internal/ContextIndexSearcher.java index fd1c1e51362..e5f7bc4deda 100644 --- a/src/main/java/org/elasticsearch/search/internal/ContextIndexSearcher.java +++ b/src/main/java/org/elasticsearch/search/internal/ContextIndexSearcher.java @@ -19,15 +19,20 @@ package org.elasticsearch.search.internal; -import org.apache.lucene.index.AtomicReaderContext; -import org.apache.lucene.search.*; +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.search.Collector; +import org.apache.lucene.search.Explanation; +import org.apache.lucene.search.FilteredQuery; +import org.apache.lucene.search.IndexSearcher; +import org.apache.lucene.search.Query; +import org.apache.lucene.search.TimeLimitingCollector; +import org.apache.lucene.search.Weight; import org.elasticsearch.common.lease.Releasable; import org.elasticsearch.common.lucene.Lucene; import org.elasticsearch.common.lucene.MinimumScoreCollector; import org.elasticsearch.common.lucene.MultiCollector; import org.elasticsearch.common.lucene.search.FilteredCollector; import org.elasticsearch.common.lucene.search.XCollector; -import org.elasticsearch.common.lucene.search.XFilteredQuery; import org.elasticsearch.index.engine.Engine; import org.elasticsearch.search.dfs.CachedDfSource; import
org.elasticsearch.search.internal.SearchContext.Lifetime; @@ -125,7 +130,7 @@ public class ContextIndexSearcher extends IndexSearcher implements Releasable { } @Override - public void search(List leaves, Weight weight, Collector collector) throws IOException { + public void search(List leaves, Weight weight, Collector collector) throws IOException { final boolean timeoutSet = searchContext.timeoutInMillis() != -1; final boolean terminateAfterSet = searchContext.terminateAfter() != SearchContext.DEFAULT_TERMINATE_AFTER; @@ -194,7 +199,7 @@ public class ContextIndexSearcher extends IndexSearcher implements Releasable { if (searchContext.aliasFilter() == null) { return super.explain(query, doc); } - XFilteredQuery filteredQuery = new XFilteredQuery(query, searchContext.aliasFilter()); + FilteredQuery filteredQuery = new FilteredQuery(query, searchContext.aliasFilter()); return super.explain(filteredQuery, doc); } finally { searchContext.clearReleasables(Lifetime.COLLECTION); diff --git a/src/main/java/org/elasticsearch/search/internal/DefaultSearchContext.java b/src/main/java/org/elasticsearch/search/internal/DefaultSearchContext.java index fc76f4d468b..dbd62a91a6d 100644 --- a/src/main/java/org/elasticsearch/search/internal/DefaultSearchContext.java +++ b/src/main/java/org/elasticsearch/search/internal/DefaultSearchContext.java @@ -21,7 +21,9 @@ package org.elasticsearch.search.internal; import com.google.common.collect.ImmutableList; import com.google.common.collect.Lists; +import org.apache.lucene.search.ConstantScoreQuery; import org.apache.lucene.search.Filter; +import org.apache.lucene.search.FilteredQuery; import org.apache.lucene.search.Query; import org.apache.lucene.search.ScoreDoc; import org.apache.lucene.search.Sort; @@ -33,14 +35,12 @@ import org.elasticsearch.common.Nullable; import org.elasticsearch.common.lease.Releasables; import org.elasticsearch.common.lucene.search.AndFilter; import org.elasticsearch.common.lucene.search.Queries; -import org.elasticsearch.common.lucene.search.XConstantScoreQuery; -import org.elasticsearch.common.lucene.search.XFilteredQuery; import org.elasticsearch.common.lucene.search.function.BoostScoreFunction; import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery; import org.elasticsearch.common.util.BigArrays; import org.elasticsearch.index.analysis.AnalysisService; +import org.elasticsearch.index.cache.bitset.BitsetFilterCache; import org.elasticsearch.index.cache.filter.FilterCache; -import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilterCache; import org.elasticsearch.index.engine.Engine; import org.elasticsearch.index.fielddata.IndexFieldDataService; import org.elasticsearch.index.mapper.FieldMapper; @@ -233,11 +233,11 @@ public class DefaultSearchContext extends SearchContext { Filter searchFilter = searchFilter(types()); if (searchFilter != null) { if (Queries.isConstantMatchAllQuery(query())) { - Query q = new XConstantScoreQuery(searchFilter); + Query q = new ConstantScoreQuery(searchFilter); q.setBoost(query().getBoost()); parsedQuery(new ParsedQuery(q, parsedQuery())); } else { - parsedQuery(new ParsedQuery(new XFilteredQuery(query(), searchFilter), parsedQuery())); + parsedQuery(new ParsedQuery(new FilteredQuery(query(), searchFilter), parsedQuery())); } } } @@ -440,8 +440,8 @@ public class DefaultSearchContext extends SearchContext { } @Override - public FixedBitSetFilterCache fixedBitSetFilterCache() { - return indexService.fixedBitSetFilterCache(); + public BitsetFilterCache bitsetFilterCache() { + return 
indexService.bitsetFilterCache(); } public IndexFieldDataService fieldData() { diff --git a/src/main/java/org/elasticsearch/search/internal/SearchContext.java b/src/main/java/org/elasticsearch/search/internal/SearchContext.java index a0350280078..ccb8d255fd6 100644 --- a/src/main/java/org/elasticsearch/search/internal/SearchContext.java +++ b/src/main/java/org/elasticsearch/search/internal/SearchContext.java @@ -33,8 +33,8 @@ import org.elasticsearch.common.lease.Releasable; import org.elasticsearch.common.lease.Releasables; import org.elasticsearch.common.util.BigArrays; import org.elasticsearch.index.analysis.AnalysisService; +import org.elasticsearch.index.cache.bitset.BitsetFilterCache; import org.elasticsearch.index.cache.filter.FilterCache; -import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilterCache; import org.elasticsearch.index.fielddata.IndexFieldDataService; import org.elasticsearch.index.mapper.FieldMapper; import org.elasticsearch.index.mapper.FieldMappers; @@ -208,7 +208,7 @@ public abstract class SearchContext implements Releasable { public abstract FilterCache filterCache(); - public abstract FixedBitSetFilterCache fixedBitSetFilterCache(); + public abstract BitsetFilterCache bitsetFilterCache(); public abstract IndexFieldDataService fieldData(); diff --git a/src/main/java/org/elasticsearch/search/lookup/DocLookup.java b/src/main/java/org/elasticsearch/search/lookup/DocLookup.java index 78d8fb01803..d28c9c1699a 100644 --- a/src/main/java/org/elasticsearch/search/lookup/DocLookup.java +++ b/src/main/java/org/elasticsearch/search/lookup/DocLookup.java @@ -19,16 +19,14 @@ package org.elasticsearch.search.lookup; import com.google.common.collect.Maps; -import org.apache.lucene.index.AtomicReaderContext; -import org.apache.lucene.search.Scorer; import org.elasticsearch.ElasticsearchIllegalArgumentException; import org.elasticsearch.common.Nullable; import org.elasticsearch.index.fielddata.IndexFieldDataService; import org.elasticsearch.index.fielddata.ScriptDocValues; import org.elasticsearch.index.mapper.FieldMapper; import org.elasticsearch.index.mapper.MapperService; +import org.apache.lucene.index.LeafReaderContext; -import java.io.IOException; import java.util.Arrays; import java.util.Collection; import java.util.Map; @@ -47,7 +45,7 @@ public class DocLookup implements Map { @Nullable private final String[] types; - private AtomicReaderContext reader; + private LeafReaderContext reader; private int docId = -1; @@ -65,7 +63,7 @@ public class DocLookup implements Map { return this.fieldDataService; } - public void setNextReader(AtomicReaderContext context) { + public void setNextReader(LeafReaderContext context) { if (this.reader == context) { // if we are called with the same reader, don't invalidate source return; } diff --git a/src/main/java/org/elasticsearch/search/lookup/FieldsLookup.java b/src/main/java/org/elasticsearch/search/lookup/FieldsLookup.java index 906a63ea74a..529b9276a2a 100644 --- a/src/main/java/org/elasticsearch/search/lookup/FieldsLookup.java +++ b/src/main/java/org/elasticsearch/search/lookup/FieldsLookup.java @@ -20,8 +20,8 @@ package org.elasticsearch.search.lookup; import com.google.common.collect.ImmutableMap; import com.google.common.collect.Maps; -import org.apache.lucene.index.AtomicReader; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReader; +import org.apache.lucene.index.LeafReaderContext; import org.elasticsearch.ElasticsearchIllegalArgumentException; import 
org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.common.Nullable; @@ -45,7 +45,7 @@ public class FieldsLookup implements Map { @Nullable private final String[] types; - private AtomicReader reader; + private LeafReader reader; private int docId = -1; @@ -59,7 +59,7 @@ public class FieldsLookup implements Map { this.fieldVisitor = new SingleFieldsVisitor(null); } - public void setNextReader(AtomicReaderContext context) { + public void setNextReader(LeafReaderContext context) { if (this.reader == context.reader()) { // if we are called with the same reader, don't invalidate source return; } diff --git a/src/main/java/org/elasticsearch/search/lookup/IndexField.java b/src/main/java/org/elasticsearch/search/lookup/IndexField.java index 10eeda54cfb..c70d8deb93a 100644 --- a/src/main/java/org/elasticsearch/search/lookup/IndexField.java +++ b/src/main/java/org/elasticsearch/search/lookup/IndexField.java @@ -19,7 +19,7 @@ package org.elasticsearch.search.lookup; -import org.apache.lucene.index.AtomicReader; +import org.apache.lucene.index.LeafReader; import org.apache.lucene.search.CollectionStatistics; import org.elasticsearch.common.util.MinimalMap; @@ -57,7 +57,7 @@ public class IndexField extends MinimalMap { /* * Update posting lists in all TermInfo objects */ - void setReader(AtomicReader reader) { + void setReader(LeafReader reader) { for (IndexFieldTerm ti : terms.values()) { ti.setNextReader(reader); } diff --git a/src/main/java/org/elasticsearch/search/lookup/IndexFieldTerm.java b/src/main/java/org/elasticsearch/search/lookup/IndexFieldTerm.java index 72dd8d6b220..23f5dc603fb 100644 --- a/src/main/java/org/elasticsearch/search/lookup/IndexFieldTerm.java +++ b/src/main/java/org/elasticsearch/search/lookup/IndexFieldTerm.java @@ -65,7 +65,7 @@ public class IndexFieldTerm implements Iterable { // when the reader changes, we have to get the posting list for this term // and reader - void setNextReader(AtomicReader reader) { + void setNextReader(LeafReader reader) { try { // Get the posting list for a specific term. Depending on the flags, // this @@ -104,7 +104,7 @@ public class IndexFieldTerm implements Iterable { } // get the DocsAndPositionsEnum from the reader. - private DocsEnum getDocsAndPosEnum(int luceneFlags, AtomicReader reader) throws IOException { + private DocsEnum getDocsAndPosEnum(int luceneFlags, LeafReader reader) throws IOException { assert identifier.field() != null; assert identifier.bytes() != null; final Fields fields = reader.fields(); @@ -125,7 +125,7 @@ public class IndexFieldTerm implements Iterable { } // get the DocsEnum from the reader. - private DocsEnum getOnlyDocsEnum(int luceneFlags, AtomicReader reader) throws IOException { + private DocsEnum getOnlyDocsEnum(int luceneFlags, LeafReader reader) throws IOException { assert identifier.field() != null; assert identifier.bytes() != null; final Fields fields = reader.fields(); diff --git a/src/main/java/org/elasticsearch/search/lookup/IndexLookup.java b/src/main/java/org/elasticsearch/search/lookup/IndexLookup.java index c4ebbb0678d..223ec3c7c1c 100644 --- a/src/main/java/org/elasticsearch/search/lookup/IndexLookup.java +++ b/src/main/java/org/elasticsearch/search/lookup/IndexLookup.java @@ -63,7 +63,7 @@ public class IndexLookup extends MinimalMap { // Current reader from which we can get the term vectors. No info on term // and field statistics.
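// (Lucene 5.0 renamed AtomicReader/AtomicReaderContext to
// LeafReader/LeafReaderContext; the renames below are mechanical and the
// per-segment reader semantics are unchanged)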
- private AtomicReader reader; + private LeafReader reader; // The parent reader from which we can get proper field and term // statistics @@ -123,7 +123,7 @@ public class IndexLookup extends MinimalMap { builder.put("_CACHE", IndexLookup.FLAG_CACHE); } - public void setNextReader(AtomicReaderContext context) { + public void setNextReader(LeafReaderContext context) { if (reader == context.reader()) { // if we are called with the same // reader, nothing to do return; @@ -213,7 +213,7 @@ public class IndexLookup extends MinimalMap { return reader.getTermVectors(docId); } - AtomicReader getReader() { + LeafReader getReader() { return reader; } diff --git a/src/main/java/org/elasticsearch/search/lookup/SearchLookup.java b/src/main/java/org/elasticsearch/search/lookup/SearchLookup.java index 2dbda01a0e8..40444b42952 100644 --- a/src/main/java/org/elasticsearch/search/lookup/SearchLookup.java +++ b/src/main/java/org/elasticsearch/search/lookup/SearchLookup.java @@ -20,8 +20,7 @@ package org.elasticsearch.search.lookup; import com.google.common.collect.ImmutableMap; -import org.apache.lucene.index.AtomicReaderContext; -import org.apache.lucene.search.Scorer; +import org.apache.lucene.index.LeafReaderContext; import org.elasticsearch.common.Nullable; import org.elasticsearch.index.fielddata.IndexFieldDataService; import org.elasticsearch.index.mapper.MapperService; @@ -76,7 +75,7 @@ public class SearchLookup { return this.docMap; } - public void setNextReader(AtomicReaderContext context) { + public void setNextReader(LeafReaderContext context) { docMap.setNextReader(context); sourceLookup.setNextReader(context); fieldsLookup.setNextReader(context); diff --git a/src/main/java/org/elasticsearch/search/lookup/SourceLookup.java b/src/main/java/org/elasticsearch/search/lookup/SourceLookup.java index c9ff9d185fb..3204dc9c45c 100644 --- a/src/main/java/org/elasticsearch/search/lookup/SourceLookup.java +++ b/src/main/java/org/elasticsearch/search/lookup/SourceLookup.java @@ -19,8 +19,8 @@ package org.elasticsearch.search.lookup; import com.google.common.collect.ImmutableMap; -import org.apache.lucene.index.AtomicReader; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReader; +import org.apache.lucene.index.LeafReaderContext; import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.collect.Tuple; @@ -39,7 +39,7 @@ import java.util.Set; */ public class SourceLookup implements Map { - private AtomicReader reader; + private LeafReader reader; private int docId = -1; @@ -99,7 +99,7 @@ public class SourceLookup implements Map { return sourceAsMapAndType(bytes, offset, length).v2(); } - public void setNextReader(AtomicReaderContext context) { + public void setNextReader(LeafReaderContext context) { if (this.reader == context.reader()) { // if we are called with the same reader, don't invalidate source return; } diff --git a/src/main/java/org/elasticsearch/search/scan/ScanContext.java b/src/main/java/org/elasticsearch/search/scan/ScanContext.java index 12362edea3f..6a20e67aac6 100644 --- a/src/main/java/org/elasticsearch/search/scan/ScanContext.java +++ b/src/main/java/org/elasticsearch/search/scan/ScanContext.java @@ -19,12 +19,19 @@ package org.elasticsearch.search.scan; -import org.apache.lucene.index.AtomicReaderContext; import org.apache.lucene.index.IndexReader; -import org.apache.lucene.search.*; +import org.apache.lucene.index.LeafReaderContext; +import 
org.apache.lucene.search.BitsFilteredDocIdSet; +import org.apache.lucene.search.DocIdSet; +import org.apache.lucene.search.Filter; +import org.apache.lucene.search.FilteredQuery; +import org.apache.lucene.search.Query; +import org.apache.lucene.search.ScoreDoc; +import org.apache.lucene.search.Scorer; +import org.apache.lucene.search.SimpleCollector; +import org.apache.lucene.search.TopDocs; import org.apache.lucene.util.Bits; import org.elasticsearch.common.lucene.docset.AllDocIdSet; -import org.elasticsearch.common.lucene.search.XFilteredQuery; import org.elasticsearch.common.util.concurrent.ConcurrentCollections; import org.elasticsearch.search.internal.SearchContext; @@ -47,7 +54,7 @@ public class ScanContext { public TopDocs execute(SearchContext context) throws IOException { ScanCollector collector = new ScanCollector(readerStates, context.from(), context.size(), context.trackScores()); - Query query = new XFilteredQuery(context.query(), new ScanFilter(readerStates, collector)); + Query query = new FilteredQuery(context.query(), new ScanFilter(readerStates, collector)); try { context.searcher().search(query, collector); } catch (ScanCollector.StopCollectingException e) { @@ -56,7 +63,7 @@ public class ScanContext { return collector.topDocs(); } - static class ScanCollector extends Collector { + static class ScanCollector extends SimpleCollector { private final ConcurrentMap readerStates; @@ -111,7 +118,7 @@ public class ScanContext { } @Override - public void setNextReader(AtomicReaderContext context) throws IOException { + public void doSetNextReader(LeafReaderContext context) throws IOException { // if we have a reader state, and we haven't registered one already, register it // we need to check in readersState since even when the filter return null, setNextReader is still // called for that reader (before) @@ -152,13 +159,13 @@ public class ScanContext { } @Override - public DocIdSet getDocIdSet(AtomicReaderContext context, Bits acceptedDocs) throws IOException { + public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptedDocs) throws IOException { ReaderState readerState = readerStates.get(context.reader()); if (readerState != null && readerState.done) { scanCollector.incCounter(readerState.count); return null; } - return new AllDocIdSet(context.reader().maxDoc()); + return BitsFilteredDocIdSet.wrap(new AllDocIdSet(context.reader().maxDoc()), acceptedDocs); } } diff --git a/src/main/java/org/elasticsearch/search/sort/GeoDistanceSortParser.java b/src/main/java/org/elasticsearch/search/sort/GeoDistanceSortParser.java index fb546fb1467..6d5365fa24f 100644 --- a/src/main/java/org/elasticsearch/search/sort/GeoDistanceSortParser.java +++ b/src/main/java/org/elasticsearch/search/sort/GeoDistanceSortParser.java @@ -19,12 +19,13 @@ package org.elasticsearch.search.sort; -import org.apache.lucene.index.AtomicReaderContext; -import org.apache.lucene.search.FieldCache.Doubles; +import org.apache.lucene.index.NumericDocValues; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.search.FieldComparator; import org.apache.lucene.search.Filter; import org.apache.lucene.search.SortField; -import org.apache.lucene.util.FixedBitSet; +import org.apache.lucene.search.join.BitDocIdSetFilter; +import org.apache.lucene.util.BitSet; import org.elasticsearch.ElasticsearchIllegalArgumentException; import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.common.geo.GeoDistance; @@ -33,7 +34,6 @@ import org.elasticsearch.common.geo.GeoPoint; import 
org.elasticsearch.common.geo.GeoUtils; import org.elasticsearch.common.unit.DistanceUnit; import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilter; import org.elasticsearch.index.fielddata.*; import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested; import org.elasticsearch.index.mapper.FieldMapper; @@ -156,12 +156,12 @@ public class GeoDistanceSortParser implements SortParser { } final Nested nested; if (objectMapper != null && objectMapper.nested().isNested()) { - FixedBitSetFilter rootDocumentsFilter = context.fixedBitSetFilterCache().getFixedBitSetFilter(NonNestedDocsFilter.INSTANCE); - FixedBitSetFilter innerDocumentsFilter; + BitDocIdSetFilter rootDocumentsFilter = context.bitsetFilterCache().getBitDocIdSetFilter(NonNestedDocsFilter.INSTANCE); + BitDocIdSetFilter innerDocumentsFilter; if (nestedFilter != null) { - innerDocumentsFilter = context.fixedBitSetFilterCache().getFixedBitSetFilter(nestedFilter); + innerDocumentsFilter = context.bitsetFilterCache().getBitDocIdSetFilter(nestedFilter); } else { - innerDocumentsFilter = context.fixedBitSetFilterCache().getFixedBitSetFilter(objectMapper.nestedTypeFilter()); + innerDocumentsFilter = context.bitsetFilterCache().getBitDocIdSetFilter(objectMapper.nestedTypeFilter()); } nested = new Nested(rootDocumentsFilter, innerDocumentsFilter); } else { @@ -177,25 +177,20 @@ public class GeoDistanceSortParser implements SortParser { @Override public FieldComparator newComparator(String fieldname, int numHits, int sortPos, boolean reversed) throws IOException { - return new FieldComparator.DoubleComparator(numHits, null, null, null) { + return new FieldComparator.DoubleComparator(numHits, null, null) { @Override - protected Doubles getDoubleValues(AtomicReaderContext context, String field) throws IOException { + protected NumericDocValues getNumericDocValues(LeafReaderContext context, String field) throws IOException { final MultiGeoPointValues geoPointValues = geoIndexFieldData.load(context).getGeoPointValues(); final SortedNumericDoubleValues distanceValues = GeoDistance.distanceValues(geoPointValues, distances); final NumericDoubleValues selectedValues; if (nested == null) { selectedValues = finalSortMode.select(distanceValues, Double.MAX_VALUE); } else { - final FixedBitSet rootDocs = nested.rootDocs(context); - final FixedBitSet innerDocs = nested.innerDocs(context); + final BitSet rootDocs = nested.rootDocs(context).bits(); + final BitSet innerDocs = nested.innerDocs(context).bits(); selectedValues = finalSortMode.select(distanceValues, Double.MAX_VALUE, rootDocs, innerDocs, context.reader().maxDoc()); } - return new Doubles() { - @Override - public double get(int docID) { - return selectedValues.get(docID); - } - }; + return selectedValues.getRawDoubleValues(); } }; } diff --git a/src/main/java/org/elasticsearch/search/sort/ScriptSortParser.java b/src/main/java/org/elasticsearch/search/sort/ScriptSortParser.java index b569f602b18..4035f60f4e2 100644 --- a/src/main/java/org/elasticsearch/search/sort/ScriptSortParser.java +++ b/src/main/java/org/elasticsearch/search/sort/ScriptSortParser.java @@ -19,16 +19,16 @@ package org.elasticsearch.search.sort; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.BinaryDocValues; import org.apache.lucene.search.Filter; import org.apache.lucene.search.Scorer; import org.apache.lucene.search.SortField; +import 
org.apache.lucene.search.join.BitDocIdSetFilter; import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.BytesRefBuilder; import org.elasticsearch.ElasticsearchIllegalArgumentException; import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilter; import org.elasticsearch.index.fielddata.*; import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested; import org.elasticsearch.index.fielddata.fieldcomparator.BytesRefFieldComparatorSource; @@ -137,12 +137,12 @@ public class ScriptSortParser implements SortParser { throw new ElasticsearchIllegalArgumentException("mapping for explicit nested path is not mapped as nested: [" + nestedPath + "]"); } - FixedBitSetFilter rootDocumentsFilter = context.fixedBitSetFilterCache().getFixedBitSetFilter(NonNestedDocsFilter.INSTANCE); - FixedBitSetFilter innerDocumentsFilter; + BitDocIdSetFilter rootDocumentsFilter = context.bitsetFilterCache().getBitDocIdSetFilter(NonNestedDocsFilter.INSTANCE); + BitDocIdSetFilter innerDocumentsFilter; if (nestedFilter != null) { - innerDocumentsFilter = context.fixedBitSetFilterCache().getFixedBitSetFilter(nestedFilter); + innerDocumentsFilter = context.bitsetFilterCache().getBitDocIdSetFilter(nestedFilter); } else { - innerDocumentsFilter = context.fixedBitSetFilterCache().getFixedBitSetFilter(objectMapper.nestedTypeFilter()); + innerDocumentsFilter = context.bitsetFilterCache().getBitDocIdSetFilter(objectMapper.nestedTypeFilter()); } nested = new Nested(rootDocumentsFilter, innerDocumentsFilter); } else { @@ -154,7 +154,7 @@ public class ScriptSortParser implements SortParser { case STRING_SORT_TYPE: fieldComparatorSource = new BytesRefFieldComparatorSource(null, null, sortMode, nested) { @Override - protected SortedBinaryDocValues getValues(AtomicReaderContext context) { + protected SortedBinaryDocValues getValues(LeafReaderContext context) { searchScript.setNextReader(context); final BinaryDocValues values = new BinaryDocValues() { final BytesRefBuilder spare = new BytesRefBuilder(); @@ -177,7 +177,7 @@ public class ScriptSortParser implements SortParser { // TODO: should we rather sort missing values last? 
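
The switch from FixedBitSetFilter to BitDocIdSetFilter in these sort parsers means the dedicated bitset cache now hands back a type-safe filter whose concrete bitset (fixed or sparse) is an implementation detail. A minimal sketch of how a consumer resolves the bits for one leaf, assuming the snapshot API in which BitDocIdSetFilter.getDocIdSet takes a LeafReaderContext and returns a BitDocIdSet:

    import java.io.IOException;
    import org.apache.lucene.index.LeafReaderContext;
    import org.apache.lucene.search.join.BitDocIdSetFilter;
    import org.apache.lucene.util.BitDocIdSet;
    import org.apache.lucene.util.BitSet;

    class NestedBitsSketch {
        // Resolve root-document bits for one segment; callers only see the
        // generic BitSet, never FixedBitSet or SparseFixedBitSet directly.
        static BitSet rootBits(BitDocIdSetFilter rootFilter, LeafReaderContext ctx) throws IOException {
            BitDocIdSet set = rootFilter.getDocIdSet(ctx);
            return set == null ? null : set.bits();
        }
    }
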
fieldComparatorSource = new DoubleValuesComparatorSource(null, Double.MAX_VALUE, sortMode, nested) { @Override - protected SortedNumericDoubleValues getValues(AtomicReaderContext context) { + protected SortedNumericDoubleValues getValues(LeafReaderContext context) { searchScript.setNextReader(context); final NumericDoubleValues values = new NumericDoubleValues() { @Override diff --git a/src/main/java/org/elasticsearch/search/sort/SortParseElement.java b/src/main/java/org/elasticsearch/search/sort/SortParseElement.java index b8e7b6d5a97..d22b92292ee 100644 --- a/src/main/java/org/elasticsearch/search/sort/SortParseElement.java +++ b/src/main/java/org/elasticsearch/search/sort/SortParseElement.java @@ -24,11 +24,11 @@ import com.google.common.collect.Lists; import org.apache.lucene.search.Filter; import org.apache.lucene.search.Sort; import org.apache.lucene.search.SortField; +import org.apache.lucene.search.join.BitDocIdSetFilter; import org.elasticsearch.ElasticsearchIllegalArgumentException; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.ParseField; import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilter; import org.elasticsearch.index.fielddata.IndexFieldData; import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested; import org.elasticsearch.index.mapper.FieldMapper; @@ -250,12 +250,12 @@ public class SortParseElement implements SearchParseElement { } final Nested nested; if (objectMapper != null && objectMapper.nested().isNested()) { - FixedBitSetFilter rootDocumentsFilter = context.fixedBitSetFilterCache().getFixedBitSetFilter(NonNestedDocsFilter.INSTANCE); - FixedBitSetFilter innerDocumentsFilter; + BitDocIdSetFilter rootDocumentsFilter = context.bitsetFilterCache().getBitDocIdSetFilter(NonNestedDocsFilter.INSTANCE); + BitDocIdSetFilter innerDocumentsFilter; if (nestedFilter != null) { - innerDocumentsFilter = context.fixedBitSetFilterCache().getFixedBitSetFilter(nestedFilter); + innerDocumentsFilter = context.bitsetFilterCache().getBitDocIdSetFilter(nestedFilter); } else { - innerDocumentsFilter = context.fixedBitSetFilterCache().getFixedBitSetFilter(objectMapper.nestedTypeFilter()); + innerDocumentsFilter = context.bitsetFilterCache().getBitDocIdSetFilter(objectMapper.nestedTypeFilter()); } nested = new Nested(rootDocumentsFilter, innerDocumentsFilter); } else { diff --git a/src/main/java/org/elasticsearch/search/suggest/completion/AnalyzingCompletionLookupProvider.java b/src/main/java/org/elasticsearch/search/suggest/completion/AnalyzingCompletionLookupProvider.java index 14eceb31311..cc946abab67 100644 --- a/src/main/java/org/elasticsearch/search/suggest/completion/AnalyzingCompletionLookupProvider.java +++ b/src/main/java/org/elasticsearch/search/suggest/completion/AnalyzingCompletionLookupProvider.java @@ -22,12 +22,15 @@ package org.elasticsearch.search.suggest.completion; import com.carrotsearch.hppc.ObjectLongOpenHashMap; import org.apache.lucene.analysis.TokenStream; import org.apache.lucene.codecs.*; -import org.apache.lucene.index.FieldInfo; +import org.apache.lucene.index.*; +import org.apache.lucene.search.DocIdSetIterator; import org.apache.lucene.search.suggest.Lookup; import org.apache.lucene.search.suggest.analyzing.XAnalyzingSuggester; import org.apache.lucene.search.suggest.analyzing.XFuzzySuggester; import org.apache.lucene.store.IndexInput; import org.apache.lucene.store.IndexOutput; +import org.apache.lucene.util.Accountable; +import 
org.apache.lucene.util.Accountables; import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.IOUtils; import org.apache.lucene.util.IntsRef; @@ -87,7 +90,7 @@ public class AnalyzingCompletionLookupProvider extends CompletionLookupProvider public FieldsConsumer consumer(final IndexOutput output) throws IOException { CodecUtil.writeHeader(output, CODEC_NAME, CODEC_VERSION_LATEST); return new FieldsConsumer() { - private Map fieldOffsets = new HashMap<>(); + private Map fieldOffsets = new HashMap<>(); @Override public void close() throws IOException { @@ -98,8 +101,8 @@ public class AnalyzingCompletionLookupProvider extends CompletionLookupProvider */ long pointer = output.getFilePointer(); output.writeVInt(fieldOffsets.size()); - for (Map.Entry entry : fieldOffsets.entrySet()) { - output.writeString(entry.getKey().name); + for (Map.Entry entry : fieldOffsets.entrySet()) { + output.writeString(entry.getKey()); output.writeVLong(entry.getValue()); } output.writeLong(pointer); @@ -110,96 +113,73 @@ public class AnalyzingCompletionLookupProvider extends CompletionLookupProvider } @Override - public TermsConsumer addField(final FieldInfo field) throws IOException { - - return new TermsConsumer() { + public void write(Fields fields) throws IOException { + for(String field : fields) { + Terms terms = fields.terms(field); + if (terms == null) { + continue; + } + TermsEnum termsEnum = terms.iterator(null); + DocsAndPositionsEnum docsEnum = null; + final SuggestPayload spare = new SuggestPayload(); + int maxAnalyzedPathsForOneInput = 0; final XAnalyzingSuggester.XBuilder builder = new XAnalyzingSuggester.XBuilder(maxSurfaceFormsPerAnalyzedForm, hasPayloads, XAnalyzingSuggester.PAYLOAD_SEP); - final CompletionPostingsConsumer postingsConsumer = new CompletionPostingsConsumer(AnalyzingCompletionLookupProvider.this, builder); - - @Override - public PostingsConsumer startTerm(BytesRef text) throws IOException { - builder.startTerm(text); - return postingsConsumer; - } - - @Override - public Comparator getComparator() throws IOException { - return BytesRef.getUTF8SortedAsUnicodeComparator(); - } - - @Override - public void finishTerm(BytesRef text, TermStats stats) throws IOException { - builder.finishTerm(stats.docFreq); // use doc freq as a fallback - } - - @Override - public void finish(long sumTotalTermFreq, long sumDocFreq, int docCount) throws IOException { - /* - * Here we are done processing the field and we can - * buid the FST and write it to disk. - */ - FST> build = builder.build(); - assert build != null || docCount == 0 : "the FST is null but docCount is != 0 actual value: [" + docCount + "]"; - /* - * it's possible that the FST is null if we have 2 segments that get merged - * and all docs that have a value in this field are deleted. This will cause - * a consumer to be created but it doesn't consume any values causing the FSTBuilder - * to return null. - */ - if (build != null) { - fieldOffsets.put(field, output.getFilePointer()); - build.save(output); - /* write some more meta-info */ - output.writeVInt(postingsConsumer.getMaxAnalyzedPathsForOneInput()); - output.writeVInt(maxSurfaceFormsPerAnalyzedForm); - output.writeInt(maxGraphExpansions); // can be negative - int options = 0; - options |= preserveSep ? SERIALIZE_PRESERVE_SEPERATORS : 0; - options |= hasPayloads ? SERIALIZE_HAS_PAYLOADS : 0; - options |= preservePositionIncrements ? 
SERIALIZE_PRESERVE_POSITION_INCREMENTS : 0; - output.writeVInt(options); - output.writeVInt(XAnalyzingSuggester.SEP_LABEL); - output.writeVInt(XAnalyzingSuggester.END_BYTE); - output.writeVInt(XAnalyzingSuggester.PAYLOAD_SEP); - output.writeVInt(XAnalyzingSuggester.HOLE_CHARACTER); + int docCount = 0; + while (true) { + BytesRef term = termsEnum.next(); + if (term == null) { + break; } + docsEnum = termsEnum.docsAndPositions(null, docsEnum, DocsAndPositionsEnum.FLAG_PAYLOADS); + builder.startTerm(term); + int docFreq = 0; + while (docsEnum.nextDoc() != DocIdSetIterator.NO_MORE_DOCS) { + for (int i = 0; i < docsEnum.freq(); i++) { + final int position = docsEnum.nextPosition(); + AnalyzingCompletionLookupProvider.this.parsePayload(docsEnum.getPayload(), spare); + builder.addSurface(spare.surfaceForm.get(), spare.payload.get(), spare.weight); + // multi fields have the same surface form so we sum up here + maxAnalyzedPathsForOneInput = Math.max(maxAnalyzedPathsForOneInput, position + 1); + } + docFreq++; + docCount = Math.max(docCount, docsEnum.docID()+1); + } + builder.finishTerm(docFreq); } - }; + /* + * Here we are done processing the field and we can + * build the FST and write it to disk. + */ + FST> build = builder.build(); + assert build != null || docCount == 0: "the FST is null but docCount is != 0 actual value: [" + docCount + "]"; + /* + * it's possible that the FST is null if we have 2 segments that get merged + * and all docs that have a value in this field are deleted. This will cause + * a consumer to be created but it doesn't consume any values causing the FSTBuilder + * to return null. + */ + if (build != null) { + fieldOffsets.put(field, output.getFilePointer()); + build.save(output); + /* write some more meta-info */ + output.writeVInt(maxAnalyzedPathsForOneInput); + output.writeVInt(maxSurfaceFormsPerAnalyzedForm); + output.writeInt(maxGraphExpansions); // can be negative + int options = 0; + options |= preserveSep ? SERIALIZE_PRESERVE_SEPERATORS : 0; + options |= hasPayloads ? SERIALIZE_HAS_PAYLOADS : 0; + options |= preservePositionIncrements ? 
SERIALIZE_PRESERVE_POSITION_INCREMENTS : 0; + output.writeVInt(options); + output.writeVInt(XAnalyzingSuggester.SEP_LABEL); + output.writeVInt(XAnalyzingSuggester.END_BYTE); + output.writeVInt(XAnalyzingSuggester.PAYLOAD_SEP); + output.writeVInt(XAnalyzingSuggester.HOLE_CHARACTER); + } + } } }; } - private static final class CompletionPostingsConsumer extends PostingsConsumer { - private final SuggestPayload spare = new SuggestPayload(); - private AnalyzingCompletionLookupProvider analyzingSuggestLookupProvider; - private XAnalyzingSuggester.XBuilder builder; - private int maxAnalyzedPathsForOneInput = 0; - - public CompletionPostingsConsumer(AnalyzingCompletionLookupProvider analyzingSuggestLookupProvider, XAnalyzingSuggester.XBuilder builder) { - this.analyzingSuggestLookupProvider = analyzingSuggestLookupProvider; - this.builder = builder; - } - - @Override - public void startDoc(int docID, int freq) throws IOException { - } - - @Override - public void addPosition(int position, BytesRef payload, int startOffset, int endOffset) throws IOException { - analyzingSuggestLookupProvider.parsePayload(payload, spare); - builder.addSurface(spare.surfaceForm.get(), spare.payload.get(), spare.weight); - // multi fields have the same surface form so we sum up here - maxAnalyzedPathsForOneInput = Math.max(maxAnalyzedPathsForOneInput, position + 1); - } - - @Override - public void finishDoc() throws IOException { - } - - public int getMaxAnalyzedPathsForOneInput() { - return maxAnalyzedPathsForOneInput; - } - } @Override public LookupFactory load(IndexInput input) throws IOException { @@ -318,10 +298,15 @@ public class AnalyzingCompletionLookupProvider extends CompletionLookupProvider public long ramBytesUsed() { return ramBytesUsed; } + + @Override + public Iterable getChildResources() { + return Accountables.namedAccountables("field", lookupMap); + } }; } - static class AnalyzingSuggestHolder { + static class AnalyzingSuggestHolder implements Accountable { final boolean preserveSep; final boolean preservePositionIncrements; final int maxSurfaceFormsPerAnalyzedForm; @@ -364,6 +349,24 @@ public class AnalyzingCompletionLookupProvider extends CompletionLookupProvider public boolean hasPayloads() { return hasPayloads; } + + @Override + public long ramBytesUsed() { + if (fst != null) { + return fst.ramBytesUsed(); + } else { + return 0; + } + } + + @Override + public Iterable getChildResources() { + if (fst != null) { + return Collections.singleton(Accountables.namedAccountable("fst", fst)); + } else { + return Collections.emptyList(); + } + } } @Override diff --git a/src/main/java/org/elasticsearch/search/suggest/completion/Completion090PostingsFormat.java b/src/main/java/org/elasticsearch/search/suggest/completion/Completion090PostingsFormat.java index 5a9f6d9ffba..fc9844dbc74 100644 --- a/src/main/java/org/elasticsearch/search/suggest/completion/Completion090PostingsFormat.java +++ b/src/main/java/org/elasticsearch/search/suggest/completion/Completion090PostingsFormat.java @@ -22,10 +22,12 @@ import com.google.common.collect.ImmutableMap; import com.google.common.collect.ImmutableMap.Builder; import org.apache.lucene.codecs.*; import org.apache.lucene.index.*; -import org.apache.lucene.index.FilterAtomicReader.FilterTerms; +import org.apache.lucene.index.FilterLeafReader.FilterTerms; import org.apache.lucene.search.suggest.Lookup; import org.apache.lucene.store.IOContext.Context; import org.apache.lucene.store.*; +import org.apache.lucene.util.Accountable; +import org.apache.lucene.util.Accountables; 
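
The Accountable changes in this hunk give the suggest structures a detailed memory breakdown, not just a single ramBytesUsed() number. A minimal sketch of the pattern, assuming the snapshot API in which getChildResources() returns an Iterable of named child Accountables; the class name is illustrative:

    import java.util.Collections;
    import org.apache.lucene.util.Accountable;
    import org.apache.lucene.util.Accountables;

    class HolderSketch implements Accountable {
        private final Accountable fst; // may be null if no values were indexed

        HolderSketch(Accountable fst) { this.fst = fst; }

        @Override
        public long ramBytesUsed() {
            return fst == null ? 0 : fst.ramBytesUsed();
        }

        @Override
        public Iterable<? extends Accountable> getChildResources() {
            return fst == null
                    ? Collections.<Accountable>emptyList()
                    : Collections.singleton(Accountables.namedAccountable("fst", fst));
        }
    }
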
import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.IOUtils; import org.elasticsearch.ElasticsearchIllegalStateException; @@ -37,8 +39,10 @@ import org.elasticsearch.search.suggest.completion.CompletionTokenStream.ToFinit import java.io.ByteArrayInputStream; import java.io.ByteArrayOutputStream; import java.io.IOException; -import java.util.Comparator; +import java.util.ArrayList; +import java.util.Collections; import java.util.Iterator; +import java.util.List; import java.util.Map; /** @@ -128,35 +132,9 @@ public class Completion090PostingsFormat extends PostingsFormat { } @Override - public TermsConsumer addField(final FieldInfo field) throws IOException { - final TermsConsumer delegateConsumer = delegatesFieldsConsumer.addField(field); - final TermsConsumer suggestTermConsumer = suggestFieldsConsumer.addField(field); - final GroupedPostingsConsumer groupedPostingsConsumer = new GroupedPostingsConsumer(delegateConsumer, suggestTermConsumer); - - return new TermsConsumer() { - @Override - public PostingsConsumer startTerm(BytesRef text) throws IOException { - groupedPostingsConsumer.startTerm(text); - return groupedPostingsConsumer; - } - - @Override - public Comparator getComparator() throws IOException { - return delegateConsumer.getComparator(); - } - - @Override - public void finishTerm(BytesRef text, TermStats stats) throws IOException { - suggestTermConsumer.finishTerm(text, stats); - delegateConsumer.finishTerm(text, stats); - } - - @Override - public void finish(long sumTotalTermFreq, long sumDocFreq, int docCount) throws IOException { - suggestTermConsumer.finish(sumTotalTermFreq, sumDocFreq, docCount); - delegateConsumer.finish(sumTotalTermFreq, sumDocFreq, docCount); - } - }; + public void write(Fields fields) throws IOException { + delegatesFieldsConsumer.write(fields); + suggestFieldsConsumer.write(fields); } @Override @@ -165,46 +143,9 @@ public class Completion090PostingsFormat extends PostingsFormat { } } - private class GroupedPostingsConsumer extends PostingsConsumer { - - private TermsConsumer[] termsConsumers; - private PostingsConsumer[] postingsConsumers; - - public GroupedPostingsConsumer(TermsConsumer... 
termsConsumersArgs) { - termsConsumers = termsConsumersArgs; - postingsConsumers = new PostingsConsumer[termsConsumersArgs.length]; - } - - @Override - public void startDoc(int docID, int freq) throws IOException { - for (PostingsConsumer postingsConsumer : postingsConsumers) { - postingsConsumer.startDoc(docID, freq); - } - } - - @Override - public void addPosition(int position, BytesRef payload, int startOffset, int endOffset) throws IOException { - for (PostingsConsumer postingsConsumer : postingsConsumers) { - postingsConsumer.addPosition(position, payload, startOffset, endOffset); - } - } - - @Override - public void finishDoc() throws IOException { - for (PostingsConsumer postingsConsumer : postingsConsumers) { - postingsConsumer.finishDoc(); - } - } - - public void startTerm(BytesRef text) throws IOException { - for (int i = 0; i < termsConsumers.length; i++) { - postingsConsumers[i] = termsConsumers[i].startTerm(text); - } - } - } - private static class CompletionFieldsProducer extends FieldsProducer { - + // TODO make this class lazyload all the things in order to take advantage of the new merge instance API + // today we just load everything up-front private final FieldsProducer delegateProducer; private final LookupFactory lookupFactory; private final int version; @@ -276,10 +217,25 @@ public class Completion090PostingsFormat extends PostingsFormat { return (lookupFactory == null ? 0 : lookupFactory.ramBytesUsed()) + delegateProducer.ramBytesUsed(); } + @Override + public Iterable getChildResources() { + List resources = new ArrayList<>(); + if (lookupFactory != null) { + resources.add(Accountables.namedAccountable("lookup", lookupFactory)); + } + resources.add(Accountables.namedAccountable("delegate", delegateProducer)); + return Collections.unmodifiableList(resources); + } + @Override public void checkIntegrity() throws IOException { delegateProducer.checkIntegrity(); } + + @Override + public FieldsProducer getMergeInstance() throws IOException { + return delegateProducer.getMergeInstance(); + } } public static final class CompletionTerms extends FilterTerms { @@ -351,8 +307,8 @@ public class Completion090PostingsFormat extends PostingsFormat { public CompletionStats completionStats(IndexReader indexReader, String ... fields) { CompletionStats completionStats = new CompletionStats(); - for (AtomicReaderContext atomicReaderContext : indexReader.leaves()) { - AtomicReader atomicReader = atomicReaderContext.reader(); + for (LeafReaderContext atomicReaderContext : indexReader.leaves()) { + LeafReader atomicReader = atomicReaderContext.reader(); try { for (String fieldName : atomicReader.fields()) { Terms terms = atomicReader.fields().terms(fieldName); @@ -369,10 +325,9 @@ public class Completion090PostingsFormat extends PostingsFormat { return completionStats; } - public static abstract class LookupFactory { + public static abstract class LookupFactory implements Accountable { public abstract Lookup getLookup(CompletionFieldMapper mapper, CompletionSuggestionContext suggestionContext); public abstract CompletionStats stats(String ... 
fields); abstract AnalyzingCompletionLookupProvider.AnalyzingSuggestHolder getAnalyzingSuggestHolder(CompletionFieldMapper mapper); - public abstract long ramBytesUsed(); } } diff --git a/src/main/java/org/elasticsearch/search/suggest/completion/CompletionSuggester.java b/src/main/java/org/elasticsearch/search/suggest/completion/CompletionSuggester.java index b0c11e8f707..3e3ceba438e 100644 --- a/src/main/java/org/elasticsearch/search/suggest/completion/CompletionSuggester.java +++ b/src/main/java/org/elasticsearch/search/suggest/completion/CompletionSuggester.java @@ -19,8 +19,8 @@ package org.elasticsearch.search.suggest.completion; import com.google.common.collect.Maps; -import org.apache.lucene.index.AtomicReader; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReader; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.IndexReader; import org.apache.lucene.index.Terms; import org.apache.lucene.search.suggest.Lookup; @@ -61,8 +61,8 @@ public class CompletionSuggester extends Suggester String fieldName = suggestionContext.getField(); Map results = Maps.newHashMapWithExpectedSize(indexReader.leaves().size() * suggestionContext.getSize()); - for (AtomicReaderContext atomicReaderContext : indexReader.leaves()) { - AtomicReader atomicReader = atomicReaderContext.reader(); + for (LeafReaderContext atomicReaderContext : indexReader.leaves()) { + LeafReader atomicReader = atomicReaderContext.reader(); Terms terms = atomicReader.fields().terms(fieldName); if (terms instanceof Completion090PostingsFormat.CompletionTerms) { final Completion090PostingsFormat.CompletionTerms lookupTerms = (Completion090PostingsFormat.CompletionTerms) terms; diff --git a/src/main/java/org/elasticsearch/search/suggest/context/GeolocationContextMapping.java b/src/main/java/org/elasticsearch/search/suggest/context/GeolocationContextMapping.java index d63053b9e33..0506d7f145e 100644 --- a/src/main/java/org/elasticsearch/search/suggest/context/GeolocationContextMapping.java +++ b/src/main/java/org/elasticsearch/search/suggest/context/GeolocationContextMapping.java @@ -23,6 +23,7 @@ import com.carrotsearch.hppc.IntOpenHashSet; import com.google.common.collect.Lists; import org.apache.lucene.analysis.PrefixAnalyzer.PrefixTokenFilter; import org.apache.lucene.analysis.TokenStream; +import org.apache.lucene.index.DocValuesType; import org.apache.lucene.index.IndexableField; import org.apache.lucene.util.automaton.Automata; import org.apache.lucene.util.automaton.Automaton; @@ -624,7 +625,7 @@ public class GeolocationContextMapping extends ContextMapping { IndexableField latField = latFields[i]; assert lonField.fieldType().docValueType() == latField.fieldType().docValueType(); // we write doc values fields differently: one field for all values, so we need to only care about indexed fields - if (lonField.fieldType().docValueType() == null) { + if (lonField.fieldType().docValueType() == DocValuesType.NONE) { spare.reset(latField.numericValue().doubleValue(), lonField.numericValue().doubleValue()); geohashes.add(spare.geohash()); } diff --git a/src/test/java/org/apache/lucene/analysis/miscellaneous/TruncateTokenFilterTests.java b/src/test/java/org/apache/lucene/analysis/miscellaneous/TruncateTokenFilterTests.java index 893020f8f8a..9f95ec6147d 100644 --- a/src/test/java/org/apache/lucene/analysis/miscellaneous/TruncateTokenFilterTests.java +++ b/src/test/java/org/apache/lucene/analysis/miscellaneous/TruncateTokenFilterTests.java @@ -20,16 +20,14 @@ package 
org.apache.lucene.analysis.miscellaneous; import org.apache.lucene.analysis.Analyzer; +import org.apache.lucene.analysis.MockTokenizer; import org.apache.lucene.analysis.TokenStream; import org.apache.lucene.analysis.Tokenizer; -import org.apache.lucene.analysis.core.WhitespaceTokenizer; import org.apache.lucene.analysis.tokenattributes.CharTermAttribute; -import org.elasticsearch.common.lucene.Lucene; import org.elasticsearch.test.ElasticsearchTestCase; import org.junit.Test; import java.io.IOException; -import java.io.Reader; import static org.hamcrest.Matchers.equalTo; /** @@ -41,9 +39,8 @@ public class TruncateTokenFilterTests extends ElasticsearchTestCase { public void simpleTest() throws IOException { Analyzer analyzer = new Analyzer() { @Override - protected TokenStreamComponents createComponents(String fieldName, - Reader reader) { - Tokenizer t = new WhitespaceTokenizer(Lucene.VERSION, reader); + protected TokenStreamComponents createComponents(String fieldName) { + Tokenizer t = new MockTokenizer(MockTokenizer.WHITESPACE, false); return new TokenStreamComponents(t, new TruncateTokenFilter(t, 3)); } }; diff --git a/src/test/java/org/apache/lucene/analysis/miscellaneous/UniqueTokenFilterTests.java b/src/test/java/org/apache/lucene/analysis/miscellaneous/UniqueTokenFilterTests.java index e1d49f3971b..e8c074be477 100644 --- a/src/test/java/org/apache/lucene/analysis/miscellaneous/UniqueTokenFilterTests.java +++ b/src/test/java/org/apache/lucene/analysis/miscellaneous/UniqueTokenFilterTests.java @@ -20,16 +20,14 @@ package org.apache.lucene.analysis.miscellaneous; import org.apache.lucene.analysis.Analyzer; +import org.apache.lucene.analysis.MockTokenizer; import org.apache.lucene.analysis.TokenStream; import org.apache.lucene.analysis.Tokenizer; -import org.apache.lucene.analysis.core.WhitespaceTokenizer; import org.apache.lucene.analysis.tokenattributes.CharTermAttribute; -import org.elasticsearch.common.lucene.Lucene; import org.elasticsearch.test.ElasticsearchTestCase; import org.junit.Test; import java.io.IOException; -import java.io.Reader; import static org.hamcrest.Matchers.equalTo; @@ -41,9 +39,8 @@ public class UniqueTokenFilterTests extends ElasticsearchTestCase { public void simpleTest() throws IOException { Analyzer analyzer = new Analyzer() { @Override - protected TokenStreamComponents createComponents(String fieldName, - Reader reader) { - Tokenizer t = new WhitespaceTokenizer(Lucene.VERSION, reader); + protected TokenStreamComponents createComponents(String fieldName) { + Tokenizer t = new MockTokenizer(MockTokenizer.WHITESPACE, false); return new TokenStreamComponents(t, new UniqueTokenFilter(t)); } }; diff --git a/src/test/java/org/apache/lucene/queries/BlendedTermQueryTest.java b/src/test/java/org/apache/lucene/queries/BlendedTermQueryTest.java index 78d40104adf..36507a7df2b 100644 --- a/src/test/java/org/apache/lucene/queries/BlendedTermQueryTest.java +++ b/src/test/java/org/apache/lucene/queries/BlendedTermQueryTest.java @@ -24,7 +24,7 @@ import org.apache.lucene.document.Field; import org.apache.lucene.document.FieldType; import org.apache.lucene.document.TextField; import org.apache.lucene.index.DirectoryReader; -import org.apache.lucene.index.FieldInfo; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexWriter; import org.apache.lucene.index.Term; import org.apache.lucene.search.*; @@ -110,12 +110,12 @@ public class BlendedTermQueryTest extends ElasticsearchLuceneTestCase { }; final boolean omitNorms = random().nextBoolean(); 
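
The analyzer test changes in this area all follow the same Lucene 5 Analyzer contract: createComponents no longer receives a Reader (input is pushed in later through Tokenizer.setReader), and the Version-taking tokenizer constructors are gone. A minimal sketch mirroring the pattern used in these tests (MockTokenizer comes from the Lucene test framework):

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.MockTokenizer;
    import org.apache.lucene.analysis.Tokenizer;

    class AnalyzerSketch {
        static Analyzer whitespaceAnalyzer() {
            return new Analyzer() {
                @Override
                protected TokenStreamComponents createComponents(String fieldName) {
                    // No Reader parameter and no Version argument anymore.
                    Tokenizer t = new MockTokenizer(MockTokenizer.WHITESPACE, false);
                    return new TokenStreamComponents(t);
                }
            };
        }
    }
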
FieldType ft = new FieldType(TextField.TYPE_NOT_STORED); - ft.setIndexOptions(random().nextBoolean() ? FieldInfo.IndexOptions.DOCS_ONLY : FieldInfo.IndexOptions.DOCS_AND_FREQS); + ft.setIndexOptions(random().nextBoolean() ? IndexOptions.DOCS : IndexOptions.DOCS_AND_FREQS); ft.setOmitNorms(omitNorms); ft.freeze(); FieldType ft1 = new FieldType(TextField.TYPE_NOT_STORED); - ft1.setIndexOptions(random().nextBoolean() ? FieldInfo.IndexOptions.DOCS_ONLY : FieldInfo.IndexOptions.DOCS_AND_FREQS); + ft1.setIndexOptions(random().nextBoolean() ? IndexOptions.DOCS : IndexOptions.DOCS_AND_FREQS); ft1.setOmitNorms(omitNorms); ft1.freeze(); for (int i = 0; i < username.length; i++) { diff --git a/src/test/java/org/apache/lucene/search/postingshighlight/CustomPostingsHighlighterTests.java b/src/test/java/org/apache/lucene/search/postingshighlight/CustomPostingsHighlighterTests.java index 7bfac7bccc9..b8a0fb8f776 100644 --- a/src/test/java/org/apache/lucene/search/postingshighlight/CustomPostingsHighlighterTests.java +++ b/src/test/java/org/apache/lucene/search/postingshighlight/CustomPostingsHighlighterTests.java @@ -50,7 +50,7 @@ public class CustomPostingsHighlighterTests extends ElasticsearchLuceneTestCase RandomIndexWriter iw = new RandomIndexWriter(random(), dir, iwc); FieldType offsetsType = new FieldType(TextField.TYPE_STORED); - offsetsType.setIndexOptions(FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); + offsetsType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); Field body = new Field("body", "", offsetsType); final String firstValue = "This is a test. Just a test highlighting from postings highlighter."; Document doc = new Document(); @@ -135,7 +135,7 @@ public class CustomPostingsHighlighterTests extends ElasticsearchLuceneTestCase RandomIndexWriter iw = new RandomIndexWriter(random(), dir, iwc); FieldType offsetsType = new FieldType(TextField.TYPE_STORED); - offsetsType.setIndexOptions(FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); + offsetsType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); //good position but only one match final String firstValue = "This is a test. Just a test1 highlighting from postings highlighter."; @@ -250,7 +250,7 @@ public class CustomPostingsHighlighterTests extends ElasticsearchLuceneTestCase RandomIndexWriter iw = new RandomIndexWriter(random(), dir, iwc); FieldType offsetsType = new FieldType(TextField.TYPE_STORED); - offsetsType.setIndexOptions(FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); + offsetsType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); //good position but only one match final String firstValue = "This is a test. 
Just a test1 highlighting from postings highlighter."; @@ -362,7 +362,7 @@ public class CustomPostingsHighlighterTests extends ElasticsearchLuceneTestCase RandomIndexWriter iw = new RandomIndexWriter(random(), dir, iwc); FieldType offsetsType = new FieldType(TextField.TYPE_STORED); - offsetsType.setIndexOptions(FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); + offsetsType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); Field body = new Field("body", "", offsetsType); Field none = new Field("none", "", offsetsType); Document doc = new Document(); @@ -418,7 +418,7 @@ public class CustomPostingsHighlighterTests extends ElasticsearchLuceneTestCase RandomIndexWriter iw = new RandomIndexWriter(random(), dir, iwc); FieldType offsetsType = new FieldType(TextField.TYPE_STORED); - offsetsType.setIndexOptions(FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); + offsetsType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); Field body = new Field("body", "", offsetsType); Field none = new Field("none", "", offsetsType); Document doc = new Document(); diff --git a/src/test/java/org/apache/lucene/search/postingshighlight/XPostingsHighlighterTests.java b/src/test/java/org/apache/lucene/search/postingshighlight/XPostingsHighlighterTests.java index 68f4d233378..6f70595d005 100644 --- a/src/test/java/org/apache/lucene/search/postingshighlight/XPostingsHighlighterTests.java +++ b/src/test/java/org/apache/lucene/search/postingshighlight/XPostingsHighlighterTests.java @@ -56,7 +56,7 @@ public class XPostingsHighlighterTests extends ElasticsearchLuceneTestCase { RandomIndexWriter iw = new RandomIndexWriter(random(), dir, iwc); FieldType offsetsType = new FieldType(TextField.TYPE_STORED); - offsetsType.setIndexOptions(FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); + offsetsType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); Field body = new Field("body", "", offsetsType); final String firstValue = "This is a test. Just a test highlighting from postings highlighter."; Document doc = new Document(); @@ -153,7 +153,7 @@ public class XPostingsHighlighterTests extends ElasticsearchLuceneTestCase { RandomIndexWriter iw = new RandomIndexWriter(random(), dir, iwc); FieldType offsetsType = new FieldType(TextField.TYPE_STORED); - offsetsType.setIndexOptions(FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); + offsetsType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); Field body = new Field("body", "", offsetsType); final String firstValue = "This is a test. Just a test highlighting from postings highlighter."; Document doc = new Document(); @@ -259,7 +259,7 @@ public class XPostingsHighlighterTests extends ElasticsearchLuceneTestCase { RandomIndexWriter iw = new RandomIndexWriter(random(), dir, iwc); FieldType offsetsType = new FieldType(TextField.TYPE_STORED); - offsetsType.setIndexOptions(FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); + offsetsType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); Field body = new Field("body", "", offsetsType); final String firstValue = "This is a highlighting test. 
Just a test highlighting from postings highlighter."; Document doc = new Document(); @@ -332,7 +332,7 @@ public class XPostingsHighlighterTests extends ElasticsearchLuceneTestCase { RandomIndexWriter iw = new RandomIndexWriter(random(), dir, iwc); FieldType offsetsType = new FieldType(TextField.TYPE_STORED); - offsetsType.setIndexOptions(FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); + offsetsType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); Field body = new Field("body", "", offsetsType); final String firstValue = "This is the first sentence. This is the second sentence."; Document doc = new Document(); @@ -429,7 +429,7 @@ public class XPostingsHighlighterTests extends ElasticsearchLuceneTestCase { RandomIndexWriter iw = new RandomIndexWriter(random(), dir, iwc); FieldType offsetsType = new FieldType(TextField.TYPE_STORED); - offsetsType.setIndexOptions(FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); + offsetsType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); Field body = new Field("body", "", offsetsType); Document doc = new Document(); doc.add(body); @@ -495,7 +495,7 @@ public class XPostingsHighlighterTests extends ElasticsearchLuceneTestCase { RandomIndexWriter iw = new RandomIndexWriter(random(), dir, iwc); FieldType offsetsType = new FieldType(TextField.TYPE_STORED); - offsetsType.setIndexOptions(FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); + offsetsType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); Field body = new Field("body", "", offsetsType); Field none = new Field("none", "", offsetsType); Document doc = new Document(); @@ -552,7 +552,7 @@ public class XPostingsHighlighterTests extends ElasticsearchLuceneTestCase { RandomIndexWriter iw = new RandomIndexWriter(random(), dir, iwc); FieldType offsetsType = new FieldType(TextField.TYPE_STORED); - offsetsType.setIndexOptions(FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); + offsetsType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); Field body = new Field("body", "", offsetsType); Document doc = new Document(); doc.add(body); @@ -610,7 +610,7 @@ public class XPostingsHighlighterTests extends ElasticsearchLuceneTestCase { RandomIndexWriter iw = new RandomIndexWriter(random(), dir, iwc); FieldType offsetsType = new FieldType(TextField.TYPE_STORED); - offsetsType.setIndexOptions(FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); + offsetsType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); Field body = new Field("body", "", offsetsType); Document doc = new Document(); doc.add(body); @@ -680,7 +680,7 @@ public class XPostingsHighlighterTests extends ElasticsearchLuceneTestCase { RandomIndexWriter iw = new RandomIndexWriter(random(), dir, iwc); final FieldType fieldType = new FieldType(TextField.TYPE_STORED); - fieldType.setIndexOptions(FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); + fieldType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); final Field body = new Field("body", bodyText, fieldType); Document doc = new Document(); @@ -715,7 +715,7 @@ public class XPostingsHighlighterTests extends ElasticsearchLuceneTestCase { RandomIndexWriter iw = new RandomIndexWriter(random(), dir, iwc); FieldType offsetsType = new FieldType(TextField.TYPE_STORED); - offsetsType.setIndexOptions(FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); + 
offsetsType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); Field body = new Field("body", "", offsetsType); Document doc = new Document(); doc.add(body); @@ -749,7 +749,7 @@ public class XPostingsHighlighterTests extends ElasticsearchLuceneTestCase { RandomIndexWriter iw = new RandomIndexWriter(random(), dir, iwc); FieldType offsetsType = new FieldType(TextField.TYPE_STORED); - offsetsType.setIndexOptions(FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); + offsetsType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); Field body = new Field("body", "", offsetsType); Document doc = new Document(); doc.add(body); @@ -786,7 +786,7 @@ public class XPostingsHighlighterTests extends ElasticsearchLuceneTestCase { RandomIndexWriter iw = new RandomIndexWriter(random(), dir, iwc); FieldType offsetsType = new FieldType(TextField.TYPE_STORED); - offsetsType.setIndexOptions(FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); + offsetsType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); Document doc = new Document(); for(int i = 0; i < 3 ; i++) { @@ -822,7 +822,7 @@ public class XPostingsHighlighterTests extends ElasticsearchLuceneTestCase { RandomIndexWriter iw = new RandomIndexWriter(random(), dir, iwc); FieldType offsetsType = new FieldType(TextField.TYPE_STORED); - offsetsType.setIndexOptions(FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); + offsetsType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); Field body = new Field("body", "", offsetsType); Field title = new Field("title", "", offsetsType); Document doc = new Document(); @@ -864,7 +864,7 @@ public class XPostingsHighlighterTests extends ElasticsearchLuceneTestCase { RandomIndexWriter iw = new RandomIndexWriter(random(), dir, iwc); FieldType offsetsType = new FieldType(TextField.TYPE_STORED); - offsetsType.setIndexOptions(FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); + offsetsType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); Field body = new Field("body", "", offsetsType); Document doc = new Document(); doc.add(body); @@ -902,7 +902,7 @@ public class XPostingsHighlighterTests extends ElasticsearchLuceneTestCase { RandomIndexWriter iw = new RandomIndexWriter(random(), dir, iwc); FieldType offsetsType = new FieldType(TextField.TYPE_STORED); - offsetsType.setIndexOptions(FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); + offsetsType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); Field body = new Field("body", "", offsetsType); Document doc = new Document(); doc.add(body); @@ -937,7 +937,7 @@ public class XPostingsHighlighterTests extends ElasticsearchLuceneTestCase { RandomIndexWriter iw = new RandomIndexWriter(random(), dir, iwc); FieldType positionsType = new FieldType(TextField.TYPE_STORED); - positionsType.setIndexOptions(FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS); + positionsType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS); Field body = new Field("body", "", positionsType); Field title = new StringField("title", "", Field.Store.YES); Document doc = new Document(); @@ -992,7 +992,7 @@ public class XPostingsHighlighterTests extends ElasticsearchLuceneTestCase { RandomIndexWriter iw = new RandomIndexWriter(random(), dir, analyzer); FieldType positionsType = new FieldType(TextField.TYPE_STORED); - positionsType.setIndexOptions(FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); + 
positionsType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); Field body = new Field("body", text, positionsType); Document document = new Document(); document.add(body); @@ -1023,7 +1023,7 @@ public class XPostingsHighlighterTests extends ElasticsearchLuceneTestCase { Analyzer analyzer = new MockAnalyzer(random(), MockTokenizer.SIMPLE, true); RandomIndexWriter iw = new RandomIndexWriter(random(), dir, analyzer); FieldType positionsType = new FieldType(TextField.TYPE_STORED); - positionsType.setIndexOptions(FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); + positionsType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); Field body = new Field("body", text, positionsType); Document document = new Document(); document.add(body); @@ -1054,7 +1054,7 @@ public class XPostingsHighlighterTests extends ElasticsearchLuceneTestCase { Analyzer analyzer = new MockAnalyzer(random(), MockTokenizer.SIMPLE, true); RandomIndexWriter iw = new RandomIndexWriter(random(), dir, analyzer); FieldType positionsType = new FieldType(TextField.TYPE_STORED); - positionsType.setIndexOptions(FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); + positionsType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); Field body = new Field("body", text, positionsType); Document document = new Document(); document.add(body); @@ -1085,7 +1085,7 @@ public class XPostingsHighlighterTests extends ElasticsearchLuceneTestCase { RandomIndexWriter iw = new RandomIndexWriter(random(), dir, iwc); FieldType offsetsType = new FieldType(TextField.TYPE_STORED); - offsetsType.setIndexOptions(FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); + offsetsType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); Field body = new Field("body", "", offsetsType); Document doc = new Document(); doc.add(body); @@ -1115,7 +1115,7 @@ public class XPostingsHighlighterTests extends ElasticsearchLuceneTestCase { Analyzer analyzer = new MockAnalyzer(random(), MockTokenizer.SIMPLE, true); RandomIndexWriter iw = new RandomIndexWriter(random(), dir, analyzer); FieldType positionsType = new FieldType(TextField.TYPE_STORED); - positionsType.setIndexOptions(FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); + positionsType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); Field body = new Field("body", "This sentence has both terms. 
This sentence has only terms.", positionsType); Document document = new Document(); document.add(body); @@ -1146,7 +1146,7 @@ public class XPostingsHighlighterTests extends ElasticsearchLuceneTestCase { RandomIndexWriter iw = new RandomIndexWriter(random(), dir, iwc); FieldType offsetsType = new FieldType(TextField.TYPE_STORED); - offsetsType.setIndexOptions(FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); + offsetsType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); Field body = new Field("body", "", offsetsType); Document doc = new Document(); doc.add(body); @@ -1183,7 +1183,7 @@ public class XPostingsHighlighterTests extends ElasticsearchLuceneTestCase { RandomIndexWriter iw = new RandomIndexWriter(random(), dir, iwc); FieldType offsetsType = new FieldType(TextField.TYPE_STORED); - offsetsType.setIndexOptions(FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); + offsetsType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); Field body = new Field("body", "", offsetsType); Document doc = new Document(); doc.add(body); @@ -1224,7 +1224,7 @@ public class XPostingsHighlighterTests extends ElasticsearchLuceneTestCase { Document doc = new Document(); FieldType offsetsType = new FieldType(TextField.TYPE_NOT_STORED); - offsetsType.setIndexOptions(FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); + offsetsType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); final String text = "This is a test. Just highlighting from postings. This is also a much sillier test. Feel free to test test test test test test test."; Field body = new Field("body", text, offsetsType); doc.add(body); @@ -1272,7 +1272,7 @@ public class XPostingsHighlighterTests extends ElasticsearchLuceneTestCase { RandomIndexWriter iw = new RandomIndexWriter(random(), dir, iwc); FieldType offsetsType = new FieldType(TextField.TYPE_STORED); - offsetsType.setIndexOptions(FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); + offsetsType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); Document doc = new Document(); Field body = new Field("body", "test this is. another sentence this test has. far away is that planet.", offsetsType); @@ -1304,7 +1304,7 @@ public class XPostingsHighlighterTests extends ElasticsearchLuceneTestCase { RandomIndexWriter iw = new RandomIndexWriter(random(), dir, iwc); FieldType offsetsType = new FieldType(TextField.TYPE_STORED); - offsetsType.setIndexOptions(FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); + offsetsType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); Document doc = new Document(); Field body = new Field("body", "test this is. another sentence this test has. far away is that planet.", offsetsType); @@ -1341,7 +1341,7 @@ public class XPostingsHighlighterTests extends ElasticsearchLuceneTestCase { RandomIndexWriter iw = new RandomIndexWriter(random(), dir, iwc); FieldType offsetsType = new FieldType(TextField.TYPE_STORED); - offsetsType.setIndexOptions(FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); + offsetsType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); Document doc = new Document(); Field body = new Field("body", "test this is. another sentence this test has. 
far away is that planet.", offsetsType); @@ -1378,7 +1378,7 @@ public class XPostingsHighlighterTests extends ElasticsearchLuceneTestCase { RandomIndexWriter iw = new RandomIndexWriter(random(), dir, iwc); FieldType offsetsType = new FieldType(TextField.TYPE_STORED); - offsetsType.setIndexOptions(FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); + offsetsType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); Document doc = new Document(); Field body = new Field("body", "test this is. another sentence this test has. far away is that planet.", offsetsType); @@ -1408,7 +1408,7 @@ public class XPostingsHighlighterTests extends ElasticsearchLuceneTestCase { RandomIndexWriter iw = new RandomIndexWriter(random(), dir, iwc); FieldType offsetsType = new FieldType(TextField.TYPE_STORED); - offsetsType.setIndexOptions(FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); + offsetsType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); Document doc = new Document(); doc.add(new Field("body", " ", offsetsType)); @@ -1445,7 +1445,7 @@ public class XPostingsHighlighterTests extends ElasticsearchLuceneTestCase { RandomIndexWriter iw = new RandomIndexWriter(random(), dir, iwc); FieldType offsetsType = new FieldType(TextField.TYPE_STORED); - offsetsType.setIndexOptions(FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); + offsetsType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); Document doc = new Document(); doc.add(new Field("body", "", offsetsType)); @@ -1482,7 +1482,7 @@ public class XPostingsHighlighterTests extends ElasticsearchLuceneTestCase { RandomIndexWriter iw = new RandomIndexWriter(random(), dir, iwc); FieldType offsetsType = new FieldType(TextField.TYPE_STORED); - offsetsType.setIndexOptions(FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); + offsetsType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS); int numDocs = scaledRandomIntBetween(100, 1000); for(int i=0;i booleanFilters = new ArrayList<>(); booleanFilters.add(createBooleanFilter( newFilterClause(0, 'a', MUST, false), newFilterClause(1, 'b', MUST, false), @@ -334,7 +337,16 @@ public class XBooleanFilterTests extends ElasticsearchLuceneTestCase { ); DocIdSet docIdSet = booleanFilter.getDocIdSet(reader.getContext(), reader.getLiveDocs()); - assertThat(docIdSet, equalTo(null)); + boolean empty = false; + if (docIdSet == null) { + empty = true; + } else { + DocIdSetIterator it = docIdSet.iterator(); + if (it == null || it.nextDoc() == DocIdSetIterator.NO_MORE_DOCS) { + empty = true; + } + } + assertTrue(empty); } @Test @@ -530,7 +542,7 @@ public class XBooleanFilterTests extends ElasticsearchLuceneTestCase { } - public static final class PrettyPrintFieldCacheTermsFilter extends FieldCacheTermsFilter { + public static final class PrettyPrintFieldCacheTermsFilter extends DocValuesTermsFilter { private final String value; private final String field; @@ -550,7 +562,7 @@ public class XBooleanFilterTests extends ElasticsearchLuceneTestCase { public final class EmptyFilter extends Filter { @Override - public DocIdSet getDocIdSet(AtomicReaderContext context, Bits acceptDocs) throws IOException { + public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptDocs) throws IOException { return random().nextBoolean() ? 
new Empty() : null; } @@ -560,6 +572,11 @@ public class XBooleanFilterTests extends ElasticsearchLuceneTestCase { public DocIdSetIterator iterator() throws IOException { return null; } + + @Override + public long ramBytesUsed() { + return 0; + } } } diff --git a/src/test/java/org/elasticsearch/common/lucene/uid/VersionsTests.java b/src/test/java/org/elasticsearch/common/lucene/uid/VersionsTests.java index d4f32d16af9..d9dcd408523 100644 --- a/src/test/java/org/elasticsearch/common/lucene/uid/VersionsTests.java +++ b/src/test/java/org/elasticsearch/common/lucene/uid/VersionsTests.java @@ -27,7 +27,7 @@ import org.apache.lucene.analysis.tokenattributes.PayloadAttribute; import org.apache.lucene.document.*; import org.apache.lucene.document.Field.Store; import org.apache.lucene.index.*; -import org.apache.lucene.index.FieldInfo.IndexOptions; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.store.Directory; import org.apache.lucene.util.BytesRef; import org.elasticsearch.common.Numbers; @@ -64,7 +64,7 @@ public class VersionsTests extends ElasticsearchLuceneTestCase { @Test public void testVersions() throws Exception { Directory dir = newDirectory(); - IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(Lucene.VERSION, Lucene.STANDARD_ANALYZER)); + IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(Lucene.STANDARD_ANALYZER)); DirectoryReader directoryReader = DirectoryReader.open(writer, true); MatcherAssert.assertThat(Versions.loadVersion(directoryReader, new Term(UidFieldMapper.NAME, "1")), equalTo(Versions.NOT_FOUND)); @@ -116,7 +116,7 @@ public class VersionsTests extends ElasticsearchLuceneTestCase { @Test public void testNestedDocuments() throws IOException { Directory dir = newDirectory(); - IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(Lucene.VERSION, Lucene.STANDARD_ANALYZER)); + IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(Lucene.STANDARD_ANALYZER)); List docs = new ArrayList<>(); for (int i = 0; i < 4; ++i) { @@ -157,7 +157,7 @@ public class VersionsTests extends ElasticsearchLuceneTestCase { @Test public void testBackwardCompatibility() throws IOException { Directory dir = newDirectory(); - IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(Lucene.VERSION, Lucene.STANDARD_ANALYZER)); + IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(Lucene.STANDARD_ANALYZER)); DirectoryReader directoryReader = DirectoryReader.open(writer, true); MatcherAssert.assertThat(Versions.loadVersion(directoryReader, new Term(UidFieldMapper.NAME, "1")), equalTo(Versions.NOT_FOUND)); @@ -186,7 +186,6 @@ public class VersionsTests extends ElasticsearchLuceneTestCase { private static final FieldType FIELD_TYPE = new FieldType(); static { FIELD_TYPE.setTokenized(true); - FIELD_TYPE.setIndexed(true); FIELD_TYPE.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS); FIELD_TYPE.setStored(true); FIELD_TYPE.freeze(); @@ -224,7 +223,7 @@ public class VersionsTests extends ElasticsearchLuceneTestCase { @Test public void testMergingOldIndices() throws Exception { - final IndexWriterConfig iwConf = new IndexWriterConfig(Lucene.VERSION, new KeywordAnalyzer()); + final IndexWriterConfig iwConf = new IndexWriterConfig(new KeywordAnalyzer()); iwConf.setMergePolicy(new ElasticsearchMergePolicy(iwConf.getMergePolicy())); final Directory dir = newDirectory(); final IndexWriter iw = new IndexWriter(dir, iwConf); @@ -267,7 +266,7 @@ public class VersionsTests extends ElasticsearchLuceneTestCase { // Force merge 
and check versions iw.forceMerge(1, true); - final AtomicReader ir = SlowCompositeReaderWrapper.wrap(DirectoryReader.open(iw.getDirectory())); + final LeafReader ir = SlowCompositeReaderWrapper.wrap(DirectoryReader.open(iw.getDirectory())); final NumericDocValues versions = ir.getNumericDocValues(VersionFieldMapper.NAME); assertThat(versions, notNullValue()); for (int i = 0; i < ir.maxDoc(); ++i) { diff --git a/src/test/java/org/elasticsearch/deps/lucene/SimpleLuceneTests.java b/src/test/java/org/elasticsearch/deps/lucene/SimpleLuceneTests.java index 3a70c05a121..a5ac75a8ab3 100644 --- a/src/test/java/org/elasticsearch/deps/lucene/SimpleLuceneTests.java +++ b/src/test/java/org/elasticsearch/deps/lucene/SimpleLuceneTests.java @@ -29,7 +29,6 @@ import org.apache.lucene.util.BytesRefBuilder; import org.apache.lucene.util.NumericUtils; import org.elasticsearch.common.lucene.Lucene; import org.elasticsearch.test.ElasticsearchTestCase; -import org.elasticsearch.test.ElasticsearchTestCase.UsesLuceneFieldCacheOnPurpose; import org.junit.Test; import java.io.IOException; @@ -40,19 +39,20 @@ import static org.hamcrest.Matchers.equalTo; /** * */ -@UsesLuceneFieldCacheOnPurpose public class SimpleLuceneTests extends ElasticsearchTestCase { @Test public void testSortValues() throws Exception { Directory dir = new RAMDirectory(); - IndexWriter indexWriter = new IndexWriter(dir, new IndexWriterConfig(Lucene.VERSION, Lucene.STANDARD_ANALYZER)); + IndexWriter indexWriter = new IndexWriter(dir, new IndexWriterConfig(Lucene.STANDARD_ANALYZER)); for (int i = 0; i < 10; i++) { Document document = new Document(); - document.add(new TextField("str", new String(new char[]{(char) (97 + i), (char) (97 + i)}), Field.Store.YES)); + String text = new String(new char[]{(char) (97 + i), (char) (97 + i)}); + document.add(new TextField("str", text, Field.Store.YES)); + document.add(new SortedDocValuesField("str", new BytesRef(text))); indexWriter.addDocument(document); } - IndexReader reader = DirectoryReader.open(indexWriter, true); + IndexReader reader = SlowCompositeReaderWrapper.wrap(DirectoryReader.open(indexWriter, true)); IndexSearcher searcher = new IndexSearcher(reader); TopFieldDocs docs = searcher.search(new MatchAllDocsQuery(), null, 10, new Sort(new SortField("str", SortField.Type.STRING))); for (int i = 0; i < 10; i++) { @@ -64,7 +64,7 @@ public class SimpleLuceneTests extends ElasticsearchTestCase { @Test public void testAddDocAfterPrepareCommit() throws Exception { Directory dir = new RAMDirectory(); - IndexWriter indexWriter = new IndexWriter(dir, new IndexWriterConfig(Lucene.VERSION, Lucene.STANDARD_ANALYZER)); + IndexWriter indexWriter = new IndexWriter(dir, new IndexWriterConfig(Lucene.STANDARD_ANALYZER)); Document document = new Document(); document.add(new TextField("_id", "1", Field.Store.YES)); indexWriter.addDocument(document); @@ -86,7 +86,7 @@ public class SimpleLuceneTests extends ElasticsearchTestCase { @Test public void testSimpleNumericOps() throws Exception { Directory dir = new RAMDirectory(); - IndexWriter indexWriter = new IndexWriter(dir, new IndexWriterConfig(Lucene.VERSION, Lucene.STANDARD_ANALYZER)); + IndexWriter indexWriter = new IndexWriter(dir, new IndexWriterConfig(Lucene.STANDARD_ANALYZER)); Document document = new Document(); document.add(new TextField("_id", "1", Field.Store.YES)); @@ -118,7 +118,7 @@ public class SimpleLuceneTests extends ElasticsearchTestCase { @Test public void testOrdering() throws Exception { Directory dir = new RAMDirectory(); - IndexWriter indexWriter 
= new IndexWriter(dir, new IndexWriterConfig(Lucene.VERSION, Lucene.STANDARD_ANALYZER)); + IndexWriter indexWriter = new IndexWriter(dir, new IndexWriterConfig(Lucene.STANDARD_ANALYZER)); Document document = new Document(); document.add(new TextField("_id", "1", Field.Store.YES)); @@ -147,7 +147,7 @@ public class SimpleLuceneTests extends ElasticsearchTestCase { @Test public void testBoost() throws Exception { Directory dir = new RAMDirectory(); - IndexWriter indexWriter = new IndexWriter(dir, new IndexWriterConfig(Lucene.VERSION, Lucene.STANDARD_ANALYZER)); + IndexWriter indexWriter = new IndexWriter(dir, new IndexWriterConfig(Lucene.STANDARD_ANALYZER)); for (int i = 0; i < 100; i++) { // TODO (just setting the boost value does not seem to work...) @@ -182,7 +182,7 @@ public class SimpleLuceneTests extends ElasticsearchTestCase { @Test public void testNRTSearchOnClosedWriter() throws Exception { Directory dir = new RAMDirectory(); - IndexWriter indexWriter = new IndexWriter(dir, new IndexWriterConfig(Lucene.VERSION, Lucene.STANDARD_ANALYZER)); + IndexWriter indexWriter = new IndexWriter(dir, new IndexWriterConfig(Lucene.STANDARD_ANALYZER)); DirectoryReader reader = DirectoryReader.open(indexWriter, true); for (int i = 0; i < 100; i++) { @@ -207,7 +207,7 @@ public class SimpleLuceneTests extends ElasticsearchTestCase { @Test public void testNumericTermDocsFreqs() throws Exception { Directory dir = new RAMDirectory(); - IndexWriter indexWriter = new IndexWriter(dir, new IndexWriterConfig(Lucene.VERSION, Lucene.STANDARD_ANALYZER)); + IndexWriter indexWriter = new IndexWriter(dir, new IndexWriterConfig(Lucene.STANDARD_ANALYZER)); Document doc = new Document(); FieldType type = IntField.TYPE_NOT_STORED; @@ -215,7 +215,7 @@ public class SimpleLuceneTests extends ElasticsearchTestCase { doc.add(field); type = new FieldType(IntField.TYPE_NOT_STORED); - type.setIndexOptions(FieldInfo.IndexOptions.DOCS_AND_FREQS); + type.setIndexOptions(IndexOptions.DOCS_AND_FREQS); type.freeze(); field = new IntField("int1", 1, type); @@ -230,7 +230,7 @@ public class SimpleLuceneTests extends ElasticsearchTestCase { indexWriter.addDocument(doc); IndexReader reader = DirectoryReader.open(indexWriter, true); - AtomicReader atomicReader = SlowCompositeReaderWrapper.wrap(reader); + LeafReader atomicReader = SlowCompositeReaderWrapper.wrap(reader); Terms terms = atomicReader.terms("int1"); TermsEnum termsEnum = terms.iterator(null); diff --git a/src/test/java/org/elasticsearch/deps/lucene/VectorHighlighterTests.java b/src/test/java/org/elasticsearch/deps/lucene/VectorHighlighterTests.java index 1d5e7cc0dfc..f3b2944a497 100644 --- a/src/test/java/org/elasticsearch/deps/lucene/VectorHighlighterTests.java +++ b/src/test/java/org/elasticsearch/deps/lucene/VectorHighlighterTests.java @@ -43,7 +43,7 @@ public class VectorHighlighterTests extends ElasticsearchTestCase { @Test public void testVectorHighlighter() throws Exception { Directory dir = new RAMDirectory(); - IndexWriter indexWriter = new IndexWriter(dir, new IndexWriterConfig(Lucene.VERSION, Lucene.STANDARD_ANALYZER)); + IndexWriter indexWriter = new IndexWriter(dir, new IndexWriterConfig(Lucene.STANDARD_ANALYZER)); Document document = new Document(); document.add(new TextField("_id", "1", Field.Store.YES)); @@ -66,7 +66,7 @@ public class VectorHighlighterTests extends ElasticsearchTestCase { @Test public void testVectorHighlighterPrefixQuery() throws Exception { Directory dir = new RAMDirectory(); - IndexWriter indexWriter = new IndexWriter(dir, new 
IndexWriterConfig(Lucene.VERSION, Lucene.STANDARD_ANALYZER)); + IndexWriter indexWriter = new IndexWriter(dir, new IndexWriterConfig(Lucene.STANDARD_ANALYZER)); Document document = new Document(); document.add(new TextField("_id", "1", Field.Store.YES)); @@ -82,7 +82,7 @@ public class VectorHighlighterTests extends ElasticsearchTestCase { FastVectorHighlighter highlighter = new FastVectorHighlighter(); PrefixQuery prefixQuery = new PrefixQuery(new Term("content", "ba")); - assertThat(prefixQuery.getRewriteMethod().getClass().getName(), equalTo(PrefixQuery.CONSTANT_SCORE_AUTO_REWRITE_DEFAULT.getClass().getName())); + assertThat(prefixQuery.getRewriteMethod().getClass().getName(), equalTo(PrefixQuery.CONSTANT_SCORE_FILTER_REWRITE.getClass().getName())); String fragment = highlighter.getBestFragment(highlighter.getFieldQuery(prefixQuery), reader, topDocs.scoreDocs[0].doc, "content", 30); assertThat(fragment, nullValue()); @@ -95,7 +95,7 @@ public class VectorHighlighterTests extends ElasticsearchTestCase { // now check with the custom field query prefixQuery = new PrefixQuery(new Term("content", "ba")); - assertThat(prefixQuery.getRewriteMethod().getClass().getName(), equalTo(PrefixQuery.CONSTANT_SCORE_AUTO_REWRITE_DEFAULT.getClass().getName())); + assertThat(prefixQuery.getRewriteMethod().getClass().getName(), equalTo(PrefixQuery.CONSTANT_SCORE_FILTER_REWRITE.getClass().getName())); fragment = highlighter.getBestFragment(new CustomFieldQuery(prefixQuery, reader, highlighter), reader, topDocs.scoreDocs[0].doc, "content", 30); assertThat(fragment, notNullValue()); @@ -104,7 +104,7 @@ public class VectorHighlighterTests extends ElasticsearchTestCase { @Test public void testVectorHighlighterNoStore() throws Exception { Directory dir = new RAMDirectory(); - IndexWriter indexWriter = new IndexWriter(dir, new IndexWriterConfig(Lucene.VERSION, Lucene.STANDARD_ANALYZER)); + IndexWriter indexWriter = new IndexWriter(dir, new IndexWriterConfig(Lucene.STANDARD_ANALYZER)); Document document = new Document(); document.add(new TextField("_id", "1", Field.Store.YES)); @@ -126,7 +126,7 @@ public class VectorHighlighterTests extends ElasticsearchTestCase { @Test public void testVectorHighlighterNoTermVector() throws Exception { Directory dir = new RAMDirectory(); - IndexWriter indexWriter = new IndexWriter(dir, new IndexWriterConfig(Lucene.VERSION, Lucene.STANDARD_ANALYZER)); + IndexWriter indexWriter = new IndexWriter(dir, new IndexWriterConfig(Lucene.STANDARD_ANALYZER)); Document document = new Document(); document.add(new TextField("_id", "1", Field.Store.YES)); diff --git a/src/test/java/org/elasticsearch/gateway/local/state/meta/MetaDataStateFormatTest.java b/src/test/java/org/elasticsearch/gateway/local/state/meta/MetaDataStateFormatTest.java index d8204b1eacd..bf58726e178 100644 --- a/src/test/java/org/elasticsearch/gateway/local/state/meta/MetaDataStateFormatTest.java +++ b/src/test/java/org/elasticsearch/gateway/local/state/meta/MetaDataStateFormatTest.java @@ -47,11 +47,13 @@ import org.junit.Test; import java.io.Closeable; import java.io.File; import java.io.FileOutputStream; +import java.io.InputStream; import java.io.IOException; import java.io.RandomAccessFile; import java.net.URISyntaxException; -import java.net.URL; import java.nio.file.Files; +import java.nio.file.Path; + import java.util.ArrayList; import java.util.Arrays; import java.util.Collections; @@ -86,9 +88,12 @@ public class MetaDataStateFormatTest extends ElasticsearchTestCase { return MetaData.Builder.fromXContent(parser); } }; - 
final URL resource = this.getClass().getResource("global-3.st"); + Path tmp = newTempDir().toPath(); + final InputStream resource = this.getClass().getResourceAsStream("global-3.st"); assertThat(resource, notNullValue()); - MetaData read = format.read(new File(resource.toURI()), 3); + Path dst = tmp.resolve("global-3.st"); + Files.copy(resource, dst); + MetaData read = format.read(dst.toFile(), 3); assertThat(read, notNullValue()); assertThat(read.uuid(), equalTo("3O1tDF1IRB6fSJ-GrTMUtg")); // indices are empty since they are serialized separately @@ -209,7 +214,7 @@ public class MetaDataStateFormatTest extends ElasticsearchTestCase { public static void corruptFile(File file, ESLogger logger) throws IOException { File fileToCorrupt = file; - try (final SimpleFSDirectory dir = new SimpleFSDirectory(fileToCorrupt.getParentFile())) { + try (final SimpleFSDirectory dir = new SimpleFSDirectory(fileToCorrupt.getParentFile().toPath())) { long checksumBeforeCorruption; try (IndexInput input = dir.openInput(fileToCorrupt.getName(), IOContext.DEFAULT)) { checksumBeforeCorruption = CodecUtil.retrieveChecksum(input); diff --git a/src/test/java/org/elasticsearch/index/analysis/ASCIIFoldingTokenFilterFactoryTests.java b/src/test/java/org/elasticsearch/index/analysis/ASCIIFoldingTokenFilterFactoryTests.java index b892fb78b59..04413f5d41e 100644 --- a/src/test/java/org/elasticsearch/index/analysis/ASCIIFoldingTokenFilterFactoryTests.java +++ b/src/test/java/org/elasticsearch/index/analysis/ASCIIFoldingTokenFilterFactoryTests.java @@ -38,7 +38,8 @@ public class ASCIIFoldingTokenFilterFactoryTests extends ElasticsearchTokenStrea TokenFilterFactory tokenFilter = analysisService.tokenFilter("my_ascii_folding"); String source = "Ansprüche"; String[] expected = new String[]{"Anspruche"}; - Tokenizer tokenizer = new WhitespaceTokenizer(TEST_VERSION_CURRENT, new StringReader(source)); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader(source)); assertTokenStreamContents(tokenFilter.create(tokenizer), expected); } @@ -51,7 +52,8 @@ public class ASCIIFoldingTokenFilterFactoryTests extends ElasticsearchTokenStrea TokenFilterFactory tokenFilter = analysisService.tokenFilter("my_ascii_folding"); String source = "Ansprüche"; String[] expected = new String[]{"Anspruche", "Ansprüche"}; - Tokenizer tokenizer = new WhitespaceTokenizer(TEST_VERSION_CURRENT, new StringReader(source)); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader(source)); assertTokenStreamContents(tokenFilter.create(tokenizer), expected); } diff --git a/src/test/java/org/elasticsearch/index/analysis/AnalysisFactoryTests.java b/src/test/java/org/elasticsearch/index/analysis/AnalysisFactoryTests.java index 895ac2c9211..e8b8f7256bf 100644 --- a/src/test/java/org/elasticsearch/index/analysis/AnalysisFactoryTests.java +++ b/src/test/java/org/elasticsearch/index/analysis/AnalysisFactoryTests.java @@ -152,7 +152,9 @@ public class AnalysisFactoryTests extends ElasticsearchTestCase { put("worddelimiter", WordDelimiterTokenFilterFactory.class); // TODO: these tokenfilters are not yet exposed: useful? 
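
[Editor's sketch] The recurring change in the analysis tests throughout these hunks is the Lucene 5.0 tokenizer construction pattern: Tokenizer constructors no longer accept a Version or a Reader, and the input is attached afterwards via setReader(). A minimal, self-contained illustration of the before/after, assuming a Lucene 5.0 classpath; the class and method names below are illustrative, not from the patch:

    import java.io.IOException;
    import java.io.StringReader;
    import org.apache.lucene.analysis.Tokenizer;
    import org.apache.lucene.analysis.core.WhitespaceTokenizer;

    public class TokenizerMigrationSketch {
        // Lucene 4.x style (removed): new WhitespaceTokenizer(TEST_VERSION_CURRENT, new StringReader(source))
        // Lucene 5.x style: construct without arguments, then attach the Reader.
        public static Tokenizer whitespace(String source) throws IOException {
            Tokenizer tokenizer = new WhitespaceTokenizer(); // no Version/Reader constructor args in 5.0
            tokenizer.setReader(new StringReader(source));   // input is supplied separately
            return tokenizer;
        }
    }

This is why several test methods in the hunks also gain a throws clause: attaching the reader can now surface an IOException at the call site.
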
- + + // suggest stop + put("suggeststop", Void.class); // capitalizes tokens put("capitalization", Void.class); // like length filter (but codepoints) diff --git a/src/test/java/org/elasticsearch/index/analysis/AnalysisModuleTests.java b/src/test/java/org/elasticsearch/index/analysis/AnalysisModuleTests.java index fb0432e1406..601c555e3c4 100644 --- a/src/test/java/org/elasticsearch/index/analysis/AnalysisModuleTests.java +++ b/src/test/java/org/elasticsearch/index/analysis/AnalysisModuleTests.java @@ -91,7 +91,7 @@ public class AnalysisModuleTests extends ElasticsearchTestCase { } @Test - public void testDefaultFactoryTokenFilters() { + public void testDefaultFactoryTokenFilters() throws IOException { assertTokenFilter("keyword_repeat", KeywordRepeatFilter.class); assertTokenFilter("persian_normalization", PersianNormalizationFilter.class); assertTokenFilter("arabic_normalization", ArabicNormalizationFilter.class); @@ -116,10 +116,11 @@ public class AnalysisModuleTests extends ElasticsearchTestCase { assertEquals(Version.V_0_90_0.luceneVersion, analysisService2.analyzer("thai").analyzer().getVersion()); } - private void assertTokenFilter(String name, Class clazz) { + private void assertTokenFilter(String name, Class clazz) throws IOException { AnalysisService analysisService = AnalysisTestsHelper.createAnalysisServiceFromSettings(ImmutableSettings.settingsBuilder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT).build()); TokenFilterFactory tokenFilter = analysisService.tokenFilter(name); - Tokenizer tokenizer = new WhitespaceTokenizer(Version.CURRENT.luceneVersion, new StringReader("foo bar")); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader("foo bar")); TokenStream stream = tokenFilter.create(tokenizer); assertThat(stream, instanceOf(clazz)); } @@ -187,7 +188,7 @@ public class AnalysisModuleTests extends ElasticsearchTestCase { // assertThat(dictionaryDecompounderAnalyze.tokenFilters().length, equalTo(1)); // assertThat(dictionaryDecompounderAnalyze.tokenFilters()[0], instanceOf(DictionaryCompoundWordTokenFilterFactory.class)); - Set wordList = Analysis.getWordSet(null, settings, "index.analysis.filter.dict_dec.word_list", Lucene.VERSION); + Set wordList = Analysis.getWordSet(null, settings, "index.analysis.filter.dict_dec.word_list"); MatcherAssert.assertThat(wordList.size(), equalTo(6)); // MatcherAssert.assertThat(wordList, hasItems("donau", "dampf", "schiff", "spargel", "creme", "suppe")); } @@ -200,7 +201,7 @@ public class AnalysisModuleTests extends ElasticsearchTestCase { File wordListFile = generateWordList(words); Settings settings = settingsBuilder().loadFromSource("index: \n word_list_path: " + wordListFile.getAbsolutePath()).build(); - Set wordList = Analysis.getWordSet(env, settings, "index.word_list", Lucene.VERSION); + Set wordList = Analysis.getWordSet(env, settings, "index.word_list"); MatcherAssert.assertThat(wordList.size(), equalTo(6)); // MatcherAssert.assertThat(wordList, hasItems(words)); } diff --git a/src/test/java/org/elasticsearch/index/analysis/AnalysisTests.java b/src/test/java/org/elasticsearch/index/analysis/AnalysisTests.java index acc5a950450..5a89cf1d6d4 100644 --- a/src/test/java/org/elasticsearch/index/analysis/AnalysisTests.java +++ b/src/test/java/org/elasticsearch/index/analysis/AnalysisTests.java @@ -20,7 +20,6 @@ package org.elasticsearch.index.analysis; import org.apache.lucene.analysis.util.CharArraySet; -import org.elasticsearch.Version; import org.elasticsearch.common.settings.Settings; 
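
[Editor's sketch] The other pattern repeated across the writer-based tests in the hunks above (VersionsTests, SimpleLuceneTests, VectorHighlighterTests) is that IndexWriterConfig lost its Version argument in Lucene 5.0. A minimal sketch of the new setup, again assuming a Lucene 5.0 classpath; the class name is illustrative:

    import java.io.IOException;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.RAMDirectory;

    public class WriterConfigMigrationSketch {
        // Lucene 4.x style (removed): new IndexWriterConfig(Lucene.VERSION, analyzer)
        // Lucene 5.x style: the Version argument is gone; the config takes only the Analyzer.
        public static IndexWriter open() throws IOException {
            Directory dir = new RAMDirectory();
            IndexWriterConfig iwc = new IndexWriterConfig(new StandardAnalyzer());
            return new IndexWriter(dir, iwc);
        }
    }

The same version-parameter removal drives the Analysis.getWordSet and Analysis.parseStemExclusion signature changes in the surrounding hunks.
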
import org.elasticsearch.test.ElasticsearchTestCase; import org.junit.Test; @@ -34,14 +33,14 @@ public class AnalysisTests extends ElasticsearchTestCase { /* Comma separated list */ Settings settings = settingsBuilder().put("stem_exclusion", "foo,bar").build(); - CharArraySet set = Analysis.parseStemExclusion(settings, CharArraySet.EMPTY_SET, Version.CURRENT.luceneVersion); + CharArraySet set = Analysis.parseStemExclusion(settings, CharArraySet.EMPTY_SET); assertThat(set.contains("foo"), is(true)); assertThat(set.contains("bar"), is(true)); assertThat(set.contains("baz"), is(false)); /* Array */ settings = settingsBuilder().putArray("stem_exclusion", "foo","bar").build(); - set = Analysis.parseStemExclusion(settings, CharArraySet.EMPTY_SET, Version.CURRENT.luceneVersion); + set = Analysis.parseStemExclusion(settings, CharArraySet.EMPTY_SET); assertThat(set.contains("foo"), is(true)); assertThat(set.contains("bar"), is(true)); assertThat(set.contains("baz"), is(false)); diff --git a/src/test/java/org/elasticsearch/index/analysis/CJKFilterFactoryTests.java b/src/test/java/org/elasticsearch/index/analysis/CJKFilterFactoryTests.java index e6577319a21..b0bdda19be0 100644 --- a/src/test/java/org/elasticsearch/index/analysis/CJKFilterFactoryTests.java +++ b/src/test/java/org/elasticsearch/index/analysis/CJKFilterFactoryTests.java @@ -37,7 +37,8 @@ public class CJKFilterFactoryTests extends ElasticsearchTokenStreamTestCase { TokenFilterFactory tokenFilter = analysisService.tokenFilter("cjk_bigram"); String source = "多くの学生が試験に落ちた。"; String[] expected = new String[]{"多く", "くの", "の学", "学生", "生が", "が試", "試験", "験に", "に落", "落ち", "ちた" }; - Tokenizer tokenizer = new StandardTokenizer(TEST_VERSION_CURRENT, new StringReader(source)); + Tokenizer tokenizer = new StandardTokenizer(); + tokenizer.setReader(new StringReader(source)); assertTokenStreamContents(tokenFilter.create(tokenizer), expected); } @@ -47,7 +48,8 @@ public class CJKFilterFactoryTests extends ElasticsearchTokenStreamTestCase { TokenFilterFactory tokenFilter = analysisService.tokenFilter("cjk_no_flags"); String source = "多くの学生が試験に落ちた。"; String[] expected = new String[]{"多く", "くの", "の学", "学生", "生が", "が試", "試験", "験に", "に落", "落ち", "ちた" }; - Tokenizer tokenizer = new StandardTokenizer(TEST_VERSION_CURRENT, new StringReader(source)); + Tokenizer tokenizer = new StandardTokenizer(); + tokenizer.setReader(new StringReader(source)); assertTokenStreamContents(tokenFilter.create(tokenizer), expected); } @@ -57,7 +59,8 @@ public class CJKFilterFactoryTests extends ElasticsearchTokenStreamTestCase { TokenFilterFactory tokenFilter = analysisService.tokenFilter("cjk_han_only"); String source = "多くの学生が試験に落ちた。"; String[] expected = new String[]{"多", "く", "の", "学生", "が", "試験", "に", "落", "ち", "た" }; - Tokenizer tokenizer = new StandardTokenizer(TEST_VERSION_CURRENT, new StringReader(source)); + Tokenizer tokenizer = new StandardTokenizer(); + tokenizer.setReader(new StringReader(source)); assertTokenStreamContents(tokenFilter.create(tokenizer), expected); } @@ -67,7 +70,8 @@ public class CJKFilterFactoryTests extends ElasticsearchTokenStreamTestCase { TokenFilterFactory tokenFilter = analysisService.tokenFilter("cjk_han_unigram_only"); String source = "多くの学生が試験に落ちた。"; String[] expected = new String[]{"多", "く", "の", "学", "学生", "生", "が", "試", "試験", "験", "に", "落", "ち", "た" }; - Tokenizer tokenizer = new StandardTokenizer(TEST_VERSION_CURRENT, new StringReader(source)); + Tokenizer tokenizer = new StandardTokenizer(); + tokenizer.setReader(new 
StringReader(source)); assertTokenStreamContents(tokenFilter.create(tokenizer), expected); } diff --git a/src/test/java/org/elasticsearch/index/analysis/KeepFilterFactoryTests.java b/src/test/java/org/elasticsearch/index/analysis/KeepFilterFactoryTests.java index cc2df1b45f3..cd4ababaaf4 100644 --- a/src/test/java/org/elasticsearch/index/analysis/KeepFilterFactoryTests.java +++ b/src/test/java/org/elasticsearch/index/analysis/KeepFilterFactoryTests.java @@ -95,7 +95,8 @@ public class KeepFilterFactoryTests extends ElasticsearchTokenStreamTestCase { assertThat(tokenFilter, instanceOf(KeepWordFilterFactory.class)); String source = "hello small world"; String[] expected = new String[]{"hello", "world"}; - Tokenizer tokenizer = new WhitespaceTokenizer(TEST_VERSION_CURRENT, new StringReader(source)); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader(source)); assertTokenStreamContents(tokenFilter.create(tokenizer), expected, new int[]{1, 2}); } @@ -106,7 +107,8 @@ public class KeepFilterFactoryTests extends ElasticsearchTokenStreamTestCase { assertThat(tokenFilter, instanceOf(KeepWordFilterFactory.class)); String source = "Hello small world"; String[] expected = new String[]{"Hello"}; - Tokenizer tokenizer = new WhitespaceTokenizer(TEST_VERSION_CURRENT, new StringReader(source)); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader(source)); assertTokenStreamContents(tokenFilter.create(tokenizer), expected, new int[]{1}); } diff --git a/src/test/java/org/elasticsearch/index/analysis/KeepTypesFilterFactoryTests.java b/src/test/java/org/elasticsearch/index/analysis/KeepTypesFilterFactoryTests.java index 425784d64da..95c942e19fd 100644 --- a/src/test/java/org/elasticsearch/index/analysis/KeepTypesFilterFactoryTests.java +++ b/src/test/java/org/elasticsearch/index/analysis/KeepTypesFilterFactoryTests.java @@ -44,7 +44,8 @@ public class KeepTypesFilterFactoryTests extends ElasticsearchTokenStreamTestCas assertThat(tokenFilter, instanceOf(KeepTypesFilterFactory.class)); String source = "Hello 123 world"; String[] expected = new String[]{"123"}; - Tokenizer tokenizer = new StandardTokenizer(TEST_VERSION_CURRENT, new StringReader(source)); + Tokenizer tokenizer = new StandardTokenizer(); + tokenizer.setReader(new StringReader(source)); assertTokenStreamContents(tokenFilter.create(tokenizer), expected, new int[]{2}); } } diff --git a/src/test/java/org/elasticsearch/index/analysis/LimitTokenCountFilterFactoryTests.java b/src/test/java/org/elasticsearch/index/analysis/LimitTokenCountFilterFactoryTests.java index 9d43d7edf08..5473c635980 100644 --- a/src/test/java/org/elasticsearch/index/analysis/LimitTokenCountFilterFactoryTests.java +++ b/src/test/java/org/elasticsearch/index/analysis/LimitTokenCountFilterFactoryTests.java @@ -39,14 +39,16 @@ public class LimitTokenCountFilterFactoryTests extends ElasticsearchTokenStreamT TokenFilterFactory tokenFilter = analysisService.tokenFilter("limit_default"); String source = "the quick brown fox"; String[] expected = new String[] { "the" }; - Tokenizer tokenizer = new WhitespaceTokenizer(TEST_VERSION_CURRENT, new StringReader(source)); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader(source)); assertTokenStreamContents(tokenFilter.create(tokenizer), expected); } { TokenFilterFactory tokenFilter = analysisService.tokenFilter("limit"); String source = "the quick brown fox"; String[] expected = new String[] { "the" }; - Tokenizer tokenizer = new 
WhitespaceTokenizer(TEST_VERSION_CURRENT, new StringReader(source)); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader(source)); assertTokenStreamContents(tokenFilter.create(tokenizer), expected); } } @@ -61,7 +63,8 @@ public class LimitTokenCountFilterFactoryTests extends ElasticsearchTokenStreamT TokenFilterFactory tokenFilter = analysisService.tokenFilter("limit_1"); String source = "the quick brown fox"; String[] expected = new String[] { "the", "quick", "brown" }; - Tokenizer tokenizer = new WhitespaceTokenizer(TEST_VERSION_CURRENT, new StringReader(source)); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader(source)); assertTokenStreamContents(tokenFilter.create(tokenizer), expected); } { @@ -72,7 +75,8 @@ public class LimitTokenCountFilterFactoryTests extends ElasticsearchTokenStreamT TokenFilterFactory tokenFilter = analysisService.tokenFilter("limit_1"); String source = "the quick brown fox"; String[] expected = new String[] { "the", "quick", "brown" }; - Tokenizer tokenizer = new WhitespaceTokenizer(TEST_VERSION_CURRENT, new StringReader(source)); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader(source)); assertTokenStreamContents(tokenFilter.create(tokenizer), expected); } @@ -84,7 +88,8 @@ public class LimitTokenCountFilterFactoryTests extends ElasticsearchTokenStreamT TokenFilterFactory tokenFilter = analysisService.tokenFilter("limit_1"); String source = "the quick brown fox"; String[] expected = new String[] { "the", "quick", "brown", "fox" }; - Tokenizer tokenizer = new WhitespaceTokenizer(TEST_VERSION_CURRENT, new StringReader(source)); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader(source)); assertTokenStreamContents(tokenFilter.create(tokenizer), expected); } } diff --git a/src/test/java/org/elasticsearch/index/analysis/NGramTokenizerFactoryTests.java b/src/test/java/org/elasticsearch/index/analysis/NGramTokenizerFactoryTests.java index 2953e6e582d..669613b4ebe 100644 --- a/src/test/java/org/elasticsearch/index/analysis/NGramTokenizerFactoryTests.java +++ b/src/test/java/org/elasticsearch/index/analysis/NGramTokenizerFactoryTests.java @@ -56,7 +56,7 @@ public class NGramTokenizerFactoryTests extends ElasticsearchTokenStreamTestCase for (String tokenChars : Arrays.asList("letters", "number", "DIRECTIONALITY_UNDEFINED")) { final Settings settings = newAnalysisSettingsBuilder().put("min_gram", 2).put("max_gram", 3).put("token_chars", tokenChars).build(); try { - new NGramTokenizerFactory(index, indexSettings, name, settings).create(new StringReader("")); + new NGramTokenizerFactory(index, indexSettings, name, settings).create(); fail(); } catch (ElasticsearchIllegalArgumentException expected) { // OK @@ -64,7 +64,7 @@ public class NGramTokenizerFactoryTests extends ElasticsearchTokenStreamTestCase } for (String tokenChars : Arrays.asList("letter", " digit ", "punctuation", "DIGIT", "CoNtRoL", "dash_punctuation")) { final Settings settings = newAnalysisSettingsBuilder().put("min_gram", 2).put("max_gram", 3).put("token_chars", tokenChars).build(); - new NGramTokenizerFactory(index, indexSettings, name, settings).create(new StringReader("")); + new NGramTokenizerFactory(index, indexSettings, name, settings).create(); // no exception } } @@ -75,7 +75,8 @@ public class NGramTokenizerFactoryTests extends ElasticsearchTokenStreamTestCase final String name = "ngr"; final Settings indexSettings = 
newAnalysisSettingsBuilder().build(); final Settings settings = newAnalysisSettingsBuilder().put("min_gram", 2).put("max_gram", 4).putArray("token_chars", new String[0]).build(); - Tokenizer tokenizer = new NGramTokenizerFactory(index, indexSettings, name, settings).create(new StringReader("1.34")); + Tokenizer tokenizer = new NGramTokenizerFactory(index, indexSettings, name, settings).create(); + tokenizer.setReader(new StringReader("1.34")); assertTokenStreamContents(tokenizer, new String[] {"1.", "1.3", "1.34", ".3", ".34", "34"}); } @@ -86,10 +87,14 @@ public class NGramTokenizerFactoryTests extends ElasticsearchTokenStreamTestCase final String name = "ngr"; final Settings indexSettings = newAnalysisSettingsBuilder().build(); Settings settings = newAnalysisSettingsBuilder().put("min_gram", 2).put("max_gram", 3).put("token_chars", "letter,digit").build(); - assertTokenStreamContents(new NGramTokenizerFactory(index, indexSettings, name, settings).create(new StringReader("Åbc déf g\uD801\uDC00f ")), + Tokenizer tokenizer = new NGramTokenizerFactory(index, indexSettings, name, settings).create(); + tokenizer.setReader(new StringReader("Åbc déf g\uD801\uDC00f ")); + assertTokenStreamContents(tokenizer, new String[] {"Åb", "Åbc", "bc", "dé", "déf", "éf", "g\uD801\uDC00", "g\uD801\uDC00f", "\uD801\uDC00f"}); settings = newAnalysisSettingsBuilder().put("min_gram", 2).put("max_gram", 3).put("token_chars", "letter,digit,punctuation,whitespace,symbol").build(); - assertTokenStreamContents(new NGramTokenizerFactory(index, indexSettings, name, settings).create(new StringReader(" a!$ 9")), + tokenizer = new NGramTokenizerFactory(index, indexSettings, name, settings).create(); + tokenizer.setReader(new StringReader(" a!$ 9")); + assertTokenStreamContents(tokenizer, new String[] {" a", " a!", "a!", "a!$", "!$", "!$ ", "$ ", "$ 9", " 9"}); } @@ -100,15 +105,19 @@ public class NGramTokenizerFactoryTests extends ElasticsearchTokenStreamTestCase final String name = "ngr"; final Settings indexSettings = newAnalysisSettingsBuilder().build(); Settings settings = newAnalysisSettingsBuilder().put("min_gram", 2).put("max_gram", 3).put("token_chars", "letter,digit").build(); - assertTokenStreamContents(new EdgeNGramTokenizerFactory(index, indexSettings, name, settings).create(new StringReader("Åbc déf g\uD801\uDC00f ")), + Tokenizer tokenizer = new EdgeNGramTokenizerFactory(index, indexSettings, name, settings).create(); + tokenizer.setReader(new StringReader("Åbc déf g\uD801\uDC00f ")); + assertTokenStreamContents(tokenizer, new String[] {"Åb", "Åbc", "dé", "déf", "g\uD801\uDC00", "g\uD801\uDC00f"}); settings = newAnalysisSettingsBuilder().put("min_gram", 2).put("max_gram", 3).put("token_chars", "letter,digit,punctuation,whitespace,symbol").build(); - assertTokenStreamContents(new EdgeNGramTokenizerFactory(index, indexSettings, name, settings).create(new StringReader(" a!$ 9")), + tokenizer = new EdgeNGramTokenizerFactory(index, indexSettings, name, settings).create(); + tokenizer.setReader(new StringReader(" a!$ 9")); + assertTokenStreamContents(tokenizer, new String[] {" a", " a!"}); } @Test - public void testBackwardsCompatibilityEdgeNgramTokenizer() throws IllegalArgumentException, IllegalAccessException { + public void testBackwardsCompatibilityEdgeNgramTokenizer() throws Exception { int iters = scaledRandomIntBetween(20, 100); final Index index = new Index("test"); final String name = "ngr"; @@ -123,8 +132,8 @@ public class NGramTokenizerFactoryTests extends ElasticsearchTokenStreamTestCase } Settings 
settings = builder.build(); Settings indexSettings = newAnalysisSettingsBuilder().put(IndexMetaData.SETTING_VERSION_CREATED, v.id).build(); - Tokenizer edgeNGramTokenizer = new EdgeNGramTokenizerFactory(index, indexSettings, name, settings).create(new StringReader( - "foo bar")); + Tokenizer edgeNGramTokenizer = new EdgeNGramTokenizerFactory(index, indexSettings, name, settings).create(); + edgeNGramTokenizer.setReader(new StringReader("foo bar")); if (compatVersion) { assertThat(edgeNGramTokenizer, instanceOf(Lucene43EdgeNGramTokenizer.class)); } else { @@ -134,15 +143,15 @@ public class NGramTokenizerFactoryTests extends ElasticsearchTokenStreamTestCase } else { Settings settings = newAnalysisSettingsBuilder().put("min_gram", 2).put("max_gram", 3).put("side", "back").build(); Settings indexSettings = newAnalysisSettingsBuilder().put(IndexMetaData.SETTING_VERSION_CREATED, v.id).build(); - Tokenizer edgeNGramTokenizer = new EdgeNGramTokenizerFactory(index, indexSettings, name, settings).create(new StringReader( - "foo bar")); + Tokenizer edgeNGramTokenizer = new EdgeNGramTokenizerFactory(index, indexSettings, name, settings).create(); + edgeNGramTokenizer.setReader(new StringReader("foo bar")); assertThat(edgeNGramTokenizer, instanceOf(Lucene43EdgeNGramTokenizer.class)); } } Settings settings = newAnalysisSettingsBuilder().put("min_gram", 2).put("max_gram", 3).put("side", "back").build(); Settings indexSettings = newAnalysisSettingsBuilder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT).build(); try { - new EdgeNGramTokenizerFactory(index, indexSettings, name, settings).create(new StringReader("foo bar")); + new EdgeNGramTokenizerFactory(index, indexSettings, name, settings).create(); fail("should fail side:back is not supported anymore"); } catch (ElasticsearchIllegalArgumentException ex) { } @@ -150,7 +159,7 @@ public class NGramTokenizerFactoryTests extends ElasticsearchTokenStreamTestCase } @Test - public void testBackwardsCompatibilityNgramTokenizer() throws IllegalArgumentException, IllegalAccessException { + public void testBackwardsCompatibilityNgramTokenizer() throws Exception { int iters = scaledRandomIntBetween(20, 100); for (int i = 0; i < iters; i++) { final Index index = new Index("test"); @@ -164,8 +173,8 @@ public class NGramTokenizerFactoryTests extends ElasticsearchTokenStreamTestCase } Settings settings = builder.build(); Settings indexSettings = newAnalysisSettingsBuilder().put(IndexMetaData.SETTING_VERSION_CREATED, v.id).build(); - Tokenizer nGramTokenizer = new NGramTokenizerFactory(index, indexSettings, name, settings).create(new StringReader( - "foo bar")); + Tokenizer nGramTokenizer = new NGramTokenizerFactory(index, indexSettings, name, settings).create(); + nGramTokenizer.setReader(new StringReader("foo bar")); if (compatVersion) { assertThat(nGramTokenizer, instanceOf(Lucene43NGramTokenizer.class)); } else { @@ -175,15 +184,15 @@ public class NGramTokenizerFactoryTests extends ElasticsearchTokenStreamTestCase } else { Settings settings = newAnalysisSettingsBuilder().put("min_gram", 2).put("max_gram", 3).build(); Settings indexSettings = newAnalysisSettingsBuilder().put(IndexMetaData.SETTING_VERSION_CREATED, v.id).build(); - Tokenizer nGramTokenizer = new NGramTokenizerFactory(index, indexSettings, name, settings).create(new StringReader( - "foo bar")); + Tokenizer nGramTokenizer = new NGramTokenizerFactory(index, indexSettings, name, settings).create(); + nGramTokenizer.setReader(new StringReader("foo bar")); assertThat(nGramTokenizer, 
instanceOf(Lucene43NGramTokenizer.class)); } } } @Test - public void testBackwardsCompatibilityEdgeNgramTokenFilter() throws IllegalArgumentException, IllegalAccessException { + public void testBackwardsCompatibilityEdgeNgramTokenFilter() throws Exception { int iters = scaledRandomIntBetween(20, 100); for (int i = 0; i < iters; i++) { final Index index = new Index("test"); @@ -201,12 +210,13 @@ public class NGramTokenizerFactoryTests extends ElasticsearchTokenStreamTestCase } Settings settings = builder.build(); Settings indexSettings = newAnalysisSettingsBuilder().put(IndexMetaData.SETTING_VERSION_CREATED, v.id).build(); - TokenStream edgeNGramTokenFilter = new EdgeNGramTokenFilterFactory(index, indexSettings, name, settings).create(new MockTokenizer(new StringReader( - "foo bar"))); - if (compatVersion) { - assertThat(edgeNGramTokenFilter, instanceOf(EdgeNGramTokenFilter.class)); - } else if (reverse && !compatVersion){ + Tokenizer tokenizer = new MockTokenizer(); + tokenizer.setReader(new StringReader("foo bar")); + TokenStream edgeNGramTokenFilter = new EdgeNGramTokenFilterFactory(index, indexSettings, name, settings).create(tokenizer); + if (reverse) { assertThat(edgeNGramTokenFilter, instanceOf(ReverseStringFilter.class)); + } else if (compatVersion) { + assertThat(edgeNGramTokenFilter, instanceOf(Lucene43EdgeNGramTokenFilter.class)); } else { assertThat(edgeNGramTokenFilter, instanceOf(EdgeNGramTokenFilter.class)); } @@ -219,9 +229,14 @@ public class NGramTokenizerFactoryTests extends ElasticsearchTokenStreamTestCase } Settings settings = builder.build(); Settings indexSettings = newAnalysisSettingsBuilder().put(IndexMetaData.SETTING_VERSION_CREATED, v.id).build(); - TokenStream edgeNGramTokenFilter = new EdgeNGramTokenFilterFactory(index, indexSettings, name, settings).create(new MockTokenizer(new StringReader( - "foo bar"))); - assertThat(edgeNGramTokenFilter, instanceOf(EdgeNGramTokenFilter.class)); + Tokenizer tokenizer = new MockTokenizer(); + tokenizer.setReader(new StringReader("foo bar")); + TokenStream edgeNGramTokenFilter = new EdgeNGramTokenFilterFactory(index, indexSettings, name, settings).create(tokenizer); + if (reverse) { + assertThat(edgeNGramTokenFilter, instanceOf(ReverseStringFilter.class)); + } else { + assertThat(edgeNGramTokenFilter, instanceOf(Lucene43EdgeNGramTokenFilter.class)); + } } } } diff --git a/src/test/java/org/elasticsearch/index/analysis/PatternAnalyzerTest.java b/src/test/java/org/elasticsearch/index/analysis/PatternAnalyzerTest.java new file mode 100644 index 00000000000..98197a15c04 --- /dev/null +++ b/src/test/java/org/elasticsearch/index/analysis/PatternAnalyzerTest.java @@ -0,0 +1,154 @@ +package org.elasticsearch.index.analysis; + +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +import java.io.IOException; +import java.lang.Thread.UncaughtExceptionHandler; +import java.util.Arrays; +import java.util.regex.Pattern; + +import org.apache.lucene.analysis.Analyzer; +import org.apache.lucene.analysis.core.StopAnalyzer; +import org.elasticsearch.test.ElasticsearchTokenStreamTestCase; + +/** + * Verifies the behavior of PatternAnalyzer. + */ +public class PatternAnalyzerTest extends ElasticsearchTokenStreamTestCase { + + /** + * Test PatternAnalyzer when it is configured with a non-word pattern. + */ + public void testNonWordPattern() throws IOException { + // Split on non-letter pattern, do not lowercase, no stopwords + PatternAnalyzer a = new PatternAnalyzer(Pattern.compile("\\W+"), false, null); + assertAnalyzesTo(a, "The quick brown Fox,the abcd1234 (56.78) dc.", + new String[] { "The", "quick", "brown", "Fox", "the", "abcd1234", "56", "78", "dc" }); + + // split on non-letter pattern, lowercase, english stopwords + PatternAnalyzer b = new PatternAnalyzer(Pattern.compile("\\W+"), true, + StopAnalyzer.ENGLISH_STOP_WORDS_SET); + assertAnalyzesTo(b, "The quick brown Fox,the abcd1234 (56.78) dc.", + new String[] { "quick", "brown", "fox", "abcd1234", "56", "78", "dc" }); + } + + /** + * Test PatternAnalyzer when it is configured with a whitespace pattern. + * Behavior can be similar to WhitespaceAnalyzer (depending upon options) + */ + public void testWhitespacePattern() throws IOException { + // Split on whitespace patterns, do not lowercase, no stopwords + PatternAnalyzer a = new PatternAnalyzer(Pattern.compile("\\s+"), false, null); + assertAnalyzesTo(a, "The quick brown Fox,the abcd1234 (56.78) dc.", + new String[] { "The", "quick", "brown", "Fox,the", "abcd1234", "(56.78)", "dc." }); + + // Split on whitespace patterns, lowercase, english stopwords + PatternAnalyzer b = new PatternAnalyzer(Pattern.compile("\\s+"), true, + StopAnalyzer.ENGLISH_STOP_WORDS_SET); + assertAnalyzesTo(b, "The quick brown Fox,the abcd1234 (56.78) dc.", + new String[] { "quick", "brown", "fox,the", "abcd1234", "(56.78)", "dc." }); + } + + /** + * Test PatternAnalyzer when it is configured with a custom pattern. In this + * case, text is tokenized on the comma "," + */ + public void testCustomPattern() throws IOException { + // Split on comma, do not lowercase, no stopwords + PatternAnalyzer a = new PatternAnalyzer(Pattern.compile(","), false, null); + assertAnalyzesTo(a, "Here,Are,some,Comma,separated,words,", + new String[] { "Here", "Are", "some", "Comma", "separated", "words" }); + + // split on comma, lowercase, english stopwords + PatternAnalyzer b = new PatternAnalyzer(Pattern.compile(","), true, + StopAnalyzer.ENGLISH_STOP_WORDS_SET); + assertAnalyzesTo(b, "Here,Are,some,Comma,separated,words,", + new String[] { "here", "some", "comma", "separated", "words" }); + } + + /** + * Test PatternAnalyzer against a large document. 
+ */ + public void testHugeDocument() throws IOException { + StringBuilder document = new StringBuilder(); + // 5000 a's + char largeWord[] = new char[5000]; + Arrays.fill(largeWord, 'a'); + document.append(largeWord); + + // a space + document.append(' '); + + // 2000 b's + char largeWord2[] = new char[2000]; + Arrays.fill(largeWord2, 'b'); + document.append(largeWord2); + + // Split on whitespace patterns, do not lowercase, no stopwords + PatternAnalyzer a = new PatternAnalyzer(Pattern.compile("\\s+"), false, null); + assertAnalyzesTo(a, document.toString(), + new String[] { new String(largeWord), new String(largeWord2) }); + } + + /** blast some random strings through the analyzer */ + public void testRandomStrings() throws Exception { + Analyzer a = new PatternAnalyzer(Pattern.compile(","), true, StopAnalyzer.ENGLISH_STOP_WORDS_SET); + + // dodge jre bug http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7104012 + final UncaughtExceptionHandler savedHandler = Thread.getDefaultUncaughtExceptionHandler(); + Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() { + @Override + public void uncaughtException(Thread thread, Throwable throwable) { + assumeTrue("not failing due to jre bug ", !isJREBug7104012(throwable)); + // otherwise its some other bug, pass to default handler + savedHandler.uncaughtException(thread, throwable); + } + }); + + try { + Thread.getDefaultUncaughtExceptionHandler(); + checkRandomData(random(), a, 10000*RANDOM_MULTIPLIER); + } catch (ArrayIndexOutOfBoundsException ex) { + assumeTrue("not failing due to jre bug ", !isJREBug7104012(ex)); + throw ex; // otherwise rethrow + } finally { + Thread.setDefaultUncaughtExceptionHandler(savedHandler); + } + } + + static boolean isJREBug7104012(Throwable t) { + if (!(t instanceof ArrayIndexOutOfBoundsException)) { + // BaseTokenStreamTestCase now wraps exc in a new RuntimeException: + t = t.getCause(); + if (!(t instanceof ArrayIndexOutOfBoundsException)) { + return false; + } + } + StackTraceElement trace[] = t.getStackTrace(); + for (StackTraceElement st : trace) { + if ("java.text.RuleBasedBreakIterator".equals(st.getClassName()) || + "sun.util.locale.provider.RuleBasedBreakIterator".equals(st.getClassName()) + && "lookupBackwardState".equals(st.getMethodName())) { + return true; + } + } + return false; + } +} diff --git a/src/test/java/org/elasticsearch/index/analysis/ShingleTokenFilterFactoryTests.java b/src/test/java/org/elasticsearch/index/analysis/ShingleTokenFilterFactoryTests.java index 5d0088a866c..a6c193df2ec 100644 --- a/src/test/java/org/elasticsearch/index/analysis/ShingleTokenFilterFactoryTests.java +++ b/src/test/java/org/elasticsearch/index/analysis/ShingleTokenFilterFactoryTests.java @@ -44,7 +44,8 @@ public class ShingleTokenFilterFactoryTests extends ElasticsearchTokenStreamTest TokenFilterFactory tokenFilter = analysisService.tokenFilter("shingle"); String source = "the quick brown fox"; String[] expected = new String[]{"the", "the quick", "quick", "quick brown", "brown", "brown fox", "fox"}; - Tokenizer tokenizer = new WhitespaceTokenizer(TEST_VERSION_CURRENT, new StringReader(source)); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader(source)); assertTokenStreamContents(tokenFilter.create(tokenizer), expected); } @@ -55,7 +56,8 @@ public class ShingleTokenFilterFactoryTests extends ElasticsearchTokenStreamTest assertThat(tokenFilter, instanceOf(ShingleTokenFilterFactory.class)); String source = "the quick brown fox"; String[] expected = 
new String[]{"the_quick_brown", "quick_brown_fox"}; - Tokenizer tokenizer = new WhitespaceTokenizer(TEST_VERSION_CURRENT, new StringReader(source)); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader(source)); assertTokenStreamContents(tokenFilter.create(tokenizer), expected); } @@ -66,7 +68,8 @@ public class ShingleTokenFilterFactoryTests extends ElasticsearchTokenStreamTest assertThat(tokenFilter, instanceOf(ShingleTokenFilterFactory.class)); String source = "the quick"; String[] expected = new String[]{"the", "quick"}; - Tokenizer tokenizer = new WhitespaceTokenizer(TEST_VERSION_CURRENT, new StringReader(source)); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader(source)); assertTokenStreamContents(tokenFilter.create(tokenizer), expected); } @@ -76,7 +79,9 @@ public class ShingleTokenFilterFactoryTests extends ElasticsearchTokenStreamTest TokenFilterFactory tokenFilter = analysisService.tokenFilter("shingle_filler"); String source = "simon the sorcerer"; String[] expected = new String[]{"simon FILLER", "simon FILLER sorcerer", "FILLER sorcerer"}; - TokenStream tokenizer = new StopFilter(TEST_VERSION_CURRENT, new WhitespaceTokenizer(TEST_VERSION_CURRENT, new StringReader(source)), StopFilter.makeStopSet(TEST_VERSION_CURRENT, "the")); - assertTokenStreamContents(tokenFilter.create(tokenizer), expected); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader(source)); + TokenStream stream = new StopFilter(tokenizer, StopFilter.makeStopSet("the")); + assertTokenStreamContents(tokenFilter.create(stream), expected); } } diff --git a/src/test/java/org/elasticsearch/index/analysis/SnowballAnalyzerTests.java b/src/test/java/org/elasticsearch/index/analysis/SnowballAnalyzerTests.java new file mode 100644 index 00000000000..a34a5c674ac --- /dev/null +++ b/src/test/java/org/elasticsearch/index/analysis/SnowballAnalyzerTests.java @@ -0,0 +1,59 @@ +package org.elasticsearch.index.analysis; + +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +import org.apache.lucene.analysis.Analyzer; +import org.apache.lucene.analysis.standard.StandardAnalyzer; +import org.elasticsearch.test.ElasticsearchTokenStreamTestCase; + +public class SnowballAnalyzerTests extends ElasticsearchTokenStreamTestCase { + + public void testEnglish() throws Exception { + Analyzer a = new SnowballAnalyzer("English"); + assertAnalyzesTo(a, "he abhorred accents", + new String[]{"he", "abhor", "accent"}); + } + + public void testStopwords() throws Exception { + Analyzer a = new SnowballAnalyzer("English", + StandardAnalyzer.STOP_WORDS_SET); + assertAnalyzesTo(a, "the quick brown fox jumped", + new String[]{"quick", "brown", "fox", "jump"}); + } + + /** + * Test turkish lowercasing + */ + public void testTurkish() throws Exception { + Analyzer a = new SnowballAnalyzer("Turkish"); + + assertAnalyzesTo(a, "ağacı", new String[] { "ağaç" }); + assertAnalyzesTo(a, "AĞACI", new String[] { "ağaç" }); + } + + + public void testReusableTokenStream() throws Exception { + Analyzer a = new SnowballAnalyzer("English"); + assertAnalyzesTo(a, "he abhorred accents", + new String[]{"he", "abhor", "accent"}); + assertAnalyzesTo(a, "she abhorred him", + new String[]{"she", "abhor", "him"}); + } +} \ No newline at end of file diff --git a/src/test/java/org/elasticsearch/index/analysis/StemmerTokenFilterFactoryTests.java b/src/test/java/org/elasticsearch/index/analysis/StemmerTokenFilterFactoryTests.java index 1ceaea7cef1..15f18f016ec 100644 --- a/src/test/java/org/elasticsearch/index/analysis/StemmerTokenFilterFactoryTests.java +++ b/src/test/java/org/elasticsearch/index/analysis/StemmerTokenFilterFactoryTests.java @@ -18,6 +18,7 @@ */ package org.elasticsearch.index.analysis; +import org.apache.lucene.analysis.Tokenizer; import org.apache.lucene.analysis.TokenStream; import org.apache.lucene.analysis.core.WhitespaceTokenizer; import org.apache.lucene.analysis.en.PorterStemFilter; @@ -58,7 +59,9 @@ public class StemmerTokenFilterFactoryTests extends ElasticsearchTokenStreamTest AnalysisService analysisService = AnalysisTestsHelper.createAnalysisServiceFromSettings(settings); TokenFilterFactory tokenFilter = analysisService.tokenFilter("my_english"); assertThat(tokenFilter, instanceOf(StemmerTokenFilterFactory.class)); - TokenStream create = tokenFilter.create(new WhitespaceTokenizer(TEST_VERSION_CURRENT, new StringReader("foo bar"))); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader("foo bar")); + TokenStream create = tokenFilter.create(tokenizer); NamedAnalyzer analyzer = analysisService.analyzer("my_english"); if (v.onOrAfter(Version.V_1_3_0)) { @@ -89,7 +92,9 @@ public class StemmerTokenFilterFactoryTests extends ElasticsearchTokenStreamTest AnalysisService analysisService = AnalysisTestsHelper.createAnalysisServiceFromSettings(settings); TokenFilterFactory tokenFilter = analysisService.tokenFilter("my_porter2"); assertThat(tokenFilter, instanceOf(StemmerTokenFilterFactory.class)); - TokenStream create = tokenFilter.create(new WhitespaceTokenizer(TEST_VERSION_CURRENT, new StringReader("foo bar"))); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader("foo bar")); + TokenStream create = tokenFilter.create(tokenizer); NamedAnalyzer analyzer = analysisService.analyzer("my_porter2"); assertThat(create, instanceOf(SnowballFilter.class)); diff --git a/src/test/java/org/elasticsearch/index/analysis/StopTokenFilterTests.java b/src/test/java/org/elasticsearch/index/analysis/StopTokenFilterTests.java index 
ec98562b23d..929d4f335d8 100644 --- a/src/test/java/org/elasticsearch/index/analysis/StopTokenFilterTests.java +++ b/src/test/java/org/elasticsearch/index/analysis/StopTokenFilterTests.java @@ -19,7 +19,9 @@ package org.elasticsearch.index.analysis; +import org.apache.lucene.analysis.Tokenizer; import org.apache.lucene.analysis.TokenStream; +import org.apache.lucene.analysis.core.Lucene43StopFilter; import org.apache.lucene.analysis.core.StopFilter; import org.apache.lucene.analysis.core.WhitespaceTokenizer; import org.apache.lucene.search.suggest.analyzing.SuggestStopFilter; @@ -45,7 +47,7 @@ public class StopTokenFilterTests extends ElasticsearchTokenStreamTestCase { Builder builder = ImmutableSettings.settingsBuilder().put("index.analysis.filter.my_stop.type", "stop") .put("index.analysis.filter.my_stop.enable_position_increments", false); if (random().nextBoolean()) { - builder.put("index.analysis.filter.my_stop.version", "4.4"); + builder.put("index.analysis.filter.my_stop.version", "5.0"); } Settings settings = builder.build(); AnalysisService analysisService = AnalysisTestsHelper.createAnalysisServiceFromSettings(settings); @@ -55,36 +57,42 @@ public class StopTokenFilterTests extends ElasticsearchTokenStreamTestCase { @Test public void testCorrectPositionIncrementSetting() throws IOException { Builder builder = ImmutableSettings.settingsBuilder().put("index.analysis.filter.my_stop.type", "stop"); - if (random().nextBoolean()) { - builder.put("index.analysis.filter.my_stop.enable_position_increments", true); - } int thingToDo = random().nextInt(3); if (thingToDo == 0) { builder.put("index.analysis.filter.my_stop.version", Version.LATEST); } else if (thingToDo == 1) { - builder.put("index.analysis.filter.my_stop.version", Version.LUCENE_30); + builder.put("index.analysis.filter.my_stop.version", Version.LUCENE_4_0); + if (random().nextBoolean()) { + builder.put("index.analysis.filter.my_stop.enable_position_increments", true); + } } else { // don't specify } AnalysisService analysisService = AnalysisTestsHelper.createAnalysisServiceFromSettings(builder.build()); TokenFilterFactory tokenFilter = analysisService.tokenFilter("my_stop"); assertThat(tokenFilter, instanceOf(StopTokenFilterFactory.class)); - TokenStream create = tokenFilter.create(new WhitespaceTokenizer(TEST_VERSION_CURRENT, new StringReader("foo bar"))); - assertThat(create, instanceOf(StopFilter.class)); - assertThat(((StopFilter)create).getEnablePositionIncrements(), equalTo(true)); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader("foo bar")); + TokenStream create = tokenFilter.create(tokenizer); + if (thingToDo == 1) { + assertThat(create, instanceOf(Lucene43StopFilter.class)); + } else { + assertThat(create, instanceOf(StopFilter.class)); + } } @Test - public void testDeprecatedPositionIncrementSettingWithVerions() throws IOException { + public void testDeprecatedPositionIncrementSettingWithVersions() throws IOException { Settings settings = ImmutableSettings.settingsBuilder().put("index.analysis.filter.my_stop.type", "stop") .put("index.analysis.filter.my_stop.enable_position_increments", false).put("index.analysis.filter.my_stop.version", "4.3") .build(); AnalysisService analysisService = AnalysisTestsHelper.createAnalysisServiceFromSettings(settings); TokenFilterFactory tokenFilter = analysisService.tokenFilter("my_stop"); assertThat(tokenFilter, instanceOf(StopTokenFilterFactory.class)); - TokenStream create = tokenFilter.create(new WhitespaceTokenizer(TEST_VERSION_CURRENT, 
new StringReader("foo bar"))); - assertThat(create, instanceOf(StopFilter.class)); - assertThat(((StopFilter)create).getEnablePositionIncrements(), equalTo(false)); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader("foo bar")); + TokenStream create = tokenFilter.create(tokenizer); + assertThat(create, instanceOf(Lucene43StopFilter.class)); } @Test @@ -96,7 +104,9 @@ public class StopTokenFilterTests extends ElasticsearchTokenStreamTestCase { AnalysisService analysisService = AnalysisTestsHelper.createAnalysisServiceFromSettings(settings); TokenFilterFactory tokenFilter = analysisService.tokenFilter("my_stop"); assertThat(tokenFilter, instanceOf(StopTokenFilterFactory.class)); - TokenStream create = tokenFilter.create(new WhitespaceTokenizer(TEST_VERSION_CURRENT, new StringReader("foo an"))); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader("foo an")); + TokenStream create = tokenFilter.create(tokenizer); assertThat(create, instanceOf(SuggestStopFilter.class)); } } diff --git a/src/test/java/org/elasticsearch/index/analysis/WordDelimiterTokenFilterFactoryTests.java b/src/test/java/org/elasticsearch/index/analysis/WordDelimiterTokenFilterFactoryTests.java index f014302af90..24aba316d6e 100644 --- a/src/test/java/org/elasticsearch/index/analysis/WordDelimiterTokenFilterFactoryTests.java +++ b/src/test/java/org/elasticsearch/index/analysis/WordDelimiterTokenFilterFactoryTests.java @@ -39,8 +39,9 @@ public class WordDelimiterTokenFilterFactoryTests extends ElasticsearchTokenStre TokenFilterFactory tokenFilter = analysisService.tokenFilter("my_word_delimiter"); String source = "PowerShot 500-42 wi-fi wi-fi-4000 j2se O'Neil's"; String[] expected = new String[]{"Power", "Shot", "500", "42", "wi", "fi", "wi", "fi", "4000", "j", "2", "se", "O", "Neil"}; - Tokenizer tokenizer = new WhitespaceTokenizer(TEST_VERSION_CURRENT, new StringReader(source)); - assertTokenStreamContents(tokenFilter.create(tokenizer), expected); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader(source)); + assertTokenStreamContents(tokenFilter.create(tokenizer), expected); } @Test @@ -53,8 +54,9 @@ public class WordDelimiterTokenFilterFactoryTests extends ElasticsearchTokenStre TokenFilterFactory tokenFilter = analysisService.tokenFilter("my_word_delimiter"); String source = "PowerShot 500-42 wi-fi wi-fi-4000 j2se O'Neil's"; String[] expected = new String[]{"PowerShot", "500", "42", "wifi", "wifi", "4000", "j", "2", "se", "ONeil"}; - Tokenizer tokenizer = new WhitespaceTokenizer(TEST_VERSION_CURRENT, new StringReader(source)); - assertTokenStreamContents(tokenFilter.create(tokenizer), expected); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader(source)); + assertTokenStreamContents(tokenFilter.create(tokenizer), expected); } @Test @@ -67,8 +69,9 @@ public class WordDelimiterTokenFilterFactoryTests extends ElasticsearchTokenStre TokenFilterFactory tokenFilter = analysisService.tokenFilter("my_word_delimiter"); String source = "PowerShot 500-42 wi-fi wi-fi-4000 j2se O'Neil's"; String[] expected = new String[]{"Power", "Shot", "50042", "wi", "fi", "wi", "fi", "4000", "j", "2", "se", "O", "Neil"}; - Tokenizer tokenizer = new WhitespaceTokenizer(TEST_VERSION_CURRENT, new StringReader(source)); - assertTokenStreamContents(tokenFilter.create(tokenizer), expected); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader(source)); + 
assertTokenStreamContents(tokenFilter.create(tokenizer), expected); } @Test @@ -82,7 +85,8 @@ public class WordDelimiterTokenFilterFactoryTests extends ElasticsearchTokenStre TokenFilterFactory tokenFilter = analysisService.tokenFilter("my_word_delimiter"); String source = "PowerShot 500-42 wi-fi wi-fi-4000 j2se O'Neil's"; String[] expected = new String[]{"PowerShot", "50042", "wifi", "wifi4000", "j2se", "ONeil"}; - Tokenizer tokenizer = new WhitespaceTokenizer(TEST_VERSION_CURRENT, new StringReader(source)); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader(source)); assertTokenStreamContents(tokenFilter.create(tokenizer), expected); } @@ -95,7 +99,8 @@ public class WordDelimiterTokenFilterFactoryTests extends ElasticsearchTokenStre TokenFilterFactory tokenFilter = analysisService.tokenFilter("my_word_delimiter"); String source = "PowerShot"; String[] expected = new String[]{"PowerShot"}; - Tokenizer tokenizer = new WhitespaceTokenizer(TEST_VERSION_CURRENT, new StringReader(source)); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader(source)); assertTokenStreamContents(tokenFilter.create(tokenizer), expected); } @@ -108,7 +113,8 @@ public class WordDelimiterTokenFilterFactoryTests extends ElasticsearchTokenStre TokenFilterFactory tokenFilter = analysisService.tokenFilter("my_word_delimiter"); String source = "PowerShot 500-42 wi-fi wi-fi-4000 j2se O'Neil's"; String[] expected = new String[]{"PowerShot", "Power", "Shot", "500-42", "500", "42", "wi-fi", "wi", "fi", "wi-fi-4000", "wi", "fi", "4000", "j2se", "j", "2", "se", "O'Neil's", "O", "Neil"}; - Tokenizer tokenizer = new WhitespaceTokenizer(TEST_VERSION_CURRENT, new StringReader(source)); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader(source)); assertTokenStreamContents(tokenFilter.create(tokenizer), expected); } @@ -121,7 +127,8 @@ public class WordDelimiterTokenFilterFactoryTests extends ElasticsearchTokenStre TokenFilterFactory tokenFilter = analysisService.tokenFilter("my_word_delimiter"); String source = "PowerShot 500-42 wi-fi wi-fi-4000 j2se O'Neil's"; String[] expected = new String[]{"Power", "Shot", "500", "42", "wi", "fi", "wi", "fi", "4000", "j", "2", "se", "O", "Neil", "s"}; - Tokenizer tokenizer = new WhitespaceTokenizer(TEST_VERSION_CURRENT, new StringReader(source)); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader(source)); assertTokenStreamContents(tokenFilter.create(tokenizer), expected); } @@ -136,8 +143,9 @@ public class WordDelimiterTokenFilterFactoryTests extends ElasticsearchTokenStre TokenFilterFactory tokenFilter = analysisService.tokenFilter("my_word_delimiter"); String source = "PowerShot"; String[] expected = new String[]{"Power", "PowerShot", "Shot" }; - Tokenizer tokenizer = new WhitespaceTokenizer(TEST_VERSION_CURRENT, new StringReader(source)); - assertTokenStreamContents(tokenFilter.create(tokenizer), expected); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader(source)); + assertTokenStreamContents(tokenFilter.create(tokenizer), expected); } /** Back compat: @@ -153,8 +161,9 @@ public class WordDelimiterTokenFilterFactoryTests extends ElasticsearchTokenStre TokenFilterFactory tokenFilter = analysisService.tokenFilter("my_word_delimiter"); String source = "PowerShot"; String[] expected = new String[]{"Power", "Shot", "PowerShot" }; - Tokenizer tokenizer = new WhitespaceTokenizer(TEST_VERSION_CURRENT, new 
StringReader(source)); - assertTokenStreamContents(tokenFilter.create(tokenizer), expected); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader(source)); + assertTokenStreamContents(tokenFilter.create(tokenizer), expected); } } diff --git a/src/test/java/org/elasticsearch/index/analysis/commongrams/CommonGramsTokenFilterFactoryTests.java b/src/test/java/org/elasticsearch/index/analysis/commongrams/CommonGramsTokenFilterFactoryTests.java index dcbc026a5f6..9947a9bd36d 100644 --- a/src/test/java/org/elasticsearch/index/analysis/commongrams/CommonGramsTokenFilterFactoryTests.java +++ b/src/test/java/org/elasticsearch/index/analysis/commongrams/CommonGramsTokenFilterFactoryTests.java @@ -61,7 +61,8 @@ public class CommonGramsTokenFilterFactoryTests extends ElasticsearchTokenStream TokenFilterFactory tokenFilter = analysisService.tokenFilter("common_grams_default"); String source = "the quick brown is a fox Or noT"; String[] expected = new String[] { "the", "quick", "brown", "is", "a", "fox", "Or", "noT" }; - Tokenizer tokenizer = new WhitespaceTokenizer(TEST_VERSION_CURRENT, new StringReader(source)); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader(source)); assertTokenStreamContents(tokenFilter.create(tokenizer), expected); } } @@ -76,7 +77,8 @@ public class CommonGramsTokenFilterFactoryTests extends ElasticsearchTokenStream TokenFilterFactory tokenFilter = analysisService.tokenFilter("common_grams_default"); String source = "the quick brown is a fox Or noT"; String[] expected = new String[] { "the", "quick", "brown", "is", "a", "fox", "Or", "noT" }; - Tokenizer tokenizer = new WhitespaceTokenizer(TEST_VERSION_CURRENT, new StringReader(source)); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader(source)); assertTokenStreamContents(tokenFilter.create(tokenizer), expected); } } @@ -93,7 +95,8 @@ public class CommonGramsTokenFilterFactoryTests extends ElasticsearchTokenStream TokenFilterFactory tokenFilter = analysisService.tokenFilter("common_grams_1"); String source = "the quick brown is a fox or noT"; String[] expected = new String[] { "the", "the_quick", "quick", "brown", "brown_is", "is", "is_a", "a", "a_fox", "fox", "fox_or", "or", "or_noT", "noT" }; - Tokenizer tokenizer = new WhitespaceTokenizer(TEST_VERSION_CURRENT, new StringReader(source)); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader(source)); assertTokenStreamContents(tokenFilter.create(tokenizer), expected); } { @@ -105,7 +108,8 @@ public class CommonGramsTokenFilterFactoryTests extends ElasticsearchTokenStream TokenFilterFactory tokenFilter = analysisService.tokenFilter("common_grams_2"); String source = "the quick brown is a fox or why noT"; String[] expected = new String[] { "the", "the_quick", "quick", "brown", "brown_is", "is", "is_a", "a", "a_fox", "fox", "or", "why", "why_noT", "noT" }; - Tokenizer tokenizer = new WhitespaceTokenizer(TEST_VERSION_CURRENT, new StringReader(source)); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader(source)); assertTokenStreamContents(tokenFilter.create(tokenizer), expected); } { @@ -116,7 +120,8 @@ public class CommonGramsTokenFilterFactoryTests extends ElasticsearchTokenStream TokenFilterFactory tokenFilter = analysisService.tokenFilter("common_grams_3"); String source = "the quick brown is a fox Or noT"; String[] expected = new String[] { "the", "the_quick", "quick", "brown", "brown_is", 
"is", "is_a", "a", "a_fox", "fox", "Or", "noT" }; - Tokenizer tokenizer = new WhitespaceTokenizer(TEST_VERSION_CURRENT, new StringReader(source)); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader(source)); assertTokenStreamContents(tokenFilter.create(tokenizer), expected); } } @@ -152,7 +157,8 @@ public class CommonGramsTokenFilterFactoryTests extends ElasticsearchTokenStream TokenFilterFactory tokenFilter = analysisService.tokenFilter("common_grams_1"); String source = "the quick brown is a fox or noT"; String[] expected = new String[] { "the_quick", "quick", "brown_is", "is_a", "a_fox", "fox_or", "or_noT" }; - Tokenizer tokenizer = new WhitespaceTokenizer(TEST_VERSION_CURRENT, new StringReader(source)); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader(source)); assertTokenStreamContents(tokenFilter.create(tokenizer), expected); } { @@ -165,7 +171,8 @@ public class CommonGramsTokenFilterFactoryTests extends ElasticsearchTokenStream TokenFilterFactory tokenFilter = analysisService.tokenFilter("common_grams_2"); String source = "the quick brown is a fox or why noT"; String[] expected = new String[] { "the_quick", "quick", "brown_is", "is_a", "a_fox", "fox", "or", "why_noT" }; - Tokenizer tokenizer = new WhitespaceTokenizer(TEST_VERSION_CURRENT, new StringReader(source)); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader(source)); assertTokenStreamContents(tokenFilter.create(tokenizer), expected); } { @@ -177,7 +184,8 @@ public class CommonGramsTokenFilterFactoryTests extends ElasticsearchTokenStream TokenFilterFactory tokenFilter = analysisService.tokenFilter("common_grams_3"); String source = "the quick brown is a fox or why noT"; String[] expected = new String[] { "the_quick", "quick", "brown_is", "is_a", "a_fox", "fox", "or", "why_noT" }; - Tokenizer tokenizer = new WhitespaceTokenizer(TEST_VERSION_CURRENT, new StringReader(source)); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader(source)); assertTokenStreamContents(tokenFilter.create(tokenizer), expected); } { @@ -189,7 +197,8 @@ public class CommonGramsTokenFilterFactoryTests extends ElasticsearchTokenStream TokenFilterFactory tokenFilter = analysisService.tokenFilter("common_grams_4"); String source = "the quick brown is a fox Or noT"; String[] expected = new String[] { "the_quick", "quick", "brown_is", "is_a", "a_fox", "fox", "Or", "noT" }; - Tokenizer tokenizer = new WhitespaceTokenizer(TEST_VERSION_CURRENT, new StringReader(source)); + Tokenizer tokenizer = new WhitespaceTokenizer(); + tokenizer.setReader(new StringReader(source)); assertTokenStreamContents(tokenFilter.create(tokenizer), expected); } } diff --git a/src/test/java/org/elasticsearch/index/analysis/filter1/MyFilterTokenFilterFactory.java b/src/test/java/org/elasticsearch/index/analysis/filter1/MyFilterTokenFilterFactory.java index a30d520f9a2..01b3f71e08f 100644 --- a/src/test/java/org/elasticsearch/index/analysis/filter1/MyFilterTokenFilterFactory.java +++ b/src/test/java/org/elasticsearch/index/analysis/filter1/MyFilterTokenFilterFactory.java @@ -37,6 +37,6 @@ public class MyFilterTokenFilterFactory extends AbstractTokenFilterFactory { @Override public TokenStream create(TokenStream tokenStream) { - return new StopFilter(version, tokenStream, StopAnalyzer.ENGLISH_STOP_WORDS_SET); + return new StopFilter(tokenStream, StopAnalyzer.ENGLISH_STOP_WORDS_SET); } } \ No newline at end of file diff --git 
a/src/test/java/org/elasticsearch/index/cache/fixedbitset/FixedBitSetFilterCacheTest.java b/src/test/java/org/elasticsearch/index/cache/bitset/BitSetFilterCacheTest.java similarity index 74% rename from src/test/java/org/elasticsearch/index/cache/fixedbitset/FixedBitSetFilterCacheTest.java rename to src/test/java/org/elasticsearch/index/cache/bitset/BitSetFilterCacheTest.java index cdbf559861e..b3d030560e8 100644 --- a/src/test/java/org/elasticsearch/index/cache/fixedbitset/FixedBitSetFilterCacheTest.java +++ b/src/test/java/org/elasticsearch/index/cache/bitset/BitSetFilterCacheTest.java @@ -17,19 +17,24 @@ * under the License. */ -package org.elasticsearch.index.cache.fixedbitset; +package org.elasticsearch.index.cache.bitset; import org.apache.lucene.analysis.standard.StandardAnalyzer; import org.apache.lucene.document.Document; import org.apache.lucene.document.Field; import org.apache.lucene.document.StringField; -import org.apache.lucene.index.*; +import org.apache.lucene.index.DirectoryReader; +import org.apache.lucene.index.IndexReader; +import org.apache.lucene.index.IndexWriter; +import org.apache.lucene.index.IndexWriterConfig; +import org.apache.lucene.index.LogByteSizeMergePolicy; +import org.apache.lucene.index.Term; import org.apache.lucene.queries.TermFilter; +import org.apache.lucene.search.ConstantScoreQuery; import org.apache.lucene.search.IndexSearcher; import org.apache.lucene.search.TopDocs; +import org.apache.lucene.search.join.BitDocIdSetFilter; import org.apache.lucene.store.RAMDirectory; -import org.elasticsearch.common.lucene.Lucene; -import org.elasticsearch.common.lucene.search.XConstantScoreQuery; import org.elasticsearch.common.settings.ImmutableSettings; import org.elasticsearch.index.Index; import org.elasticsearch.test.ElasticsearchTestCase; @@ -39,13 +44,13 @@ import static org.hamcrest.Matchers.equalTo; /** */ -public class FixedBitSetFilterCacheTest extends ElasticsearchTestCase { +public class BitSetFilterCacheTest extends ElasticsearchTestCase { @Test public void testInvalidateEntries() throws Exception { IndexWriter writer = new IndexWriter( new RAMDirectory(), - new IndexWriterConfig(Lucene.VERSION, new StandardAnalyzer(Lucene.VERSION)).setMergePolicy(new LogByteSizeMergePolicy()) + new IndexWriterConfig(new StandardAnalyzer()).setMergePolicy(new LogByteSizeMergePolicy()) ); Document document = new Document(); document.add(new StringField("field", "value", Field.Store.NO)); @@ -65,13 +70,13 @@ public class FixedBitSetFilterCacheTest extends ElasticsearchTestCase { IndexReader reader = DirectoryReader.open(writer, false); IndexSearcher searcher = new IndexSearcher(reader); - FixedBitSetFilterCache cache = new FixedBitSetFilterCache(new Index("test"), ImmutableSettings.EMPTY); - FixedBitSetFilter filter = cache.getFixedBitSetFilter(new TermFilter(new Term("field", "value"))); - TopDocs docs = searcher.search(new XConstantScoreQuery(filter), 1); + BitsetFilterCache cache = new BitsetFilterCache(new Index("test"), ImmutableSettings.EMPTY); + BitDocIdSetFilter filter = cache.getBitDocIdSetFilter(new TermFilter(new Term("field", "value"))); + TopDocs docs = searcher.search(new ConstantScoreQuery(filter), 1); assertThat(docs.totalHits, equalTo(3)); // now cached - docs = searcher.search(new XConstantScoreQuery(filter), 1); + docs = searcher.search(new ConstantScoreQuery(filter), 1); assertThat(docs.totalHits, equalTo(3)); // There are 3 segments assertThat(cache.getLoadedFilters().size(), equalTo(3l)); @@ -81,11 +86,11 @@ public class 
FixedBitSetFilterCacheTest extends ElasticsearchTestCase { reader = DirectoryReader.open(writer, false); searcher = new IndexSearcher(reader); - docs = searcher.search(new XConstantScoreQuery(filter), 1); + docs = searcher.search(new ConstantScoreQuery(filter), 1); assertThat(docs.totalHits, equalTo(3)); // now cached - docs = searcher.search(new XConstantScoreQuery(filter), 1); + docs = searcher.search(new ConstantScoreQuery(filter), 1); assertThat(docs.totalHits, equalTo(3)); // Only one segment now, so the size must be 1 assertThat(cache.getLoadedFilters().size(), equalTo(1l)); diff --git a/src/test/java/org/elasticsearch/index/codec/CodecTests.java b/src/test/java/org/elasticsearch/index/codec/CodecTests.java index 188ce05b833..f05832e90b9 100644 --- a/src/test/java/org/elasticsearch/index/codec/CodecTests.java +++ b/src/test/java/org/elasticsearch/index/codec/CodecTests.java @@ -19,24 +19,26 @@ package org.elasticsearch.index.codec; -import java.util.Arrays; - import org.apache.lucene.codecs.Codec; import org.apache.lucene.codecs.bloom.BloomFilteringPostingsFormat; import org.apache.lucene.codecs.lucene40.Lucene40Codec; import org.apache.lucene.codecs.lucene41.Lucene41Codec; import org.apache.lucene.codecs.lucene41.Lucene41PostingsFormat; +import org.apache.lucene.codecs.lucene410.Lucene410Codec; +import org.apache.lucene.codecs.lucene410.Lucene410DocValuesFormat; import org.apache.lucene.codecs.lucene42.Lucene42Codec; import org.apache.lucene.codecs.lucene45.Lucene45Codec; import org.apache.lucene.codecs.lucene46.Lucene46Codec; import org.apache.lucene.codecs.lucene49.Lucene49Codec; -import org.apache.lucene.codecs.lucene410.Lucene410Codec; -import org.apache.lucene.codecs.lucene410.Lucene410DocValuesFormat; +import org.apache.lucene.codecs.lucene50.Lucene50Codec; +import org.apache.lucene.codecs.lucene50.Lucene50DocValuesFormat; import org.apache.lucene.codecs.perfield.PerFieldPostingsFormat; import org.elasticsearch.common.settings.ImmutableSettings; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentFactory; -import org.elasticsearch.index.codec.docvaluesformat.*; +import org.elasticsearch.index.codec.docvaluesformat.DefaultDocValuesFormatProvider; +import org.elasticsearch.index.codec.docvaluesformat.DocValuesFormatService; +import org.elasticsearch.index.codec.docvaluesformat.PreBuiltDocValuesFormatProvider; import org.elasticsearch.index.codec.postingsformat.*; import org.elasticsearch.index.mapper.DocumentMapper; import org.elasticsearch.index.mapper.internal.UidFieldMapper; @@ -46,6 +48,8 @@ import org.elasticsearch.test.ElasticsearchSingleNodeLuceneTestCase; import org.junit.Before; import org.junit.Test; +import java.util.Arrays; + import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.instanceOf; @@ -62,7 +66,7 @@ public class CodecTests extends ElasticsearchSingleNodeLuceneTestCase { public void testResolveDefaultCodecs() throws Exception { CodecService codecService = createCodecService(); assertThat(codecService.codec("default"), instanceOf(PerFieldMappingPostingFormatCodec.class)); - assertThat(codecService.codec("default"), instanceOf(Lucene410Codec.class)); + assertThat(codecService.codec("default"), instanceOf(Lucene50Codec.class)); assertThat(codecService.codec("Lucene410"), instanceOf(Lucene410Codec.class)); assertThat(codecService.codec("Lucene49"), instanceOf(Lucene49Codec.class)); assertThat(codecService.codec("Lucene46"), instanceOf(Lucene46Codec.class)); @@ -82,7 +86,7 @@ public class 
CodecTests extends ElasticsearchSingleNodeLuceneTestCase { assertThat(((Elasticsearch090PostingsFormat)postingsFormatService.get("default").get()).getDefaultWrapped(), instanceOf(((PerFieldPostingsFormat) Codec.getDefault().postingsFormat()).getPostingsFormatForField("").getClass())); assertThat(postingsFormatService.get("Lucene41"), instanceOf(PreBuiltPostingsFormatProvider.class)); // Should fail when upgrading Lucene with codec changes - assertThat(postingsFormatService.get("Lucene41").get(), instanceOf(((PerFieldPostingsFormat) Codec.getDefault().postingsFormat()).getPostingsFormatForField(null).getClass())); + assertThat(postingsFormatService.get("Lucene50").get(), instanceOf(((PerFieldPostingsFormat) Codec.getDefault().postingsFormat()).getPostingsFormatForField(null).getClass())); assertThat(postingsFormatService.get("bloom_default"), instanceOf(PreBuiltPostingsFormatProvider.class)); if (PostingFormats.luceneBloomFilter) { @@ -104,7 +108,7 @@ public class CodecTests extends ElasticsearchSingleNodeLuceneTestCase { for (String dvf : Arrays.asList("default")) { assertThat(docValuesFormatService.get(dvf), instanceOf(PreBuiltDocValuesFormatProvider.class)); } - assertThat(docValuesFormatService.get("default").get(), instanceOf(Lucene410DocValuesFormat.class)); + assertThat(docValuesFormatService.get("default").get(), instanceOf(Lucene50DocValuesFormat.class)); } @Test @@ -159,7 +163,7 @@ public class CodecTests extends ElasticsearchSingleNodeLuceneTestCase { CodecService codecService = createCodecService(indexSettings); DocumentMapper documentMapper = codecService.mapperService().documentMapperParser().parse(mapping); assertThat(documentMapper.mappers().name("field1").mapper().docValuesFormatProvider(), instanceOf(PreBuiltDocValuesFormatProvider.class)); - assertThat(documentMapper.mappers().name("field1").mapper().docValuesFormatProvider().get(), instanceOf(Lucene410DocValuesFormat.class)); + assertThat(documentMapper.mappers().name("field1").mapper().docValuesFormatProvider().get(), instanceOf(Lucene50DocValuesFormat.class)); assertThat(documentMapper.mappers().name("field2").mapper().docValuesFormatProvider(), instanceOf(DefaultDocValuesFormatProvider.class)); } diff --git a/src/test/java/org/elasticsearch/index/codec/postingformat/DefaultPostingsFormatTests.java b/src/test/java/org/elasticsearch/index/codec/postingformat/DefaultPostingsFormatTests.java index 96cfc89c862..91248eb5478 100644 --- a/src/test/java/org/elasticsearch/index/codec/postingformat/DefaultPostingsFormatTests.java +++ b/src/test/java/org/elasticsearch/index/codec/postingformat/DefaultPostingsFormatTests.java @@ -22,7 +22,7 @@ package org.elasticsearch.index.codec.postingformat; import org.apache.lucene.analysis.core.WhitespaceAnalyzer; import org.apache.lucene.codecs.Codec; import org.apache.lucene.codecs.PostingsFormat; -import org.apache.lucene.codecs.lucene410.Lucene410Codec; +import org.apache.lucene.codecs.lucene50.Lucene50Codec; import org.apache.lucene.document.Field.Store; import org.apache.lucene.document.TextField; import org.apache.lucene.index.*; @@ -48,7 +48,7 @@ import static org.hamcrest.Matchers.*; */ public class DefaultPostingsFormatTests extends ElasticsearchTestCase { - private final class TestCodec extends Lucene410Codec { + private final class TestCodec extends Lucene50Codec { @Override public PostingsFormat getPostingsFormatForField(String field) { @@ -61,15 +61,15 @@ public class DefaultPostingsFormatTests extends ElasticsearchTestCase { Codec codec = new TestCodec(); Directory d = new 
RAMDirectory(); - IndexWriterConfig config = new IndexWriterConfig(Lucene.VERSION, new WhitespaceAnalyzer(Lucene.VERSION)); + IndexWriterConfig config = new IndexWriterConfig(new WhitespaceAnalyzer()); config.setCodec(codec); IndexWriter writer = new IndexWriter(d, config); writer.addDocument(Arrays.asList(new TextField("foo", "bar", Store.YES), new TextField(UidFieldMapper.NAME, "1234", Store.YES))); writer.commit(); DirectoryReader reader = DirectoryReader.open(writer, false); - List<AtomicReaderContext> leaves = reader.leaves(); + List<LeafReaderContext> leaves = reader.leaves(); assertThat(leaves.size(), equalTo(1)); - AtomicReader ar = leaves.get(0).reader(); + LeafReader ar = leaves.get(0).reader(); Terms terms = ar.terms("foo"); Terms uidTerms = ar.terms(UidFieldMapper.NAME); @@ -87,7 +87,7 @@ public class DefaultPostingsFormatTests extends ElasticsearchTestCase { Codec codec = new TestCodec(); Directory d = new RAMDirectory(); - IndexWriterConfig config = new IndexWriterConfig(Lucene.VERSION, new WhitespaceAnalyzer(Lucene.VERSION)); + IndexWriterConfig config = new IndexWriterConfig(new WhitespaceAnalyzer()); config.setCodec(codec); IndexWriter writer = new IndexWriter(d, config); for (int i = 0; i < 100; i++) { @@ -97,9 +97,9 @@ public class DefaultPostingsFormatTests extends ElasticsearchTestCase { writer.commit(); DirectoryReader reader = DirectoryReader.open(writer, false); - List<AtomicReaderContext> leaves = reader.leaves(); + List<LeafReaderContext> leaves = reader.leaves(); assertThat(leaves.size(), equalTo(1)); - AtomicReader ar = leaves.get(0).reader(); + LeafReader ar = leaves.get(0).reader(); Terms terms = ar.terms("foo"); Terms some_other_field = ar.terms("some_other_field"); diff --git a/src/test/java/org/elasticsearch/index/deletionpolicy/SnapshotDeletionPolicyTests.java b/src/test/java/org/elasticsearch/index/deletionpolicy/SnapshotDeletionPolicyTests.java index 297d6aedec7..2499d96caa0 100644 --- a/src/test/java/org/elasticsearch/index/deletionpolicy/SnapshotDeletionPolicyTests.java +++ b/src/test/java/org/elasticsearch/index/deletionpolicy/SnapshotDeletionPolicyTests.java @@ -53,7 +53,7 @@ public class SnapshotDeletionPolicyTests extends ElasticsearchTestCase { super.setUp(); dir = new RAMDirectory(); deletionPolicy = new SnapshotDeletionPolicy(new KeepOnlyLastDeletionPolicy(shardId, EMPTY_SETTINGS)); - indexWriter = new IndexWriter(dir, new IndexWriterConfig(TEST_VERSION_CURRENT, Lucene.STANDARD_ANALYZER) + indexWriter = new IndexWriter(dir, new IndexWriterConfig(Lucene.STANDARD_ANALYZER) .setIndexDeletionPolicy(deletionPolicy) .setOpenMode(IndexWriterConfig.OpenMode.CREATE)); } diff --git a/src/test/java/org/elasticsearch/index/engine/internal/InternalEngineSettingsTest.java b/src/test/java/org/elasticsearch/index/engine/internal/InternalEngineSettingsTest.java index f055416441d..0e3c1cf5554 100644 --- a/src/test/java/org/elasticsearch/index/engine/internal/InternalEngineSettingsTest.java +++ b/src/test/java/org/elasticsearch/index/engine/internal/InternalEngineSettingsTest.java @@ -34,13 +34,6 @@ public class InternalEngineSettingsTest extends ElasticsearchSingleNodeTest { assertThat(engine(service).currentIndexWriterConfig().getUseCompoundFile(), is(false)); client().admin().indices().prepareUpdateSettings("foo").setSettings(ImmutableSettings.builder().put(InternalEngine.INDEX_COMPOUND_ON_FLUSH, true).build()).get(); assertThat(engine(service).currentIndexWriterConfig().getUseCompoundFile(), is(true)); - - // INDEX_CHECKSUM_ON_MERGE - assertThat(engine(service).currentIndexWriterConfig().getCheckIntegrityAtMerge(), is(true)); -
client().admin().indices().prepareUpdateSettings("foo").setSettings(ImmutableSettings.builder().put(InternalEngine.INDEX_CHECKSUM_ON_MERGE, false).build()).get(); - assertThat(engine(service).currentIndexWriterConfig().getCheckIntegrityAtMerge(), is(false)); - client().admin().indices().prepareUpdateSettings("foo").setSettings(ImmutableSettings.builder().put(InternalEngine.INDEX_CHECKSUM_ON_MERGE, true).build()).get(); - assertThat(engine(service).currentIndexWriterConfig().getCheckIntegrityAtMerge(), is(true)); } diff --git a/src/test/java/org/elasticsearch/index/engine/internal/InternalEngineTests.java b/src/test/java/org/elasticsearch/index/engine/internal/InternalEngineTests.java index 552cbe998ee..30e05f98aa8 100644 --- a/src/test/java/org/elasticsearch/index/engine/internal/InternalEngineTests.java +++ b/src/test/java/org/elasticsearch/index/engine/internal/InternalEngineTests.java @@ -119,7 +119,6 @@ public class InternalEngineTests extends ElasticsearchTestCase { super.setUp(); defaultSettings = ImmutableSettings.builder() .put(InternalEngine.INDEX_COMPOUND_ON_FLUSH, getRandom().nextBoolean()) - .put(InternalEngine.INDEX_CHECKSUM_ON_MERGE, getRandom().nextBoolean()) .put(InternalEngine.INDEX_GC_DELETES, "1h") // make sure this doesn't kick in on us .put(InternalEngine.INDEX_FAIL_ON_CORRUPTION, randomBoolean()) .build(); // TODO randomize more settings @@ -657,21 +656,21 @@ public class InternalEngineTests extends ElasticsearchTestCase { @Override public void phase1(SnapshotIndexCommit snapshot) throws EngineException { if (failInPhase == 1) { - throw new RuntimeException("bar", new CorruptIndexException("Foo")); + throw new RuntimeException("bar", new CorruptIndexException("Foo", "fake file description")); } } @Override public void phase2(Translog.Snapshot snapshot) throws EngineException { if (failInPhase == 2) { - throw new RuntimeException("bar", new CorruptIndexException("Foo")); + throw new RuntimeException("bar", new CorruptIndexException("Foo", "fake file description")); } } @Override public void phase3(Translog.Snapshot snapshot) throws EngineException { if (failInPhase == 3) { - throw new RuntimeException("bar", new CorruptIndexException("Foo")); + throw new RuntimeException("bar", new CorruptIndexException("Foo", "fake file description")); } } }); diff --git a/src/test/java/org/elasticsearch/index/fielddata/AbstractFieldDataImplTests.java b/src/test/java/org/elasticsearch/index/fielddata/AbstractFieldDataImplTests.java index 82ffdaf7e9a..d517f4cdd7a 100644 --- a/src/test/java/org/elasticsearch/index/fielddata/AbstractFieldDataImplTests.java +++ b/src/test/java/org/elasticsearch/index/fielddata/AbstractFieldDataImplTests.java @@ -19,7 +19,7 @@ package org.elasticsearch.index.fielddata; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.DirectoryReader; import org.apache.lucene.search.*; import org.apache.lucene.util.BytesRef; @@ -63,7 +63,7 @@ public abstract class AbstractFieldDataImplTests extends AbstractFieldDataTests public void testDeletedDocs() throws Exception { add2SingleValuedDocumentsAndDeleteOneOfThem(); IndexFieldData indexFieldData = getForField("value"); - AtomicReaderContext readerContext = refreshReader(); + LeafReaderContext readerContext = refreshReader(); AtomicFieldData fieldData = indexFieldData.load(readerContext); SortedBinaryDocValues values = fieldData.getBytesValues(); for (int i = 0; i < readerContext.reader().maxDoc(); ++i) { @@ -76,7 +76,7 @@ public abstract 
class AbstractFieldDataImplTests extends AbstractFieldDataTests public void testSingleValueAllSet() throws Exception { fillSingleValueAllSet(); IndexFieldData indexFieldData = getForField("value"); - AtomicReaderContext readerContext = refreshReader(); + LeafReaderContext readerContext = refreshReader(); AtomicFieldData fieldData = indexFieldData.load(readerContext); assertThat(fieldData.ramBytesUsed(), greaterThan(0l)); diff --git a/src/test/java/org/elasticsearch/index/fielddata/AbstractFieldDataTests.java b/src/test/java/org/elasticsearch/index/fielddata/AbstractFieldDataTests.java index 3e23205a60a..ece2826a26e 100644 --- a/src/test/java/org/elasticsearch/index/fielddata/AbstractFieldDataTests.java +++ b/src/test/java/org/elasticsearch/index/fielddata/AbstractFieldDataTests.java @@ -19,6 +19,8 @@ package org.elasticsearch.index.fielddata; +import org.elasticsearch.index.cache.bitset.BitsetFilterCache; + import org.apache.lucene.analysis.standard.StandardAnalyzer; import org.apache.lucene.index.*; import org.apache.lucene.search.Filter; @@ -32,7 +34,6 @@ import org.elasticsearch.index.mapper.FieldMapper; import org.elasticsearch.index.mapper.Mapper.BuilderContext; import org.elasticsearch.index.mapper.MapperBuilders; import org.elasticsearch.index.mapper.MapperService; -import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilterCache; import org.elasticsearch.index.service.IndexService; import org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache; import org.elasticsearch.test.ElasticsearchSingleNodeTest; @@ -50,7 +51,7 @@ public abstract class AbstractFieldDataTests extends ElasticsearchSingleNodeTest protected IndexFieldDataService ifdService; protected MapperService mapperService; protected IndexWriter writer; - protected AtomicReaderContext readerContext; + protected LeafReaderContext readerContext; protected IndexReader topLevelReader; protected IndicesFieldDataCache indicesFieldDataCache; @@ -101,14 +102,14 @@ public abstract class AbstractFieldDataTests extends ElasticsearchSingleNodeTest indicesFieldDataCache = indexService.injector().getInstance(IndicesFieldDataCache.class); ifdService = indexService.fieldData(); // LogByteSizeMP to preserve doc ID order - writer = new IndexWriter(new RAMDirectory(), new IndexWriterConfig(Lucene.VERSION, new StandardAnalyzer(Lucene.VERSION)).setMergePolicy(new LogByteSizeMergePolicy())); + writer = new IndexWriter(new RAMDirectory(), new IndexWriterConfig(new StandardAnalyzer()).setMergePolicy(new LogByteSizeMergePolicy())); } - protected AtomicReaderContext refreshReader() throws Exception { + protected LeafReaderContext refreshReader() throws Exception { if (readerContext != null) { readerContext.reader().close(); } - AtomicReader reader = SlowCompositeReaderWrapper.wrap(topLevelReader = DirectoryReader.open(writer, true)); + LeafReader reader = SlowCompositeReaderWrapper.wrap(topLevelReader = DirectoryReader.open(writer, true)); readerContext = reader.getContext(); return readerContext; } @@ -123,8 +124,8 @@ public abstract class AbstractFieldDataTests extends ElasticsearchSingleNodeTest } protected Nested createNested(Filter parentFilter, Filter childFilter) { - FixedBitSetFilterCache s = indexService.fixedBitSetFilterCache(); - return new Nested(s.getFixedBitSetFilter(parentFilter), s.getFixedBitSetFilter(childFilter)); + BitsetFilterCache s = indexService.bitsetFilterCache(); + return new Nested(s.getBitDocIdSetFilter(parentFilter), s.getBitDocIdSetFilter(childFilter)); } } diff --git 
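The field-data hunks here and below are dominated by two mechanical renames and one constructor change: AtomicReader/AtomicReaderContext became LeafReader/LeafReaderContext, IndexWriterConfig and the analyzers it wraps lost their Version parameters, and (as the InternalEngineTests hunks above show) CorruptIndexException now takes a resource description as a second constructor argument. A minimal sketch of the post-upgrade idiom, assuming lucene-core 5.0 on the classpath; the RAMDirectory and the single indexed document are illustrative only:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field.Store;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.store.RAMDirectory;

public class LeafReaderMigrationSketch {
    public static void main(String[] args) throws Exception {
        // 4.x: new IndexWriterConfig(Lucene.VERSION, new StandardAnalyzer(Lucene.VERSION))
        // 5.0: both constructors are version-free.
        IndexWriter writer = new IndexWriter(new RAMDirectory(),
                new IndexWriterConfig(new StandardAnalyzer()));
        Document doc = new Document();
        doc.add(new StringField("field", "value", Store.NO));
        writer.addDocument(doc);
        writer.commit();

        // 4.x: AtomicReaderContext; 5.0 renames it to LeafReaderContext,
        // so reader.leaves() is now a List<LeafReaderContext>.
        try (DirectoryReader reader = DirectoryReader.open(writer, true)) {
            for (LeafReaderContext leaf : reader.leaves()) {
                System.out.println("leaf maxDoc=" + leaf.reader().maxDoc());
            }
        }
        writer.close();
    }
}
```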
a/src/test/java/org/elasticsearch/index/fielddata/AbstractStringFieldDataTests.java b/src/test/java/org/elasticsearch/index/fielddata/AbstractStringFieldDataTests.java index e56fea2e9ef..e1ef53cd61b 100644 --- a/src/test/java/org/elasticsearch/index/fielddata/AbstractStringFieldDataTests.java +++ b/src/test/java/org/elasticsearch/index/fielddata/AbstractStringFieldDataTests.java @@ -24,15 +24,28 @@ import org.apache.lucene.document.Document; import org.apache.lucene.document.Field; import org.apache.lucene.document.Field.Store; import org.apache.lucene.document.StringField; -import org.apache.lucene.index.*; +import org.apache.lucene.index.DirectoryReader; +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.index.RandomAccessOrds; +import org.apache.lucene.index.Term; +import org.apache.lucene.index.TermsEnum; import org.apache.lucene.queries.TermFilter; -import org.apache.lucene.search.*; -import org.apache.lucene.search.join.FixedBitSetCachingWrapperFilter; +import org.apache.lucene.search.Filter; +import org.apache.lucene.search.FilteredQuery; +import org.apache.lucene.search.IndexSearcher; +import org.apache.lucene.search.MatchAllDocsQuery; +import org.apache.lucene.search.Sort; +import org.apache.lucene.search.SortField; +import org.apache.lucene.search.TopFieldDocs; +import org.apache.lucene.search.join.BitDocIdSetCachingWrapperFilter; import org.apache.lucene.search.join.ScoreMode; import org.apache.lucene.search.join.ToParentBlockJoinQuery; -import org.apache.lucene.util.*; +import org.apache.lucene.util.Accountable; +import org.apache.lucene.util.BytesRef; +import org.apache.lucene.util.FixedBitSet; +import org.apache.lucene.util.TestUtil; +import org.apache.lucene.util.UnicodeUtil; import org.elasticsearch.common.lucene.search.NotFilter; -import org.elasticsearch.common.lucene.search.XFilteredQuery; import org.elasticsearch.common.settings.ImmutableSettings; import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource; import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested; @@ -45,7 +58,12 @@ import java.io.IOException; import java.util.ArrayList; import java.util.List; -import static org.hamcrest.Matchers.*; +import static org.hamcrest.Matchers.equalTo; +import static org.hamcrest.Matchers.instanceOf; +import static org.hamcrest.Matchers.is; +import static org.hamcrest.Matchers.not; +import static org.hamcrest.Matchers.nullValue; +import static org.hamcrest.Matchers.sameInstance; /** */ @@ -332,7 +350,7 @@ public abstract class AbstractStringFieldDataTests extends AbstractFieldDataImpl } final int numParents = scaledRandomIntBetween(10, 10000); List docs = new ArrayList<>(); - final OpenBitSet parents = new OpenBitSet(); + FixedBitSet parents = new FixedBitSet(64); for (int i = 0; i < numParents; ++i) { docs.clear(); final int numChildren = randomInt(4); @@ -352,7 +370,9 @@ public abstract class AbstractStringFieldDataTests extends AbstractFieldDataImpl parent.add(new StringField("text", value, Store.YES)); } docs.add(parent); - parents.set(parents.prevSetBit(parents.length() - 1) + docs.size()); + int bit = parents.prevSetBit(parents.length() - 1) + docs.size(); + parents = FixedBitSet.ensureCapacity(parents, bit); + parents.set(bit); writer.addDocuments(docs); if (randomInt(10) == 0) { writer.commit(); @@ -379,7 +399,7 @@ public abstract class AbstractStringFieldDataTests extends AbstractFieldDataImpl Filter childFilter = new NotFilter(parentFilter); Nested nested = createNested(parentFilter, 
childFilter); BytesRefFieldComparatorSource nestedComparatorSource = new BytesRefFieldComparatorSource(fieldData, missingValue, sortMode, nested); - ToParentBlockJoinQuery query = new ToParentBlockJoinQuery(new XFilteredQuery(new MatchAllDocsQuery(), childFilter), new FixedBitSetCachingWrapperFilter(parentFilter), ScoreMode.None); + ToParentBlockJoinQuery query = new ToParentBlockJoinQuery(new FilteredQuery(new MatchAllDocsQuery(), childFilter), new BitDocIdSetCachingWrapperFilter(parentFilter), ScoreMode.None); Sort sort = new Sort(new SortField("text", nestedComparatorSource)); TopFieldDocs topDocs = searcher.search(query, randomIntBetween(1, numParents), sort); assertTrue(topDocs.scoreDocs.length > 0); @@ -514,7 +534,7 @@ public abstract class AbstractStringFieldDataTests extends AbstractFieldDataImpl @Test public void testTermsEnum() throws Exception { fillExtendedMvSet(); - AtomicReaderContext atomicReaderContext = refreshReader(); + LeafReaderContext atomicReaderContext = refreshReader(); IndexOrdinalsFieldData ifd = getForField("value"); AtomicOrdinalsFieldData afd = ifd.load(atomicReaderContext); diff --git a/src/test/java/org/elasticsearch/index/fielddata/BinaryDVFieldDataTests.java b/src/test/java/org/elasticsearch/index/fielddata/BinaryDVFieldDataTests.java index 6339ac5929f..46013ac9921 100644 --- a/src/test/java/org/elasticsearch/index/fielddata/BinaryDVFieldDataTests.java +++ b/src/test/java/org/elasticsearch/index/fielddata/BinaryDVFieldDataTests.java @@ -20,7 +20,7 @@ package org.elasticsearch.index.fielddata; import com.carrotsearch.hppc.ObjectArrayList; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.util.BytesRef; import org.elasticsearch.common.settings.ImmutableSettings; import org.elasticsearch.common.util.CollectionUtils; @@ -75,7 +75,7 @@ public class BinaryDVFieldDataTests extends AbstractFieldDataTests { d = mapper.parse("test", "4", doc.bytes()); writer.addDocument(d.rootDoc()); - AtomicReaderContext reader = refreshReader(); + LeafReaderContext reader = refreshReader(); IndexFieldData indexFieldData = getForField("field"); AtomicFieldData fieldData = indexFieldData.load(reader); diff --git a/src/test/java/org/elasticsearch/index/fielddata/DuelFieldDataTests.java b/src/test/java/org/elasticsearch/index/fielddata/DuelFieldDataTests.java index f6cc63d037d..ac331641965 100644 --- a/src/test/java/org/elasticsearch/index/fielddata/DuelFieldDataTests.java +++ b/src/test/java/org/elasticsearch/index/fielddata/DuelFieldDataTests.java @@ -29,7 +29,6 @@ import org.apache.lucene.index.*; import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.BytesRefBuilder; import org.apache.lucene.util.English; -import org.apache.lucene.util.LuceneTestCase; import org.elasticsearch.common.geo.GeoDistance; import org.elasticsearch.common.geo.GeoPoint; import org.elasticsearch.common.settings.ImmutableSettings; @@ -57,7 +56,7 @@ public class DuelFieldDataTests extends AbstractFieldDataTests { public void testDuelAllTypesSingleValue() throws Exception { final String mapping = XContentFactory.jsonBuilder().startObject().startObject("type") .startObject("properties") - .startObject("bytes").field("type", "string").field("index", "not_analyzed").startObject("fielddata").field("format", LuceneTestCase.defaultCodecSupportsSortedSet() ? 
"doc_values" : "fst").endObject().endObject() + .startObject("bytes").field("type", "string").field("index", "not_analyzed").startObject("fielddata").field("format", "doc_values").endObject().endObject() .startObject("byte").field("type", "byte").startObject("fielddata").field("format", "doc_values").endObject().endObject() .startObject("short").field("type", "short").startObject("fielddata").field("format", "doc_values").endObject().endObject() .startObject("integer").field("type", "integer").startObject("fielddata").field("format", "doc_values").endObject().endObject() @@ -86,7 +85,7 @@ public class DuelFieldDataTests extends AbstractFieldDataTests { refreshReader(); } } - AtomicReaderContext context = refreshReader(); + LeafReaderContext context = refreshReader(); Map typeMap = new HashMap<>(); typeMap.put(new FieldDataType("string", ImmutableSettings.builder().put("format", "fst")), Type.Bytes); typeMap.put(new FieldDataType("string", ImmutableSettings.builder().put("format", "paged_bytes")), Type.Bytes); @@ -124,8 +123,8 @@ public class DuelFieldDataTests extends AbstractFieldDataTests { DirectoryReader perSegment = DirectoryReader.open(writer, true); CompositeReaderContext composite = perSegment.getContext(); - List leaves = composite.leaves(); - for (AtomicReaderContext atomicReaderContext : leaves) { + List leaves = composite.leaves(); + for (LeafReaderContext atomicReaderContext : leaves) { duelFieldDataBytes(random, atomicReaderContext, leftFieldData, rightFieldData, pre); } } @@ -178,7 +177,7 @@ public class DuelFieldDataTests extends AbstractFieldDataTests { refreshReader(); } } - AtomicReaderContext context = refreshReader(); + LeafReaderContext context = refreshReader(); Map typeMap = new HashMap<>(); typeMap.put(new FieldDataType("byte", ImmutableSettings.builder().put("format", "array")), Type.Integer); typeMap.put(new FieldDataType("short", ImmutableSettings.builder().put("format", "array")), Type.Integer); @@ -208,8 +207,8 @@ public class DuelFieldDataTests extends AbstractFieldDataTests { DirectoryReader perSegment = DirectoryReader.open(writer, true); CompositeReaderContext composite = perSegment.getContext(); - List leaves = composite.leaves(); - for (AtomicReaderContext atomicReaderContext : leaves) { + List leaves = composite.leaves(); + for (LeafReaderContext atomicReaderContext : leaves) { duelFieldDataLong(random, atomicReaderContext, leftFieldData, rightFieldData); } } @@ -264,7 +263,7 @@ public class DuelFieldDataTests extends AbstractFieldDataTests { refreshReader(); } } - AtomicReaderContext context = refreshReader(); + LeafReaderContext context = refreshReader(); Map typeMap = new HashMap<>(); typeMap.put(new FieldDataType("double", ImmutableSettings.builder().put("format", "array")), Type.Double); typeMap.put(new FieldDataType("float", ImmutableSettings.builder().put("format", "array")), Type.Float); @@ -291,8 +290,8 @@ public class DuelFieldDataTests extends AbstractFieldDataTests { DirectoryReader perSegment = DirectoryReader.open(writer, true); CompositeReaderContext composite = perSegment.getContext(); - List leaves = composite.leaves(); - for (AtomicReaderContext atomicReaderContext : leaves) { + List leaves = composite.leaves(); + for (LeafReaderContext atomicReaderContext : leaves) { duelFieldDataDouble(random, atomicReaderContext, leftFieldData, rightFieldData); } } @@ -324,7 +323,7 @@ public class DuelFieldDataTests extends AbstractFieldDataTests { refreshReader(); } } - AtomicReaderContext context = refreshReader(); + LeafReaderContext context = 
refreshReader(); Map typeMap = new HashMap<>(); typeMap.put(new FieldDataType("string", ImmutableSettings.builder().put("format", "fst")), Type.Bytes); typeMap.put(new FieldDataType("string", ImmutableSettings.builder().put("format", "paged_bytes")), Type.Bytes); @@ -352,8 +351,8 @@ public class DuelFieldDataTests extends AbstractFieldDataTests { DirectoryReader perSegment = DirectoryReader.open(writer, true); CompositeReaderContext composite = perSegment.getContext(); - List leaves = composite.leaves(); - for (AtomicReaderContext atomicReaderContext : leaves) { + List leaves = composite.leaves(); + for (LeafReaderContext atomicReaderContext : leaves) { duelFieldDataBytes(random, atomicReaderContext, leftFieldData, rightFieldData, pre); } perSegment.close(); @@ -434,7 +433,7 @@ public class DuelFieldDataTests extends AbstractFieldDataTests { refreshReader(); } } - AtomicReaderContext context = refreshReader(); + LeafReaderContext context = refreshReader(); Map typeMap = new HashMap<>(); final Distance precision = new Distance(1, randomFrom(DistanceUnit.values())); typeMap.put(new FieldDataType("geo_point", ImmutableSettings.builder().put("format", "array")), Type.GeoPoint); @@ -462,8 +461,8 @@ public class DuelFieldDataTests extends AbstractFieldDataTests { DirectoryReader perSegment = DirectoryReader.open(writer, true); CompositeReaderContext composite = perSegment.getContext(); - List leaves = composite.leaves(); - for (AtomicReaderContext atomicReaderContext : leaves) { + List leaves = composite.leaves(); + for (LeafReaderContext atomicReaderContext : leaves) { duelFieldDataGeoPoint(random, atomicReaderContext, leftFieldData, rightFieldData, precision); } perSegment.close(); @@ -483,7 +482,7 @@ public class DuelFieldDataTests extends AbstractFieldDataTests { } - private static void duelFieldDataBytes(Random random, AtomicReaderContext context, IndexFieldData left, IndexFieldData right, Preprocessor pre) throws Exception { + private static void duelFieldDataBytes(Random random, LeafReaderContext context, IndexFieldData left, IndexFieldData right, Preprocessor pre) throws Exception { AtomicFieldData leftData = random.nextBoolean() ? left.load(context) : left.loadDirect(context); AtomicFieldData rightData = random.nextBoolean() ? right.load(context) : right.loadDirect(context); @@ -514,7 +513,7 @@ public class DuelFieldDataTests extends AbstractFieldDataTests { } - private static void duelFieldDataDouble(Random random, AtomicReaderContext context, IndexNumericFieldData left, IndexNumericFieldData right) throws Exception { + private static void duelFieldDataDouble(Random random, LeafReaderContext context, IndexNumericFieldData left, IndexNumericFieldData right) throws Exception { AtomicNumericFieldData leftData = random.nextBoolean() ? left.load(context) : left.loadDirect(context); AtomicNumericFieldData rightData = random.nextBoolean() ? right.load(context) : right.loadDirect(context); @@ -542,7 +541,7 @@ public class DuelFieldDataTests extends AbstractFieldDataTests { } } - private static void duelFieldDataLong(Random random, AtomicReaderContext context, IndexNumericFieldData left, IndexNumericFieldData right) throws Exception { + private static void duelFieldDataLong(Random random, LeafReaderContext context, IndexNumericFieldData left, IndexNumericFieldData right) throws Exception { AtomicNumericFieldData leftData = random.nextBoolean() ? left.load(context) : left.loadDirect(context); AtomicNumericFieldData rightData = random.nextBoolean() ? 
right.load(context) : right.loadDirect(context); @@ -566,7 +565,7 @@ public class DuelFieldDataTests extends AbstractFieldDataTests { } } - private static void duelFieldDataGeoPoint(Random random, AtomicReaderContext context, IndexGeoPointFieldData left, IndexGeoPointFieldData right, Distance precision) throws Exception { + private static void duelFieldDataGeoPoint(Random random, LeafReaderContext context, IndexGeoPointFieldData left, IndexGeoPointFieldData right, Distance precision) throws Exception { AtomicGeoPointFieldData leftData = random.nextBoolean() ? left.load(context) : left.loadDirect(context); AtomicGeoPointFieldData rightData = random.nextBoolean() ? right.load(context) : right.loadDirect(context); diff --git a/src/test/java/org/elasticsearch/index/fielddata/FilterFieldDataTest.java b/src/test/java/org/elasticsearch/index/fielddata/FilterFieldDataTest.java index a364be30186..030e7c69a26 100644 --- a/src/test/java/org/elasticsearch/index/fielddata/FilterFieldDataTest.java +++ b/src/test/java/org/elasticsearch/index/fielddata/FilterFieldDataTest.java @@ -21,7 +21,7 @@ package org.elasticsearch.index.fielddata; import org.apache.lucene.document.Document; import org.apache.lucene.document.Field; import org.apache.lucene.document.StringField; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.RandomAccessOrds; import org.elasticsearch.common.settings.ImmutableSettings; import org.junit.Test; @@ -59,7 +59,7 @@ public class FilterFieldDataTest extends AbstractFieldDataTests { writer.addDocument(d); } writer.forceMerge(1, true); - AtomicReaderContext context = refreshReader(); + LeafReaderContext context = refreshReader(); String[] formats = new String[] { "fst", "paged_bytes"}; for (String format : formats) { @@ -152,7 +152,7 @@ public class FilterFieldDataTest extends AbstractFieldDataTests { } logger.debug(hundred + " " + ten + " " + five); writer.forceMerge(1, true); - AtomicReaderContext context = refreshReader(); + LeafReaderContext context = refreshReader(); String[] formats = new String[] { "fst", "paged_bytes"}; for (String format : formats) { { diff --git a/src/test/java/org/elasticsearch/index/fielddata/IndexFieldDataServiceTests.java b/src/test/java/org/elasticsearch/index/fielddata/IndexFieldDataServiceTests.java index 4cf7b9bde0f..4f496630873 100644 --- a/src/test/java/org/elasticsearch/index/fielddata/IndexFieldDataServiceTests.java +++ b/src/test/java/org/elasticsearch/index/fielddata/IndexFieldDataServiceTests.java @@ -134,15 +134,15 @@ public class IndexFieldDataServiceTests extends ElasticsearchSingleNodeTest { final IndexFieldDataService ifdService = indexService.fieldData(); final BuilderContext ctx = new BuilderContext(indexService.settingsService().getSettings(), new ContentPath(1)); final StringFieldMapper mapper1 = MapperBuilders.stringField("s").tokenized(false).fieldDataSettings(ImmutableSettings.builder().put(FieldDataType.FORMAT_KEY, "paged_bytes").build()).build(ctx); - final IndexWriter writer = new IndexWriter(new RAMDirectory(), new IndexWriterConfig(TEST_VERSION_CURRENT, new KeywordAnalyzer())); + final IndexWriter writer = new IndexWriter(new RAMDirectory(), new IndexWriterConfig(new KeywordAnalyzer())); Document doc = new Document(); doc.add(new StringField("s", "thisisastring", Store.NO)); writer.addDocument(doc); final IndexReader reader1 = DirectoryReader.open(writer, true); IndexFieldData ifd = ifdService.getForField(mapper1); assertThat(ifd, 
instanceOf(PagedBytesIndexFieldData.class)); - Set<AtomicReader> oldSegments = Collections.newSetFromMap(new IdentityHashMap<AtomicReader, Boolean>()); - for (AtomicReaderContext arc : reader1.leaves()) { + Set<LeafReader> oldSegments = Collections.newSetFromMap(new IdentityHashMap<LeafReader, Boolean>()); + for (LeafReaderContext arc : reader1.leaves()) { oldSegments.add(arc.reader()); AtomicFieldData afd = ifd.load(arc); assertThat(afd, instanceOf(PagedBytesAtomicFieldData.class)); @@ -154,7 +154,7 @@ public class IndexFieldDataServiceTests extends ElasticsearchSingleNodeTest { ifdService.onMappingUpdate(); ifd = ifdService.getForField(mapper2); assertThat(ifd, instanceOf(FSTBytesIndexFieldData.class)); - for (AtomicReaderContext arc : reader2.leaves()) { + for (LeafReaderContext arc : reader2.leaves()) { AtomicFieldData afd = ifd.load(arc); if (oldSegments.contains(arc.reader())) { assertThat(afd, instanceOf(PagedBytesAtomicFieldData.class)); diff --git a/src/test/java/org/elasticsearch/index/fielddata/NoOrdinalsStringFieldDataTests.java b/src/test/java/org/elasticsearch/index/fielddata/NoOrdinalsStringFieldDataTests.java index 7bbb86da120..99bc38b5c84 100644 --- a/src/test/java/org/elasticsearch/index/fielddata/NoOrdinalsStringFieldDataTests.java +++ b/src/test/java/org/elasticsearch/index/fielddata/NoOrdinalsStringFieldDataTests.java @@ -19,7 +19,7 @@ package org.elasticsearch.index.fielddata; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.IndexReader; import org.elasticsearch.index.Index; import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested; @@ -51,12 +51,12 @@ public class NoOrdinalsStringFieldDataTests extends PagedBytesStringFieldDataTes } @Override - public AtomicFieldData load(AtomicReaderContext context) { + public AtomicFieldData load(LeafReaderContext context) { return in.load(context); } @Override - public AtomicFieldData loadDirect(AtomicReaderContext context) throws Exception { + public AtomicFieldData loadDirect(LeafReaderContext context) throws Exception { return in.loadDirect(context); } diff --git a/src/test/java/org/elasticsearch/index/fielddata/fieldcomparator/TestReplaceMissing.java b/src/test/java/org/elasticsearch/index/fielddata/fieldcomparator/TestReplaceMissing.java index c0a854f5887..f7583eea2e6 100644 --- a/src/test/java/org/elasticsearch/index/fielddata/fieldcomparator/TestReplaceMissing.java +++ b/src/test/java/org/elasticsearch/index/fielddata/fieldcomparator/TestReplaceMissing.java @@ -50,7 +50,7 @@ public class TestReplaceMissing extends ElasticsearchLuceneTestCase { iw.close(); DirectoryReader reader = DirectoryReader.open(dir); - AtomicReader ar = getOnlySegmentReader(reader); + LeafReader ar = getOnlySegmentReader(reader); SortedDocValues raw = ar.getSortedDocValues("field"); assertEquals(2, raw.getValueCount()); diff --git a/src/test/java/org/elasticsearch/index/mapper/boost/BoostMappingTests.java b/src/test/java/org/elasticsearch/index/mapper/boost/BoostMappingTests.java index 3d87c5f013b..78ee24c3941 100644 --- a/src/test/java/org/elasticsearch/index/mapper/boost/BoostMappingTests.java +++ b/src/test/java/org/elasticsearch/index/mapper/boost/BoostMappingTests.java @@ -19,6 +19,7 @@ package org.elasticsearch.index.mapper.boost; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexableField; import org.elasticsearch.common.compress.CompressedString; import org.elasticsearch.common.xcontent.XContentFactory; @@ -80,7 +81,7 @@ public class BoostMappingTests extends
ElasticsearchSingleNodeTest { String mapping = XContentFactory.jsonBuilder().startObject().startObject("type").endObject().string(); DocumentMapper docMapper = createIndex("test").mapperService().documentMapperParser().parse(mapping); assertThat(docMapper.boostFieldMapper().fieldType().stored(), equalTo(BoostFieldMapper.Defaults.FIELD_TYPE.stored())); - assertThat(docMapper.boostFieldMapper().fieldType().indexed(), equalTo(BoostFieldMapper.Defaults.FIELD_TYPE.indexed())); + assertThat(docMapper.boostFieldMapper().fieldType().indexOptions(), equalTo(BoostFieldMapper.Defaults.FIELD_TYPE.indexOptions())); } @Test @@ -93,10 +94,10 @@ public class BoostMappingTests extends ElasticsearchSingleNodeTest { IndexService indexServices = createIndex("test"); DocumentMapper docMapper = indexServices.mapperService().documentMapperParser().parse("type", mapping); assertThat(docMapper.boostFieldMapper().fieldType().stored(), equalTo(true)); - assertThat(docMapper.boostFieldMapper().fieldType().indexed(), equalTo(true)); + assertEquals(IndexOptions.DOCS, docMapper.boostFieldMapper().fieldType().indexOptions()); docMapper.refreshSource(); docMapper = indexServices.mapperService().documentMapperParser().parse("type", docMapper.mappingSource().string()); assertThat(docMapper.boostFieldMapper().fieldType().stored(), equalTo(true)); - assertThat(docMapper.boostFieldMapper().fieldType().indexed(), equalTo(true)); + assertEquals(IndexOptions.DOCS, docMapper.boostFieldMapper().fieldType().indexOptions()); } } diff --git a/src/test/java/org/elasticsearch/index/mapper/core/TokenCountFieldMapperIntegrationTests.java b/src/test/java/org/elasticsearch/index/mapper/core/TokenCountFieldMapperIntegrationTests.java index 39a076f9255..c8760ddd561 100644 --- a/src/test/java/org/elasticsearch/index/mapper/core/TokenCountFieldMapperIntegrationTests.java +++ b/src/test/java/org/elasticsearch/index/mapper/core/TokenCountFieldMapperIntegrationTests.java @@ -134,7 +134,7 @@ public class TokenCountFieldMapperIntegrationTests extends ElasticsearchIntegrat .field("type", "token_count") .field("analyzer", "standard") .startObject("fielddata") - .field("format", LuceneTestCase.defaultCodecSupportsSortedSet() ? 
"doc_values" : null) + .field("format", "doc_values") .endObject() .endObject() .endObject() diff --git a/src/test/java/org/elasticsearch/index/mapper/dynamictemplate/simple/SimpleDynamicTemplatesTests.java b/src/test/java/org/elasticsearch/index/mapper/dynamictemplate/simple/SimpleDynamicTemplatesTests.java index adceb7ed86e..8a17fdf56a4 100644 --- a/src/test/java/org/elasticsearch/index/mapper/dynamictemplate/simple/SimpleDynamicTemplatesTests.java +++ b/src/test/java/org/elasticsearch/index/mapper/dynamictemplate/simple/SimpleDynamicTemplatesTests.java @@ -19,6 +19,7 @@ package org.elasticsearch.index.mapper.dynamictemplate.simple; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexableField; import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.xcontent.XContentBuilder; @@ -53,10 +54,10 @@ public class SimpleDynamicTemplatesTests extends ElasticsearchSingleNodeTest { DocumentFieldMappers mappers = docMapper.mappers(); assertThat(mappers.smartName("s"), Matchers.notNullValue()); - assertThat(mappers.smartName("s").mapper().fieldType().indexed(), equalTo(false)); + assertEquals(IndexOptions.NONE, mappers.smartName("s").mapper().fieldType().indexOptions()); assertThat(mappers.smartName("l"), Matchers.notNullValue()); - assertThat(mappers.smartName("l").mapper().fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, mappers.smartName("l").mapper().fieldType().indexOptions()); } @@ -72,7 +73,7 @@ public class SimpleDynamicTemplatesTests extends ElasticsearchSingleNodeTest { IndexableField f = doc.getField("name"); assertThat(f.name(), equalTo("name")); assertThat(f.stringValue(), equalTo("some name")); - assertThat(f.fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, f.fieldType().indexOptions()); assertThat(f.fieldType().tokenized(), equalTo(false)); FieldMappers fieldMappers = docMapper.mappers().fullName("name"); @@ -81,7 +82,7 @@ public class SimpleDynamicTemplatesTests extends ElasticsearchSingleNodeTest { f = doc.getField("multi1"); assertThat(f.name(), equalTo("multi1")); assertThat(f.stringValue(), equalTo("multi 1")); - assertThat(f.fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, f.fieldType().indexOptions()); assertThat(f.fieldType().tokenized(), equalTo(true)); fieldMappers = docMapper.mappers().fullName("multi1"); @@ -90,7 +91,7 @@ public class SimpleDynamicTemplatesTests extends ElasticsearchSingleNodeTest { f = doc.getField("multi1.org"); assertThat(f.name(), equalTo("multi1.org")); assertThat(f.stringValue(), equalTo("multi 1")); - assertThat(f.fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, f.fieldType().indexOptions()); assertThat(f.fieldType().tokenized(), equalTo(false)); fieldMappers = docMapper.mappers().fullName("multi1.org"); @@ -99,7 +100,7 @@ public class SimpleDynamicTemplatesTests extends ElasticsearchSingleNodeTest { f = doc.getField("multi2"); assertThat(f.name(), equalTo("multi2")); assertThat(f.stringValue(), equalTo("multi 2")); - assertThat(f.fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, f.fieldType().indexOptions()); assertThat(f.fieldType().tokenized(), equalTo(true)); fieldMappers = docMapper.mappers().fullName("multi2"); @@ -108,7 +109,7 @@ public class SimpleDynamicTemplatesTests extends ElasticsearchSingleNodeTest { f = doc.getField("multi2.org"); assertThat(f.name(), equalTo("multi2.org")); assertThat(f.stringValue(), equalTo("multi 2")); - 
assertThat(f.fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, f.fieldType().indexOptions()); assertThat(f.fieldType().tokenized(), equalTo(false)); fieldMappers = docMapper.mappers().fullName("multi2.org"); @@ -129,7 +130,7 @@ public class SimpleDynamicTemplatesTests extends ElasticsearchSingleNodeTest { IndexableField f = doc.getField("name"); assertThat(f.name(), equalTo("name")); assertThat(f.stringValue(), equalTo("some name")); - assertThat(f.fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, f.fieldType().indexOptions()); assertThat(f.fieldType().tokenized(), equalTo(false)); FieldMappers fieldMappers = docMapper.mappers().fullName("name"); @@ -138,7 +139,7 @@ public class SimpleDynamicTemplatesTests extends ElasticsearchSingleNodeTest { f = doc.getField("multi1"); assertThat(f.name(), equalTo("multi1")); assertThat(f.stringValue(), equalTo("multi 1")); - assertThat(f.fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, f.fieldType().indexOptions()); assertThat(f.fieldType().tokenized(), equalTo(true)); fieldMappers = docMapper.mappers().fullName("multi1"); @@ -147,7 +148,7 @@ public class SimpleDynamicTemplatesTests extends ElasticsearchSingleNodeTest { f = doc.getField("multi1.org"); assertThat(f.name(), equalTo("multi1.org")); assertThat(f.stringValue(), equalTo("multi 1")); - assertThat(f.fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, f.fieldType().indexOptions()); assertThat(f.fieldType().tokenized(), equalTo(false)); fieldMappers = docMapper.mappers().fullName("multi1.org"); @@ -156,7 +157,7 @@ public class SimpleDynamicTemplatesTests extends ElasticsearchSingleNodeTest { f = doc.getField("multi2"); assertThat(f.name(), equalTo("multi2")); assertThat(f.stringValue(), equalTo("multi 2")); - assertThat(f.fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, f.fieldType().indexOptions()); assertThat(f.fieldType().tokenized(), equalTo(true)); fieldMappers = docMapper.mappers().fullName("multi2"); @@ -165,7 +166,7 @@ public class SimpleDynamicTemplatesTests extends ElasticsearchSingleNodeTest { f = doc.getField("multi2.org"); assertThat(f.name(), equalTo("multi2.org")); assertThat(f.stringValue(), equalTo("multi 2")); - assertThat(f.fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, f.fieldType().indexOptions()); assertThat(f.fieldType().tokenized(), equalTo(false)); fieldMappers = docMapper.mappers().fullName("multi2.org"); diff --git a/src/test/java/org/elasticsearch/index/mapper/lucene/DoubleIndexingDocTest.java b/src/test/java/org/elasticsearch/index/mapper/lucene/DoubleIndexingDocTest.java index 38ff2240b0b..fab48c814cf 100644 --- a/src/test/java/org/elasticsearch/index/mapper/lucene/DoubleIndexingDocTest.java +++ b/src/test/java/org/elasticsearch/index/mapper/lucene/DoubleIndexingDocTest.java @@ -41,7 +41,7 @@ public class DoubleIndexingDocTest extends ElasticsearchSingleNodeLuceneTestCase @Test public void testDoubleIndexingSameDoc() throws Exception { Directory dir = newDirectory(); - IndexWriter writer = new IndexWriter(dir, newIndexWriterConfig(random(), TEST_VERSION_CURRENT, Lucene.STANDARD_ANALYZER)); + IndexWriter writer = new IndexWriter(dir, newIndexWriterConfig(random(), Lucene.STANDARD_ANALYZER)); String mapping = XContentFactory.jsonBuilder().startObject().startObject("type") .startObject("properties").endObject() diff --git a/src/test/java/org/elasticsearch/index/mapper/lucene/StoredNumericValuesTest.java 
b/src/test/java/org/elasticsearch/index/mapper/lucene/StoredNumericValuesTest.java index b5925b89521..324763bc1a0 100644 --- a/src/test/java/org/elasticsearch/index/mapper/lucene/StoredNumericValuesTest.java +++ b/src/test/java/org/elasticsearch/index/mapper/lucene/StoredNumericValuesTest.java @@ -50,7 +50,7 @@ public class StoredNumericValuesTest extends ElasticsearchSingleNodeTest { @Test public void testBytesAndNumericRepresentation() throws Exception { - IndexWriter writer = new IndexWriter(new RAMDirectory(), new IndexWriterConfig(Lucene.VERSION, Lucene.STANDARD_ANALYZER)); + IndexWriter writer = new IndexWriter(new RAMDirectory(), new IndexWriterConfig(Lucene.STANDARD_ANALYZER)); String mapping = XContentFactory.jsonBuilder() .startObject() diff --git a/src/test/java/org/elasticsearch/index/mapper/multifield/MultiFieldTests.java b/src/test/java/org/elasticsearch/index/mapper/multifield/MultiFieldTests.java index 7f4366de347..519767accb8 100644 --- a/src/test/java/org/elasticsearch/index/mapper/multifield/MultiFieldTests.java +++ b/src/test/java/org/elasticsearch/index/mapper/multifield/MultiFieldTests.java @@ -19,6 +19,7 @@ package org.elasticsearch.index.mapper.multifield; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexableField; import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.bytes.BytesReference; @@ -71,19 +72,19 @@ public class MultiFieldTests extends ElasticsearchSingleNodeTest { assertThat(f.name(), equalTo("name")); assertThat(f.stringValue(), equalTo("some name")); assertThat(f.fieldType().stored(), equalTo(true)); - assertThat(f.fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, f.fieldType().indexOptions()); f = doc.getField("name.indexed"); assertThat(f.name(), equalTo("name.indexed")); assertThat(f.stringValue(), equalTo("some name")); assertThat(f.fieldType().stored(), equalTo(false)); - assertThat(f.fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, f.fieldType().indexOptions()); f = doc.getField("name.not_indexed"); assertThat(f.name(), equalTo("name.not_indexed")); assertThat(f.stringValue(), equalTo("some name")); assertThat(f.fieldType().stored(), equalTo(true)); - assertThat(f.fieldType().indexed(), equalTo(false)); + assertEquals(IndexOptions.NONE, f.fieldType().indexOptions()); f = doc.getField("object1.multi1"); assertThat(f.name(), equalTo("object1.multi1")); @@ -94,32 +95,32 @@ public class MultiFieldTests extends ElasticsearchSingleNodeTest { assertThat(docMapper.mappers().fullName("name").mapper(), notNullValue()); assertThat(docMapper.mappers().fullName("name").mapper(), instanceOf(StringFieldMapper.class)); - assertThat(docMapper.mappers().fullName("name").mapper().fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, docMapper.mappers().fullName("name").mapper().fieldType().indexOptions()); assertThat(docMapper.mappers().fullName("name").mapper().fieldType().stored(), equalTo(true)); assertThat(docMapper.mappers().fullName("name").mapper().fieldType().tokenized(), equalTo(true)); assertThat(docMapper.mappers().fullName("name.indexed").mapper(), notNullValue()); assertThat(docMapper.mappers().fullName("name.indexed").mapper(), instanceOf(StringFieldMapper.class)); - assertThat(docMapper.mappers().fullName("name.indexed").mapper().fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, docMapper.mappers().fullName("name.indexed").mapper().fieldType().indexOptions()); 
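
Every mapper-test hunk in this part of the patch makes the same substitution: Lucene 5 removed IndexableFieldType.indexed(), and a field now counts as indexed exactly when its indexOptions() is not IndexOptions.NONE. A minimal self-contained sketch of that equivalence (the IsIndexed class and helper name are illustrative only, not part of this patch):

    import org.apache.lucene.index.IndexOptions;
    import org.apache.lucene.index.IndexableFieldType;

    final class IsIndexed {
        // Lucene 4.x: fieldType.indexed()
        // Lucene 5.x: a field is indexed iff its index options are not NONE.
        static boolean isIndexed(IndexableFieldType fieldType) {
            return fieldType.indexOptions() != IndexOptions.NONE;
        }
    }

Since IndexOptions is an enum, reference identity and equality coincide, which is why the tests can assert assertNotSame(IndexOptions.NONE, ...) for indexed fields and assertEquals(IndexOptions.NONE, ...) for unindexed ones.
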
assertThat(docMapper.mappers().fullName("name.indexed").mapper().fieldType().stored(), equalTo(false)); assertThat(docMapper.mappers().fullName("name.indexed").mapper().fieldType().tokenized(), equalTo(true)); assertThat(docMapper.mappers().fullName("name.not_indexed").mapper(), notNullValue()); assertThat(docMapper.mappers().fullName("name.not_indexed").mapper(), instanceOf(StringFieldMapper.class)); - assertThat(docMapper.mappers().fullName("name.not_indexed").mapper().fieldType().indexed(), equalTo(false)); + assertEquals(IndexOptions.NONE, docMapper.mappers().fullName("name.not_indexed").mapper().fieldType().indexOptions()); assertThat(docMapper.mappers().fullName("name.not_indexed").mapper().fieldType().stored(), equalTo(true)); assertThat(docMapper.mappers().fullName("name.not_indexed").mapper().fieldType().tokenized(), equalTo(true)); assertThat(docMapper.mappers().fullName("name.test1").mapper(), notNullValue()); assertThat(docMapper.mappers().fullName("name.test1").mapper(), instanceOf(StringFieldMapper.class)); - assertThat(docMapper.mappers().fullName("name.test1").mapper().fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, docMapper.mappers().fullName("name.test1").mapper().fieldType().indexOptions()); assertThat(docMapper.mappers().fullName("name.test1").mapper().fieldType().stored(), equalTo(true)); assertThat(docMapper.mappers().fullName("name.test1").mapper().fieldType().tokenized(), equalTo(true)); assertThat(docMapper.mappers().fullName("name.test1").mapper().fieldDataType().getLoading(), equalTo(FieldMapper.Loading.EAGER)); assertThat(docMapper.mappers().fullName("name.test2").mapper(), notNullValue()); assertThat(docMapper.mappers().fullName("name.test2").mapper(), instanceOf(TokenCountFieldMapper.class)); - assertThat(docMapper.mappers().fullName("name.test2").mapper().fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, docMapper.mappers().fullName("name.test2").mapper().fieldType().indexOptions()); assertThat(docMapper.mappers().fullName("name.test2").mapper().fieldType().stored(), equalTo(true)); assertThat(docMapper.mappers().fullName("name.test2").mapper().fieldType().tokenized(), equalTo(false)); assertThat(((TokenCountFieldMapper) docMapper.mappers().fullName("name.test2").mapper()).analyzer(), equalTo("simple")); @@ -129,7 +130,7 @@ public class MultiFieldTests extends ElasticsearchSingleNodeTest { assertThat(docMapper.mappers().fullName("object1.multi1").mapper(), instanceOf(DateFieldMapper.class)); assertThat(docMapper.mappers().fullName("object1.multi1.string").mapper(), notNullValue()); assertThat(docMapper.mappers().fullName("object1.multi1.string").mapper(), instanceOf(StringFieldMapper.class)); - assertThat(docMapper.mappers().fullName("object1.multi1.string").mapper().fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, docMapper.mappers().fullName("object1.multi1.string").mapper().fieldType().indexOptions()); assertThat(docMapper.mappers().fullName("object1.multi1.string").mapper().fieldType().tokenized(), equalTo(false)); } @@ -159,20 +160,20 @@ public class MultiFieldTests extends ElasticsearchSingleNodeTest { assertThat(f.name(), equalTo("name")); assertThat(f.stringValue(), equalTo("some name")); assertThat(f.fieldType().stored(), equalTo(true)); - assertThat(f.fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, f.fieldType().indexOptions()); f = doc.getField("name.indexed"); assertThat(f.name(), equalTo("name.indexed")); assertThat(f.stringValue(), equalTo("some 
name")); assertThat(f.fieldType().tokenized(), equalTo(true)); assertThat(f.fieldType().stored(), equalTo(false)); - assertThat(f.fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, f.fieldType().indexOptions()); f = doc.getField("name.not_indexed"); assertThat(f.name(), equalTo("name.not_indexed")); assertThat(f.stringValue(), equalTo("some name")); assertThat(f.fieldType().stored(), equalTo(true)); - assertThat(f.fieldType().indexed(), equalTo(false)); + assertEquals(IndexOptions.NONE, f.fieldType().indexOptions()); } @Test @@ -187,29 +188,29 @@ public class MultiFieldTests extends ElasticsearchSingleNodeTest { assertThat(f.name(), equalTo("name.indexed")); assertThat(f.stringValue(), equalTo("some name")); assertThat(f.fieldType().stored(), equalTo(false)); - assertThat(f.fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, f.fieldType().indexOptions()); f = doc.getField("name.not_indexed"); assertThat(f.name(), equalTo("name.not_indexed")); assertThat(f.stringValue(), equalTo("some name")); assertThat(f.fieldType().stored(), equalTo(true)); - assertThat(f.fieldType().indexed(), equalTo(false)); + assertEquals(IndexOptions.NONE, f.fieldType().indexOptions()); assertThat(docMapper.mappers().fullName("name").mapper(), notNullValue()); assertThat(docMapper.mappers().fullName("name").mapper(), instanceOf(StringFieldMapper.class)); - assertThat(docMapper.mappers().fullName("name").mapper().fieldType().indexed(), equalTo(false)); + assertEquals(IndexOptions.NONE, docMapper.mappers().fullName("name").mapper().fieldType().indexOptions()); assertThat(docMapper.mappers().fullName("name").mapper().fieldType().stored(), equalTo(false)); assertThat(docMapper.mappers().fullName("name").mapper().fieldType().tokenized(), equalTo(true)); assertThat(docMapper.mappers().fullName("name.indexed").mapper(), notNullValue()); assertThat(docMapper.mappers().fullName("name.indexed").mapper(), instanceOf(StringFieldMapper.class)); - assertThat(docMapper.mappers().fullName("name.indexed").mapper().fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, docMapper.mappers().fullName("name.indexed").mapper().fieldType().indexOptions()); assertThat(docMapper.mappers().fullName("name.indexed").mapper().fieldType().stored(), equalTo(false)); assertThat(docMapper.mappers().fullName("name.indexed").mapper().fieldType().tokenized(), equalTo(true)); assertThat(docMapper.mappers().fullName("name.not_indexed").mapper(), notNullValue()); assertThat(docMapper.mappers().fullName("name.not_indexed").mapper(), instanceOf(StringFieldMapper.class)); - assertThat(docMapper.mappers().fullName("name.not_indexed").mapper().fieldType().indexed(), equalTo(false)); + assertEquals(IndexOptions.NONE, docMapper.mappers().fullName("name.not_indexed").mapper().fieldType().indexOptions()); assertThat(docMapper.mappers().fullName("name.not_indexed").mapper().fieldType().stored(), equalTo(true)); assertThat(docMapper.mappers().fullName("name.not_indexed").mapper().fieldType().tokenized(), equalTo(true)); @@ -218,29 +219,29 @@ public class MultiFieldTests extends ElasticsearchSingleNodeTest { assertThat(f.name(), equalTo("age.not_stored")); assertThat(f.numericValue(), equalTo((Number) 28L)); assertThat(f.fieldType().stored(), equalTo(false)); - assertThat(f.fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, f.fieldType().indexOptions()); f = doc.getField("age.stored"); assertThat(f.name(), equalTo("age.stored")); assertThat(f.numericValue(), equalTo((Number) 28L));
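
The IndexWriter hunks above (DoubleIndexingDocTest, StoredNumericValuesTest) drop the Version argument when constructing the writer config: in this Lucene 5.0 snapshot, IndexWriterConfig is built from the analyzer alone. A minimal sketch against the constructors as they appear in those hunks (class name illustrative; the no-arg StandardAnalyzer constructor is assumed from the snapshot's Version-less analyzer API):

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.store.RAMDirectory;

    public class VersionlessWriterConfig {
        public static void main(String[] args) throws Exception {
            // Lucene 4.x: new IndexWriterConfig(Version.LUCENE_..., analyzer)
            // Lucene 5.x: no Version parameter on the config.
            try (IndexWriter writer = new IndexWriter(new RAMDirectory(),
                    new IndexWriterConfig(new StandardAnalyzer()))) {
                writer.commit();
            }
        }
    }
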
assertThat(f.fieldType().stored(), equalTo(true)); - assertThat(f.fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, f.fieldType().indexOptions()); assertThat(docMapper.mappers().fullName("age").mapper(), notNullValue()); assertThat(docMapper.mappers().fullName("age").mapper(), instanceOf(LongFieldMapper.class)); - assertThat(docMapper.mappers().fullName("age").mapper().fieldType().indexed(), equalTo(false)); + assertEquals(IndexOptions.NONE, docMapper.mappers().fullName("age").mapper().fieldType().indexOptions()); assertThat(docMapper.mappers().fullName("age").mapper().fieldType().stored(), equalTo(false)); assertThat(docMapper.mappers().fullName("age").mapper().fieldType().tokenized(), equalTo(false)); assertThat(docMapper.mappers().fullName("age.not_stored").mapper(), notNullValue()); assertThat(docMapper.mappers().fullName("age.not_stored").mapper(), instanceOf(LongFieldMapper.class)); - assertThat(docMapper.mappers().fullName("age.not_stored").mapper().fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, docMapper.mappers().fullName("age.not_stored").mapper().fieldType().indexOptions()); assertThat(docMapper.mappers().fullName("age.not_stored").mapper().fieldType().stored(), equalTo(false)); assertThat(docMapper.mappers().fullName("age.not_stored").mapper().fieldType().tokenized(), equalTo(false)); assertThat(docMapper.mappers().fullName("age.stored").mapper(), notNullValue()); assertThat(docMapper.mappers().fullName("age.stored").mapper(), instanceOf(LongFieldMapper.class)); - assertThat(docMapper.mappers().fullName("age.stored").mapper().fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, docMapper.mappers().fullName("age.stored").mapper().fieldType().indexOptions()); assertThat(docMapper.mappers().fullName("age.stored").mapper().fieldType().stored(), equalTo(true)); assertThat(docMapper.mappers().fullName("age.stored").mapper().fieldType().tokenized(), equalTo(false)); } @@ -252,13 +253,13 @@ public class MultiFieldTests extends ElasticsearchSingleNodeTest { assertThat(docMapper.mappers().fullName("a").mapper(), notNullValue()); assertThat(docMapper.mappers().fullName("a").mapper(), instanceOf(StringFieldMapper.class)); - assertThat(docMapper.mappers().fullName("a").mapper().fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, docMapper.mappers().fullName("a").mapper().fieldType().indexOptions()); assertThat(docMapper.mappers().fullName("a").mapper().fieldType().stored(), equalTo(false)); assertThat(docMapper.mappers().fullName("a").mapper().fieldType().tokenized(), equalTo(false)); assertThat(docMapper.mappers().fullName("a.b").mapper(), notNullValue()); assertThat(docMapper.mappers().fullName("a.b").mapper(), instanceOf(GeoPointFieldMapper.class)); - assertThat(docMapper.mappers().fullName("a.b").mapper().fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, docMapper.mappers().fullName("a.b").mapper().fieldType().indexOptions()); assertThat(docMapper.mappers().fullName("a.b").mapper().fieldType().stored(), equalTo(false)); assertThat(docMapper.mappers().fullName("a.b").mapper().fieldType().tokenized(), equalTo(false)); @@ -273,24 +274,24 @@ public class MultiFieldTests extends ElasticsearchSingleNodeTest { assertThat(f.name(), equalTo("a")); assertThat(f.stringValue(), equalTo("-1,-1")); assertThat(f.fieldType().stored(), equalTo(false)); - assertThat(f.fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, f.fieldType().indexOptions()); f = 
doc.getField("a.b"); assertThat(f, notNullValue()); assertThat(f.name(), equalTo("a.b")); assertThat(f.stringValue(), equalTo("-1.0,-1.0")); assertThat(f.fieldType().stored(), equalTo(false)); - assertThat(f.fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, f.fieldType().indexOptions()); assertThat(docMapper.mappers().fullName("b").mapper(), notNullValue()); assertThat(docMapper.mappers().fullName("b").mapper(), instanceOf(GeoPointFieldMapper.class)); - assertThat(docMapper.mappers().fullName("b").mapper().fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, docMapper.mappers().fullName("b").mapper().fieldType().indexOptions()); assertThat(docMapper.mappers().fullName("b").mapper().fieldType().stored(), equalTo(false)); assertThat(docMapper.mappers().fullName("b").mapper().fieldType().tokenized(), equalTo(false)); assertThat(docMapper.mappers().fullName("b.a").mapper(), notNullValue()); assertThat(docMapper.mappers().fullName("b.a").mapper(), instanceOf(StringFieldMapper.class)); - assertThat(docMapper.mappers().fullName("b.a").mapper().fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, docMapper.mappers().fullName("b.a").mapper().fieldType().indexOptions()); assertThat(docMapper.mappers().fullName("b.a").mapper().fieldType().stored(), equalTo(false)); assertThat(docMapper.mappers().fullName("b.a").mapper().fieldType().tokenized(), equalTo(false)); @@ -305,14 +306,14 @@ public class MultiFieldTests extends ElasticsearchSingleNodeTest { assertThat(f.name(), equalTo("b")); assertThat(f.stringValue(), equalTo("-1.0,-1.0")); assertThat(f.fieldType().stored(), equalTo(false)); - assertThat(f.fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, f.fieldType().indexOptions()); f = doc.getField("b.a"); assertThat(f, notNullValue()); assertThat(f.name(), equalTo("b.a")); assertThat(f.stringValue(), equalTo("-1,-1")); assertThat(f.fieldType().stored(), equalTo(false)); - assertThat(f.fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, f.fieldType().indexOptions()); json = jsonBuilder().startObject() .field("_id", "1") @@ -325,14 +326,14 @@ public class MultiFieldTests extends ElasticsearchSingleNodeTest { assertThat(f.name(), equalTo("b")); assertThat(f.stringValue(), equalTo("-1.0,-1.0")); assertThat(f.fieldType().stored(), equalTo(false)); - assertThat(f.fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, f.fieldType().indexOptions()); f = doc.getFields("b")[1]; assertThat(f, notNullValue()); assertThat(f.name(), equalTo("b")); assertThat(f.stringValue(), equalTo("-2.0,-2.0")); assertThat(f.fieldType().stored(), equalTo(false)); - assertThat(f.fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, f.fieldType().indexOptions()); f = doc.getField("b.a"); assertThat(f, notNullValue()); @@ -342,7 +343,7 @@ public class MultiFieldTests extends ElasticsearchSingleNodeTest { // This happens if coordinates are specified as array and object. 
assertThat(f.stringValue(), equalTo("]")); assertThat(f.fieldType().stored(), equalTo(false)); - assertThat(f.fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, f.fieldType().indexOptions()); } @Test @@ -352,13 +353,13 @@ public class MultiFieldTests extends ElasticsearchSingleNodeTest { assertThat(docMapper.mappers().fullName("a").mapper(), notNullValue()); assertThat(docMapper.mappers().fullName("a").mapper(), instanceOf(StringFieldMapper.class)); - assertThat(docMapper.mappers().fullName("a").mapper().fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, docMapper.mappers().fullName("a").mapper().fieldType().indexOptions()); assertThat(docMapper.mappers().fullName("a").mapper().fieldType().stored(), equalTo(false)); assertThat(docMapper.mappers().fullName("a").mapper().fieldType().tokenized(), equalTo(false)); assertThat(docMapper.mappers().fullName("a.b").mapper(), notNullValue()); assertThat(docMapper.mappers().fullName("a.b").mapper(), instanceOf(CompletionFieldMapper.class)); - assertThat(docMapper.mappers().fullName("a.b").mapper().fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, docMapper.mappers().fullName("a.b").mapper().fieldType().indexOptions()); assertThat(docMapper.mappers().fullName("a.b").mapper().fieldType().stored(), equalTo(false)); assertThat(docMapper.mappers().fullName("a.b").mapper().fieldType().tokenized(), equalTo(true)); @@ -373,24 +374,24 @@ public class MultiFieldTests extends ElasticsearchSingleNodeTest { assertThat(f.name(), equalTo("a")); assertThat(f.stringValue(), equalTo("complete me")); assertThat(f.fieldType().stored(), equalTo(false)); - assertThat(f.fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, f.fieldType().indexOptions()); f = doc.getField("a.b"); assertThat(f, notNullValue()); assertThat(f.name(), equalTo("a.b")); assertThat(f.stringValue(), equalTo("complete me")); assertThat(f.fieldType().stored(), equalTo(false)); - assertThat(f.fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, f.fieldType().indexOptions()); assertThat(docMapper.mappers().fullName("b").mapper(), notNullValue()); assertThat(docMapper.mappers().fullName("b").mapper(), instanceOf(CompletionFieldMapper.class)); - assertThat(docMapper.mappers().fullName("b").mapper().fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, docMapper.mappers().fullName("b").mapper().fieldType().indexOptions()); assertThat(docMapper.mappers().fullName("b").mapper().fieldType().stored(), equalTo(false)); assertThat(docMapper.mappers().fullName("b").mapper().fieldType().tokenized(), equalTo(true)); assertThat(docMapper.mappers().fullName("b.a").mapper(), notNullValue()); assertThat(docMapper.mappers().fullName("b.a").mapper(), instanceOf(StringFieldMapper.class)); - assertThat(docMapper.mappers().fullName("b.a").mapper().fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, docMapper.mappers().fullName("b.a").mapper().fieldType().indexOptions()); assertThat(docMapper.mappers().fullName("b.a").mapper().fieldType().stored(), equalTo(false)); assertThat(docMapper.mappers().fullName("b.a").mapper().fieldType().tokenized(), equalTo(false)); @@ -405,14 +406,14 @@ public class MultiFieldTests extends ElasticsearchSingleNodeTest { assertThat(f.name(), equalTo("b")); assertThat(f.stringValue(), equalTo("complete me")); assertThat(f.fieldType().stored(), equalTo(false)); - assertThat(f.fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, 
f.fieldType().indexOptions()); f = doc.getField("b.a"); assertThat(f, notNullValue()); assertThat(f.name(), equalTo("b.a")); assertThat(f.stringValue(), equalTo("complete me")); assertThat(f.fieldType().stored(), equalTo(false)); - assertThat(f.fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, f.fieldType().indexOptions()); } @Test diff --git a/src/test/java/org/elasticsearch/index/mapper/multifield/merge/JavaMultiFieldMergeTests.java b/src/test/java/org/elasticsearch/index/mapper/multifield/merge/JavaMultiFieldMergeTests.java index f2eb0f30e31..8f083ccfbba 100644 --- a/src/test/java/org/elasticsearch/index/mapper/multifield/merge/JavaMultiFieldMergeTests.java +++ b/src/test/java/org/elasticsearch/index/mapper/multifield/merge/JavaMultiFieldMergeTests.java @@ -19,6 +19,7 @@ package org.elasticsearch.index.mapper.multifield.merge; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexableField; import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.bytes.BytesReference; @@ -47,7 +48,7 @@ public class JavaMultiFieldMergeTests extends ElasticsearchSingleNodeTest { DocumentMapper docMapper = parser.parse(mapping); - assertThat(docMapper.mappers().fullName("name").mapper().fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, docMapper.mappers().fullName("name").mapper().fieldType().indexOptions()); assertThat(docMapper.mappers().fullName("name.indexed"), nullValue()); BytesReference json = new BytesArray(copyToBytesFromClasspath("/org/elasticsearch/index/mapper/multifield/merge/test-data.json")); @@ -66,9 +67,9 @@ public class JavaMultiFieldMergeTests extends ElasticsearchSingleNodeTest { docMapper.merge(docMapper2, mergeFlags().simulate(false)); - assertThat(docMapper.mappers().name("name").mapper().fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, docMapper.mappers().name("name").mapper().fieldType().indexOptions()); - assertThat(docMapper.mappers().fullName("name").mapper().fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, docMapper.mappers().fullName("name").mapper().fieldType().indexOptions()); assertThat(docMapper.mappers().fullName("name.indexed").mapper(), notNullValue()); assertThat(docMapper.mappers().fullName("name.not_indexed").mapper(), notNullValue()); assertThat(docMapper.mappers().fullName("name.not_indexed2"), nullValue()); @@ -89,9 +90,9 @@ public class JavaMultiFieldMergeTests extends ElasticsearchSingleNodeTest { docMapper.merge(docMapper3, mergeFlags().simulate(false)); - assertThat(docMapper.mappers().name("name").mapper().fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, docMapper.mappers().name("name").mapper().fieldType().indexOptions()); - assertThat(docMapper.mappers().fullName("name").mapper().fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, docMapper.mappers().fullName("name").mapper().fieldType().indexOptions()); assertThat(docMapper.mappers().fullName("name.indexed").mapper(), notNullValue()); assertThat(docMapper.mappers().fullName("name.not_indexed").mapper(), notNullValue()); assertThat(docMapper.mappers().fullName("name.not_indexed2").mapper(), notNullValue()); @@ -107,9 +108,9 @@ public class JavaMultiFieldMergeTests extends ElasticsearchSingleNodeTest { docMapper.merge(docMapper4, mergeFlags().simulate(false)); - assertThat(docMapper.mappers().name("name").mapper().fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, 
docMapper.mappers().name("name").mapper().fieldType().indexOptions()); - assertThat(docMapper.mappers().fullName("name").mapper().fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, docMapper.mappers().fullName("name").mapper().fieldType().indexOptions()); assertThat(docMapper.mappers().fullName("name.indexed").mapper(), notNullValue()); assertThat(docMapper.mappers().fullName("name.not_indexed").mapper(), notNullValue()); assertThat(docMapper.mappers().fullName("name.not_indexed2").mapper(), notNullValue()); @@ -123,7 +124,7 @@ public class JavaMultiFieldMergeTests extends ElasticsearchSingleNodeTest { DocumentMapper docMapper = parser.parse(mapping); - assertThat(docMapper.mappers().fullName("name").mapper().fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, docMapper.mappers().fullName("name").mapper().fieldType().indexOptions()); assertThat(docMapper.mappers().fullName("name.indexed"), nullValue()); BytesReference json = new BytesArray(copyToBytesFromClasspath("/org/elasticsearch/index/mapper/multifield/merge/test-data.json")); @@ -142,9 +143,9 @@ public class JavaMultiFieldMergeTests extends ElasticsearchSingleNodeTest { docMapper.merge(docMapper2, mergeFlags().simulate(false)); - assertThat(docMapper.mappers().name("name").mapper().fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, docMapper.mappers().name("name").mapper().fieldType().indexOptions()); - assertThat(docMapper.mappers().fullName("name").mapper().fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, docMapper.mappers().fullName("name").mapper().fieldType().indexOptions()); assertThat(docMapper.mappers().fullName("name.indexed").mapper(), notNullValue()); assertThat(docMapper.mappers().fullName("name.not_indexed").mapper(), notNullValue()); assertThat(docMapper.mappers().fullName("name.not_indexed2"), nullValue()); @@ -165,9 +166,9 @@ public class JavaMultiFieldMergeTests extends ElasticsearchSingleNodeTest { docMapper.merge(docMapper3, mergeFlags().simulate(false)); - assertThat(docMapper.mappers().name("name").mapper().fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, docMapper.mappers().name("name").mapper().fieldType().indexOptions()); - assertThat(docMapper.mappers().fullName("name").mapper().fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, docMapper.mappers().fullName("name").mapper().fieldType().indexOptions()); assertThat(docMapper.mappers().fullName("name.indexed").mapper(), notNullValue()); assertThat(docMapper.mappers().fullName("name.not_indexed").mapper(), notNullValue()); assertThat(docMapper.mappers().fullName("name.not_indexed2").mapper(), notNullValue()); @@ -184,12 +185,12 @@ public class JavaMultiFieldMergeTests extends ElasticsearchSingleNodeTest { mergeResult = docMapper.merge(docMapper4, mergeFlags().simulate(false)); assertThat(Arrays.toString(mergeResult.conflicts()), mergeResult.hasConflicts(), equalTo(true)); - assertThat(docMapper.mappers().name("name").mapper().fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, docMapper.mappers().name("name").mapper().fieldType().indexOptions()); assertThat(mergeResult.conflicts()[0], equalTo("mapper [name] has different index values")); assertThat(mergeResult.conflicts()[1], equalTo("mapper [name] has different store values")); // There are conflicts, but the `name.not_indexed3` has been added, b/c that field has no conflicts - 
assertThat(docMapper.mappers().fullName("name").mapper().fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, docMapper.mappers().fullName("name").mapper().fieldType().indexOptions()); assertThat(docMapper.mappers().fullName("name.indexed").mapper(), notNullValue()); assertThat(docMapper.mappers().fullName("name.not_indexed").mapper(), notNullValue()); assertThat(docMapper.mappers().fullName("name.not_indexed2").mapper(), notNullValue()); diff --git a/src/test/java/org/elasticsearch/index/mapper/numeric/SimpleNumericTests.java b/src/test/java/org/elasticsearch/index/mapper/numeric/SimpleNumericTests.java index 472c1405806..96de4c3f6ee 100644 --- a/src/test/java/org/elasticsearch/index/mapper/numeric/SimpleNumericTests.java +++ b/src/test/java/org/elasticsearch/index/mapper/numeric/SimpleNumericTests.java @@ -22,7 +22,7 @@ package org.elasticsearch.index.mapper.numeric; import org.apache.lucene.analysis.NumericTokenStream; import org.apache.lucene.analysis.TokenStream; import org.apache.lucene.document.Field; -import org.apache.lucene.index.FieldInfo.DocValuesType; +import org.apache.lucene.index.DocValuesType; import org.apache.lucene.index.IndexableField; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentFactory; diff --git a/src/test/java/org/elasticsearch/index/mapper/routing/RoutingTypeMapperTests.java b/src/test/java/org/elasticsearch/index/mapper/routing/RoutingTypeMapperTests.java index b3f2a12cd7a..301fc44c6a9 100644 --- a/src/test/java/org/elasticsearch/index/mapper/routing/RoutingTypeMapperTests.java +++ b/src/test/java/org/elasticsearch/index/mapper/routing/RoutingTypeMapperTests.java @@ -19,6 +19,7 @@ package org.elasticsearch.index.mapper.routing; +import org.apache.lucene.index.IndexOptions; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; @@ -65,7 +66,7 @@ public class RoutingTypeMapperTests extends ElasticsearchSingleNodeTest { .endObject().endObject().string(); DocumentMapper docMapper = createIndex("test").mapperService().documentMapperParser().parse(mapping); assertThat(docMapper.routingFieldMapper().fieldType().stored(), equalTo(false)); - assertThat(docMapper.routingFieldMapper().fieldType().indexed(), equalTo(false)); + assertEquals(IndexOptions.NONE, docMapper.routingFieldMapper().fieldType().indexOptions()); assertThat(docMapper.routingFieldMapper().path(), equalTo("route")); } diff --git a/src/test/java/org/elasticsearch/index/mapper/string/SimpleStringMappingTests.java b/src/test/java/org/elasticsearch/index/mapper/string/SimpleStringMappingTests.java index b6014cbafe3..1066d8797ab 100644 --- a/src/test/java/org/elasticsearch/index/mapper/string/SimpleStringMappingTests.java +++ b/src/test/java/org/elasticsearch/index/mapper/string/SimpleStringMappingTests.java @@ -21,7 +21,8 @@ package org.elasticsearch.index.mapper.string; import com.google.common.collect.ImmutableMap; import org.apache.lucene.index.FieldInfo; -import org.apache.lucene.index.FieldInfo.DocValuesType; +import org.apache.lucene.index.DocValuesType; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexableField; import org.apache.lucene.index.IndexableFieldType; import org.elasticsearch.common.settings.ImmutableSettings; @@ -94,7 +95,7 @@ public class SimpleStringMappingTests extends ElasticsearchSingleNodeTest { private void assertDefaultAnalyzedFieldType(IndexableFieldType 
fieldType) { assertThat(fieldType.omitNorms(), equalTo(false)); - assertThat(fieldType.indexOptions(), equalTo(FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS)); + assertThat(fieldType.indexOptions(), equalTo(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS)); assertThat(fieldType.storeTermVectors(), equalTo(false)); assertThat(fieldType.storeTermVectorOffsets(), equalTo(false)); assertThat(fieldType.storeTermVectorPositions(), equalTo(false)); @@ -102,7 +103,6 @@ public class SimpleStringMappingTests extends ElasticsearchSingleNodeTest { } private void assertEquals(IndexableFieldType ft1, IndexableFieldType ft2) { - assertEquals(ft1.indexed(), ft2.indexed()); assertEquals(ft1.tokenized(), ft2.tokenized()); assertEquals(ft1.omitNorms(), ft2.omitNorms()); assertEquals(ft1.indexOptions(), ft2.indexOptions()); @@ -156,7 +156,7 @@ public class SimpleStringMappingTests extends ElasticsearchSingleNodeTest { IndexableFieldType fieldType = doc.rootDoc().getField("field").fieldType(); assertThat(fieldType.omitNorms(), equalTo(true)); - assertThat(fieldType.indexOptions(), equalTo(FieldInfo.IndexOptions.DOCS_ONLY)); + assertThat(fieldType.indexOptions(), equalTo(IndexOptions.DOCS)); assertThat(fieldType.storeTermVectors(), equalTo(false)); assertThat(fieldType.storeTermVectorOffsets(), equalTo(false)); assertThat(fieldType.storeTermVectorPositions(), equalTo(false)); @@ -179,7 +179,7 @@ public class SimpleStringMappingTests extends ElasticsearchSingleNodeTest { fieldType = doc.rootDoc().getField("field").fieldType(); assertThat(fieldType.omitNorms(), equalTo(false)); - assertThat(fieldType.indexOptions(), equalTo(FieldInfo.IndexOptions.DOCS_AND_FREQS)); + assertThat(fieldType.indexOptions(), equalTo(IndexOptions.DOCS_AND_FREQS)); assertThat(fieldType.storeTermVectors(), equalTo(false)); assertThat(fieldType.storeTermVectorOffsets(), equalTo(false)); assertThat(fieldType.storeTermVectorPositions(), equalTo(false)); @@ -321,17 +321,17 @@ public class SimpleStringMappingTests extends ElasticsearchSingleNodeTest { .endObject() .bytes()); final Document doc = parsedDoc.rootDoc(); - assertEquals(null, docValuesType(doc, "str1")); + assertEquals(DocValuesType.NONE, docValuesType(doc, "str1")); assertEquals(DocValuesType.SORTED_SET, docValuesType(doc, "str2")); } public static DocValuesType docValuesType(Document document, String fieldName) { for (IndexableField field : document.getFields(fieldName)) { - if (field.fieldType().docValueType() != null) { + if (field.fieldType().docValueType() != DocValuesType.NONE) { return field.fieldType().docValueType(); } } - return null; + return DocValuesType.NONE; } @Test diff --git a/src/test/java/org/elasticsearch/index/mapper/timestamp/TimestampMappingTests.java b/src/test/java/org/elasticsearch/index/mapper/timestamp/TimestampMappingTests.java index 57dfeabd231..c15eb8c5cba 100644 --- a/src/test/java/org/elasticsearch/index/mapper/timestamp/TimestampMappingTests.java +++ b/src/test/java/org/elasticsearch/index/mapper/timestamp/TimestampMappingTests.java @@ -19,6 +19,7 @@ package org.elasticsearch.index.mapper.timestamp; +import org.apache.lucene.index.IndexOptions; import org.elasticsearch.Version; import org.elasticsearch.action.TimestampParsingException; import org.elasticsearch.action.index.IndexRequest; @@ -93,7 +94,7 @@ public class TimestampMappingTests extends ElasticsearchSingleNodeTest { ParsedDocument doc = docMapper.parse(SourceToParse.source(source).type("type").id("1").timestamp(1)); assertThat(doc.rootDoc().getField("_timestamp").fieldType().stored(), 
equalTo(true)); - assertThat(doc.rootDoc().getField("_timestamp").fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, doc.rootDoc().getField("_timestamp").fieldType().indexOptions()); assertThat(doc.rootDoc().getField("_timestamp").tokenStream(docMapper.indexAnalyzer(), null), notNullValue()); } @@ -106,7 +107,7 @@ public class TimestampMappingTests extends ElasticsearchSingleNodeTest { DocumentMapper docMapper = createIndex("test", ImmutableSettings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, version).build()).mapperService().documentMapperParser().parse(mapping); assertThat(docMapper.timestampFieldMapper().enabled(), equalTo(TimestampFieldMapper.Defaults.ENABLED.enabled)); assertThat(docMapper.timestampFieldMapper().fieldType().stored(), equalTo(version.onOrAfter(Version.V_2_0_0) ? true : false)); - assertThat(docMapper.timestampFieldMapper().fieldType().indexed(), equalTo(TimestampFieldMapper.Defaults.FIELD_TYPE.indexed())); + assertThat(docMapper.timestampFieldMapper().fieldType().indexOptions(), equalTo(TimestampFieldMapper.Defaults.FIELD_TYPE.indexOptions())); assertThat(docMapper.timestampFieldMapper().path(), equalTo(TimestampFieldMapper.Defaults.PATH)); assertThat(docMapper.timestampFieldMapper().dateTimeFormatter().format(), equalTo(TimestampFieldMapper.DEFAULT_DATE_TIME_FORMAT)); assertAcked(client().admin().indices().prepareDelete("test").execute().get()); @@ -126,7 +127,7 @@ public class TimestampMappingTests extends ElasticsearchSingleNodeTest { DocumentMapper docMapper = createIndex("test").mapperService().documentMapperParser().parse(mapping); assertThat(docMapper.timestampFieldMapper().enabled(), equalTo(true)); assertThat(docMapper.timestampFieldMapper().fieldType().stored(), equalTo(false)); - assertThat(docMapper.timestampFieldMapper().fieldType().indexed(), equalTo(false)); + assertEquals(IndexOptions.NONE, docMapper.timestampFieldMapper().fieldType().indexOptions()); assertThat(docMapper.timestampFieldMapper().path(), equalTo("timestamp")); assertThat(docMapper.timestampFieldMapper().dateTimeFormatter().format(), equalTo("year")); } diff --git a/src/test/java/org/elasticsearch/index/mapper/ttl/TTLMappingTests.java b/src/test/java/org/elasticsearch/index/mapper/ttl/TTLMappingTests.java index d5e2aa4987d..d60be66668c 100644 --- a/src/test/java/org/elasticsearch/index/mapper/ttl/TTLMappingTests.java +++ b/src/test/java/org/elasticsearch/index/mapper/ttl/TTLMappingTests.java @@ -19,6 +19,7 @@ package org.elasticsearch.index.mapper.ttl; +import org.apache.lucene.index.IndexOptions; import org.elasticsearch.action.admin.indices.mapping.get.GetMappingsResponse; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.compress.CompressedString; @@ -67,7 +68,7 @@ public class TTLMappingTests extends ElasticsearchSingleNodeTest { ParsedDocument doc = docMapper.parse(SourceToParse.source(source).type("type").id("1").ttl(Long.MAX_VALUE)); assertThat(doc.rootDoc().getField("_ttl").fieldType().stored(), equalTo(true)); - assertThat(doc.rootDoc().getField("_ttl").fieldType().indexed(), equalTo(true)); + assertNotSame(IndexOptions.NONE, doc.rootDoc().getField("_ttl").fieldType().indexOptions()); assertThat(doc.rootDoc().getField("_ttl").tokenStream(docMapper.indexAnalyzer(), null), notNullValue()); } @@ -77,7 +78,7 @@ public class TTLMappingTests extends ElasticsearchSingleNodeTest { DocumentMapper docMapper = createIndex("test").mapperService().documentMapperParser().parse(mapping); 
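
The SimpleNumericTests and SimpleStringMappingTests hunks above capture the other recurring rename: the enums nested in FieldInfo became the top-level org.apache.lucene.index.IndexOptions and org.apache.lucene.index.DocValuesType, DOCS_ONLY became DOCS, and "no doc values" is now the explicit DocValuesType.NONE constant rather than null. A compilable sketch of the new spellings (class name illustrative):

    import org.apache.lucene.index.DocValuesType;
    import org.apache.lucene.index.IndexOptions;

    public class EnumMoves {
        public static void main(String[] args) {
            // Lucene 4.x: FieldInfo.IndexOptions.DOCS_ONLY, FieldInfo.DocValuesType,
            // and a null sentinel for "no doc values".
            IndexOptions docsOnly = IndexOptions.DOCS;  // renamed from DOCS_ONLY
            DocValuesType none = DocValuesType.NONE;    // replaces the null sentinel
            System.out.println(docsOnly + " " + none);
        }
    }
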
assertThat(docMapper.TTLFieldMapper().enabled(), equalTo(TTLFieldMapper.Defaults.ENABLED_STATE.enabled)); assertThat(docMapper.TTLFieldMapper().fieldType().stored(), equalTo(TTLFieldMapper.Defaults.TTL_FIELD_TYPE.stored())); - assertThat(docMapper.TTLFieldMapper().fieldType().indexed(), equalTo(TTLFieldMapper.Defaults.TTL_FIELD_TYPE.indexed())); + assertThat(docMapper.TTLFieldMapper().fieldType().indexOptions(), equalTo(TTLFieldMapper.Defaults.TTL_FIELD_TYPE.indexOptions())); } @@ -91,7 +92,7 @@ public class TTLMappingTests extends ElasticsearchSingleNodeTest { DocumentMapper docMapper = createIndex("test").mapperService().documentMapperParser().parse(mapping); assertThat(docMapper.TTLFieldMapper().enabled(), equalTo(true)); assertThat(docMapper.TTLFieldMapper().fieldType().stored(), equalTo(false)); - assertThat(docMapper.TTLFieldMapper().fieldType().indexed(), equalTo(false)); + assertEquals(IndexOptions.NONE, docMapper.TTLFieldMapper().fieldType().indexOptions()); } @Test diff --git a/src/test/java/org/elasticsearch/index/query/SimpleIndexQueryParserTests.java b/src/test/java/org/elasticsearch/index/query/SimpleIndexQueryParserTests.java index b2c9875083c..76e4ba1059c 100644 --- a/src/test/java/org/elasticsearch/index/query/SimpleIndexQueryParserTests.java +++ b/src/test/java/org/elasticsearch/index/query/SimpleIndexQueryParserTests.java @@ -23,12 +23,44 @@ package org.elasticsearch.index.query; import com.google.common.collect.Lists; import com.google.common.collect.Sets; import org.apache.lucene.analysis.core.WhitespaceAnalyzer; -import org.apache.lucene.index.*; +import org.apache.lucene.index.Fields; +import org.apache.lucene.index.MultiFields; +import org.apache.lucene.index.Term; +import org.apache.lucene.index.Terms; +import org.apache.lucene.index.TermsEnum; import org.apache.lucene.index.memory.MemoryIndex; -import org.apache.lucene.queries.*; +import org.apache.lucene.queries.BoostingQuery; +import org.apache.lucene.queries.ExtendedCommonTermsQuery; +import org.apache.lucene.queries.FilterClause; +import org.apache.lucene.queries.TermFilter; +import org.apache.lucene.queries.TermsFilter; import org.apache.lucene.sandbox.queries.FuzzyLikeThisQuery; -import org.apache.lucene.search.*; -import org.apache.lucene.search.spans.*; +import org.apache.lucene.search.BooleanClause; +import org.apache.lucene.search.BooleanQuery; +import org.apache.lucene.search.ConstantScoreQuery; +import org.apache.lucene.search.DisjunctionMaxQuery; +import org.apache.lucene.search.Filter; +import org.apache.lucene.search.FilteredQuery; +import org.apache.lucene.search.FuzzyQuery; +import org.apache.lucene.search.MatchAllDocsQuery; +import org.apache.lucene.search.MultiTermQuery; +import org.apache.lucene.search.NumericRangeFilter; +import org.apache.lucene.search.NumericRangeQuery; +import org.apache.lucene.search.PrefixFilter; +import org.apache.lucene.search.PrefixQuery; +import org.apache.lucene.search.Query; +import org.apache.lucene.search.QueryWrapperFilter; +import org.apache.lucene.search.RegexpQuery; +import org.apache.lucene.search.TermQuery; +import org.apache.lucene.search.TermRangeQuery; +import org.apache.lucene.search.WildcardQuery; +import org.apache.lucene.search.spans.FieldMaskingSpanQuery; +import org.apache.lucene.search.spans.SpanFirstQuery; +import org.apache.lucene.search.spans.SpanMultiTermQueryWrapper; +import org.apache.lucene.search.spans.SpanNearQuery; +import org.apache.lucene.search.spans.SpanNotQuery; +import org.apache.lucene.search.spans.SpanOrQuery; +import 
org.apache.lucene.search.spans.SpanTermQuery; import org.apache.lucene.spatial.prefix.IntersectsPrefixTreeFilter; import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.BytesRefBuilder; @@ -41,8 +73,15 @@ import org.elasticsearch.action.termvector.TermVectorRequest; import org.elasticsearch.common.Strings; import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.compress.CompressedString; -import org.elasticsearch.common.lucene.Lucene; -import org.elasticsearch.common.lucene.search.*; +import org.elasticsearch.common.lucene.search.AndFilter; +import org.elasticsearch.common.lucene.search.LimitFilter; +import org.elasticsearch.common.lucene.search.MatchAllDocsFilter; +import org.elasticsearch.common.lucene.search.MoreLikeThisQuery; +import org.elasticsearch.common.lucene.search.NotFilter; +import org.elasticsearch.common.lucene.search.OrFilter; +import org.elasticsearch.common.lucene.search.Queries; +import org.elasticsearch.common.lucene.search.RegexpFilter; +import org.elasticsearch.common.lucene.search.XBooleanFilter; import org.elasticsearch.common.lucene.search.function.BoostScoreFunction; import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery; import org.elasticsearch.common.lucene.search.function.WeightFactorFunction; @@ -50,9 +89,9 @@ import org.elasticsearch.common.settings.ImmutableSettings; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.DistanceUnit; import org.elasticsearch.common.unit.Fuzziness; +import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentHelper; import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.index.cache.filter.support.CacheKeyFilter; import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.index.mapper.core.NumberFieldMapper; @@ -79,12 +118,52 @@ import java.util.List; import static org.elasticsearch.common.io.Streams.copyToBytesFromClasspath; import static org.elasticsearch.common.io.Streams.copyToStringFromClasspath; import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder; -import static org.elasticsearch.index.query.FilterBuilders.*; -import static org.elasticsearch.index.query.QueryBuilders.*; -import static org.elasticsearch.index.query.RegexpFlag.*; +import static org.elasticsearch.index.query.FilterBuilders.andFilter; +import static org.elasticsearch.index.query.FilterBuilders.boolFilter; +import static org.elasticsearch.index.query.FilterBuilders.notFilter; +import static org.elasticsearch.index.query.FilterBuilders.orFilter; +import static org.elasticsearch.index.query.FilterBuilders.prefixFilter; +import static org.elasticsearch.index.query.FilterBuilders.queryFilter; +import static org.elasticsearch.index.query.FilterBuilders.rangeFilter; +import static org.elasticsearch.index.query.FilterBuilders.termFilter; +import static org.elasticsearch.index.query.FilterBuilders.termsFilter; +import static org.elasticsearch.index.query.QueryBuilders.boolQuery; +import static org.elasticsearch.index.query.QueryBuilders.boostingQuery; +import static org.elasticsearch.index.query.QueryBuilders.constantScoreQuery; +import static org.elasticsearch.index.query.QueryBuilders.disMaxQuery; +import static org.elasticsearch.index.query.QueryBuilders.filteredQuery; +import static org.elasticsearch.index.query.QueryBuilders.functionScoreQuery; +import static 
org.elasticsearch.index.query.QueryBuilders.fuzzyLikeThisFieldQuery; +import static org.elasticsearch.index.query.QueryBuilders.fuzzyLikeThisQuery; +import static org.elasticsearch.index.query.QueryBuilders.fuzzyQuery; +import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery; +import static org.elasticsearch.index.query.QueryBuilders.moreLikeThisQuery; +import static org.elasticsearch.index.query.QueryBuilders.prefixQuery; +import static org.elasticsearch.index.query.QueryBuilders.queryString; +import static org.elasticsearch.index.query.QueryBuilders.rangeQuery; +import static org.elasticsearch.index.query.QueryBuilders.regexpQuery; +import static org.elasticsearch.index.query.QueryBuilders.spanFirstQuery; +import static org.elasticsearch.index.query.QueryBuilders.spanNearQuery; +import static org.elasticsearch.index.query.QueryBuilders.spanNotQuery; +import static org.elasticsearch.index.query.QueryBuilders.spanOrQuery; +import static org.elasticsearch.index.query.QueryBuilders.spanTermQuery; +import static org.elasticsearch.index.query.QueryBuilders.termQuery; +import static org.elasticsearch.index.query.QueryBuilders.termsQuery; +import static org.elasticsearch.index.query.QueryBuilders.wildcardQuery; +import static org.elasticsearch.index.query.RegexpFlag.COMPLEMENT; +import static org.elasticsearch.index.query.RegexpFlag.EMPTY; +import static org.elasticsearch.index.query.RegexpFlag.INTERSECTION; import static org.elasticsearch.index.query.functionscore.ScoreFunctionBuilders.factorFunction; import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertBooleanSubQuery; -import static org.hamcrest.Matchers.*; +import static org.hamcrest.Matchers.closeTo; +import static org.hamcrest.Matchers.containsString; +import static org.hamcrest.Matchers.equalTo; +import static org.hamcrest.Matchers.instanceOf; +import static org.hamcrest.Matchers.is; +import static org.hamcrest.Matchers.not; +import static org.hamcrest.Matchers.notNullValue; +import static org.hamcrest.Matchers.nullValue; +import static org.hamcrest.Matchers.sameInstance; /** * @@ -513,22 +592,22 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest { } @Test public void testPrefixFilteredQueryBuilder() throws IOException { IndexQueryParserService queryParser = queryParser(); Query parsedQuery = queryParser.parse(filteredQuery(termQuery("name.first", "shay"), prefixFilter("name.first", "sh"))).query(); - assertThat(parsedQuery, instanceOf(XFilteredQuery.class)); - XFilteredQuery filteredQuery = (XFilteredQuery) parsedQuery; + assertThat(parsedQuery, instanceOf(FilteredQuery.class)); + FilteredQuery filteredQuery = (FilteredQuery) parsedQuery; PrefixFilter prefixFilter = (PrefixFilter) filteredQuery.getFilter(); assertThat(prefixFilter.getPrefix(), equalTo(new Term("name.first", "sh"))); } @Test public void testPrefixFilteredQuery() throws IOException { IndexQueryParserService queryParser = queryParser(); String query = copyToStringFromClasspath("/org/elasticsearch/index/query/prefix-filter.json"); Query parsedQuery = queryParser.parse(query).query(); - assertThat(parsedQuery, instanceOf(XFilteredQuery.class)); - XFilteredQuery filteredQuery = (XFilteredQuery) parsedQuery; + assertThat(parsedQuery, instanceOf(FilteredQuery.class)); + FilteredQuery filteredQuery = (FilteredQuery) parsedQuery; PrefixFilter prefixFilter = (PrefixFilter)
filteredQuery.getFilter(); assertThat(prefixFilter.getPrefix(), equalTo(new Term("name.first", "sh"))); } @@ -539,8 +618,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest { String query = copyToStringFromClasspath("/org/elasticsearch/index/query/prefix-filter-named.json"); ParsedQuery parsedQuery = queryParser.parse(query); assertThat(parsedQuery.namedFilters().containsKey("test"), equalTo(true)); - assertThat(parsedQuery.query(), instanceOf(XFilteredQuery.class)); - XFilteredQuery filteredQuery = (XFilteredQuery) parsedQuery.query(); + assertThat(parsedQuery.query(), instanceOf(FilteredQuery.class)); + FilteredQuery filteredQuery = (FilteredQuery) parsedQuery.query(); PrefixFilter prefixFilter = (PrefixFilter) filteredQuery.getFilter(); assertThat(prefixFilter.getPrefix(), equalTo(new Term("name.first", "sh"))); } @@ -600,8 +679,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest { IndexQueryParserService queryParser = queryParser(); String query = copyToStringFromClasspath("/org/elasticsearch/index/query/regexp-filter.json"); Query parsedQuery = queryParser.parse(query).query(); - assertThat(parsedQuery, instanceOf(XFilteredQuery.class)); - Filter filter = ((XFilteredQuery) parsedQuery).getFilter(); + assertThat(parsedQuery, instanceOf(FilteredQuery.class)); + Filter filter = ((FilteredQuery) parsedQuery).getFilter(); assertThat(filter, instanceOf(RegexpFilter.class)); RegexpFilter regexpFilter = (RegexpFilter) filter; assertThat(regexpFilter.field(), equalTo("name.first")); @@ -614,8 +693,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest { String query = copyToStringFromClasspath("/org/elasticsearch/index/query/regexp-filter-named.json"); ParsedQuery parsedQuery = queryParser.parse(query); assertThat(parsedQuery.namedFilters().containsKey("test"), equalTo(true)); - assertThat(parsedQuery.query(), instanceOf(XFilteredQuery.class)); - Filter filter = ((XFilteredQuery) parsedQuery.query()).getFilter(); + assertThat(parsedQuery.query(), instanceOf(FilteredQuery.class)); + Filter filter = ((FilteredQuery) parsedQuery.query()).getFilter(); assertThat(filter, instanceOf(RegexpFilter.class)); RegexpFilter regexpFilter = (RegexpFilter) filter; assertThat(regexpFilter.field(), equalTo("name.first")); @@ -627,8 +706,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest { IndexQueryParserService queryParser = queryParser(); String query = copyToStringFromClasspath("/org/elasticsearch/index/query/regexp-filter-flags.json"); ParsedQuery parsedQuery = queryParser.parse(query); - assertThat(parsedQuery.query(), instanceOf(XFilteredQuery.class)); - Filter filter = ((XFilteredQuery) parsedQuery.query()).getFilter(); + assertThat(parsedQuery.query(), instanceOf(FilteredQuery.class)); + Filter filter = ((FilteredQuery) parsedQuery.query()).getFilter(); assertThat(filter, instanceOf(RegexpFilter.class)); RegexpFilter regexpFilter = (RegexpFilter) filter; assertThat(regexpFilter.field(), equalTo("name.first")); @@ -641,8 +720,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest { IndexQueryParserService queryParser = queryParser(); String query = copyToStringFromClasspath("/org/elasticsearch/index/query/regexp-filter-flags-named-cached.json"); ParsedQuery parsedQuery = queryParser.parse(query); - assertThat(parsedQuery.query(), instanceOf(XFilteredQuery.class)); - Filter filter = ((XFilteredQuery) parsedQuery.query()).getFilter(); + 
assertThat(parsedQuery.query(), instanceOf(FilteredQuery.class)); + Filter filter = ((FilteredQuery) parsedQuery.query()).getFilter(); assertThat(filter, instanceOf(CacheKeyFilter.Wrapper.class)); CacheKeyFilter.Wrapper wrapper = (CacheKeyFilter.Wrapper) filter; assertThat(new BytesRef(wrapper.cacheKey().bytes()).utf8ToString(), equalTo("key")); @@ -743,8 +822,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest { IndexQueryParserService queryParser = queryParser(); Query parsedQuery = queryParser.parse(filteredQuery(termQuery("name.first", "shay"), rangeFilter("age").from(23).to(54).includeLower(true).includeUpper(false))).query(); // since age is automatically registered in data, we encode it as numeric - assertThat(parsedQuery, instanceOf(XFilteredQuery.class)); - Filter filter = ((XFilteredQuery) parsedQuery).getFilter(); + assertThat(parsedQuery, instanceOf(FilteredQuery.class)); + Filter filter = ((FilteredQuery) parsedQuery).getFilter(); assertThat(filter, instanceOf(NumericRangeFilter.class)); NumericRangeFilter rangeFilter = (NumericRangeFilter) filter; assertThat(rangeFilter.getField(), equalTo("age")); @@ -760,8 +839,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest { String query = copyToStringFromClasspath("/org/elasticsearch/index/query/range-filter.json"); Query parsedQuery = queryParser.parse(query).query(); // since age is automatically registered in data, we encode it as numeric - assertThat(parsedQuery, instanceOf(XFilteredQuery.class)); - Filter filter = ((XFilteredQuery) parsedQuery).getFilter(); + assertThat(parsedQuery, instanceOf(FilteredQuery.class)); + Filter filter = ((FilteredQuery) parsedQuery).getFilter(); assertThat(filter, instanceOf(NumericRangeFilter.class)); NumericRangeFilter rangeFilter = (NumericRangeFilter) filter; assertThat(rangeFilter.getField(), equalTo("age")); @@ -777,8 +856,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest { String query = copyToStringFromClasspath("/org/elasticsearch/index/query/range-filter-named.json"); ParsedQuery parsedQuery = queryParser.parse(query); assertThat(parsedQuery.namedFilters().containsKey("test"), equalTo(true)); - assertThat(parsedQuery.query(), instanceOf(XFilteredQuery.class)); - Filter filter = ((XFilteredQuery) parsedQuery.query()).getFilter(); + assertThat(parsedQuery.query(), instanceOf(FilteredQuery.class)); + Filter filter = ((FilteredQuery) parsedQuery.query()).getFilter(); assertThat(filter, instanceOf(NumericRangeFilter.class)); NumericRangeFilter rangeFilter = (NumericRangeFilter) filter; assertThat(rangeFilter.getField(), equalTo("age")); @@ -792,8 +871,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest { public void testRangeFilteredQueryBuilder_executionFieldData() throws IOException { IndexQueryParserService queryParser = queryParser(); Query parsedQuery = queryParser.parse(filteredQuery(termQuery("name.first", "shay"), rangeFilter("age").from(23).to(54).includeLower(true).includeUpper(false).setExecution("fielddata"))).query(); - assertThat(parsedQuery, instanceOf(XFilteredQuery.class)); - Filter filter = ((XFilteredQuery) parsedQuery).getFilter(); + assertThat(parsedQuery, instanceOf(FilteredQuery.class)); + Filter filter = ((FilteredQuery) parsedQuery).getFilter(); assertThat(filter, instanceOf(NumericRangeFieldDataFilter.class)); NumericRangeFieldDataFilter rangeFilter = (NumericRangeFieldDataFilter) filter; assertThat(rangeFilter.getField(), equalTo("age")); @@ 
@@ -808,8 +887,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         IndexQueryParserService queryParser = queryParser();
         Query parsedQuery = queryParser.parse(filteredQuery(termQuery("name.first", "shay"), boolFilter().must(termFilter("name.first", "shay1"), termFilter("name.first", "shay4")).mustNot(termFilter("name.first", "shay2")).should(termFilter("name.first", "shay3")))).query();
-        assertThat(parsedQuery, instanceOf(XFilteredQuery.class));
-        XFilteredQuery filteredQuery = (XFilteredQuery) parsedQuery;
+        assertThat(parsedQuery, instanceOf(FilteredQuery.class));
+        FilteredQuery filteredQuery = (FilteredQuery) parsedQuery;
         XBooleanFilter booleanFilter = (XBooleanFilter) filteredQuery.getFilter();
 
         Iterator iterator = booleanFilter.iterator();
@@ -842,8 +921,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         IndexQueryParserService queryParser = queryParser();
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/bool-filter.json");
         Query parsedQuery = queryParser.parse(query).query();
-        assertThat(parsedQuery, instanceOf(XFilteredQuery.class));
-        XFilteredQuery filteredQuery = (XFilteredQuery) parsedQuery;
+        assertThat(parsedQuery, instanceOf(FilteredQuery.class));
+        FilteredQuery filteredQuery = (FilteredQuery) parsedQuery;
         XBooleanFilter booleanFilter = (XBooleanFilter) filteredQuery.getFilter();
 
         Iterator iterator = booleanFilter.iterator();
@@ -874,8 +953,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
     public void testAndFilteredQueryBuilder() throws IOException {
         IndexQueryParserService queryParser = queryParser();
         Query parsedQuery = queryParser.parse(filteredQuery(matchAllQuery(), andFilter(termFilter("name.first", "shay1"), termFilter("name.first", "shay4")))).query();
-        assertThat(parsedQuery, instanceOf(XConstantScoreQuery.class));
-        XConstantScoreQuery constantScoreQuery = (XConstantScoreQuery) parsedQuery;
+        assertThat(parsedQuery, instanceOf(ConstantScoreQuery.class));
+        ConstantScoreQuery constantScoreQuery = (ConstantScoreQuery) parsedQuery;
         AndFilter andFilter = (AndFilter) constantScoreQuery.getFilter();
 
         assertThat(andFilter.filters().size(), equalTo(2));
@@ -888,8 +967,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         IndexQueryParserService queryParser = queryParser();
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/and-filter.json");
         Query parsedQuery = queryParser.parse(query).query();
-        assertThat(parsedQuery, instanceOf(XFilteredQuery.class));
-        XFilteredQuery filteredQuery = (XFilteredQuery) parsedQuery;
+        assertThat(parsedQuery, instanceOf(FilteredQuery.class));
+        FilteredQuery filteredQuery = (FilteredQuery) parsedQuery;
         AndFilter andFilter = (AndFilter) filteredQuery.getFilter();
 
         assertThat(andFilter.filters().size(), equalTo(2));
@@ -903,8 +982,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/and-filter-named.json");
         ParsedQuery parsedQuery = queryParser.parse(query);
         assertThat(parsedQuery.namedFilters().containsKey("test"), equalTo(true));
-        assertThat(parsedQuery.query(), instanceOf(XFilteredQuery.class));
-        XFilteredQuery filteredQuery = (XFilteredQuery) parsedQuery.query();
+        assertThat(parsedQuery.query(), instanceOf(FilteredQuery.class));
+        FilteredQuery filteredQuery = (FilteredQuery) parsedQuery.query();
         AndFilter andFilter = (AndFilter) filteredQuery.getFilter();
 
         assertThat(andFilter.filters().size(), equalTo(2));
@@ -917,8 +996,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         IndexQueryParserService queryParser = queryParser();
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/and-filter2.json");
         Query parsedQuery = queryParser.parse(query).query();
-        assertThat(parsedQuery, instanceOf(XFilteredQuery.class));
-        XFilteredQuery filteredQuery = (XFilteredQuery) parsedQuery;
+        assertThat(parsedQuery, instanceOf(FilteredQuery.class));
+        FilteredQuery filteredQuery = (FilteredQuery) parsedQuery;
         AndFilter andFilter = (AndFilter) filteredQuery.getFilter();
 
         assertThat(andFilter.filters().size(), equalTo(2));
@@ -930,8 +1009,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
     public void testOrFilteredQueryBuilder() throws IOException {
         IndexQueryParserService queryParser = queryParser();
         Query parsedQuery = queryParser.parse(filteredQuery(matchAllQuery(), orFilter(termFilter("name.first", "shay1"), termFilter("name.first", "shay4")))).query();
-        assertThat(parsedQuery, instanceOf(XConstantScoreQuery.class));
-        XConstantScoreQuery constantScoreQuery = (XConstantScoreQuery) parsedQuery;
+        assertThat(parsedQuery, instanceOf(ConstantScoreQuery.class));
+        ConstantScoreQuery constantScoreQuery = (ConstantScoreQuery) parsedQuery;
         OrFilter andFilter = (OrFilter) constantScoreQuery.getFilter();
 
         assertThat(andFilter.filters().size(), equalTo(2));
@@ -944,8 +1023,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         IndexQueryParserService queryParser = queryParser();
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/or-filter.json");
         Query parsedQuery = queryParser.parse(query).query();
-        assertThat(parsedQuery, instanceOf(XFilteredQuery.class));
-        XFilteredQuery filteredQuery = (XFilteredQuery) parsedQuery;
+        assertThat(parsedQuery, instanceOf(FilteredQuery.class));
+        FilteredQuery filteredQuery = (FilteredQuery) parsedQuery;
         OrFilter orFilter = (OrFilter) filteredQuery.getFilter();
 
         assertThat(orFilter.filters().size(), equalTo(2));
@@ -958,8 +1037,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         IndexQueryParserService queryParser = queryParser();
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/or-filter2.json");
         Query parsedQuery = queryParser.parse(query).query();
-        assertThat(parsedQuery, instanceOf(XFilteredQuery.class));
-        XFilteredQuery filteredQuery = (XFilteredQuery) parsedQuery;
+        assertThat(parsedQuery, instanceOf(FilteredQuery.class));
+        FilteredQuery filteredQuery = (FilteredQuery) parsedQuery;
         OrFilter orFilter = (OrFilter) filteredQuery.getFilter();
 
         assertThat(orFilter.filters().size(), equalTo(2));
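
Likewise for constant score: the XConstantScoreQuery fork is replaced by Lucene's own ConstantScoreQuery, which in the 5.0 snapshot this patch targets can still be built directly around a Filter and hand it back via getFilter(). A sketch, with an illustrative filter:

    import org.apache.lucene.index.Term;
    import org.apache.lucene.queries.TermFilter;
    import org.apache.lucene.search.ConstantScoreQuery;
    import org.apache.lucene.search.Filter;

    Filter filter = new TermFilter(new Term("name.first", "shay1"));
    ConstantScoreQuery csq = new ConstantScoreQuery(filter);   // every match scores 1.0 * boost
    Filter unwrapped = csq.getFilter();                        // how the assertions unwrap it again
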
Term("name.first", "shay1"))); @@ -983,8 +1062,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest { IndexQueryParserService queryParser = queryParser(); String query = copyToStringFromClasspath("/org/elasticsearch/index/query/not-filter.json"); Query parsedQuery = queryParser.parse(query).query(); - assertThat(parsedQuery, instanceOf(XFilteredQuery.class)); - XFilteredQuery filteredQuery = (XFilteredQuery) parsedQuery; + assertThat(parsedQuery, instanceOf(FilteredQuery.class)); + FilteredQuery filteredQuery = (FilteredQuery) parsedQuery; assertThat(((TermQuery) filteredQuery.getQuery()).getTerm(), equalTo(new Term("name.first", "shay"))); NotFilter notFilter = (NotFilter) filteredQuery.getFilter(); @@ -996,8 +1075,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest { IndexQueryParserService queryParser = queryParser(); String query = copyToStringFromClasspath("/org/elasticsearch/index/query/not-filter2.json"); Query parsedQuery = queryParser.parse(query).query(); - assertThat(parsedQuery, instanceOf(XFilteredQuery.class)); - XFilteredQuery filteredQuery = (XFilteredQuery) parsedQuery; + assertThat(parsedQuery, instanceOf(FilteredQuery.class)); + FilteredQuery filteredQuery = (FilteredQuery) parsedQuery; assertThat(((TermQuery) filteredQuery.getQuery()).getTerm(), equalTo(new Term("name.first", "shay"))); NotFilter notFilter = (NotFilter) filteredQuery.getFilter(); @@ -1009,8 +1088,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest { IndexQueryParserService queryParser = queryParser(); String query = copyToStringFromClasspath("/org/elasticsearch/index/query/not-filter3.json"); Query parsedQuery = queryParser.parse(query).query(); - assertThat(parsedQuery, instanceOf(XFilteredQuery.class)); - XFilteredQuery filteredQuery = (XFilteredQuery) parsedQuery; + assertThat(parsedQuery, instanceOf(FilteredQuery.class)); + FilteredQuery filteredQuery = (FilteredQuery) parsedQuery; assertThat(((TermQuery) filteredQuery.getQuery()).getTerm(), equalTo(new Term("name.first", "shay"))); NotFilter notFilter = (NotFilter) filteredQuery.getFilter(); @@ -1182,8 +1261,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest { public void testFilteredQueryBuilder() throws IOException { IndexQueryParserService queryParser = queryParser(); Query parsedQuery = queryParser.parse(filteredQuery(termQuery("name.first", "shay"), termFilter("name.last", "banon"))).query(); - assertThat(parsedQuery, instanceOf(XFilteredQuery.class)); - XFilteredQuery filteredQuery = (XFilteredQuery) parsedQuery; + assertThat(parsedQuery, instanceOf(FilteredQuery.class)); + FilteredQuery filteredQuery = (FilteredQuery) parsedQuery; assertThat(((TermQuery) filteredQuery.getQuery()).getTerm(), equalTo(new Term("name.first", "shay"))); assertThat(((TermFilter) filteredQuery.getFilter()).getTerm(), equalTo(new Term("name.last", "banon"))); } @@ -1193,8 +1272,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest { IndexQueryParserService queryParser = queryParser(); String query = copyToStringFromClasspath("/org/elasticsearch/index/query/filtered-query.json"); Query parsedQuery = queryParser.parse(query).query(); - assertThat(parsedQuery, instanceOf(XFilteredQuery.class)); - XFilteredQuery filteredQuery = (XFilteredQuery) parsedQuery; + assertThat(parsedQuery, instanceOf(FilteredQuery.class)); + FilteredQuery filteredQuery = (FilteredQuery) parsedQuery; assertThat(((TermQuery) 
filteredQuery.getQuery()).getTerm(), equalTo(new Term("name.first", "shay"))); assertThat(((TermFilter) filteredQuery.getFilter()).getTerm(), equalTo(new Term("name.last", "banon"))); } @@ -1204,8 +1283,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest { IndexQueryParserService queryParser = queryParser(); String query = copyToStringFromClasspath("/org/elasticsearch/index/query/filtered-query2.json"); Query parsedQuery = queryParser.parse(query).query(); - assertThat(parsedQuery, instanceOf(XFilteredQuery.class)); - XFilteredQuery filteredQuery = (XFilteredQuery) parsedQuery; + assertThat(parsedQuery, instanceOf(FilteredQuery.class)); + FilteredQuery filteredQuery = (FilteredQuery) parsedQuery; assertThat(((TermQuery) filteredQuery.getQuery()).getTerm(), equalTo(new Term("name.first", "shay"))); assertThat(((TermFilter) filteredQuery.getFilter()).getTerm(), equalTo(new Term("name.last", "banon"))); } @@ -1215,8 +1294,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest { IndexQueryParserService queryParser = queryParser(); String query = copyToStringFromClasspath("/org/elasticsearch/index/query/filtered-query3.json"); Query parsedQuery = queryParser.parse(query).query(); - assertThat(parsedQuery, instanceOf(XFilteredQuery.class)); - XFilteredQuery filteredQuery = (XFilteredQuery) parsedQuery; + assertThat(parsedQuery, instanceOf(FilteredQuery.class)); + FilteredQuery filteredQuery = (FilteredQuery) parsedQuery; assertThat(((TermQuery) filteredQuery.getQuery()).getTerm(), equalTo(new Term("name.first", "shay"))); Filter filter = filteredQuery.getFilter(); @@ -1232,8 +1311,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest { IndexQueryParserService queryParser = queryParser(); String query = copyToStringFromClasspath("/org/elasticsearch/index/query/filtered-query4.json"); Query parsedQuery = queryParser.parse(query).query(); - assertThat(parsedQuery, instanceOf(XFilteredQuery.class)); - XFilteredQuery filteredQuery = (XFilteredQuery) parsedQuery; + assertThat(parsedQuery, instanceOf(FilteredQuery.class)); + FilteredQuery filteredQuery = (FilteredQuery) parsedQuery; WildcardQuery wildcardQuery = (WildcardQuery) filteredQuery.getQuery(); assertThat(wildcardQuery.getTerm(), equalTo(new Term("name.first", "sh*"))); assertThat((double) wildcardQuery.getBoost(), closeTo(1.1, 0.001)); @@ -1246,8 +1325,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest { IndexQueryParserService queryParser = queryParser(); String query = copyToStringFromClasspath("/org/elasticsearch/index/query/limit-filter.json"); Query parsedQuery = queryParser.parse(query).query(); - assertThat(parsedQuery, instanceOf(XFilteredQuery.class)); - XFilteredQuery filteredQuery = (XFilteredQuery) parsedQuery; + assertThat(parsedQuery, instanceOf(FilteredQuery.class)); + FilteredQuery filteredQuery = (FilteredQuery) parsedQuery; assertThat(filteredQuery.getFilter(), instanceOf(LimitFilter.class)); assertThat(((LimitFilter) filteredQuery.getFilter()).getLimit(), equalTo(2)); @@ -1260,8 +1339,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest { IndexQueryParserService queryParser = queryParser(); String query = copyToStringFromClasspath("/org/elasticsearch/index/query/term-filter.json"); Query parsedQuery = queryParser.parse(query).query(); - assertThat(parsedQuery, instanceOf(XFilteredQuery.class)); - XFilteredQuery filteredQuery = (XFilteredQuery) parsedQuery; + assertThat(parsedQuery, 
@@ -1260,8 +1339,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         IndexQueryParserService queryParser = queryParser();
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/term-filter.json");
         Query parsedQuery = queryParser.parse(query).query();
-        assertThat(parsedQuery, instanceOf(XFilteredQuery.class));
-        XFilteredQuery filteredQuery = (XFilteredQuery) parsedQuery;
+        assertThat(parsedQuery, instanceOf(FilteredQuery.class));
+        FilteredQuery filteredQuery = (FilteredQuery) parsedQuery;
         assertThat(filteredQuery.getFilter(), instanceOf(TermFilter.class));
         TermFilter termFilter = (TermFilter) filteredQuery.getFilter();
         assertThat(termFilter.getTerm().field(), equalTo("name.last"));
@@ -1274,8 +1353,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/term-filter-named.json");
         ParsedQuery parsedQuery = queryParser.parse(query);
         assertThat(parsedQuery.namedFilters().containsKey("test"), equalTo(true));
-        assertThat(parsedQuery.query(), instanceOf(XFilteredQuery.class));
-        XFilteredQuery filteredQuery = (XFilteredQuery) parsedQuery.query();
+        assertThat(parsedQuery.query(), instanceOf(FilteredQuery.class));
+        FilteredQuery filteredQuery = (FilteredQuery) parsedQuery.query();
         assertThat(filteredQuery.getFilter(), instanceOf(TermFilter.class));
         TermFilter termFilter = (TermFilter) filteredQuery.getFilter();
         assertThat(termFilter.getTerm().field(), equalTo("name.last"));
@@ -1286,8 +1365,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
     public void testTermsFilterQueryBuilder() throws Exception {
         IndexQueryParserService queryParser = queryParser();
         Query parsedQuery = queryParser.parse(filteredQuery(termQuery("name.first", "shay"), termsFilter("name.last", "banon", "kimchy"))).query();
-        assertThat(parsedQuery, instanceOf(XFilteredQuery.class));
-        XFilteredQuery filteredQuery = (XFilteredQuery) parsedQuery;
+        assertThat(parsedQuery, instanceOf(FilteredQuery.class));
+        FilteredQuery filteredQuery = (FilteredQuery) parsedQuery;
         assertThat(filteredQuery.getFilter(), instanceOf(TermsFilter.class));
         TermsFilter termsFilter = (TermsFilter) filteredQuery.getFilter();
         //assertThat(termsFilter.getTerms().length, equalTo(2));
@@ -1300,8 +1379,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         IndexQueryParserService queryParser = queryParser();
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/terms-filter.json");
         Query parsedQuery = queryParser.parse(query).query();
-        assertThat(parsedQuery, instanceOf(XFilteredQuery.class));
-        XFilteredQuery filteredQuery = (XFilteredQuery) parsedQuery;
+        assertThat(parsedQuery, instanceOf(FilteredQuery.class));
+        FilteredQuery filteredQuery = (FilteredQuery) parsedQuery;
         assertThat(filteredQuery.getFilter(), instanceOf(TermsFilter.class));
         TermsFilter termsFilter = (TermsFilter) filteredQuery.getFilter();
         //assertThat(termsFilter.getTerms().length, equalTo(2));
@@ -1314,8 +1393,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/terms-filter-named.json");
         ParsedQuery parsedQuery = queryParser.parse(query);
         assertThat(parsedQuery.namedFilters().containsKey("test"), equalTo(true));
-        assertThat(parsedQuery.query(), instanceOf(XFilteredQuery.class));
-        XFilteredQuery filteredQuery = (XFilteredQuery) parsedQuery.query();
+        assertThat(parsedQuery.query(), instanceOf(FilteredQuery.class));
+        FilteredQuery filteredQuery = (FilteredQuery) parsedQuery.query();
         assertThat(filteredQuery.getFilter(), instanceOf(TermsFilter.class));
         TermsFilter termsFilter = (TermsFilter) filteredQuery.getFilter();
         //assertThat(termsFilter.getTerms().length, equalTo(2));
@@ -1326,8 +1405,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
     public void testConstantScoreQueryBuilder() throws IOException {
         IndexQueryParserService queryParser = queryParser();
         Query parsedQuery = queryParser.parse(constantScoreQuery(termFilter("name.last", "banon"))).query();
-        assertThat(parsedQuery, instanceOf(XConstantScoreQuery.class));
-        XConstantScoreQuery constantScoreQuery = (XConstantScoreQuery) parsedQuery;
+        assertThat(parsedQuery, instanceOf(ConstantScoreQuery.class));
+        ConstantScoreQuery constantScoreQuery = (ConstantScoreQuery) parsedQuery;
         assertThat(((TermFilter) constantScoreQuery.getFilter()).getTerm(), equalTo(new Term("name.last", "banon")));
     }
@@ -1336,8 +1415,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         IndexQueryParserService queryParser = queryParser();
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/constantScore-query.json");
         Query parsedQuery = queryParser.parse(query).query();
-        assertThat(parsedQuery, instanceOf(XConstantScoreQuery.class));
-        XConstantScoreQuery constantScoreQuery = (XConstantScoreQuery) parsedQuery;
+        assertThat(parsedQuery, instanceOf(ConstantScoreQuery.class));
+        ConstantScoreQuery constantScoreQuery = (ConstantScoreQuery) parsedQuery;
         assertThat(((TermFilter) constantScoreQuery.getFilter()).getTerm(), equalTo(new Term("name.last", "banon")));
     }
@@ -1357,8 +1436,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         Query parsedQuery = queryParser.parse(functionScoreQuery(factorFunction(1.3f))).query();
         assertThat(parsedQuery, instanceOf(FunctionScoreQuery.class));
         FunctionScoreQuery functionScoreQuery = (FunctionScoreQuery) parsedQuery;
-        assertThat(functionScoreQuery.getSubQuery() instanceof XConstantScoreQuery, equalTo(true));
-        assertThat(((XConstantScoreQuery) functionScoreQuery.getSubQuery()).getFilter() instanceof MatchAllDocsFilter, equalTo(true));
+        assertThat(functionScoreQuery.getSubQuery() instanceof ConstantScoreQuery, equalTo(true));
+        assertThat(((ConstantScoreQuery) functionScoreQuery.getSubQuery()).getFilter() instanceof MatchAllDocsFilter, equalTo(true));
         assertThat((double) ((BoostScoreFunction) functionScoreQuery.getFunction()).getBoost(), closeTo(1.3, 0.001));
     }
@@ -1583,8 +1662,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
     public void testQueryFilterBuilder() throws Exception {
         IndexQueryParserService queryParser = queryParser();
         Query parsedQuery = queryParser.parse(filteredQuery(termQuery("name.first", "shay"), queryFilter(termQuery("name.last", "banon")))).query();
-        assertThat(parsedQuery, instanceOf(XFilteredQuery.class));
-        XFilteredQuery filteredQuery = (XFilteredQuery) parsedQuery;
+        assertThat(parsedQuery, instanceOf(FilteredQuery.class));
+        FilteredQuery filteredQuery = (FilteredQuery) parsedQuery;
         QueryWrapperFilter queryWrapperFilter = (QueryWrapperFilter) filteredQuery.getFilter();
         Field field = QueryWrapperFilter.class.getDeclaredField("query");
         field.setAccessible(true);
@@ -1598,8 +1677,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         IndexQueryParserService queryParser = queryParser();
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/query-filter.json");
         Query parsedQuery = queryParser.parse(query).query();
-        assertThat(parsedQuery, instanceOf(XFilteredQuery.class));
-        XFilteredQuery filteredQuery = (XFilteredQuery) parsedQuery;
+        assertThat(parsedQuery, instanceOf(FilteredQuery.class));
+        FilteredQuery filteredQuery = (FilteredQuery) parsedQuery;
         QueryWrapperFilter queryWrapperFilter = (QueryWrapperFilter) filteredQuery.getFilter();
         Field field = QueryWrapperFilter.class.getDeclaredField("query");
         field.setAccessible(true);
@@ -1614,8 +1693,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/fquery-filter.json");
         ParsedQuery parsedQuery = queryParser.parse(query);
         assertThat(parsedQuery.namedFilters().containsKey("test"), equalTo(true));
-        assertThat(parsedQuery.query(), instanceOf(XFilteredQuery.class));
-        XFilteredQuery filteredQuery = (XFilteredQuery) parsedQuery.query();
+        assertThat(parsedQuery.query(), instanceOf(FilteredQuery.class));
+        FilteredQuery filteredQuery = (FilteredQuery) parsedQuery.query();
         QueryWrapperFilter queryWrapperFilter = (QueryWrapperFilter) filteredQuery.getFilter();
         Field field = QueryWrapperFilter.class.getDeclaredField("query");
         field.setAccessible(true);
@@ -1731,7 +1810,7 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
     private static Fields generateFields(String[] fieldNames, String text) throws IOException {
         MemoryIndex index = new MemoryIndex();
         for (String fieldName : fieldNames) {
-            index.addField(fieldName, text, new WhitespaceAnalyzer(Lucene.VERSION));
+            index.addField(fieldName, text, new WhitespaceAnalyzer());
         }
         return MultiFields.getFields(index.createSearcher().getIndexReader());
     }
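
The generateFields() hunk is a different Lucene 5 cleanup: analyzers no longer take a Version constructor argument, so new WhitespaceAnalyzer(Lucene.VERSION) becomes new WhitespaceAnalyzer(). A sketch of the version-free construction (MemoryIndex mirrors the helper above; the field name and text are illustrative):

    import org.apache.lucene.analysis.core.WhitespaceAnalyzer;
    import org.apache.lucene.index.memory.MemoryIndex;

    MemoryIndex index = new MemoryIndex();
    // 4.x: new WhitespaceAnalyzer(Lucene.VERSION) -- 5.x: no Version parameter
    index.addField("body", "quick brown fox", new WhitespaceAnalyzer());
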
@@ -1823,8 +1902,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/geo_distance-named.json");
         ParsedQuery parsedQuery = queryParser.parse(query);
         assertThat(parsedQuery.namedFilters().containsKey("test"), equalTo(true));
-        assertThat(parsedQuery.query(), instanceOf(XConstantScoreQuery.class));
-        XConstantScoreQuery constantScoreQuery = (XConstantScoreQuery) parsedQuery.query();
+        assertThat(parsedQuery.query(), instanceOf(ConstantScoreQuery.class));
+        ConstantScoreQuery constantScoreQuery = (ConstantScoreQuery) parsedQuery.query();
         GeoDistanceFilter filter = (GeoDistanceFilter) constantScoreQuery.getFilter();
         assertThat(filter.fieldName(), equalTo("location"));
         assertThat(filter.lat(), closeTo(40, 0.00001));
@@ -1837,8 +1916,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         IndexQueryParserService queryParser = queryParser();
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/geo_distance1.json");
         Query parsedQuery = queryParser.parse(query).query();
-        assertThat(parsedQuery, instanceOf(XConstantScoreQuery.class));
-        XConstantScoreQuery constantScoreQuery = (XConstantScoreQuery) parsedQuery;
+        assertThat(parsedQuery, instanceOf(ConstantScoreQuery.class));
+        ConstantScoreQuery constantScoreQuery = (ConstantScoreQuery) parsedQuery;
         GeoDistanceFilter filter = (GeoDistanceFilter) constantScoreQuery.getFilter();
         assertThat(filter.fieldName(), equalTo("location"));
         assertThat(filter.lat(), closeTo(40, 0.00001));
@@ -1851,8 +1930,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         IndexQueryParserService queryParser = queryParser();
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/geo_distance2.json");
         Query parsedQuery = queryParser.parse(query).query();
-        assertThat(parsedQuery, instanceOf(XConstantScoreQuery.class));
-        XConstantScoreQuery constantScoreQuery = (XConstantScoreQuery) parsedQuery;
+        assertThat(parsedQuery, instanceOf(ConstantScoreQuery.class));
+        ConstantScoreQuery constantScoreQuery = (ConstantScoreQuery) parsedQuery;
         GeoDistanceFilter filter = (GeoDistanceFilter) constantScoreQuery.getFilter();
         assertThat(filter.fieldName(), equalTo("location"));
         assertThat(filter.lat(), closeTo(40, 0.00001));
@@ -1865,8 +1944,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         IndexQueryParserService queryParser = queryParser();
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/geo_distance3.json");
         Query parsedQuery = queryParser.parse(query).query();
-        assertThat(parsedQuery, instanceOf(XConstantScoreQuery.class));
-        XConstantScoreQuery constantScoreQuery = (XConstantScoreQuery) parsedQuery;
+        assertThat(parsedQuery, instanceOf(ConstantScoreQuery.class));
+        ConstantScoreQuery constantScoreQuery = (ConstantScoreQuery) parsedQuery;
         GeoDistanceFilter filter = (GeoDistanceFilter) constantScoreQuery.getFilter();
         assertThat(filter.fieldName(), equalTo("location"));
         assertThat(filter.lat(), closeTo(40, 0.00001));
@@ -1879,8 +1958,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         IndexQueryParserService queryParser = queryParser();
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/geo_distance4.json");
         Query parsedQuery = queryParser.parse(query).query();
-        assertThat(parsedQuery, instanceOf(XConstantScoreQuery.class));
-        XConstantScoreQuery constantScoreQuery = (XConstantScoreQuery) parsedQuery;
+        assertThat(parsedQuery, instanceOf(ConstantScoreQuery.class));
+        ConstantScoreQuery constantScoreQuery = (ConstantScoreQuery) parsedQuery;
         GeoDistanceFilter filter = (GeoDistanceFilter) constantScoreQuery.getFilter();
         assertThat(filter.fieldName(), equalTo("location"));
         assertThat(filter.lat(), closeTo(40, 0.00001));
@@ -1893,8 +1972,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         IndexQueryParserService queryParser = queryParser();
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/geo_distance5.json");
         Query parsedQuery = queryParser.parse(query).query();
-        assertThat(parsedQuery, instanceOf(XConstantScoreQuery.class));
-        XConstantScoreQuery constantScoreQuery = (XConstantScoreQuery) parsedQuery;
+        assertThat(parsedQuery, instanceOf(ConstantScoreQuery.class));
+        ConstantScoreQuery constantScoreQuery = (ConstantScoreQuery) parsedQuery;
         GeoDistanceFilter filter = (GeoDistanceFilter) constantScoreQuery.getFilter();
         assertThat(filter.fieldName(), equalTo("location"));
         assertThat(filter.lat(), closeTo(40, 0.00001));
@@ -1907,8 +1986,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         IndexQueryParserService queryParser = queryParser();
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/geo_distance6.json");
         Query parsedQuery = queryParser.parse(query).query();
-        assertThat(parsedQuery, instanceOf(XConstantScoreQuery.class));
-        XConstantScoreQuery constantScoreQuery = (XConstantScoreQuery) parsedQuery;
+        assertThat(parsedQuery, instanceOf(ConstantScoreQuery.class));
+        ConstantScoreQuery constantScoreQuery = (ConstantScoreQuery) parsedQuery;
         GeoDistanceFilter filter = (GeoDistanceFilter) constantScoreQuery.getFilter();
         assertThat(filter.fieldName(), equalTo("location"));
         assertThat(filter.lat(), closeTo(40, 0.00001));
@@ -1921,8 +2000,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         IndexQueryParserService queryParser = queryParser();
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/geo_distance7.json");
         Query parsedQuery = queryParser.parse(query).query();
-        assertThat(parsedQuery, instanceOf(XConstantScoreQuery.class));
-        XConstantScoreQuery constantScoreQuery = (XConstantScoreQuery) parsedQuery;
+        assertThat(parsedQuery, instanceOf(ConstantScoreQuery.class));
+        ConstantScoreQuery constantScoreQuery = (ConstantScoreQuery) parsedQuery;
         GeoDistanceFilter filter = (GeoDistanceFilter) constantScoreQuery.getFilter();
         assertThat(filter.fieldName(), equalTo("location"));
         assertThat(filter.lat(), closeTo(40, 0.00001));
@@ -1935,8 +2014,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         IndexQueryParserService queryParser = queryParser();
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/geo_distance8.json");
         Query parsedQuery = queryParser.parse(query).query();
-        assertThat(parsedQuery, instanceOf(XConstantScoreQuery.class));
-        XConstantScoreQuery constantScoreQuery = (XConstantScoreQuery) parsedQuery;
+        assertThat(parsedQuery, instanceOf(ConstantScoreQuery.class));
+        ConstantScoreQuery constantScoreQuery = (ConstantScoreQuery) parsedQuery;
         GeoDistanceFilter filter = (GeoDistanceFilter) constantScoreQuery.getFilter();
         assertThat(filter.fieldName(), equalTo("location"));
         assertThat(filter.lat(), closeTo(40, 0.00001));
@@ -1949,8 +2028,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         IndexQueryParserService queryParser = queryParser();
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/geo_distance9.json");
         Query parsedQuery = queryParser.parse(query).query();
-        assertThat(parsedQuery, instanceOf(XConstantScoreQuery.class));
-        XConstantScoreQuery constantScoreQuery = (XConstantScoreQuery) parsedQuery;
+        assertThat(parsedQuery, instanceOf(ConstantScoreQuery.class));
+        ConstantScoreQuery constantScoreQuery = (ConstantScoreQuery) parsedQuery;
         GeoDistanceFilter filter = (GeoDistanceFilter) constantScoreQuery.getFilter();
         assertThat(filter.fieldName(), equalTo("location"));
         assertThat(filter.lat(), closeTo(40, 0.00001));
@@ -1963,8 +2042,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         IndexQueryParserService queryParser = queryParser();
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/geo_distance10.json");
         Query parsedQuery = queryParser.parse(query).query();
-        assertThat(parsedQuery, instanceOf(XConstantScoreQuery.class));
-        XConstantScoreQuery constantScoreQuery = (XConstantScoreQuery) parsedQuery;
+        assertThat(parsedQuery, instanceOf(ConstantScoreQuery.class));
+        ConstantScoreQuery constantScoreQuery = (ConstantScoreQuery) parsedQuery;
         GeoDistanceFilter filter = (GeoDistanceFilter) constantScoreQuery.getFilter();
         assertThat(filter.fieldName(), equalTo("location"));
         assertThat(filter.lat(), closeTo(40, 0.00001));
@@ -1977,8 +2056,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         IndexQueryParserService queryParser = queryParser();
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/geo_distance11.json");
         Query parsedQuery = queryParser.parse(query).query();
-        assertThat(parsedQuery, instanceOf(XConstantScoreQuery.class));
-        XConstantScoreQuery constantScoreQuery = (XConstantScoreQuery) parsedQuery;
+        assertThat(parsedQuery, instanceOf(ConstantScoreQuery.class));
+        ConstantScoreQuery constantScoreQuery = (ConstantScoreQuery) parsedQuery;
         GeoDistanceFilter filter = (GeoDistanceFilter) constantScoreQuery.getFilter();
         assertThat(filter.fieldName(), equalTo("location"));
         assertThat(filter.lat(), closeTo(40, 0.00001));
@@ -1991,8 +2070,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         IndexQueryParserService queryParser = queryParser();
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/geo_distance12.json");
         Query parsedQuery = queryParser.parse(query).query();
-        assertThat(parsedQuery, instanceOf(XConstantScoreQuery.class));
-        XConstantScoreQuery constantScoreQuery = (XConstantScoreQuery) parsedQuery;
+        assertThat(parsedQuery, instanceOf(ConstantScoreQuery.class));
+        ConstantScoreQuery constantScoreQuery = (ConstantScoreQuery) parsedQuery;
         GeoDistanceFilter filter = (GeoDistanceFilter) constantScoreQuery.getFilter();
         assertThat(filter.fieldName(), equalTo("location"));
         assertThat(filter.lat(), closeTo(40, 0.00001));
@@ -2005,9 +2084,9 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         IndexQueryParserService queryParser = queryParser();
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/geo_boundingbox-named.json");
         ParsedQuery parsedQuery = queryParser.parse(query);
-        assertThat(parsedQuery.query(), instanceOf(XConstantScoreQuery.class));
+        assertThat(parsedQuery.query(), instanceOf(ConstantScoreQuery.class));
         assertThat(parsedQuery.namedFilters().containsKey("test"), equalTo(true));
-        XConstantScoreQuery constantScoreQuery = (XConstantScoreQuery) parsedQuery.query();
+        ConstantScoreQuery constantScoreQuery = (ConstantScoreQuery) parsedQuery.query();
         InMemoryGeoBoundingBoxFilter filter = (InMemoryGeoBoundingBoxFilter) constantScoreQuery.getFilter();
         assertThat(filter.fieldName(), equalTo("location"));
         assertThat(filter.topLeft().lat(), closeTo(40, 0.00001));
@@ -2022,8 +2101,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         IndexQueryParserService queryParser = queryParser();
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/geo_boundingbox1.json");
         Query parsedQuery = queryParser.parse(query).query();
-        assertThat(parsedQuery, instanceOf(XConstantScoreQuery.class));
-        XConstantScoreQuery constantScoreQuery = (XConstantScoreQuery) parsedQuery;
+        assertThat(parsedQuery, instanceOf(ConstantScoreQuery.class));
+        ConstantScoreQuery constantScoreQuery = (ConstantScoreQuery) parsedQuery;
         InMemoryGeoBoundingBoxFilter filter = (InMemoryGeoBoundingBoxFilter) constantScoreQuery.getFilter();
         assertThat(filter.fieldName(), equalTo("location"));
         assertThat(filter.topLeft().lat(), closeTo(40, 0.00001));
@@ -2037,8 +2116,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         IndexQueryParserService queryParser = queryParser();
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/geo_boundingbox2.json");
         Query parsedQuery = queryParser.parse(query).query();
-        assertThat(parsedQuery, instanceOf(XConstantScoreQuery.class));
-        XConstantScoreQuery constantScoreQuery = (XConstantScoreQuery) parsedQuery;
+        assertThat(parsedQuery, instanceOf(ConstantScoreQuery.class));
+        ConstantScoreQuery constantScoreQuery = (ConstantScoreQuery) parsedQuery;
         InMemoryGeoBoundingBoxFilter filter = (InMemoryGeoBoundingBoxFilter) constantScoreQuery.getFilter();
         assertThat(filter.fieldName(), equalTo("location"));
         assertThat(filter.topLeft().lat(), closeTo(40, 0.00001));
@@ -2052,8 +2131,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         IndexQueryParserService queryParser = queryParser();
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/geo_boundingbox3.json");
         Query parsedQuery = queryParser.parse(query).query();
-        assertThat(parsedQuery, instanceOf(XConstantScoreQuery.class));
-        XConstantScoreQuery constantScoreQuery = (XConstantScoreQuery) parsedQuery;
+        assertThat(parsedQuery, instanceOf(ConstantScoreQuery.class));
+        ConstantScoreQuery constantScoreQuery = (ConstantScoreQuery) parsedQuery;
         InMemoryGeoBoundingBoxFilter filter = (InMemoryGeoBoundingBoxFilter) constantScoreQuery.getFilter();
         assertThat(filter.fieldName(), equalTo("location"));
         assertThat(filter.topLeft().lat(), closeTo(40, 0.00001));
@@ -2067,8 +2146,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         IndexQueryParserService queryParser = queryParser();
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/geo_boundingbox4.json");
         Query parsedQuery = queryParser.parse(query).query();
-        assertThat(parsedQuery, instanceOf(XConstantScoreQuery.class));
-        XConstantScoreQuery constantScoreQuery = (XConstantScoreQuery) parsedQuery;
+        assertThat(parsedQuery, instanceOf(ConstantScoreQuery.class));
+        ConstantScoreQuery constantScoreQuery = (ConstantScoreQuery) parsedQuery;
         InMemoryGeoBoundingBoxFilter filter = (InMemoryGeoBoundingBoxFilter) constantScoreQuery.getFilter();
         assertThat(filter.fieldName(), equalTo("location"));
         assertThat(filter.topLeft().lat(), closeTo(40, 0.00001));
@@ -2082,8 +2161,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         IndexQueryParserService queryParser = queryParser();
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/geo_boundingbox5.json");
         Query parsedQuery = queryParser.parse(query).query();
-        assertThat(parsedQuery, instanceOf(XConstantScoreQuery.class));
-        XConstantScoreQuery constantScoreQuery = (XConstantScoreQuery) parsedQuery;
+        assertThat(parsedQuery, instanceOf(ConstantScoreQuery.class));
+        ConstantScoreQuery constantScoreQuery = (ConstantScoreQuery) parsedQuery;
         InMemoryGeoBoundingBoxFilter filter = (InMemoryGeoBoundingBoxFilter) constantScoreQuery.getFilter();
         assertThat(filter.fieldName(), equalTo("location"));
         assertThat(filter.topLeft().lat(), closeTo(40, 0.00001));
@@ -2097,8 +2176,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         IndexQueryParserService queryParser = queryParser();
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/geo_boundingbox6.json");
         Query parsedQuery = queryParser.parse(query).query();
-        assertThat(parsedQuery, instanceOf(XConstantScoreQuery.class));
-        XConstantScoreQuery constantScoreQuery = (XConstantScoreQuery) parsedQuery;
+        assertThat(parsedQuery, instanceOf(ConstantScoreQuery.class));
+        ConstantScoreQuery constantScoreQuery = (ConstantScoreQuery) parsedQuery;
         InMemoryGeoBoundingBoxFilter filter = (InMemoryGeoBoundingBoxFilter) constantScoreQuery.getFilter();
         assertThat(filter.fieldName(), equalTo("location"));
         assertThat(filter.topLeft().lat(), closeTo(40, 0.00001));
@@ -2114,8 +2193,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/geo_polygon-named.json");
         ParsedQuery parsedQuery = queryParser.parse(query);
         assertThat(parsedQuery.namedFilters().containsKey("test"), equalTo(true));
-        assertThat(parsedQuery.query(), instanceOf(XConstantScoreQuery.class));
-        XConstantScoreQuery constantScoreQuery = (XConstantScoreQuery) parsedQuery.query();
+        assertThat(parsedQuery.query(), instanceOf(ConstantScoreQuery.class));
+        ConstantScoreQuery constantScoreQuery = (ConstantScoreQuery) parsedQuery.query();
         GeoPolygonFilter filter = (GeoPolygonFilter) constantScoreQuery.getFilter();
         assertThat(filter.fieldName(), equalTo("location"));
         assertThat(filter.points().length, equalTo(4));
@@ -2155,8 +2234,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         IndexQueryParserService queryParser = queryParser();
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/geo_polygon1.json");
         Query parsedQuery = queryParser.parse(query).query();
-        assertThat(parsedQuery, instanceOf(XConstantScoreQuery.class));
-        XConstantScoreQuery constantScoreQuery = (XConstantScoreQuery) parsedQuery;
+        assertThat(parsedQuery, instanceOf(ConstantScoreQuery.class));
+        ConstantScoreQuery constantScoreQuery = (ConstantScoreQuery) parsedQuery;
         GeoPolygonFilter filter = (GeoPolygonFilter) constantScoreQuery.getFilter();
         assertThat(filter.fieldName(), equalTo("location"));
         assertThat(filter.points().length, equalTo(4));
@@ -2173,8 +2252,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         IndexQueryParserService queryParser = queryParser();
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/geo_polygon2.json");
         Query parsedQuery = queryParser.parse(query).query();
-        assertThat(parsedQuery, instanceOf(XConstantScoreQuery.class));
-        XConstantScoreQuery constantScoreQuery = (XConstantScoreQuery) parsedQuery;
+        assertThat(parsedQuery, instanceOf(ConstantScoreQuery.class));
+        ConstantScoreQuery constantScoreQuery = (ConstantScoreQuery) parsedQuery;
         GeoPolygonFilter filter = (GeoPolygonFilter) constantScoreQuery.getFilter();
         assertThat(filter.fieldName(), equalTo("location"));
         assertThat(filter.points().length, equalTo(4));
@@ -2191,8 +2270,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         IndexQueryParserService queryParser = queryParser();
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/geo_polygon3.json");
         Query parsedQuery = queryParser.parse(query).query();
-        assertThat(parsedQuery, instanceOf(XConstantScoreQuery.class));
-        XConstantScoreQuery constantScoreQuery = (XConstantScoreQuery) parsedQuery;
+        assertThat(parsedQuery, instanceOf(ConstantScoreQuery.class));
+        ConstantScoreQuery constantScoreQuery = (ConstantScoreQuery) parsedQuery;
         GeoPolygonFilter filter = (GeoPolygonFilter) constantScoreQuery.getFilter();
         assertThat(filter.fieldName(), equalTo("location"));
         assertThat(filter.points().length, equalTo(4));
@@ -2209,8 +2288,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         IndexQueryParserService queryParser = queryParser();
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/geo_polygon4.json");
         Query parsedQuery = queryParser.parse(query).query();
-        assertThat(parsedQuery, instanceOf(XConstantScoreQuery.class));
-        XConstantScoreQuery constantScoreQuery = (XConstantScoreQuery) parsedQuery;
+        assertThat(parsedQuery, instanceOf(ConstantScoreQuery.class));
+        ConstantScoreQuery constantScoreQuery = (ConstantScoreQuery) parsedQuery;
         GeoPolygonFilter filter = (GeoPolygonFilter) constantScoreQuery.getFilter();
         assertThat(filter.fieldName(), equalTo("location"));
         assertThat(filter.points().length, equalTo(4));
@@ -2227,8 +2306,8 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         IndexQueryParserService queryParser = queryParser();
         String query = copyToStringFromClasspath("/org/elasticsearch/index/query/geoShape-filter.json");
         Query parsedQuery = queryParser.parse(query).query();
-        assertThat(parsedQuery, instanceOf(XConstantScoreQuery.class));
-        XConstantScoreQuery constantScoreQuery = (XConstantScoreQuery) parsedQuery;
+        assertThat(parsedQuery, instanceOf(ConstantScoreQuery.class));
+        ConstantScoreQuery constantScoreQuery = (ConstantScoreQuery) parsedQuery;
         assertThat(constantScoreQuery.getFilter(), instanceOf(IntersectsPrefixTreeFilter.class));
     }
@@ -2379,9 +2458,9 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         XContentParser parser = XContentHelper.createParser(new BytesArray(query));
         ParsedFilter parsedQuery = queryParser.parseInnerFilter(parser);
         assertThat(parsedQuery.filter(), instanceOf(QueryWrapperFilter.class));
-        assertThat(((QueryWrapperFilter) parsedQuery.filter()).getQuery(), instanceOf(XFilteredQuery.class));
-        assertThat(((XFilteredQuery) ((QueryWrapperFilter) parsedQuery.filter()).getQuery()).getFilter(), instanceOf(TermFilter.class));
-        TermFilter filter = (TermFilter) ((XFilteredQuery) ((QueryWrapperFilter) parsedQuery.filter()).getQuery()).getFilter();
+        assertThat(((QueryWrapperFilter) parsedQuery.filter()).getQuery(), instanceOf(FilteredQuery.class));
+        assertThat(((FilteredQuery) ((QueryWrapperFilter) parsedQuery.filter()).getQuery()).getFilter(), instanceOf(TermFilter.class));
+        TermFilter filter = (TermFilter) ((FilteredQuery) ((QueryWrapperFilter) parsedQuery.filter()).getQuery()).getFilter();
         assertThat(filter.getTerm().toString(), equalTo("text:apache"));
     }
@@ -2480,10 +2559,10 @@ public class SimpleIndexQueryParserTests extends ElasticsearchSingleNodeTest {
         SearchContext.setCurrent(createSearchContext(indexService));
         IndexQueryParserService queryParser = indexService.queryParserService();
         Query parsedQuery = queryParser.parse(query).query();
-        assertThat(parsedQuery, instanceOf(XConstantScoreQuery.class));
-        assertThat(((XConstantScoreQuery) parsedQuery).getFilter(), instanceOf(CustomQueryWrappingFilter.class));
-        assertThat(((CustomQueryWrappingFilter) ((XConstantScoreQuery) parsedQuery).getFilter()).getQuery(), instanceOf(ParentConstantScoreQuery.class));
-        assertThat(((CustomQueryWrappingFilter) ((XConstantScoreQuery) parsedQuery).getFilter()).getQuery().toString(), equalTo("parent_filter[foo](filtered(*:*)->cache(_type:foo))"));
+        assertThat(parsedQuery, instanceOf(ConstantScoreQuery.class));
+        assertThat(((ConstantScoreQuery) parsedQuery).getFilter(), instanceOf(CustomQueryWrappingFilter.class));
+        assertThat(((CustomQueryWrappingFilter) ((ConstantScoreQuery) parsedQuery).getFilter()).getQuery(), instanceOf(ParentConstantScoreQuery.class));
+        assertThat(((CustomQueryWrappingFilter) ((ConstantScoreQuery) parsedQuery).getFilter()).getQuery().toString(), equalTo("parent_filter[foo](filtered(*:*)->cache(_type:foo))"));
         SearchContext.removeCurrent();
     }
 }
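
From here on the patch also picks up Lucene 5's rename of AtomicReader/AtomicReaderContext to LeafReader/LeafReaderContext; the per-segment semantics are unchanged, only the names. A sketch of iterating segments under the new names (assumes reader is an open DirectoryReader):

    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.index.LeafReaderContext;

    for (LeafReaderContext leaf : reader.leaves()) {
        int docBase = leaf.docBase;   // global doc id offset of this segment
        // leaf.reader() is a LeafReader (AtomicReader in 4.x)
    }
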
diff --git a/src/test/java/org/elasticsearch/index/search/FieldDataTermsFilterTests.java b/src/test/java/org/elasticsearch/index/search/FieldDataTermsFilterTests.java
index fd061bdbbfe..b873b760fc8 100644
--- a/src/test/java/org/elasticsearch/index/search/FieldDataTermsFilterTests.java
+++ b/src/test/java/org/elasticsearch/index/search/FieldDataTermsFilterTests.java
@@ -63,7 +63,7 @@ public class FieldDataTermsFilterTests extends ElasticsearchSingleNodeTest {
     protected QueryParseContext parseContext;
     protected IndexFieldDataService ifdService;
     protected IndexWriter writer;
-    protected AtomicReader reader;
+    protected LeafReader reader;
     protected StringFieldMapper strMapper;
     protected LongFieldMapper lngMapper;
     protected DoubleFieldMapper dblMapper;
@@ -79,7 +79,7 @@ public class FieldDataTermsFilterTests extends ElasticsearchSingleNodeTest {
         IndexQueryParserService parserService = indexService.queryParserService();
         parseContext = new QueryParseContext(indexService.index(), parserService);
         writer = new IndexWriter(new RAMDirectory(),
-                new IndexWriterConfig(Lucene.VERSION, new StandardAnalyzer(Lucene.VERSION)));
+                new IndexWriterConfig(new StandardAnalyzer()));
 
         // setup field mappers
         strMapper = new StringFieldMapper.Builder("str_value")
diff --git a/src/test/java/org/elasticsearch/index/search/child/AbstractChildTests.java b/src/test/java/org/elasticsearch/index/search/child/AbstractChildTests.java
index 85ab3bf1aad..ae7034c7f88 100644
--- a/src/test/java/org/elasticsearch/index/search/child/AbstractChildTests.java
+++ b/src/test/java/org/elasticsearch/index/search/child/AbstractChildTests.java
@@ -20,11 +20,12 @@ package org.elasticsearch.index.search.child;
 
 import org.apache.lucene.search.*;
-import org.apache.lucene.util.FixedBitSet;
+import org.apache.lucene.search.join.BitDocIdSetFilter;
+import org.apache.lucene.util.BitDocIdSet;
+import org.apache.lucene.util.BitSet;
 import org.apache.lucene.util.LuceneTestCase;
 import org.elasticsearch.action.admin.indices.mapping.put.PutMappingRequest;
 import org.elasticsearch.common.compress.CompressedString;
-import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilter;
 import org.elasticsearch.index.mapper.MapperService;
 import org.elasticsearch.index.mapper.internal.UidFieldMapper;
 import org.elasticsearch.index.service.IndexService;
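
These new imports show the bitset plumbing change called out in the summary: the test helpers now speak the abstract org.apache.lucene.util.BitSet API and compare BitDocIdSet wrappers instead of concrete FixedBitSets. A sketch of how the pieces relate (sizes illustrative):

    import org.apache.lucene.util.BitDocIdSet;
    import org.apache.lucene.util.BitSet;
    import org.apache.lucene.util.FixedBitSet;

    BitSet bits = new FixedBitSet(64);            // FixedBitSet is now one BitSet impl among others
    bits.set(3);
    BitDocIdSet docIdSet = new BitDocIdSet(bits); // adapts a BitSet to the DocIdSet API
    assert docIdSet.bits().get(3);                // bits() hands the underlying BitSet back
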
@@ -55,9 +56,13 @@ public abstract class AbstractChildTests extends ElasticsearchSingleNodeLuceneTe
         mapperService.merge(childType, new CompressedString(PutMappingRequest.buildFromSimplifiedDef(childType, "_parent", "type=" + parentType, CHILD_SCORE_NAME, "type=double").string()), true);
         return createSearchContext(indexService);
     }
+
+    static void assertBitSet(BitSet actual, BitSet expected, IndexSearcher searcher) throws IOException {
+        assertBitSet(new BitDocIdSet(actual), new BitDocIdSet(expected), searcher);
+    }
 
-    static void assertBitSet(FixedBitSet actual, FixedBitSet expected, IndexSearcher searcher) throws IOException {
-        if (!actual.equals(expected)) {
+    static void assertBitSet(BitDocIdSet actual, BitDocIdSet expected, IndexSearcher searcher) throws IOException {
+        if (!equals(expected, actual)) {
             Description description = new StringDescription();
             description.appendText(reason(actual, expected, searcher));
             description.appendText("\nExpected: ");
@@ -68,15 +73,34 @@ public abstract class AbstractChildTests extends ElasticsearchSingleNodeLuceneTe
             throw new java.lang.AssertionError(description.toString());
         }
     }
+
+    static boolean equals(BitDocIdSet expected, BitDocIdSet actual) {
+        if (actual == null && expected == null) {
+            return true;
+        } else if (actual == null || expected == null) {
+            return false;
+        }
+        BitSet actualBits = actual.bits();
+        BitSet expectedBits = expected.bits();
+        if (actualBits.length() != expectedBits.length()) {
+            return false;
+        }
+        for (int i = 0; i < expectedBits.length(); i++) {
+            if (expectedBits.get(i) != actualBits.get(i)) {
+                return false;
+            }
+        }
+        return true;
+    }
 
-    static String reason(FixedBitSet actual, FixedBitSet expected, IndexSearcher indexSearcher) throws IOException {
+    static String reason(BitDocIdSet actual, BitDocIdSet expected, IndexSearcher indexSearcher) throws IOException {
         StringBuilder builder = new StringBuilder();
-        builder.append("expected cardinality:").append(expected.cardinality()).append('\n');
+        builder.append("expected cardinality:").append(expected.bits().cardinality()).append('\n');
         DocIdSetIterator iterator = expected.iterator();
         for (int doc = iterator.nextDoc(); doc != DocIdSetIterator.NO_MORE_DOCS; doc = iterator.nextDoc()) {
             builder.append("Expected doc[").append(doc).append("] with id value ").append(indexSearcher.doc(doc).get(UidFieldMapper.NAME)).append('\n');
         }
-        builder.append("actual cardinality: ").append(actual.cardinality()).append('\n');
+        builder.append("actual cardinality: ").append(actual.bits().cardinality()).append('\n');
         iterator = actual.iterator();
         for (int doc = iterator.nextDoc(); doc != DocIdSetIterator.NO_MORE_DOCS; doc = iterator.nextDoc()) {
             builder.append("Actual doc[").append(doc).append("] with id value ").append(indexSearcher.doc(doc).get(UidFieldMapper.NAME)).append('\n');
@@ -96,8 +120,8 @@ public abstract class AbstractChildTests extends ElasticsearchSingleNodeLuceneTe
         }
     }
 
-    static FixedBitSetFilter wrap(Filter filter) {
-        return SearchContext.current().fixedBitSetFilterCache().getFixedBitSetFilter(filter);
+    static BitDocIdSetFilter wrap(Filter filter) {
+        return SearchContext.current().bitsetFilterCache().getBitDocIdSetFilter(filter);
     }
 }
diff --git a/src/test/java/org/elasticsearch/index/search/child/BitSetCollector.java b/src/test/java/org/elasticsearch/index/search/child/BitSetCollector.java
index ee4bea4cbb9..afdc08e232e 100644
--- a/src/test/java/org/elasticsearch/index/search/child/BitSetCollector.java
+++ b/src/test/java/org/elasticsearch/index/search/child/BitSetCollector.java
@@ -18,7 +18,7 @@
  */
 package org.elasticsearch.index.search.child;
 
-import org.apache.lucene.index.AtomicReaderContext;
+import org.apache.lucene.index.LeafReaderContext;
 import org.apache.lucene.util.FixedBitSet;
 import org.elasticsearch.common.lucene.search.NoopCollector;
 
@@ -39,7 +39,7 @@ class BitSetCollector extends NoopCollector {
     }
 
     @Override
-    public void setNextReader(AtomicReaderContext context) throws IOException {
+    protected void doSetNextReader(LeafReaderContext context) throws IOException {
         docBase = context.docBase;
     }
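
The BitSetCollector hunk shows the new per-segment collector API: the public setNextReader(AtomicReaderContext) override becomes a protected doSetNextReader(LeafReaderContext) hook (here via ES's NoopCollector; stock Lucene's SimpleCollector exposes the same hook). A hedged sketch against SimpleCollector -- depending on the exact 5.0 snapshot, further overrides (e.g. needsScores()) may also be required:

    import java.io.IOException;
    import org.apache.lucene.index.LeafReaderContext;
    import org.apache.lucene.search.SimpleCollector;
    import org.apache.lucene.util.FixedBitSet;

    // Assumption: a SimpleCollector-based equivalent of the test's BitSetCollector.
    class MatchingDocsCollector extends SimpleCollector {
        final FixedBitSet result;
        int docBase;

        MatchingDocsCollector(int maxDoc) { result = new FixedBitSet(maxDoc); }

        @Override
        protected void doSetNextReader(LeafReaderContext context) throws IOException {
            docBase = context.docBase;      // remember this segment's offset
        }

        @Override
        public void collect(int doc) {
            result.set(docBase + doc);      // translate to a global doc id
        }
    }
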
diff --git a/src/test/java/org/elasticsearch/index/search/child/ChildrenConstantScoreQueryTests.java b/src/test/java/org/elasticsearch/index/search/child/ChildrenConstantScoreQueryTests.java
index 5bb5f5b7014..db7d1a485e6 100644
--- a/src/test/java/org/elasticsearch/index/search/child/ChildrenConstantScoreQueryTests.java
+++ b/src/test/java/org/elasticsearch/index/search/child/ChildrenConstantScoreQueryTests.java
@@ -24,18 +24,31 @@ import org.apache.lucene.analysis.MockAnalyzer;
 import org.apache.lucene.document.Document;
 import org.apache.lucene.document.Field;
 import org.apache.lucene.document.StringField;
-import org.apache.lucene.index.*;
+import org.apache.lucene.index.DirectoryReader;
+import org.apache.lucene.index.DocsEnum;
+import org.apache.lucene.index.IndexReader;
+import org.apache.lucene.index.IndexWriterConfig;
+import org.apache.lucene.index.LeafReader;
+import org.apache.lucene.index.RandomIndexWriter;
+import org.apache.lucene.index.SlowCompositeReaderWrapper;
+import org.apache.lucene.index.Term;
+import org.apache.lucene.index.Terms;
+import org.apache.lucene.index.TermsEnum;
 import org.apache.lucene.queries.TermFilter;
-import org.apache.lucene.search.*;
+import org.apache.lucene.search.ConstantScoreQuery;
+import org.apache.lucene.search.Filter;
+import org.apache.lucene.search.FilteredQuery;
+import org.apache.lucene.search.IndexSearcher;
+import org.apache.lucene.search.Query;
+import org.apache.lucene.search.QueryUtils;
+import org.apache.lucene.search.TermQuery;
+import org.apache.lucene.search.join.BitDocIdSetFilter;
 import org.apache.lucene.store.Directory;
 import org.apache.lucene.util.FixedBitSet;
 import org.apache.lucene.util.LuceneTestCase;
 import org.elasticsearch.common.lease.Releasables;
 import org.elasticsearch.common.lucene.search.NotFilter;
-import org.elasticsearch.common.lucene.search.XConstantScoreQuery;
-import org.elasticsearch.common.lucene.search.XFilteredQuery;
 import org.elasticsearch.index.engine.Engine;
-import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilter;
 import org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData;
 import org.elasticsearch.index.mapper.Uid;
 import org.elasticsearch.index.mapper.internal.ParentFieldMapper;
@@ -76,7 +89,7 @@ public class ChildrenConstantScoreQueryTests extends AbstractChildTests {
         Query childQuery = new TermQuery(new Term("field", "value"));
         ParentFieldMapper parentFieldMapper = SearchContext.current().mapperService().documentMapper("child").parentFieldMapper();
         ParentChildIndexFieldData parentChildIndexFieldData = SearchContext.current().fieldData().getForField(parentFieldMapper);
-        FixedBitSetFilter parentFilter = wrap(new TermFilter(new Term(TypeFieldMapper.NAME, "parent")));
+        BitDocIdSetFilter parentFilter = wrap(new TermFilter(new Term(TypeFieldMapper.NAME, "parent")));
         Query query = new ChildrenConstantScoreQuery(parentChildIndexFieldData, childQuery, "parent", "child", parentFilter, 12, wrap(NonNestedDocsFilter.INSTANCE));
         QueryUtils.check(query);
     }
@@ -109,7 +122,7 @@ public class ChildrenConstantScoreQueryTests extends AbstractChildTests {
         ));
 
         TermQuery childQuery = new TermQuery(new Term("field1", "value" + (1 + random().nextInt(3))));
-        FixedBitSetFilter parentFilter = wrap(new TermFilter(new Term(TypeFieldMapper.NAME, "parent")));
+        BitDocIdSetFilter parentFilter = wrap(new TermFilter(new Term(TypeFieldMapper.NAME, "parent")));
         int shortCircuitParentDocSet = random().nextInt(5);
         ParentFieldMapper parentFieldMapper = SearchContext.current().mapperService().documentMapper("child").parentFieldMapper();
         ParentChildIndexFieldData parentChildIndexFieldData = SearchContext.current().fieldData().getForField(parentFieldMapper);
@@ -130,8 +143,7 @@ public class ChildrenConstantScoreQueryTests extends AbstractChildTests {
     public void testRandom() throws Exception {
         Directory directory = newDirectory();
         final Random r = random();
-        final IndexWriterConfig iwc = LuceneTestCase.newIndexWriterConfig(r,
-                LuceneTestCase.TEST_VERSION_CURRENT, new MockAnalyzer(r))
+        final IndexWriterConfig iwc = LuceneTestCase.newIndexWriterConfig(r, new MockAnalyzer(r))
                 .setMaxBufferedDocs(IndexWriterConfig.DISABLE_AUTO_FLUSH)
                 .setRAMBufferSizeMB(scaledRandomIntBetween(16, 64)); // we might index a lot - don't go crazy here
         RandomIndexWriter indexWriter = new RandomIndexWriter(r, directory, iwc);
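
IndexWriterConfig gets the same Version removal, both in the main API (see the FieldDataTermsFilterTests hunk earlier) and in the LuceneTestCase.newIndexWriterConfig(...) helper above. A sketch of the plain-API form:

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.store.RAMDirectory;

    // 4.x: new IndexWriterConfig(Lucene.VERSION, analyzer) -- 5.x: analyzer only
    IndexWriterConfig iwc = new IndexWriterConfig(new StandardAnalyzer())
            .setMaxBufferedDocs(IndexWriterConfig.DISABLE_AUTO_FLUSH);
    IndexWriter writer = new IndexWriter(new RAMDirectory(), iwc);   // throws IOException
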
@@ -204,7 +216,7 @@ public class ChildrenConstantScoreQueryTests extends AbstractChildTests {
 
         ParentFieldMapper parentFieldMapper = SearchContext.current().mapperService().documentMapper("child").parentFieldMapper();
         ParentChildIndexFieldData parentChildIndexFieldData = SearchContext.current().fieldData().getForField(parentFieldMapper);
-        FixedBitSetFilter parentFilter = wrap(new TermFilter(new Term(TypeFieldMapper.NAME, "parent")));
+        BitDocIdSetFilter parentFilter = wrap(new TermFilter(new Term(TypeFieldMapper.NAME, "parent")));
         Filter rawFilterMe = new NotFilter(new TermFilter(new Term("filter", "me")));
         int max = numUniqueChildValues / 4;
         for (int i = 0; i < max; i++) {
@@ -247,28 +259,28 @@ public class ChildrenConstantScoreQueryTests extends AbstractChildTests {
             String childValue = childValues[random().nextInt(numUniqueChildValues)];
             TermQuery childQuery = new TermQuery(new Term("field1", childValue));
             int shortCircuitParentDocSet = random().nextInt(numParentDocs);
-            FixedBitSetFilter nonNestedDocsFilter = random().nextBoolean() ? wrap(NonNestedDocsFilter.INSTANCE) : null;
+            BitDocIdSetFilter nonNestedDocsFilter = random().nextBoolean() ? wrap(NonNestedDocsFilter.INSTANCE) : null;
             Query query;
             if (random().nextBoolean()) {
                 // Usage in HasChildQueryParser
                 query = new ChildrenConstantScoreQuery(parentChildIndexFieldData, childQuery, "parent", "child", parentFilter, shortCircuitParentDocSet, nonNestedDocsFilter);
             } else {
                 // Usage in HasChildFilterParser
-                query = new XConstantScoreQuery(
+                query = new ConstantScoreQuery(
                         new CustomQueryWrappingFilter(
                                 new ChildrenConstantScoreQuery(parentChildIndexFieldData, childQuery, "parent", "child", parentFilter, shortCircuitParentDocSet, nonNestedDocsFilter)
                         )
                 );
             }
-            query = new XFilteredQuery(query, filterMe);
+            query = new FilteredQuery(query, filterMe);
             BitSetCollector collector = new BitSetCollector(indexReader.maxDoc());
             searcher.search(query, collector);
             FixedBitSet actualResult = collector.getResult();
 
             FixedBitSet expectedResult = new FixedBitSet(indexReader.maxDoc());
             if (childValueToParentIds.containsKey(childValue)) {
-                AtomicReader slowAtomicReader = SlowCompositeReaderWrapper.wrap(indexReader);
-                Terms terms = slowAtomicReader.terms(UidFieldMapper.NAME);
+                LeafReader slowLeafReader = SlowCompositeReaderWrapper.wrap(indexReader);
+                Terms terms = slowLeafReader.terms(UidFieldMapper.NAME);
                 if (terms != null) {
                     NavigableSet parentIds = childValueToParentIds.lget();
                     TermsEnum termsEnum = terms.iterator(null);
@@ -276,7 +288,7 @@ public class ChildrenConstantScoreQueryTests extends AbstractChildTests {
                     for (String id : parentIds) {
                         TermsEnum.SeekStatus seekStatus = termsEnum.seekCeil(Uid.createUidAsBytes("parent", id));
                         if (seekStatus == TermsEnum.SeekStatus.FOUND) {
-                            docsEnum = termsEnum.docs(slowAtomicReader.getLiveDocs(), docsEnum, DocsEnum.FLAG_NONE);
+                            docsEnum = termsEnum.docs(slowLeafReader.getLiveDocs(), docsEnum, DocsEnum.FLAG_NONE);
                             expectedResult.set(docsEnum.nextDoc());
                         } else if (seekStatus == TermsEnum.SeekStatus.END) {
                             break;
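
The wrap(...) helper used throughout (see AbstractChildTests above) now hands back org.apache.lucene.search.join.BitDocIdSetFilter, fetched from the renamed bitsetFilterCache -- the type-safe dedicated bitset cache from the summary. A hedged sketch of consuming one per segment (the method name here is illustrative):

    import java.io.IOException;
    import org.apache.lucene.index.LeafReaderContext;
    import org.apache.lucene.search.join.BitDocIdSetFilter;
    import org.apache.lucene.util.BitDocIdSet;

    static boolean matches(BitDocIdSetFilter parentFilter, LeafReaderContext leaf, int doc) throws IOException {
        BitDocIdSet set = parentFilter.getDocIdSet(leaf);  // no acceptDocs: cached bitsets ignore deletes
        return set != null && set.bits().get(doc);
    }
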
+import org.apache.lucene.index.Term; +import org.apache.lucene.index.Terms; +import org.apache.lucene.index.TermsEnum; import org.apache.lucene.queries.TermFilter; -import org.apache.lucene.search.*; +import org.apache.lucene.search.ConstantScoreQuery; +import org.apache.lucene.search.Filter; +import org.apache.lucene.search.FilteredQuery; +import org.apache.lucene.search.IndexSearcher; +import org.apache.lucene.search.MultiCollector; +import org.apache.lucene.search.Query; +import org.apache.lucene.search.QueryUtils; +import org.apache.lucene.search.ScoreDoc; +import org.apache.lucene.search.TermQuery; +import org.apache.lucene.search.TopDocs; +import org.apache.lucene.search.TopScoreDocCollector; +import org.apache.lucene.search.join.BitDocIdSetFilter; import org.apache.lucene.store.Directory; import org.apache.lucene.util.FixedBitSet; import org.apache.lucene.util.LuceneTestCase; import org.elasticsearch.common.lease.Releasables; import org.elasticsearch.common.lucene.search.NotFilter; import org.elasticsearch.common.lucene.search.Queries; -import org.elasticsearch.common.lucene.search.XFilteredQuery; import org.elasticsearch.common.lucene.search.function.FieldValueFactorFunction; import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery; import org.elasticsearch.index.engine.Engine; -import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilter; import org.elasticsearch.index.fielddata.IndexNumericFieldData; import org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData; import org.elasticsearch.index.mapper.FieldMapper; @@ -87,7 +106,7 @@ public class ChildrenQueryTests extends AbstractChildTests { ScoreType scoreType = ScoreType.values()[random().nextInt(ScoreType.values().length)]; ParentFieldMapper parentFieldMapper = SearchContext.current().mapperService().documentMapper("child").parentFieldMapper(); ParentChildIndexFieldData parentChildIndexFieldData = SearchContext.current().fieldData().getForField(parentFieldMapper); - FixedBitSetFilter parentFilter = wrap(new TermFilter(new Term(TypeFieldMapper.NAME, "parent"))); + BitDocIdSetFilter parentFilter = wrap(new TermFilter(new Term(TypeFieldMapper.NAME, "parent"))); int minChildren = random().nextInt(10); int maxChildren = scaledRandomIntBetween(minChildren, 10); Query query = new ChildrenQuery(parentChildIndexFieldData, "parent", "child", parentFilter, childQuery, scoreType, minChildren, @@ -99,8 +118,7 @@ public class ChildrenQueryTests extends AbstractChildTests { public void testRandom() throws Exception { Directory directory = newDirectory(); final Random r = random(); - final IndexWriterConfig iwc = LuceneTestCase.newIndexWriterConfig(r, - LuceneTestCase.TEST_VERSION_CURRENT, new MockAnalyzer(r)) + final IndexWriterConfig iwc = LuceneTestCase.newIndexWriterConfig(r, new MockAnalyzer(r)) .setMaxBufferedDocs(IndexWriterConfig.DISABLE_AUTO_FLUSH) .setRAMBufferSizeMB(scaledRandomIntBetween(16, 64)); // we might index a lot - don't go crazy here RandomIndexWriter indexWriter = new RandomIndexWriter(r, directory, iwc); @@ -178,7 +196,7 @@ public class ChildrenQueryTests extends AbstractChildTests { ParentFieldMapper parentFieldMapper = SearchContext.current().mapperService().documentMapper("child").parentFieldMapper(); ParentChildIndexFieldData parentChildIndexFieldData = SearchContext.current().fieldData().getForField(parentFieldMapper); - FixedBitSetFilter parentFilter = wrap(new TermFilter(new Term(TypeFieldMapper.NAME, "parent"))); + BitDocIdSetFilter parentFilter = wrap(new TermFilter(new 
Term(TypeFieldMapper.NAME, "parent"))); Filter rawFilterMe = new NotFilter(new TermFilter(new Term("filter", "me"))); int max = numUniqueChildValues / 4; for (int i = 0; i < max; i++) { @@ -222,7 +240,7 @@ public class ChildrenQueryTests extends AbstractChildTests { Query childQuery = new ConstantScoreQuery(new TermQuery(new Term("field1", childValue))); int shortCircuitParentDocSet = random().nextInt(numParentDocs); ScoreType scoreType = ScoreType.values()[random().nextInt(ScoreType.values().length)]; - FixedBitSetFilter nonNestedDocsFilter = random().nextBoolean() ? wrap(NonNestedDocsFilter.INSTANCE) : null; + BitDocIdSetFilter nonNestedDocsFilter = random().nextBoolean() ? wrap(NonNestedDocsFilter.INSTANCE) : null; // leave min/max set to 0 half the time int minChildren = random().nextInt(2) * scaledRandomIntBetween(0, 110); @@ -230,7 +248,7 @@ public class ChildrenQueryTests extends AbstractChildTests { Query query = new ChildrenQuery(parentChildIndexFieldData, "parent", "child", parentFilter, childQuery, scoreType, minChildren, maxChildren, shortCircuitParentDocSet, nonNestedDocsFilter); - query = new XFilteredQuery(query, filterMe); + query = new FilteredQuery(query, filterMe); BitSetCollector collector = new BitSetCollector(indexReader.maxDoc()); int numHits = 1 + random().nextInt(25); TopScoreDocCollector actualTopDocsCollector = TopScoreDocCollector.create(numHits, false); @@ -242,8 +260,8 @@ public class ChildrenQueryTests extends AbstractChildTests { TopScoreDocCollector expectedTopDocsCollector = TopScoreDocCollector.create(numHits, false); expectedTopDocsCollector.setScorer(mockScorer); if (childValueToParentIds.containsKey(childValue)) { - AtomicReader slowAtomicReader = SlowCompositeReaderWrapper.wrap(indexReader); - Terms terms = slowAtomicReader.terms(UidFieldMapper.NAME); + LeafReader slowLeafReader = SlowCompositeReaderWrapper.wrap(indexReader); + Terms terms = slowLeafReader.terms(UidFieldMapper.NAME); if (terms != null) { NavigableMap parentIdToChildScores = childValueToParentIds.lget(); TermsEnum termsEnum = terms.iterator(null); @@ -253,7 +271,7 @@ public class ChildrenQueryTests extends AbstractChildTests { if (count >= minChildren && (maxChildren == 0 || count <= maxChildren)) { TermsEnum.SeekStatus seekStatus = termsEnum.seekCeil(Uid.createUidAsBytes("parent", entry.getKey())); if (seekStatus == TermsEnum.SeekStatus.FOUND) { - docsEnum = termsEnum.docs(slowAtomicReader.getLiveDocs(), docsEnum, DocsEnum.FLAG_NONE); + docsEnum = termsEnum.docs(slowLeafReader.getLiveDocs(), docsEnum, DocsEnum.FLAG_NONE); expectedResult.set(docsEnum.nextDoc()); mockScorer.scores = entry.getValue(); expectedTopDocsCollector.collect(docsEnum.docID()); @@ -372,7 +390,7 @@ public class ChildrenQueryTests extends AbstractChildTests { ((TestSearchContext)context).setSearcher(new ContextIndexSearcher(context, engineSearcher)); ParentFieldMapper parentFieldMapper = context.mapperService().documentMapper("child").parentFieldMapper(); ParentChildIndexFieldData parentChildIndexFieldData = context.fieldData().getForField(parentFieldMapper); - FixedBitSetFilter parentFilter = wrap(new TermFilter(new Term(TypeFieldMapper.NAME, "parent"))); + BitDocIdSetFilter parentFilter = wrap(new TermFilter(new Term(TypeFieldMapper.NAME, "parent"))); // child query that returns the score as the value of "childScore" for each child document, // with the parent's score determined by the score type diff --git a/src/test/java/org/elasticsearch/index/search/child/ParentConstantScoreQueryTests.java 
b/src/test/java/org/elasticsearch/index/search/child/ParentConstantScoreQueryTests.java index 432441607a5..98e982a16b9 100644 --- a/src/test/java/org/elasticsearch/index/search/child/ParentConstantScoreQueryTests.java +++ b/src/test/java/org/elasticsearch/index/search/child/ParentConstantScoreQueryTests.java @@ -24,18 +24,31 @@ import org.apache.lucene.analysis.MockAnalyzer; import org.apache.lucene.document.Document; import org.apache.lucene.document.Field; import org.apache.lucene.document.StringField; -import org.apache.lucene.index.*; +import org.apache.lucene.index.DirectoryReader; +import org.apache.lucene.index.DocsEnum; +import org.apache.lucene.index.IndexReader; +import org.apache.lucene.index.IndexWriterConfig; +import org.apache.lucene.index.LeafReader; +import org.apache.lucene.index.RandomIndexWriter; +import org.apache.lucene.index.SlowCompositeReaderWrapper; +import org.apache.lucene.index.Term; +import org.apache.lucene.index.Terms; +import org.apache.lucene.index.TermsEnum; import org.apache.lucene.queries.TermFilter; -import org.apache.lucene.search.*; +import org.apache.lucene.search.ConstantScoreQuery; +import org.apache.lucene.search.Filter; +import org.apache.lucene.search.FilteredQuery; +import org.apache.lucene.search.IndexSearcher; +import org.apache.lucene.search.Query; +import org.apache.lucene.search.QueryUtils; +import org.apache.lucene.search.TermQuery; +import org.apache.lucene.search.join.BitDocIdSetFilter; import org.apache.lucene.store.Directory; import org.apache.lucene.util.FixedBitSet; import org.apache.lucene.util.LuceneTestCase; import org.elasticsearch.common.lease.Releasables; import org.elasticsearch.common.lucene.search.NotFilter; -import org.elasticsearch.common.lucene.search.XConstantScoreQuery; -import org.elasticsearch.common.lucene.search.XFilteredQuery; import org.elasticsearch.index.engine.Engine; -import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilter; import org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData; import org.elasticsearch.index.mapper.Uid; import org.elasticsearch.index.mapper.internal.ParentFieldMapper; @@ -75,7 +88,7 @@ public class ParentConstantScoreQueryTests extends AbstractChildTests { Query parentQuery = new TermQuery(new Term("field", "value")); ParentFieldMapper parentFieldMapper = SearchContext.current().mapperService().documentMapper("child").parentFieldMapper(); ParentChildIndexFieldData parentChildIndexFieldData = SearchContext.current().fieldData().getForField(parentFieldMapper); - FixedBitSetFilter childrenFilter = wrap(new TermFilter(new Term(TypeFieldMapper.NAME, "child"))); + BitDocIdSetFilter childrenFilter = wrap(new TermFilter(new Term(TypeFieldMapper.NAME, "child"))); Query query = new ParentConstantScoreQuery(parentChildIndexFieldData, parentQuery, "parent", childrenFilter); QueryUtils.check(query); } @@ -84,8 +97,7 @@ public class ParentConstantScoreQueryTests extends AbstractChildTests { public void testRandom() throws Exception { Directory directory = newDirectory(); final Random r = random(); - final IndexWriterConfig iwc = LuceneTestCase.newIndexWriterConfig(r, - LuceneTestCase.TEST_VERSION_CURRENT, new MockAnalyzer(r)) + final IndexWriterConfig iwc = LuceneTestCase.newIndexWriterConfig(r, new MockAnalyzer(r)) .setMaxBufferedDocs(IndexWriterConfig.DISABLE_AUTO_FLUSH) .setRAMBufferSizeMB(scaledRandomIntBetween(16, 64)); // we might index a lot - don't go crazy here RandomIndexWriter indexWriter = new RandomIndexWriter(r, directory, iwc); @@ -162,7 +174,7 @@ public class 
ParentConstantScoreQueryTests extends AbstractChildTests { ParentFieldMapper parentFieldMapper = SearchContext.current().mapperService().documentMapper("child").parentFieldMapper(); ParentChildIndexFieldData parentChildIndexFieldData = SearchContext.current().fieldData().getForField(parentFieldMapper); - FixedBitSetFilter childrenFilter = wrap(new TermFilter(new Term(TypeFieldMapper.NAME, "child"))); + BitDocIdSetFilter childrenFilter = wrap(new TermFilter(new Term(TypeFieldMapper.NAME, "child"))); Filter rawFilterMe = new NotFilter(new TermFilter(new Term("filter", "me"))); int max = numUniqueParentValues / 4; for (int i = 0; i < max; i++) { @@ -208,21 +220,21 @@ public class ParentConstantScoreQueryTests extends AbstractChildTests { query = new ParentConstantScoreQuery(parentChildIndexFieldData, parentQuery, "parent", childrenFilter); } else { // Usage in HasParentFilterParser - query = new XConstantScoreQuery( + query = new ConstantScoreQuery( new CustomQueryWrappingFilter( new ParentConstantScoreQuery(parentChildIndexFieldData, parentQuery, "parent", childrenFilter) ) ); } - query = new XFilteredQuery(query, filterMe); + query = new FilteredQuery(query, filterMe); BitSetCollector collector = new BitSetCollector(indexReader.maxDoc()); searcher.search(query, collector); FixedBitSet actualResult = collector.getResult(); FixedBitSet expectedResult = new FixedBitSet(indexReader.maxDoc()); if (parentValueToChildDocIds.containsKey(parentValue)) { - AtomicReader slowAtomicReader = SlowCompositeReaderWrapper.wrap(indexReader); - Terms terms = slowAtomicReader.terms(UidFieldMapper.NAME); + LeafReader slowLeafReader = SlowCompositeReaderWrapper.wrap(indexReader); + Terms terms = slowLeafReader.terms(UidFieldMapper.NAME); if (terms != null) { NavigableSet childIds = parentValueToChildDocIds.lget(); TermsEnum termsEnum = terms.iterator(null); @@ -230,7 +242,7 @@ public class ParentConstantScoreQueryTests extends AbstractChildTests { for (String id : childIds) { TermsEnum.SeekStatus seekStatus = termsEnum.seekCeil(Uid.createUidAsBytes("child", id)); if (seekStatus == TermsEnum.SeekStatus.FOUND) { - docsEnum = termsEnum.docs(slowAtomicReader.getLiveDocs(), docsEnum, DocsEnum.FLAG_NONE); + docsEnum = termsEnum.docs(slowLeafReader.getLiveDocs(), docsEnum, DocsEnum.FLAG_NONE); expectedResult.set(docsEnum.nextDoc()); } else if (seekStatus == TermsEnum.SeekStatus.END) { break; diff --git a/src/test/java/org/elasticsearch/index/search/child/ParentQueryTests.java b/src/test/java/org/elasticsearch/index/search/child/ParentQueryTests.java index 8cab5b9ed45..7a326f2b34a 100644 --- a/src/test/java/org/elasticsearch/index/search/child/ParentQueryTests.java +++ b/src/test/java/org/elasticsearch/index/search/child/ParentQueryTests.java @@ -25,17 +25,33 @@ import org.apache.lucene.analysis.MockAnalyzer; import org.apache.lucene.document.Document; import org.apache.lucene.document.Field; import org.apache.lucene.document.StringField; -import org.apache.lucene.index.*; +import org.apache.lucene.index.DirectoryReader; +import org.apache.lucene.index.DocsEnum; +import org.apache.lucene.index.IndexReader; +import org.apache.lucene.index.IndexWriterConfig; +import org.apache.lucene.index.LeafReader; +import org.apache.lucene.index.RandomIndexWriter; +import org.apache.lucene.index.SlowCompositeReaderWrapper; +import org.apache.lucene.index.Term; +import org.apache.lucene.index.Terms; +import org.apache.lucene.index.TermsEnum; import org.apache.lucene.queries.TermFilter; -import org.apache.lucene.search.*; +import 
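The X-prefixed elasticsearch wrappers (XFilteredQuery, XConstantScoreQuery) disappear in favor of the stock Lucene classes, as the hunks above show. A minimal sketch of the same composition with the plain Lucene 5 types; field and term values are illustrative only:

import org.apache.lucene.index.Term;
import org.apache.lucene.queries.TermFilter;
import org.apache.lucene.search.ConstantScoreQuery;
import org.apache.lucene.search.FilteredQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class PlainQuerySketch {
    static Query build() {
        // Constant-score wrapping and filtering via the stock Lucene classes.
        Query base = new ConstantScoreQuery(new TermQuery(new Term("field", "value")));
        return new FilteredQuery(base, new TermFilter(new Term("filter", "me")));
    }
}
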
org.apache.lucene.search.ConstantScoreQuery; +import org.apache.lucene.search.Filter; +import org.apache.lucene.search.FilteredQuery; +import org.apache.lucene.search.IndexSearcher; +import org.apache.lucene.search.MultiCollector; +import org.apache.lucene.search.Query; +import org.apache.lucene.search.QueryUtils; +import org.apache.lucene.search.TermQuery; +import org.apache.lucene.search.TopScoreDocCollector; +import org.apache.lucene.search.join.BitDocIdSetFilter; import org.apache.lucene.store.Directory; import org.apache.lucene.util.FixedBitSet; import org.apache.lucene.util.LuceneTestCase; import org.elasticsearch.common.lease.Releasables; import org.elasticsearch.common.lucene.search.NotFilter; -import org.elasticsearch.common.lucene.search.XFilteredQuery; import org.elasticsearch.index.engine.Engine; -import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilter; import org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData; import org.elasticsearch.index.mapper.Uid; import org.elasticsearch.index.mapper.internal.ParentFieldMapper; @@ -74,7 +90,7 @@ public class ParentQueryTests extends AbstractChildTests { Query parentQuery = new TermQuery(new Term("field", "value")); ParentFieldMapper parentFieldMapper = SearchContext.current().mapperService().documentMapper("child").parentFieldMapper(); ParentChildIndexFieldData parentChildIndexFieldData = SearchContext.current().fieldData().getForField(parentFieldMapper); - FixedBitSetFilter childrenFilter = wrap(new TermFilter(new Term(TypeFieldMapper.NAME, "child"))); + BitDocIdSetFilter childrenFilter = wrap(new TermFilter(new Term(TypeFieldMapper.NAME, "child"))); Query query = new ParentQuery(parentChildIndexFieldData, parentQuery, "parent", childrenFilter); QueryUtils.check(query); } @@ -83,8 +99,7 @@ public class ParentQueryTests extends AbstractChildTests { public void testRandom() throws Exception { Directory directory = newDirectory(); final Random r = random(); - final IndexWriterConfig iwc = LuceneTestCase.newIndexWriterConfig(r, - LuceneTestCase.TEST_VERSION_CURRENT, new MockAnalyzer(r)) + final IndexWriterConfig iwc = LuceneTestCase.newIndexWriterConfig(r, new MockAnalyzer(r)) .setMaxBufferedDocs(IndexWriterConfig.DISABLE_AUTO_FLUSH) .setRAMBufferSizeMB(scaledRandomIntBetween(16, 64)); // we might index a lot - don't go crazy here RandomIndexWriter indexWriter = new RandomIndexWriter(r, directory, iwc); @@ -161,7 +176,7 @@ public class ParentQueryTests extends AbstractChildTests { ParentFieldMapper parentFieldMapper = SearchContext.current().mapperService().documentMapper("child").parentFieldMapper(); ParentChildIndexFieldData parentChildIndexFieldData = SearchContext.current().fieldData().getForField(parentFieldMapper); - FixedBitSetFilter childrenFilter = wrap(new TermFilter(new Term(TypeFieldMapper.NAME, "child"))); + BitDocIdSetFilter childrenFilter = wrap(new TermFilter(new Term(TypeFieldMapper.NAME, "child"))); Filter rawFilterMe = new NotFilter(new TermFilter(new Term("filter", "me"))); int max = numUniqueParentValues / 4; for (int i = 0; i < max; i++) { @@ -202,7 +217,7 @@ public class ParentQueryTests extends AbstractChildTests { String parentValue = parentValues[random().nextInt(numUniqueParentValues)]; Query parentQuery = new ConstantScoreQuery(new TermQuery(new Term("field1", parentValue))); Query query = new ParentQuery(parentChildIndexFieldData, parentQuery,"parent", childrenFilter); - query = new XFilteredQuery(query, filterMe); + query = new FilteredQuery(query, filterMe); BitSetCollector collector 
= new BitSetCollector(indexReader.maxDoc()); int numHits = 1 + random().nextInt(25); TopScoreDocCollector actualTopDocsCollector = TopScoreDocCollector.create(numHits, false); @@ -215,8 +230,8 @@ public class ParentQueryTests extends AbstractChildTests { TopScoreDocCollector expectedTopDocsCollector = TopScoreDocCollector.create(numHits, false); expectedTopDocsCollector.setScorer(mockScorer); if (parentValueToChildIds.containsKey(parentValue)) { - AtomicReader slowAtomicReader = SlowCompositeReaderWrapper.wrap(indexReader); - Terms terms = slowAtomicReader.terms(UidFieldMapper.NAME); + LeafReader slowLeafReader = SlowCompositeReaderWrapper.wrap(indexReader); + Terms terms = slowLeafReader.terms(UidFieldMapper.NAME); if (terms != null) { NavigableMap childIdsAndScore = parentValueToChildIds.lget(); TermsEnum termsEnum = terms.iterator(null); @@ -224,7 +239,7 @@ public class ParentQueryTests extends AbstractChildTests { for (Map.Entry entry : childIdsAndScore.entrySet()) { TermsEnum.SeekStatus seekStatus = termsEnum.seekCeil(Uid.createUidAsBytes("child", entry.getKey())); if (seekStatus == TermsEnum.SeekStatus.FOUND) { - docsEnum = termsEnum.docs(slowAtomicReader.getLiveDocs(), docsEnum, DocsEnum.FLAG_NONE); + docsEnum = termsEnum.docs(slowLeafReader.getLiveDocs(), docsEnum, DocsEnum.FLAG_NONE); expectedResult.set(docsEnum.nextDoc()); mockScorer.scores.add(entry.getValue()); expectedTopDocsCollector.collect(docsEnum.docID()); diff --git a/src/test/java/org/elasticsearch/index/search/geo/GeoUtilsTests.java b/src/test/java/org/elasticsearch/index/search/geo/GeoUtilsTests.java index 58787077565..34c30be9223 100644 --- a/src/test/java/org/elasticsearch/index/search/geo/GeoUtilsTests.java +++ b/src/test/java/org/elasticsearch/index/search/geo/GeoUtilsTests.java @@ -586,12 +586,12 @@ public class GeoUtilsTests extends ElasticsearchTestCase { assertThat("height at level " + i, gNode.getShape().getBoundingBox().getHeight(), equalTo(180.d * height / GeoUtils.EARTH_POLAR_DISTANCE)); - gNode = gNode.getSubCells(null).iterator().next(); + gNode = gNode.getNextLevelCells(null).next(); } QuadPrefixTree quadPrefixTree = new QuadPrefixTree(spatialContext); Cell qNode = quadPrefixTree.getWorldCell(); - for (int i = 0; i < QuadPrefixTree.DEFAULT_MAX_LEVELS; i++) { + for (int i = 0; i < quadPrefixTree.getMaxLevels(); i++) { double degrees = 360.0 / (1L << i); double width = GeoUtils.quadTreeCellWidth(i); @@ -607,7 +607,7 @@ public class GeoUtilsTests extends ElasticsearchTestCase { assertThat("height at level " + i, qNode.getShape().getBoundingBox().getHeight(), equalTo(180.d * height / GeoUtils.EARTH_POLAR_DISTANCE)); - qNode = qNode.getSubCells(null).iterator().next(); + qNode = qNode.getNextLevelCells(null).next(); } } diff --git a/src/test/java/org/elasticsearch/index/search/nested/AbstractNumberNestedSortingTests.java b/src/test/java/org/elasticsearch/index/search/nested/AbstractNumberNestedSortingTests.java index 1a13a833fce..db413fc462e 100644 --- a/src/test/java/org/elasticsearch/index/search/nested/AbstractNumberNestedSortingTests.java +++ b/src/test/java/org/elasticsearch/index/search/nested/AbstractNumberNestedSortingTests.java @@ -25,12 +25,21 @@ import org.apache.lucene.index.DirectoryReader; import org.apache.lucene.index.IndexableField; import org.apache.lucene.index.Term; import org.apache.lucene.queries.TermFilter; -import org.apache.lucene.search.*; -import org.apache.lucene.search.join.FixedBitSetCachingWrapperFilter; +import org.apache.lucene.search.FieldDoc; +import 
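In the GeoUtilsTests hunks above, the Lucene 5 spatial API replaces Cell.getSubCells(...) with getNextLevelCells(...), which returns an iterator directly, and the per-instance getMaxLevels() replaces the static DEFAULT_MAX_LEVELS. A sketch of descending one branch of the tree, assuming spatial4j's SpatialContext.GEO as the context (the tests use their own spatialContext):

import com.spatial4j.core.context.SpatialContext;
import org.apache.lucene.spatial.prefix.tree.Cell;
import org.apache.lucene.spatial.prefix.tree.QuadPrefixTree;

public class PrefixTreeSketch {
    public static void main(String[] args) {
        QuadPrefixTree tree = new QuadPrefixTree(SpatialContext.GEO);
        Cell cell = tree.getWorldCell();
        for (int level = 0; level < tree.getMaxLevels(); level++) {
            // null means no shape filter, as in the test; next() descends a level.
            cell = cell.getNextLevelCells(null).next();
        }
    }
}
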
org.apache.lucene.search.Filter; +import org.apache.lucene.search.FilteredQuery; +import org.apache.lucene.search.IndexSearcher; +import org.apache.lucene.search.MatchAllDocsQuery; +import org.apache.lucene.search.Query; +import org.apache.lucene.search.Sort; +import org.apache.lucene.search.SortField; +import org.apache.lucene.search.TermQuery; +import org.apache.lucene.search.TopDocs; +import org.apache.lucene.search.TopFieldDocs; +import org.apache.lucene.search.join.BitDocIdSetCachingWrapperFilter; import org.apache.lucene.search.join.ScoreMode; import org.apache.lucene.search.join.ToParentBlockJoinQuery; import org.elasticsearch.common.lucene.search.NotFilter; -import org.elasticsearch.common.lucene.search.XFilteredQuery; import org.elasticsearch.index.fielddata.AbstractFieldDataTests; import org.elasticsearch.index.fielddata.IndexFieldData; import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource; @@ -211,7 +220,7 @@ public abstract class AbstractNumberNestedSortingTests extends AbstractFieldData Filter parentFilter = new TermFilter(new Term("__type", "parent")); Filter childFilter = new NotFilter(parentFilter); XFieldComparatorSource nestedComparatorSource = createFieldComparator("field2", sortMode, null, createNested(parentFilter, childFilter)); - ToParentBlockJoinQuery query = new ToParentBlockJoinQuery(new XFilteredQuery(new MatchAllDocsQuery(), childFilter), new FixedBitSetCachingWrapperFilter(parentFilter), ScoreMode.None); + ToParentBlockJoinQuery query = new ToParentBlockJoinQuery(new FilteredQuery(new MatchAllDocsQuery(), childFilter), new BitDocIdSetCachingWrapperFilter(parentFilter), ScoreMode.None); Sort sort = new Sort(new SortField("field2", nestedComparatorSource)); TopFieldDocs topDocs = searcher.search(query, 5, sort); @@ -246,8 +255,8 @@ public abstract class AbstractNumberNestedSortingTests extends AbstractFieldData childFilter = new TermFilter(new Term("filter_1", "T")); nestedComparatorSource = createFieldComparator("field2", sortMode, null, createNested(parentFilter, childFilter)); query = new ToParentBlockJoinQuery( - new XFilteredQuery(new MatchAllDocsQuery(), childFilter), - new FixedBitSetCachingWrapperFilter(parentFilter), + new FilteredQuery(new MatchAllDocsQuery(), childFilter), + new BitDocIdSetCachingWrapperFilter(parentFilter), ScoreMode.None ); sort = new Sort(new SortField("field2", nestedComparatorSource, true)); @@ -322,7 +331,7 @@ public abstract class AbstractNumberNestedSortingTests extends AbstractFieldData MultiValueMode sortMode = MultiValueMode.AVG; Filter childFilter = new NotFilter(parentFilter); XFieldComparatorSource nestedComparatorSource = createFieldComparator("field2", sortMode, -127, createNested(parentFilter, childFilter)); - Query query = new ToParentBlockJoinQuery(new XFilteredQuery(new MatchAllDocsQuery(), childFilter), new FixedBitSetCachingWrapperFilter(parentFilter), ScoreMode.None); + Query query = new ToParentBlockJoinQuery(new FilteredQuery(new MatchAllDocsQuery(), childFilter), new BitDocIdSetCachingWrapperFilter(parentFilter), ScoreMode.None); Sort sort = new Sort(new SortField("field2", nestedComparatorSource)); TopDocs topDocs = searcher.search(query, 5, sort); assertThat(topDocs.totalHits, equalTo(7)); diff --git a/src/test/java/org/elasticsearch/index/search/nested/DoubleNestedSortingTests.java b/src/test/java/org/elasticsearch/index/search/nested/DoubleNestedSortingTests.java index d42980bc4ce..55beb6425f5 100644 --- 
a/src/test/java/org/elasticsearch/index/search/nested/DoubleNestedSortingTests.java +++ b/src/test/java/org/elasticsearch/index/search/nested/DoubleNestedSortingTests.java @@ -21,12 +21,19 @@ package org.elasticsearch.index.search.nested; import org.apache.lucene.document.DoubleField; import org.apache.lucene.document.Field; import org.apache.lucene.index.IndexableField; -import org.apache.lucene.search.*; -import org.apache.lucene.search.join.FixedBitSetCachingWrapperFilter; +import org.apache.lucene.search.FieldDoc; +import org.apache.lucene.search.Filter; +import org.apache.lucene.search.FilteredQuery; +import org.apache.lucene.search.IndexSearcher; +import org.apache.lucene.search.MatchAllDocsQuery; +import org.apache.lucene.search.Query; +import org.apache.lucene.search.Sort; +import org.apache.lucene.search.SortField; +import org.apache.lucene.search.TopDocs; +import org.apache.lucene.search.join.BitDocIdSetCachingWrapperFilter; import org.apache.lucene.search.join.ScoreMode; import org.apache.lucene.search.join.ToParentBlockJoinQuery; import org.elasticsearch.common.lucene.search.NotFilter; -import org.elasticsearch.common.lucene.search.XFilteredQuery; import org.elasticsearch.index.fielddata.FieldDataType; import org.elasticsearch.index.fielddata.IndexFieldData; import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource; @@ -63,7 +70,7 @@ public class DoubleNestedSortingTests extends AbstractNumberNestedSortingTests { MultiValueMode sortMode = MultiValueMode.AVG; Filter childFilter = new NotFilter(parentFilter); XFieldComparatorSource nestedComparatorSource = createFieldComparator("field2", sortMode, -127, createNested(parentFilter, childFilter)); - Query query = new ToParentBlockJoinQuery(new XFilteredQuery(new MatchAllDocsQuery(), childFilter), new FixedBitSetCachingWrapperFilter(parentFilter), ScoreMode.None); + Query query = new ToParentBlockJoinQuery(new FilteredQuery(new MatchAllDocsQuery(), childFilter), new BitDocIdSetCachingWrapperFilter(parentFilter), ScoreMode.None); Sort sort = new Sort(new SortField("field2", nestedComparatorSource)); TopDocs topDocs = searcher.search(query, 5, sort); assertThat(topDocs.totalHits, equalTo(7)); diff --git a/src/test/java/org/elasticsearch/index/search/nested/FloatNestedSortingTests.java b/src/test/java/org/elasticsearch/index/search/nested/FloatNestedSortingTests.java index 5c0dfad0768..ee2f8cf809c 100644 --- a/src/test/java/org/elasticsearch/index/search/nested/FloatNestedSortingTests.java +++ b/src/test/java/org/elasticsearch/index/search/nested/FloatNestedSortingTests.java @@ -21,12 +21,19 @@ package org.elasticsearch.index.search.nested; import org.apache.lucene.document.Field; import org.apache.lucene.document.FloatField; import org.apache.lucene.index.IndexableField; -import org.apache.lucene.search.*; -import org.apache.lucene.search.join.FixedBitSetCachingWrapperFilter; +import org.apache.lucene.search.FieldDoc; +import org.apache.lucene.search.Filter; +import org.apache.lucene.search.FilteredQuery; +import org.apache.lucene.search.IndexSearcher; +import org.apache.lucene.search.MatchAllDocsQuery; +import org.apache.lucene.search.Query; +import org.apache.lucene.search.Sort; +import org.apache.lucene.search.SortField; +import org.apache.lucene.search.TopDocs; +import org.apache.lucene.search.join.BitDocIdSetCachingWrapperFilter; import org.apache.lucene.search.join.ScoreMode; import org.apache.lucene.search.join.ToParentBlockJoinQuery; import org.elasticsearch.common.lucene.search.NotFilter; -import 
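The nested-sorting tests all migrate the same block-join pattern: FilteredQuery replaces XFilteredQuery, and BitDocIdSetCachingWrapperFilter replaces FixedBitSetCachingWrapperFilter as the cached parents filter. (BitDocIdSetCachingWrapperFilter is itself a BitDocIdSetFilter, the same type the child/parent query tests above now take for their parent filters.) A minimal sketch of that wiring:

import org.apache.lucene.search.Filter;
import org.apache.lucene.search.FilteredQuery;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.join.BitDocIdSetCachingWrapperFilter;
import org.apache.lucene.search.join.ScoreMode;
import org.apache.lucene.search.join.ToParentBlockJoinQuery;

public class BlockJoinSketch {
    static Query parentsOfMatchingChildren(Filter parentFilter, Filter childFilter) {
        // The parents filter must expose a bit set; the caching wrapper provides it.
        return new ToParentBlockJoinQuery(
                new FilteredQuery(new MatchAllDocsQuery(), childFilter),
                new BitDocIdSetCachingWrapperFilter(parentFilter),
                ScoreMode.None);
    }
}
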
org.elasticsearch.common.lucene.search.XFilteredQuery; import org.elasticsearch.index.fielddata.FieldDataType; import org.elasticsearch.index.fielddata.IndexFieldData; import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource; @@ -63,7 +70,7 @@ public class FloatNestedSortingTests extends DoubleNestedSortingTests { MultiValueMode sortMode = MultiValueMode.AVG; Filter childFilter = new NotFilter(parentFilter); XFieldComparatorSource nestedComparatorSource = createFieldComparator("field2", sortMode, -127, createNested(parentFilter, childFilter)); - Query query = new ToParentBlockJoinQuery(new XFilteredQuery(new MatchAllDocsQuery(), childFilter), new FixedBitSetCachingWrapperFilter(parentFilter), ScoreMode.None); + Query query = new ToParentBlockJoinQuery(new FilteredQuery(new MatchAllDocsQuery(), childFilter), new BitDocIdSetCachingWrapperFilter(parentFilter), ScoreMode.None); Sort sort = new Sort(new SortField("field2", nestedComparatorSource)); TopDocs topDocs = searcher.search(query, 5, sort); assertThat(topDocs.totalHits, equalTo(7)); diff --git a/src/test/java/org/elasticsearch/index/search/nested/NestedSortingTests.java b/src/test/java/org/elasticsearch/index/search/nested/NestedSortingTests.java index d17a64cc574..b2fc6f65eeb 100644 --- a/src/test/java/org/elasticsearch/index/search/nested/NestedSortingTests.java +++ b/src/test/java/org/elasticsearch/index/search/nested/NestedSortingTests.java @@ -25,15 +25,24 @@ import org.apache.lucene.document.StringField; import org.apache.lucene.index.DirectoryReader; import org.apache.lucene.index.Term; import org.apache.lucene.queries.TermFilter; -import org.apache.lucene.search.*; -import org.apache.lucene.search.join.FixedBitSetCachingWrapperFilter; +import org.apache.lucene.search.ConstantScoreQuery; +import org.apache.lucene.search.FieldDoc; +import org.apache.lucene.search.Filter; +import org.apache.lucene.search.FilteredQuery; +import org.apache.lucene.search.IndexSearcher; +import org.apache.lucene.search.MatchAllDocsQuery; +import org.apache.lucene.search.Query; +import org.apache.lucene.search.Sort; +import org.apache.lucene.search.SortField; +import org.apache.lucene.search.TopDocs; +import org.apache.lucene.search.TopFieldDocs; +import org.apache.lucene.search.join.BitDocIdSetCachingWrapperFilter; import org.apache.lucene.search.join.ScoreMode; import org.apache.lucene.search.join.ToParentBlockJoinQuery; import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.TestUtil; import org.elasticsearch.common.lucene.search.AndFilter; import org.elasticsearch.common.lucene.search.NotFilter; -import org.elasticsearch.common.lucene.search.XFilteredQuery; import org.elasticsearch.common.settings.ImmutableSettings; import org.elasticsearch.index.fielddata.AbstractFieldDataTests; import org.elasticsearch.index.fielddata.FieldDataType; @@ -276,7 +285,7 @@ public class NestedSortingTests extends AbstractFieldDataTests { Filter parentFilter = new TermFilter(new Term("__type", "parent")); Filter childFilter = new NotFilter(parentFilter); BytesRefFieldComparatorSource nestedComparatorSource = new BytesRefFieldComparatorSource(indexFieldData, null, sortMode, createNested(parentFilter, childFilter)); - ToParentBlockJoinQuery query = new ToParentBlockJoinQuery(new XFilteredQuery(new MatchAllDocsQuery(), childFilter), new FixedBitSetCachingWrapperFilter(parentFilter), ScoreMode.None); + ToParentBlockJoinQuery query = new ToParentBlockJoinQuery(new FilteredQuery(new MatchAllDocsQuery(), childFilter), new 
BitDocIdSetCachingWrapperFilter(parentFilter), ScoreMode.None); Sort sort = new Sort(new SortField("field2", nestedComparatorSource)); TopFieldDocs topDocs = searcher.search(query, 5, sort); @@ -314,8 +323,8 @@ public class NestedSortingTests extends AbstractFieldDataTests { childFilter = new AndFilter(Arrays.asList(new NotFilter(parentFilter), new TermFilter(new Term("filter_1", "T")))); nestedComparatorSource = new BytesRefFieldComparatorSource(indexFieldData, null, sortMode, createNested(parentFilter, childFilter)); query = new ToParentBlockJoinQuery( - new XFilteredQuery(new MatchAllDocsQuery(), childFilter), - new FixedBitSetCachingWrapperFilter(parentFilter), + new FilteredQuery(new MatchAllDocsQuery(), childFilter), + new BitDocIdSetCachingWrapperFilter(parentFilter), ScoreMode.None ); sort = new Sort(new SortField("field2", nestedComparatorSource, true)); diff --git a/src/test/java/org/elasticsearch/index/store/CorruptedFileTest.java b/src/test/java/org/elasticsearch/index/store/CorruptedFileTest.java index 26ee0bc1034..31ed433bd25 100644 --- a/src/test/java/org/elasticsearch/index/store/CorruptedFileTest.java +++ b/src/test/java/org/elasticsearch/index/store/CorruptedFileTest.java @@ -120,7 +120,7 @@ public class CorruptedFileTest extends ElasticsearchIntegrationTest { .put(MergePolicyModule.MERGE_POLICY_TYPE_KEY, NoMergePolicyProvider.class) .put(MockFSDirectoryService.CHECK_INDEX_ON_CLOSE, false) // no checkindex - we corrupt shards on purpose .put(InternalEngine.INDEX_FAIL_ON_CORRUPTION, failOnCorruption) - .put(TranslogService.INDEX_TRANSLOG_DISABLE_FLUSH, true) // no translog based flush - it might change the .del / segments.N files + .put(TranslogService.INDEX_TRANSLOG_DISABLE_FLUSH, true) // no translog based flush - it might change the .liv / segments.N files .put("indices.recovery.concurrent_streams", 10) )); if (failOnCorruption == false) { // test the dynamic setting @@ -173,7 +173,7 @@ public class CorruptedFileTest extends ElasticsearchIntegrationTest { final CopyOnWriteArrayList exception = new CopyOnWriteArrayList<>(); final IndicesLifecycle.Listener listener = new IndicesLifecycle.Listener() { @Override - public void beforeIndexShardClosed(ShardId sid, @Nullable IndexShard indexShard) { + public void afterIndexShardClosed(ShardId sid, @Nullable IndexShard indexShard) { if (indexShard != null) { Store store = ((InternalIndexShard) indexShard).store(); store.incRef(); @@ -181,15 +181,16 @@ public class CorruptedFileTest extends ElasticsearchIntegrationTest { if (!Lucene.indexExists(store.directory()) && indexShard.state() == IndexShardState.STARTED) { return; } - CheckIndex checkIndex = new CheckIndex(store.directory()); - BytesStreamOutput os = new BytesStreamOutput(); - PrintStream out = new PrintStream(os, false, Charsets.UTF_8.name()); - checkIndex.setInfoStream(out); - out.flush(); - CheckIndex.Status status = checkIndex.checkIndex(); - if (!status.clean) { - logger.warn("check index [failure]\n{}", new String(os.bytes().toBytes(), Charsets.UTF_8)); - throw new IndexShardException(sid, "index check failure"); + try (CheckIndex checkIndex = new CheckIndex(store.directory())) { + BytesStreamOutput os = new BytesStreamOutput(); + PrintStream out = new PrintStream(os, false, Charsets.UTF_8.name()); + checkIndex.setInfoStream(out); + out.flush(); + CheckIndex.Status status = checkIndex.checkIndex(); + if (!status.clean) { + logger.warn("check index [failure]\n{}", new String(os.bytes().toBytes(), Charsets.UTF_8)); + throw new IndexShardException(sid, "index check 
failure"); + } } } catch (Throwable t) { exception.add(t); @@ -229,7 +230,7 @@ public class CorruptedFileTest extends ElasticsearchIntegrationTest { .put(MergePolicyModule.MERGE_POLICY_TYPE_KEY, NoMergePolicyProvider.class) .put(MockFSDirectoryService.CHECK_INDEX_ON_CLOSE, false) // no checkindex - we corrupt shards on purpose .put(InternalEngine.INDEX_FAIL_ON_CORRUPTION, true) - .put(TranslogService.INDEX_TRANSLOG_DISABLE_FLUSH, true) // no translog based flush - it might change the .del / segments.N files + .put(TranslogService.INDEX_TRANSLOG_DISABLE_FLUSH, true) // no translog based flush - it might change the .liv / segments.N files .put("indices.recovery.concurrent_streams", 10) )); ensureGreen(); @@ -404,7 +405,7 @@ public class CorruptedFileTest extends ElasticsearchIntegrationTest { .put(MergePolicyModule.MERGE_POLICY_TYPE_KEY, NoMergePolicyProvider.class) .put(MockFSDirectoryService.CHECK_INDEX_ON_CLOSE, false) // no checkindex - we corrupt shards on purpose .put(InternalEngine.INDEX_FAIL_ON_CORRUPTION, true) - .put(TranslogService.INDEX_TRANSLOG_DISABLE_FLUSH, true) // no translog based flush - it might change the .del / segments.N files + .put(TranslogService.INDEX_TRANSLOG_DISABLE_FLUSH, true) // no translog based flush - it might change the .liv / segments.N files .put("indices.recovery.concurrent_streams", 10) )); ensureGreen(); @@ -485,7 +486,7 @@ public class CorruptedFileTest extends ElasticsearchIntegrationTest { File fileToCorrupt = null; if (!files.isEmpty()) { fileToCorrupt = RandomPicks.randomFrom(getRandom(), files); - try (Directory dir = FSDirectory.open(fileToCorrupt.getParentFile())) { + try (Directory dir = FSDirectory.open(fileToCorrupt.getParentFile().toPath())) { long checksumBeforeCorruption; try (IndexInput input = dir.openInput(fileToCorrupt.getName(), IOContext.DEFAULT)) { checksumBeforeCorruption = CodecUtil.retrieveChecksum(input); @@ -526,8 +527,8 @@ public class CorruptedFileTest extends ElasticsearchIntegrationTest { } private static final boolean isPerCommitFile(String fileName) { - // .del and segments_N are per commit files and might change after corruption - return fileName.startsWith("segments") || fileName.endsWith(".del"); + // .liv and segments_N are per commit files and might change after corruption + return fileName.startsWith("segments") || fileName.endsWith(".liv"); } private static final boolean isPerSegmentFile(String fileName) { @@ -540,7 +541,7 @@ public class CorruptedFileTest extends ElasticsearchIntegrationTest { private void pruneOldDeleteGenerations(Set files) { final TreeSet delFiles = new TreeSet<>(); for (File file : files) { - if (file.getName().endsWith(".del")) { + if (file.getName().endsWith(".liv")) { delFiles.add(file); } } diff --git a/src/test/java/org/elasticsearch/index/store/DirectoryUtilsTest.java b/src/test/java/org/elasticsearch/index/store/DirectoryUtilsTest.java index 504f0332354..d638e12fc4e 100644 --- a/src/test/java/org/elasticsearch/index/store/DirectoryUtilsTest.java +++ b/src/test/java/org/elasticsearch/index/store/DirectoryUtilsTest.java @@ -41,7 +41,7 @@ public class DirectoryUtilsTest extends ElasticsearchLuceneTestCase { final int iters = scaledRandomIntBetween(10, 100); for (int i = 0; i < iters; i++) { { - BaseDirectoryWrapper dir = newFSDirectory(file); + BaseDirectoryWrapper dir = newFSDirectory(file.toPath()); FSDirectory directory = DirectoryUtils.getLeaf(new FilterDirectory(dir) {}, FSDirectory.class, null); assertThat(directory, notNullValue()); assertThat(directory, 
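CheckIndex acquires the index write lock and implements Closeable in Lucene 5, hence the try-with-resources form introduced above. A minimal sketch of the pattern outside the test:

import java.io.PrintStream;
import org.apache.lucene.index.CheckIndex;
import org.apache.lucene.store.Directory;

public class CheckIndexSketch {
    static boolean isClean(Directory dir) throws Exception {
        // Closing CheckIndex releases the write lock it holds on the directory.
        try (CheckIndex checkIndex = new CheckIndex(dir)) {
            checkIndex.setInfoStream(new PrintStream(System.out, true, "UTF-8"));
            return checkIndex.checkIndex().clean;
        }
    }
}
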
sameInstance(DirectoryUtils.getLeafDirectory(dir))); @@ -49,7 +49,7 @@ public class DirectoryUtilsTest extends ElasticsearchLuceneTestCase { } { - BaseDirectoryWrapper dir = newFSDirectory(file); + BaseDirectoryWrapper dir = newFSDirectory(file.toPath()); FSDirectory directory = DirectoryUtils.getLeaf(dir, FSDirectory.class, null); assertThat(directory, notNullValue()); assertThat(directory, sameInstance(DirectoryUtils.getLeafDirectory(dir))); @@ -58,7 +58,7 @@ public class DirectoryUtilsTest extends ElasticsearchLuceneTestCase { { Set stringSet = Collections.emptySet(); - BaseDirectoryWrapper dir = newFSDirectory(file); + BaseDirectoryWrapper dir = newFSDirectory(file.toPath()); FSDirectory directory = DirectoryUtils.getLeaf(new FileSwitchDirectory(stringSet, dir, dir, random().nextBoolean()), FSDirectory.class, null); assertThat(directory, notNullValue()); assertThat(directory, sameInstance(DirectoryUtils.getLeafDirectory(dir))); @@ -67,7 +67,7 @@ public class DirectoryUtilsTest extends ElasticsearchLuceneTestCase { { Set stringSet = Collections.emptySet(); - BaseDirectoryWrapper dir = newFSDirectory(file); + BaseDirectoryWrapper dir = newFSDirectory(file.toPath()); FSDirectory directory = DirectoryUtils.getLeaf(new FilterDirectory(new FileSwitchDirectory(stringSet, dir, dir, random().nextBoolean())) {}, FSDirectory.class, null); assertThat(directory, notNullValue()); assertThat(directory, sameInstance(DirectoryUtils.getLeafDirectory(dir))); @@ -76,7 +76,7 @@ public class DirectoryUtilsTest extends ElasticsearchLuceneTestCase { { Set stringSet = Collections.emptySet(); - BaseDirectoryWrapper dir = newFSDirectory(file); + BaseDirectoryWrapper dir = newFSDirectory(file.toPath()); RAMDirectory directory = DirectoryUtils.getLeaf(new FilterDirectory(new FileSwitchDirectory(stringSet, dir, dir, random().nextBoolean())) {}, RAMDirectory.class, null); assertThat(directory, nullValue()); dir.close(); diff --git a/src/test/java/org/elasticsearch/index/store/DistributorDirectoryTest.java b/src/test/java/org/elasticsearch/index/store/DistributorDirectoryTest.java index ca339aa8338..2e6df2aedf9 100644 --- a/src/test/java/org/elasticsearch/index/store/DistributorDirectoryTest.java +++ b/src/test/java/org/elasticsearch/index/store/DistributorDirectoryTest.java @@ -18,6 +18,10 @@ */ package org.elasticsearch.index.store; +import com.carrotsearch.randomizedtesting.annotations.Listeners; +import com.carrotsearch.randomizedtesting.annotations.ThreadLeakFilters; +import com.carrotsearch.randomizedtesting.annotations.ThreadLeakScope; +import com.carrotsearch.randomizedtesting.annotations.TimeoutSuite; import com.carrotsearch.randomizedtesting.annotations.*; import com.carrotsearch.randomizedtesting.generators.RandomPicks; import org.apache.lucene.index.IndexFileNames; @@ -31,6 +35,8 @@ import org.elasticsearch.index.store.distributor.Distributor; import org.elasticsearch.test.ElasticsearchThreadFilter; import org.elasticsearch.test.junit.listeners.LoggingListener; +import java.io.IOException; +import java.nio.file.Path; import java.io.File; import java.io.FileNotFoundException; import java.io.IOException; @@ -47,7 +53,7 @@ public class DistributorDirectoryTest extends BaseDirectoryTestCase { protected final ESLogger logger = Loggers.getLogger(getClass()); @Override - protected Directory getDirectory(File path) throws IOException { + protected Directory getDirectory(Path path) throws IOException { Directory[] directories = new Directory[1 + random().nextInt(5)]; for (int i = 0; i < directories.length; i++) { 
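These directory tests also pick up the java.io.File to java.nio.file.Path migration: Lucene 5 directory factories such as FSDirectory.open (and the test framework's newFSDirectory) take a Path, so File-based call sites convert with toPath(). A minimal sketch:

import java.io.File;
import java.io.IOException;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class PathMigrationSketch {
    static Directory open(File legacyLocation) throws IOException {
        // Lucene 5: open(File) is gone; convert once at the boundary.
        return FSDirectory.open(legacyLocation.toPath());
    }
}
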
directories[i] = newDirectory(); @@ -115,7 +121,7 @@ public class DistributorDirectoryTest extends BaseDirectoryTestCase { } DistributorDirectory dd = new DistributorDirectory(dirs); - String file = RandomPicks.randomFrom(random(), Arrays.asList(Store.CHECKSUMS_PREFIX, IndexFileNames.SEGMENTS_GEN)); + String file = RandomPicks.randomFrom(random(), Arrays.asList(Store.CHECKSUMS_PREFIX, IndexFileNames.OLD_SEGMENTS_GEN, IndexFileNames.SEGMENTS, IndexFileNames.PENDING_SEGMENTS)); String tmpFileName = RandomPicks.randomFrom(random(), Arrays.asList("recovery.", "foobar.", "test.")) + Math.max(0, Math.abs(random().nextLong())) + "." + file; try (IndexOutput out = dd.createOutput(tmpFileName, IOContext.DEFAULT)) { out.writeInt(1); @@ -132,28 +138,7 @@ public class DistributorDirectoryTest extends BaseDirectoryTestCase { } } assertNotNull("file must be in at least one dir", theDir); - DirectoryService service = new DirectoryService() { - @Override - public Directory[] build() throws IOException { - return new Directory[0]; - } - - @Override - public long throttleTimeInNanos() { - return 0; - } - - @Override - public void renameFile(Directory dir, String from, String to) throws IOException { - dir.copy(dir, from, to, IOContext.DEFAULT); - dir.deleteFile(from); - } - - @Override - public void fullDelete(Directory dir) throws IOException { - } - }; - dd.renameFile(service, tmpFileName, file); + dd.renameFile(tmpFileName, file); try { dd.fileLength(tmpFileName); fail("file ["+tmpFileName + "] was renamed but still exists"); @@ -174,7 +159,7 @@ public class DistributorDirectoryTest extends BaseDirectoryTestCase { out.writeInt(1); } try { - dd.renameFile(service, "foo.bar", file); + dd.renameFile("foo.bar", file); fail("target file already exists"); } catch (IOException ex) { // target file already exists diff --git a/src/test/java/org/elasticsearch/index/store/DistributorInTheWildTest.java b/src/test/java/org/elasticsearch/index/store/DistributorInTheWildTest.java index 695e4445ab5..394bc4df243 100644 --- a/src/test/java/org/elasticsearch/index/store/DistributorInTheWildTest.java +++ b/src/test/java/org/elasticsearch/index/store/DistributorInTheWildTest.java @@ -37,8 +37,8 @@ import org.elasticsearch.test.ElasticsearchThreadFilter; import org.elasticsearch.test.junit.listeners.LoggingListener; import org.junit.Before; -import java.io.File; import java.io.IOException; +import java.nio.file.Path; import java.util.HashSet; import java.util.Set; import java.util.concurrent.ExecutorService; @@ -144,7 +144,7 @@ public class DistributorInTheWildTest extends ThreadedIndexingAndSearchingTestCa Directory[] directories = new Directory[1 + random().nextInt(5)]; directories[0] = in; for (int i = 1; i < directories.length; i++) { - final File tempDir = createTempDir(getTestName()); + final Path tempDir = createTempDir(getTestName()); directories[i] = newMockFSDirectory(tempDir); // some subclasses rely on this being MDW if (!useNonNrtReaders) ((MockDirectoryWrapper) directories[i]).setAssertNoDeleteOpenFile(true); } diff --git a/src/test/java/org/elasticsearch/index/store/StoreTest.java b/src/test/java/org/elasticsearch/index/store/StoreTest.java index 742233b1d55..73d07faf895 100644 --- a/src/test/java/org/elasticsearch/index/store/StoreTest.java +++ b/src/test/java/org/elasticsearch/index/store/StoreTest.java @@ -26,6 +26,7 @@ import org.apache.lucene.store.*; import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.IOUtils; import org.apache.lucene.util.TestUtil; +import 
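For the file names drawn above: Lucene 5 renames IndexFileNames.SEGMENTS_GEN to OLD_SEGMENTS_GEN (segments.gen is now a legacy file) and adds PENDING_SEGMENTS for commits that are still in flight. The string values in the comments below reflect Lucene 5.0 and are listed for orientation:

import org.apache.lucene.index.IndexFileNames;

public class SegmentsFileNamesSketch {
    public static void main(String[] args) {
        System.out.println(IndexFileNames.SEGMENTS);          // "segments"
        System.out.println(IndexFileNames.PENDING_SEGMENTS);  // "pending_segments"
        System.out.println(IndexFileNames.OLD_SEGMENTS_GEN);  // "segments.gen"
    }
}
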
org.apache.lucene.util.Version; import org.elasticsearch.common.settings.ImmutableSettings; import org.elasticsearch.index.Index; import org.elasticsearch.index.shard.ShardId; @@ -113,7 +114,7 @@ public class StoreTest extends ElasticsearchLuceneTestCase { indexInput.seek(0); BytesRef ref = new BytesRef(scaledRandomIntBetween(1, 1024)); long length = indexInput.length(); - IndexOutput verifyingOutput = new Store.VerifyingIndexOutput(new StoreFileMetaData("foo1.bar", length, checksum, TEST_VERSION_CURRENT), dir.createOutput("foo1.bar", IOContext.DEFAULT)); + IndexOutput verifyingOutput = new Store.VerifyingIndexOutput(new StoreFileMetaData("foo1.bar", length, checksum), dir.createOutput("foo1.bar", IOContext.DEFAULT)); while (length > 0) { if (random().nextInt(10) == 0) { verifyingOutput.writeByte(indexInput.readByte()); @@ -130,7 +131,7 @@ public class StoreTest extends ElasticsearchLuceneTestCase { try { Store.verify(verifyingOutput); fail("should be a corrupted index"); - } catch (CorruptIndexException ex) { + } catch (CorruptIndexException | IndexFormatTooOldException | IndexFormatTooNewException ex) { // ok } IOUtils.close(indexInput, verifyingOutput, dir); @@ -140,14 +141,14 @@ public class StoreTest extends ElasticsearchLuceneTestCase { public void testVerifyingIndexOutputWithBogusInput() throws IOException { Directory dir = newDirectory(); int length = scaledRandomIntBetween(10, 1024); - IndexOutput verifyingOutput = new Store.VerifyingIndexOutput(new StoreFileMetaData("foo1.bar", length, "", TEST_VERSION_CURRENT), dir.createOutput("foo1.bar", IOContext.DEFAULT)); + IndexOutput verifyingOutput = new Store.VerifyingIndexOutput(new StoreFileMetaData("foo1.bar", length, ""), dir.createOutput("foo1.bar", IOContext.DEFAULT)); try { while (length > 0) { verifyingOutput.writeByte((byte) random().nextInt()); length--; } fail("should be a corrupted index"); - } catch (CorruptIndexException ex) { + } catch (CorruptIndexException | IndexFormatTooOldException | IndexFormatTooNewException ex) { // ok } IOUtils.close(verifyingOutput, dir); @@ -159,7 +160,7 @@ public class StoreTest extends ElasticsearchLuceneTestCase { DirectoryService directoryService = new LuceneManagedDirectoryService(random()); Store store = new Store(shardId, ImmutableSettings.EMPTY, null, directoryService, randomDistributor(directoryService)); // set default codec - all segments need checksums - IndexWriter writer = new IndexWriter(store.directory(), newIndexWriterConfig(random(), TEST_VERSION_CURRENT, new MockAnalyzer(random())).setCodec(actualDefaultCodec())); + IndexWriter writer = new IndexWriter(store.directory(), newIndexWriterConfig(random(), new MockAnalyzer(random())).setCodec(actualDefaultCodec())); int docs = 1 + random().nextInt(100); for (int i = 0; i < docs; i++) { @@ -196,7 +197,7 @@ public class StoreTest extends ElasticsearchLuceneTestCase { Store.LegacyChecksums checksums = new Store.LegacyChecksums(); Map legacyMeta = new HashMap<>(); for (String file : store.directory().listAll()) { - if (file.equals("write.lock") || file.equals(IndexFileNames.SEGMENTS_GEN)) { + if (file.equals("write.lock") || file.equals(IndexFileNames.OLD_SEGMENTS_GEN)) { continue; } try (IndexInput input = store.directory().openInput(file, IOContext.READONCE)) { @@ -228,7 +229,7 @@ public class StoreTest extends ElasticsearchLuceneTestCase { DirectoryService directoryService = new LuceneManagedDirectoryService(random()); Store store = new Store(shardId, ImmutableSettings.EMPTY, null, directoryService, 
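StoreTest's catch blocks widen because IndexFormatTooOldException and IndexFormatTooNewException no longer extend CorruptIndexException in Lucene 5; code that treats all three as an unusable index now multi-catches them. A minimal sketch of that pattern around a checksum read:

import java.io.IOException;
import org.apache.lucene.codecs.CodecUtil;
import org.apache.lucene.index.CorruptIndexException;
import org.apache.lucene.index.IndexFormatTooNewException;
import org.apache.lucene.index.IndexFormatTooOldException;
import org.apache.lucene.store.IndexInput;

public class ChecksumProbeSketch {
    static boolean hasReadableChecksum(IndexInput input) throws IOException {
        try {
            CodecUtil.retrieveChecksum(input);
            return true;
        } catch (CorruptIndexException | IndexFormatTooOldException | IndexFormatTooNewException ex) {
            // The three types now share no common parent below IOException,
            // so each must be named explicitly.
            return false;
        }
    }
}
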
randomDistributor(directoryService)); // set default codec - all segments need checksums - IndexWriter writer = new IndexWriter(store.directory(), newIndexWriterConfig(random(), TEST_VERSION_CURRENT, new MockAnalyzer(random())).setCodec(actualDefaultCodec())); + IndexWriter writer = new IndexWriter(store.directory(), newIndexWriterConfig(random(), new MockAnalyzer(random())).setCodec(actualDefaultCodec())); int docs = 1 + random().nextInt(100); for (int i = 0; i < docs; i++) { @@ -269,7 +270,7 @@ public class StoreTest extends ElasticsearchLuceneTestCase { String checksum = Store.digestToString(CodecUtil.retrieveChecksum(input)); assertThat("File: " + meta.name() + " has a different checksum", meta.checksum(), equalTo(checksum)); assertThat(meta.hasLegacyChecksum(), equalTo(false)); - assertThat(meta.writtenBy(), equalTo(TEST_VERSION_CURRENT)); + assertThat(meta.writtenBy(), equalTo(Version.LATEST)); if (meta.name().endsWith(".si") || meta.name().startsWith("segments_")) { assertThat(meta.hash().length, greaterThan(0)); } @@ -288,7 +289,7 @@ public class StoreTest extends ElasticsearchLuceneTestCase { DirectoryService directoryService = new LuceneManagedDirectoryService(random()); Store store = new Store(shardId, ImmutableSettings.EMPTY, null, directoryService, randomDistributor(directoryService)); // this time random codec.... - IndexWriter writer = new IndexWriter(store.directory(), newIndexWriterConfig(random(), TEST_VERSION_CURRENT, new MockAnalyzer(random())).setCodec(actualDefaultCodec())); + IndexWriter writer = new IndexWriter(store.directory(), newIndexWriterConfig(random(), new MockAnalyzer(random())).setCodec(actualDefaultCodec())); int docs = 1 + random().nextInt(100); for (int i = 0; i < docs; i++) { @@ -332,7 +333,7 @@ public class StoreTest extends ElasticsearchLuceneTestCase { try { CodecUtil.retrieveChecksum(input); fail("expected a corrupt index - posting format has not checksums"); - } catch (CorruptIndexException ex) { + } catch (CorruptIndexException | IndexFormatTooOldException | IndexFormatTooNewException ex) { try (ChecksumIndexInput checksumIndexInput = store.directory().openChecksumInput(meta.name(), IOContext.DEFAULT)) { checksumIndexInput.seek(meta.length()); checksum = Store.digestToString(checksumIndexInput.getChecksum()); @@ -344,7 +345,7 @@ public class StoreTest extends ElasticsearchLuceneTestCase { String checksum = Store.digestToString(CodecUtil.retrieveChecksum(input)); assertThat("File: " + meta.name() + " has a different checksum", meta.checksum(), equalTo(checksum)); assertThat(meta.hasLegacyChecksum(), equalTo(false)); - assertThat(meta.writtenBy(), equalTo(TEST_VERSION_CURRENT)); + assertThat(meta.writtenBy(), equalTo(Version.LATEST)); } } } @@ -364,7 +365,7 @@ public class StoreTest extends ElasticsearchLuceneTestCase { String checksum = Store.digestToString(CodecUtil.retrieveChecksum(input)); assertThat("File: " + meta.name() + " has a different checksum", meta.checksum(), equalTo(checksum)); assertThat(meta.hasLegacyChecksum(), equalTo(false)); - assertThat(meta.writtenBy(), equalTo(TEST_VERSION_CURRENT)); + assertThat(meta.writtenBy(), equalTo(Version.LATEST)); } } } @@ -456,7 +457,7 @@ public class StoreTest extends ElasticsearchLuceneTestCase { try { Store.verify(verifyingIndexInput); fail("should be a corrupted index"); - } catch (CorruptIndexException ex) { + } catch (CorruptIndexException | IndexFormatTooOldException | IndexFormatTooNewException ex) { // ok } IOUtils.close(verifyingIndexInput); @@ -543,24 +544,11 @@ public class StoreTest 
extends ElasticsearchLuceneTestCase { public long throttleTimeInNanos() { return random.nextInt(1000); } - - @Override - public void renameFile(Directory dir, String from, String to) throws IOException { - dir.copy(dir, from, to, IOContext.DEFAULT); - dir.deleteFile(from); - } - - @Override - public void fullDelete(Directory dir) throws IOException { - for (String file : dir.listAll()) { - dir.deleteFile(file); - } - } } public static void assertConsistent(Store store, Store.MetadataSnapshot metadata) throws IOException { for (String file : store.directory().listAll()) { - if (!IndexWriter.WRITE_LOCK_NAME.equals(file) && !IndexFileNames.SEGMENTS_GEN.equals(file) && !Store.isChecksum(file)) { + if (!IndexWriter.WRITE_LOCK_NAME.equals(file) && !IndexFileNames.OLD_SEGMENTS_GEN.equals(file) && !Store.isChecksum(file)) { assertTrue(file + " is not in the map: " + metadata.asMap().size() + " vs. " + store.directory().listAll().length, metadata.asMap().containsKey(file)); } else { assertFalse(file + " is not in the map: " + metadata.asMap().size() + " vs. " + store.directory().listAll().length, metadata.asMap().containsKey(file)); @@ -591,7 +579,7 @@ public class StoreTest extends ElasticsearchLuceneTestCase { Store.MetadataSnapshot first; { Random random = new Random(seed); - IndexWriterConfig iwc = new IndexWriterConfig(TEST_VERSION_CURRENT, new MockAnalyzer(random)).setCodec(actualDefaultCodec()); + IndexWriterConfig iwc = new IndexWriterConfig(new MockAnalyzer(random)).setCodec(actualDefaultCodec()); iwc.setMergePolicy(NoMergePolicy.INSTANCE); iwc.setUseCompoundFile(random.nextBoolean()); iwc.setMaxThreadStates(1); @@ -608,6 +596,7 @@ public class StoreTest extends ElasticsearchLuceneTestCase { writer.commit(); } } + writer.commit(); writer.close(); first = store.getMetadata(); assertDeleteContent(store, directoryService); @@ -621,7 +610,7 @@ public class StoreTest extends ElasticsearchLuceneTestCase { Store store; { Random random = new Random(seed); - IndexWriterConfig iwc = new IndexWriterConfig(TEST_VERSION_CURRENT, new MockAnalyzer(random)).setCodec(actualDefaultCodec()); + IndexWriterConfig iwc = new IndexWriterConfig(new MockAnalyzer(random)).setCodec(actualDefaultCodec()); iwc.setMergePolicy(NoMergePolicy.INSTANCE); iwc.setUseCompoundFile(random.nextBoolean()); iwc.setMaxThreadStates(1); @@ -638,6 +627,7 @@ public class StoreTest extends ElasticsearchLuceneTestCase { writer.commit(); } } + writer.commit(); writer.close(); second = store.getMetadata(); } @@ -646,10 +636,10 @@ public class StoreTest extends ElasticsearchLuceneTestCase { for (StoreFileMetaData md : first) { assertThat(second.get(md.name()), notNullValue()); // si files are different - containing timestamps etc - assertThat(second.get(md.name()).isSame(md), equalTo(md.name().endsWith(".si") == false)); + assertThat(second.get(md.name()).isSame(md), equalTo(false)); } - assertThat(diff.different.size(), equalTo(first.size()-1)); - assertThat(diff.identical.size(), equalTo(1)); // commit point is identical + assertThat(diff.different.size(), equalTo(first.size())); + assertThat(diff.identical.size(), equalTo(0)); // in lucene 5 nothing is identical - we use random ids in file headers assertThat(diff.missing, empty()); // check the self diff @@ -661,18 +651,19 @@ public class StoreTest extends ElasticsearchLuceneTestCase { // lets add some deletes Random random = new Random(seed); - IndexWriterConfig iwc = new IndexWriterConfig(TEST_VERSION_CURRENT, new MockAnalyzer(random)).setCodec(actualDefaultCodec()); + IndexWriterConfig 
iwc = new IndexWriterConfig(new MockAnalyzer(random)).setCodec(actualDefaultCodec()); iwc.setMergePolicy(NoMergePolicy.INSTANCE); iwc.setUseCompoundFile(random.nextBoolean()); iwc.setMaxThreadStates(1); iwc.setOpenMode(IndexWriterConfig.OpenMode.APPEND); IndexWriter writer = new IndexWriter(store.directory(), iwc); writer.deleteDocuments(new Term("id", Integer.toString(random().nextInt(numDocs)))); + writer.commit(); writer.close(); Store.MetadataSnapshot metadata = store.getMetadata(); StoreFileMetaData delFile = null; for (StoreFileMetaData md : metadata) { - if (md.name().endsWith(".del")) { + if (md.name().endsWith(".liv")) { delFile = md; break; } @@ -696,7 +687,7 @@ public class StoreTest extends ElasticsearchLuceneTestCase { assertThat(selfDiff.missing, empty()); // add a new commit - iwc = new IndexWriterConfig(TEST_VERSION_CURRENT, new MockAnalyzer(random)).setCodec(actualDefaultCodec()); + iwc = new IndexWriterConfig(new MockAnalyzer(random)).setCodec(actualDefaultCodec()); iwc.setMergePolicy(NoMergePolicy.INSTANCE); iwc.setUseCompoundFile(true); // force CFS - easier to test here since we know it will add 3 files iwc.setMaxThreadStates(1); @@ -710,7 +701,7 @@ public class StoreTest extends ElasticsearchLuceneTestCase { if (delFile != null) { assertThat(newCommitDiff.identical.size(), equalTo(newCommitMetaData.size()-5)); // segments_N, del file, cfs, cfe, si for the new segment assertThat(newCommitDiff.different.size(), equalTo(1)); // the del file must be different - assertThat(newCommitDiff.different.get(0).name(), endsWith(".del")); + assertThat(newCommitDiff.different.get(0).name(), endsWith(".liv")); assertThat(newCommitDiff.missing.size(), equalTo(4)); // segments_N,cfs, cfe, si for the new segment } else { assertThat(newCommitDiff.identical.size(), equalTo(newCommitMetaData.size() - 4)); // segments_N, cfs, cfe, si for the new segment diff --git a/src/test/java/org/elasticsearch/index/store/distributor/DistributorTests.java b/src/test/java/org/elasticsearch/index/store/distributor/DistributorTests.java index 528f052a005..33ffccfbacb 100644 --- a/src/test/java/org/elasticsearch/index/store/distributor/DistributorTests.java +++ b/src/test/java/org/elasticsearch/index/store/distributor/DistributorTests.java @@ -26,6 +26,8 @@ import org.junit.Test; import java.io.File; import java.io.IOException; +import java.nio.file.Path; +import java.nio.file.Paths; import static org.hamcrest.MatcherAssert.assertThat; import static org.hamcrest.Matchers.*; @@ -43,7 +45,12 @@ public class DistributorTests extends ElasticsearchTestCase { }; FakeDirectoryService directoryService = new FakeDirectoryService(directories); - LeastUsedDistributor distributor = new LeastUsedDistributor(directoryService); + LeastUsedDistributor distributor = new LeastUsedDistributor(directoryService) { + @Override + protected long getUsableSpace(Directory directory) throws IOException { + return ((FakeFsDirectory)directory).useableSpace; + } + }; for (int i = 0; i < 5; i++) { assertThat(distributor.any(), equalTo((Directory) directories[2])); } @@ -98,7 +105,12 @@ public class DistributorTests extends ElasticsearchTestCase { }; FakeDirectoryService directoryService = new FakeDirectoryService(directories); - RandomWeightedDistributor randomWeightedDistributor = new RandomWeightedDistributor(directoryService); + RandomWeightedDistributor randomWeightedDistributor = new RandomWeightedDistributor(directoryService) { + @Override + protected long getUsableSpace(Directory directory) throws IOException { + return 
((FakeFsDirectory)directory).useableSpace; + } + }; for (int i = 0; i < 10000; i++) { ((FakeFsDirectory) randomWeightedDistributor.any()).incrementAllocationCount(); } @@ -141,26 +153,18 @@ public class DistributorTests extends ElasticsearchTestCase { public long throttleTimeInNanos() { return 0; } - - @Override - public void renameFile(Directory dir, String from, String to) throws IOException { - } - - @Override - public void fullDelete(Directory dir) throws IOException { - } } public static class FakeFsDirectory extends FSDirectory { public int allocationCount; + public long useableSpace; - public FakeFile fakeFile; public FakeFsDirectory(String path, long usableSpace) throws IOException { - super(new File(path), NoLockFactory.getNoLockFactory()); - fakeFile = new FakeFile(path, usableSpace); + super(Paths.get(path), NoLockFactory.getNoLockFactory()); allocationCount = 0; + this.useableSpace = usableSpace; } @Override @@ -169,7 +173,7 @@ public class DistributorTests extends ElasticsearchTestCase { } public void setUsableSpace(long usableSpace) { - fakeFile.setUsableSpace(usableSpace); + this.useableSpace = usableSpace; } public void incrementAllocationCount() { @@ -183,28 +187,6 @@ public class DistributorTests extends ElasticsearchTestCase { public void resetAllocationCount() { allocationCount = 0; } - - @Override - public File getDirectory() { - return fakeFile; - } } - public static class FakeFile extends File { - private long usableSpace; - - public FakeFile(String s, long usableSpace) { - super(s); - this.usableSpace = usableSpace; - } - - @Override - public long getUsableSpace() { - return usableSpace; - } - - public void setUsableSpace(long usableSpace) { - this.usableSpace = usableSpace; - } - } } diff --git a/src/test/java/org/elasticsearch/indices/analysis/DummyAnalyzer.java b/src/test/java/org/elasticsearch/indices/analysis/DummyAnalyzer.java index d413096174d..7034d5b439f 100644 --- a/src/test/java/org/elasticsearch/indices/analysis/DummyAnalyzer.java +++ b/src/test/java/org/elasticsearch/indices/analysis/DummyAnalyzer.java @@ -26,12 +26,11 @@ import java.io.Reader; public class DummyAnalyzer extends StopwordAnalyzerBase { - protected DummyAnalyzer(Version version) { - super(version); + protected DummyAnalyzer() { } @Override - protected TokenStreamComponents createComponents(String fieldName, Reader reader) { + protected TokenStreamComponents createComponents(String fieldName) { return null; } } diff --git a/src/test/java/org/elasticsearch/indices/analysis/DummyAnalyzerProvider.java b/src/test/java/org/elasticsearch/indices/analysis/DummyAnalyzerProvider.java index 0c4b48bb403..68beb817d70 100644 --- a/src/test/java/org/elasticsearch/indices/analysis/DummyAnalyzerProvider.java +++ b/src/test/java/org/elasticsearch/indices/analysis/DummyAnalyzerProvider.java @@ -19,7 +19,6 @@ package org.elasticsearch.indices.analysis; -import org.elasticsearch.common.lucene.Lucene; import org.elasticsearch.index.analysis.AnalyzerProvider; import org.elasticsearch.index.analysis.AnalyzerScope; @@ -36,6 +35,6 @@ public class DummyAnalyzerProvider implements AnalyzerProvider { @Override public DummyAnalyzer get() { - return new DummyAnalyzer(Lucene.ANALYZER_VERSION); + return new DummyAnalyzer(); } } diff --git a/src/test/java/org/elasticsearch/indices/analysis/DummyIndicesAnalysis.java b/src/test/java/org/elasticsearch/indices/analysis/DummyIndicesAnalysis.java index c48edb11d36..9642b610f69 100644 --- a/src/test/java/org/elasticsearch/indices/analysis/DummyIndicesAnalysis.java +++ 
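// FakeFsDirectory above tracks Lucene 5's store API, which works on java.nio.file.Path
// rather than java.io.File; faking free space accordingly moved from a File subclass to
// overriding the distributors' getUsableSpace(Directory) hook. Opening a Path-based
// directory looks like this (the path itself is illustrative):
//
//     FSDirectory dir = FSDirectory.open(Paths.get("/tmp/test-index"));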
b/src/test/java/org/elasticsearch/indices/analysis/DummyIndicesAnalysis.java @@ -21,7 +21,6 @@ package org.elasticsearch.indices.analysis; import org.elasticsearch.common.component.AbstractComponent; import org.elasticsearch.common.inject.Inject; -import org.elasticsearch.common.lucene.Lucene; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.analysis.*; @@ -32,7 +31,7 @@ public class DummyIndicesAnalysis extends AbstractComponent { super(settings); indicesAnalysisService.analyzerProviderFactories().put("dummy", new PreBuiltAnalyzerProviderFactory("dummy", AnalyzerScope.INDICES, - new DummyAnalyzer(Lucene.ANALYZER_VERSION))); + new DummyAnalyzer())); indicesAnalysisService.tokenFilterFactories().put("dummy_token_filter", new PreBuiltTokenFilterFactoryFactory(new DummyTokenFilterFactory())); indicesAnalysisService.charFilterFactories().put("dummy_char_filter", diff --git a/src/test/java/org/elasticsearch/indices/analysis/DummyTokenizerFactory.java b/src/test/java/org/elasticsearch/indices/analysis/DummyTokenizerFactory.java index 95c6a5ed582..a27c6ae7dba 100644 --- a/src/test/java/org/elasticsearch/indices/analysis/DummyTokenizerFactory.java +++ b/src/test/java/org/elasticsearch/indices/analysis/DummyTokenizerFactory.java @@ -22,8 +22,6 @@ package org.elasticsearch.indices.analysis; import org.apache.lucene.analysis.Tokenizer; import org.elasticsearch.index.analysis.TokenizerFactory; -import java.io.Reader; - public class DummyTokenizerFactory implements TokenizerFactory { @Override public String name() { @@ -31,7 +29,7 @@ public class DummyTokenizerFactory implements TokenizerFactory { } @Override - public Tokenizer create(Reader reader) { + public Tokenizer create() { return null; } } diff --git a/src/test/java/org/elasticsearch/indices/memory/breaker/RandomExceptionCircuitBreakerTests.java b/src/test/java/org/elasticsearch/indices/memory/breaker/RandomExceptionCircuitBreakerTests.java index cbf52f53da0..4f1b4302c5b 100644 --- a/src/test/java/org/elasticsearch/indices/memory/breaker/RandomExceptionCircuitBreakerTests.java +++ b/src/test/java/org/elasticsearch/indices/memory/breaker/RandomExceptionCircuitBreakerTests.java @@ -19,7 +19,7 @@ package org.elasticsearch.indices.memory.breaker; -import org.apache.lucene.index.AtomicReader; +import org.apache.lucene.index.LeafReader; import org.apache.lucene.index.DirectoryReader; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse; @@ -38,7 +38,7 @@ import org.elasticsearch.index.query.QueryBuilders; import org.elasticsearch.search.sort.SortOrder; import org.elasticsearch.test.ElasticsearchIntegrationTest; import org.elasticsearch.test.engine.MockInternalEngine; -import org.elasticsearch.test.engine.ThrowingAtomicReaderWrapper; +import org.elasticsearch.test.engine.ThrowingLeafReaderWrapper; import org.junit.Test; import java.io.IOException; @@ -196,7 +196,7 @@ public class RandomExceptionCircuitBreakerTests extends ElasticsearchIntegration // TODO: Generalize this class and add it as a utility public static class RandomExceptionDirectoryReaderWrapper extends MockInternalEngine.DirectoryReaderWrapper { private final Settings settings; - static class ThrowingSubReaderWrapper extends SubReaderWrapper implements ThrowingAtomicReaderWrapper.Thrower { + static class ThrowingSubReaderWrapper extends SubReaderWrapper implements ThrowingLeafReaderWrapper.Thrower { private final Random random; private final double topLevelRatio; private final double 
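// AtomicReader and AtomicReaderContext were renamed to LeafReader and LeafReaderContext
// in Lucene 5, and the test wrappers (ThrowingAtomicReaderWrapper ->
// ThrowingLeafReaderWrapper) follow the rename. Per-segment code now reads, with the
// reader variable purely illustrative:
//
//     for (LeafReaderContext ctx : directoryReader.leaves()) {
//         LeafReader leaf = ctx.reader();
//         // per-segment work goes here
//     }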
lowLevelRatio; @@ -209,12 +209,12 @@ public class RandomExceptionCircuitBreakerTests extends ElasticsearchIntegration } @Override - public AtomicReader wrap(AtomicReader reader) { - return new ThrowingAtomicReaderWrapper(reader, this); + public LeafReader wrap(LeafReader reader) { + return new ThrowingLeafReaderWrapper(reader, this); } @Override - public void maybeThrow(ThrowingAtomicReaderWrapper.Flags flag) throws IOException { + public void maybeThrow(ThrowingLeafReaderWrapper.Flags flag) throws IOException { switch (flag) { case Fields: break; diff --git a/src/test/java/org/elasticsearch/indices/stats/IndexStatsTests.java b/src/test/java/org/elasticsearch/indices/stats/IndexStatsTests.java index 65943a377ed..ee63ac126a4 100644 --- a/src/test/java/org/elasticsearch/indices/stats/IndexStatsTests.java +++ b/src/test/java/org/elasticsearch/indices/stats/IndexStatsTests.java @@ -539,7 +539,7 @@ public class IndexStatsTests extends ElasticsearchIntegrationTest { assertThat(stats.getTotal().getSegments(), notNullValue()); assertThat(stats.getTotal().getSegments().getCount(), equalTo((long) test1.totalNumShards)); - assumeTrue(org.elasticsearch.Version.CURRENT.luceneVersion != Version.LUCENE_46); + assumeTrue(org.elasticsearch.Version.CURRENT.luceneVersion != Version.LUCENE_4_6_0); assertThat(stats.getTotal().getSegments().getMemoryInBytes(), greaterThan(0l)); } diff --git a/src/test/java/org/elasticsearch/indices/store/StrictDistributor.java b/src/test/java/org/elasticsearch/indices/store/StrictDistributor.java index 726e0cb0a90..1229ef27475 100644 --- a/src/test/java/org/elasticsearch/indices/store/StrictDistributor.java +++ b/src/test/java/org/elasticsearch/indices/store/StrictDistributor.java @@ -40,7 +40,7 @@ public class StrictDistributor extends AbstractDistributor { } @Override - public Directory doAny() { + public Directory doAny() throws IOException { for (Directory delegate : delegates) { assertThat(getUsableSpace(delegate), greaterThan(0L)); } diff --git a/src/test/java/org/elasticsearch/nested/SimpleNestedTests.java b/src/test/java/org/elasticsearch/nested/SimpleNestedTests.java index e25368188c8..a111f7634e5 100644 --- a/src/test/java/org/elasticsearch/nested/SimpleNestedTests.java +++ b/src/test/java/org/elasticsearch/nested/SimpleNestedTests.java @@ -1186,7 +1186,7 @@ public class SimpleNestedTests extends ElasticsearchIntegrationTest { // No nested mapping yet, there shouldn't be anything in the fixed bit set cache ClusterStatsResponse clusterStatsResponse = client().admin().cluster().prepareClusterStats().get(); - assertThat(clusterStatsResponse.getIndicesStats().getSegments().getFixedBitSetMemoryInBytes(), equalTo(0l)); + assertThat(clusterStatsResponse.getIndicesStats().getSegments().getBitsetMemoryInBytes(), equalTo(0l)); // Now add nested mapping assertAcked( @@ -1207,7 +1207,7 @@ public class SimpleNestedTests extends ElasticsearchIntegrationTest { if (loadFixedBitSeLazily) { clusterStatsResponse = client().admin().cluster().prepareClusterStats().get(); - assertThat(clusterStatsResponse.getIndicesStats().getSegments().getFixedBitSetMemoryInBytes(), equalTo(0l)); + assertThat(clusterStatsResponse.getIndicesStats().getSegments().getBitsetMemoryInBytes(), equalTo(0l)); // only when querying with nested the fixed bitsets are loaded SearchResponse searchResponse = client().prepareSearch("test") @@ -1217,11 +1217,11 @@ public class SimpleNestedTests extends ElasticsearchIntegrationTest { assertThat(searchResponse.getHits().totalHits(), equalTo(5l)); } clusterStatsResponse = 
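// Two renames drive the assertion changes here: the segment-stats accessor went from
// getFixedBitSetMemoryInBytes to getBitsetMemoryInBytes, and Version constants gained
// explicit digits (Version.LUCENE_46 -> Version.LUCENE_4_6_0). Reading the renamed stat:
//
//     ClusterStatsResponse stats = client().admin().cluster().prepareClusterStats().get();
//     long bitsetBytes = stats.getIndicesStats().getSegments().getBitsetMemoryInBytes();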
client().admin().cluster().prepareClusterStats().get(); - assertThat(clusterStatsResponse.getIndicesStats().getSegments().getFixedBitSetMemoryInBytes(), greaterThan(0l)); + assertThat(clusterStatsResponse.getIndicesStats().getSegments().getBitsetMemoryInBytes(), greaterThan(0l)); assertAcked(client().admin().indices().prepareDelete("test")); clusterStatsResponse = client().admin().cluster().prepareClusterStats().get(); - assertThat(clusterStatsResponse.getIndicesStats().getSegments().getFixedBitSetMemoryInBytes(), equalTo(0l)); + assertThat(clusterStatsResponse.getIndicesStats().getSegments().getBitsetMemoryInBytes(), equalTo(0l)); } /** diff --git a/src/test/java/org/elasticsearch/rest/action/admin/indices/upgrade/UpgradeReallyOldIndexTest.java b/src/test/java/org/elasticsearch/rest/action/admin/indices/upgrade/UpgradeReallyOldIndexTest.java deleted file mode 100644 index 8df47d83430..00000000000 --- a/src/test/java/org/elasticsearch/rest/action/admin/indices/upgrade/UpgradeReallyOldIndexTest.java +++ /dev/null @@ -1,92 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -package org.elasticsearch.rest.action.admin.indices.upgrade; - -import org.apache.http.impl.client.HttpClients; -import org.apache.lucene.util.LuceneTestCase; -import org.apache.lucene.util.LuceneTestCase.SuppressCodecs; -import org.elasticsearch.action.admin.cluster.node.info.NodeInfo; -import org.elasticsearch.action.admin.cluster.node.info.NodesInfoResponse; -import org.elasticsearch.action.admin.indices.get.GetIndexResponse; -import org.elasticsearch.action.search.SearchResponse; -import org.elasticsearch.client.Client; -import org.elasticsearch.common.settings.ImmutableSettings; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.transport.InetSocketTransportAddress; -import org.elasticsearch.common.transport.TransportAddress; -import org.elasticsearch.node.internal.InternalNode; -import org.elasticsearch.test.ElasticsearchIntegrationTest; -import org.elasticsearch.test.rest.client.http.HttpRequestBuilder; - -import java.io.File; -import java.net.InetSocketAddress; -import java.util.Arrays; - -import static org.hamcrest.Matchers.greaterThanOrEqualTo; - -@SuppressCodecs({"Lucene3x", "MockFixedIntBlock", "MockVariableIntBlock", "MockSep", "MockRandom", "Lucene40", "Lucene41", "Appending", "Lucene42", "Lucene45", "Lucene46", "Lucene49"}) -@ElasticsearchIntegrationTest.ClusterScope(scope = ElasticsearchIntegrationTest.Scope.TEST, numDataNodes = 0, minNumDataNodes = 0, maxNumDataNodes = 0) -public class UpgradeReallyOldIndexTest extends ElasticsearchIntegrationTest { - - public void testUpgrade_0_20() throws Exception { - // If this assert trips it means we are not suppressing enough codecs up above: - assertFalse("test infra is broken!", LuceneTestCase.OLD_FORMAT_IMPERSONATION_IS_ACTIVE); - Settings baseSettings = prepareBackwardsDataDir(new File(getClass().getResource("index-0.20.zip").toURI())); - internalCluster().startNode(ImmutableSettings.builder() - .put(baseSettings) - .put(InternalNode.HTTP_ENABLED, true) - .build()); - ensureGreen("test"); - - assertIndexSanity(); - - HttpRequestBuilder httpClient = httpClient(); - - UpgradeTest.assertNotUpgraded(httpClient, "test"); - UpgradeTest.runUpgrade(httpClient, "test", "wait_for_completion", "true"); - UpgradeTest.assertUpgraded(httpClient, "test"); - } - - void assertIndexSanity() { - GetIndexResponse getIndexResponse = client().admin().indices().prepareGetIndex().get(); - logger.info("Found indices: {}", Arrays.toString(getIndexResponse.indices())); - assertEquals(1, getIndexResponse.indices().length); - assertEquals("test", getIndexResponse.indices()[0]); - ensureYellow("test"); - SearchResponse test = client().prepareSearch("test").get(); - assertThat(test.getHits().getTotalHits(), greaterThanOrEqualTo(1l)); - } - - static HttpRequestBuilder httpClient() { - NodeInfo info = nodeInfo(client()); - info.getHttp().address().boundAddress(); - TransportAddress publishAddress = info.getHttp().address().publishAddress(); - assertEquals(1, publishAddress.uniqueAddressTypeId()); - InetSocketAddress address = ((InetSocketTransportAddress) publishAddress).address(); - return new HttpRequestBuilder(HttpClients.createDefault()).host(address.getHostName()).port(address.getPort()); - } - - static NodeInfo nodeInfo(final Client client) { - final NodesInfoResponse nodeInfos = client.admin().cluster().prepareNodesInfo().get(); - final NodeInfo[] nodes = nodeInfos.getNodes(); - assertEquals(1, nodes.length); - return nodes[0]; - } -} diff --git 
a/src/test/java/org/elasticsearch/search/aggregations/support/ScriptValuesTests.java b/src/test/java/org/elasticsearch/search/aggregations/support/ScriptValuesTests.java index a8e3fd6df01..728ec7037d2 100644 --- a/src/test/java/org/elasticsearch/search/aggregations/support/ScriptValuesTests.java +++ b/src/test/java/org/elasticsearch/search/aggregations/support/ScriptValuesTests.java @@ -20,7 +20,7 @@ package org.elasticsearch.search.aggregations.support; import com.carrotsearch.randomizedtesting.generators.RandomStrings; -import org.apache.lucene.index.AtomicReaderContext; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.search.Scorer; import org.apache.lucene.util.BytesRef; import org.elasticsearch.script.SearchScript; @@ -65,7 +65,7 @@ public class ScriptValuesTests extends ElasticsearchTestCase { } @Override - public void setNextReader(AtomicReaderContext reader) { + public void setNextReader(LeafReaderContext reader) { } @Override diff --git a/src/test/java/org/elasticsearch/search/basic/SearchWithRandomExceptionsTests.java b/src/test/java/org/elasticsearch/search/basic/SearchWithRandomExceptionsTests.java index ba06895b812..80f5ac2b651 100644 --- a/src/test/java/org/elasticsearch/search/basic/SearchWithRandomExceptionsTests.java +++ b/src/test/java/org/elasticsearch/search/basic/SearchWithRandomExceptionsTests.java @@ -19,7 +19,7 @@ package org.elasticsearch.search.basic; -import org.apache.lucene.index.AtomicReader; +import org.apache.lucene.index.LeafReader; import org.apache.lucene.index.DirectoryReader; import org.apache.lucene.util.English; import org.elasticsearch.ElasticsearchException; @@ -36,7 +36,7 @@ import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.index.query.QueryBuilders; import org.elasticsearch.test.ElasticsearchIntegrationTest; import org.elasticsearch.test.engine.MockInternalEngine; -import org.elasticsearch.test.engine.ThrowingAtomicReaderWrapper; +import org.elasticsearch.test.engine.ThrowingLeafReaderWrapper; import org.elasticsearch.test.junit.annotations.TestLogging; import org.elasticsearch.test.store.MockDirectoryHelper; import org.elasticsearch.test.store.MockFSDirectoryService; @@ -292,7 +292,7 @@ public class SearchWithRandomExceptionsTests extends ElasticsearchIntegrationTes public static class RandomExceptionDirectoryReaderWrapper extends MockInternalEngine.DirectoryReaderWrapper { private final Settings settings; - static class ThrowingSubReaderWrapper extends SubReaderWrapper implements ThrowingAtomicReaderWrapper.Thrower { + static class ThrowingSubReaderWrapper extends SubReaderWrapper implements ThrowingLeafReaderWrapper.Thrower { private final Random random; private final double topLevelRatio; private final double lowLevelRatio; @@ -305,12 +305,12 @@ public class SearchWithRandomExceptionsTests extends ElasticsearchIntegrationTes } @Override - public AtomicReader wrap(AtomicReader reader) { - return new ThrowingAtomicReaderWrapper(reader, this); + public LeafReader wrap(LeafReader reader) { + return new ThrowingLeafReaderWrapper(reader, this); } @Override - public void maybeThrow(ThrowingAtomicReaderWrapper.Flags flag) throws IOException { + public void maybeThrow(ThrowingLeafReaderWrapper.Flags flag) throws IOException { switch (flag) { case Fields: case TermVectors: diff --git a/src/test/java/org/elasticsearch/search/child/SimpleChildQuerySearchTests.java b/src/test/java/org/elasticsearch/search/child/SimpleChildQuerySearchTests.java index b4657d14d10..5a1315dc104 100644 --- 
a/src/test/java/org/elasticsearch/search/child/SimpleChildQuerySearchTests.java +++ b/src/test/java/org/elasticsearch/search/child/SimpleChildQuerySearchTests.java @@ -70,6 +70,7 @@ import static org.elasticsearch.index.query.QueryBuilders.constantScoreQuery; import static org.elasticsearch.index.query.functionscore.ScoreFunctionBuilders.factorFunction; import static org.elasticsearch.index.query.functionscore.ScoreFunctionBuilders.scriptFunction; import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.*; +import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse; import static org.hamcrest.Matchers.*; /** @@ -370,7 +371,7 @@ public class SimpleChildQuerySearchTests extends ElasticsearchIntegrationTest { // No _parent field yet, there shouldn't be anything in the parent id cache ClusterStatsResponse clusterStatsResponse = client().admin().cluster().prepareClusterStats().get(); - assertThat(clusterStatsResponse.getIndicesStats().getSegments().getFixedBitSetMemoryInBytes(), equalTo(0l)); + assertThat(clusterStatsResponse.getIndicesStats().getSegments().getBitsetMemoryInBytes(), equalTo(0l)); // Now add mapping + children assertAcked( @@ -388,7 +389,7 @@ public class SimpleChildQuerySearchTests extends ElasticsearchIntegrationTest { if (loadFixedBitSetLazily) { clusterStatsResponse = client().admin().cluster().prepareClusterStats().get(); - assertThat(clusterStatsResponse.getIndicesStats().getSegments().getFixedBitSetMemoryInBytes(), equalTo(0l)); + assertThat(clusterStatsResponse.getIndicesStats().getSegments().getBitsetMemoryInBytes(), equalTo(0l)); // only when querying with has_child the fixed bitsets are loaded SearchResponse searchResponse = client().prepareSearch("test") @@ -399,11 +400,11 @@ public class SimpleChildQuerySearchTests extends ElasticsearchIntegrationTest { assertThat(searchResponse.getHits().totalHits(), equalTo(1l)); } clusterStatsResponse = client().admin().cluster().prepareClusterStats().get(); - assertThat(clusterStatsResponse.getIndicesStats().getSegments().getFixedBitSetMemoryInBytes(), greaterThan(0l)); + assertThat(clusterStatsResponse.getIndicesStats().getSegments().getBitsetMemoryInBytes(), greaterThan(0l)); assertAcked(client().admin().indices().prepareDelete("test")); clusterStatsResponse = client().admin().cluster().prepareClusterStats().get(); - assertThat(clusterStatsResponse.getIndicesStats().getSegments().getFixedBitSetMemoryInBytes(), equalTo(0l)); + assertThat(clusterStatsResponse.getIndicesStats().getSegments().getBitsetMemoryInBytes(), equalTo(0l)); } @Test @@ -2005,7 +2006,6 @@ public class SimpleChildQuerySearchTests extends ElasticsearchIntegrationTest { assertNoFailures(scrollResponse); assertThat(scrollResponse.getHits().totalHits(), equalTo(10l)); - int scannedDocs = 0; do { scrollResponse = client() diff --git a/src/test/java/org/elasticsearch/search/geo/GeoFilterTests.java b/src/test/java/org/elasticsearch/search/geo/GeoFilterTests.java index 5b2ad26cf37..b1abb267fe0 100644 --- a/src/test/java/org/elasticsearch/search/geo/GeoFilterTests.java +++ b/src/test/java/org/elasticsearch/search/geo/GeoFilterTests.java @@ -607,6 +607,10 @@ public class GeoFilterTests extends ElasticsearchIntegrationTest { } protected static boolean testRelationSupport(SpatialOperation relation) { + if (relation == SpatialOperation.IsDisjointTo) { + // disjoint works in terms of intersection + relation = SpatialOperation.Intersects; + } try { GeohashPrefixTree tree = new GeohashPrefixTree(SpatialContext.GEO, 3); 
RecursivePrefixTreeStrategy strategy = new RecursivePrefixTreeStrategy(tree, "area"); @@ -615,6 +619,7 @@ public class GeoFilterTests extends ElasticsearchIntegrationTest { strategy.makeFilter(args); return true; } catch (UnsupportedSpatialOperation e) { + e.printStackTrace(); return false; } } diff --git a/src/test/java/org/elasticsearch/search/query/SimpleQueryTests.java b/src/test/java/org/elasticsearch/search/query/SimpleQueryTests.java index 7af3e3fd2a7..126a20abab8 100644 --- a/src/test/java/org/elasticsearch/search/query/SimpleQueryTests.java +++ b/src/test/java/org/elasticsearch/search/query/SimpleQueryTests.java @@ -475,7 +475,7 @@ public class SimpleQueryTests extends ElasticsearchIntegrationTest { cluster().wipeIndices("test"); } catch (MapperParsingException ex) { assertThat(version.toString(), version.onOrAfter(Version.V_1_0_0_RC2), equalTo(true)); - assertThat(ex.getCause().getMessage(), equalTo("'omit_term_freq_and_positions' is not supported anymore - use ['index_options' : 'DOCS_ONLY'] instead")); + assertThat(ex.getCause().getMessage(), equalTo("'omit_term_freq_and_positions' is not supported anymore - use ['index_options' : 'docs'] instead")); } version = randomVersion(); } diff --git a/src/test/java/org/elasticsearch/search/suggest/CompletionTokenStreamTest.java b/src/test/java/org/elasticsearch/search/suggest/CompletionTokenStreamTest.java index 241e46c0e44..af36c5739f2 100644 --- a/src/test/java/org/elasticsearch/search/suggest/CompletionTokenStreamTest.java +++ b/src/test/java/org/elasticsearch/search/suggest/CompletionTokenStreamTest.java @@ -19,6 +19,7 @@ package org.elasticsearch.search.suggest; import org.apache.lucene.analysis.MockTokenizer; +import org.apache.lucene.analysis.Tokenizer; import org.apache.lucene.analysis.TokenFilter; import org.apache.lucene.analysis.TokenStream; import org.apache.lucene.analysis.core.SimpleAnalyzer; @@ -43,11 +44,12 @@ import static org.hamcrest.Matchers.equalTo; public class CompletionTokenStreamTest extends ElasticsearchTokenStreamTestCase { - final XAnalyzingSuggester suggester = new XAnalyzingSuggester(new SimpleAnalyzer(TEST_VERSION_CURRENT)); + final XAnalyzingSuggester suggester = new XAnalyzingSuggester(new SimpleAnalyzer()); @Test public void testSuggestTokenFilter() throws Exception { - TokenStream tokenStream = new MockTokenizer(new StringReader("mykeyword"), MockTokenizer.WHITESPACE, true); + Tokenizer tokenStream = new MockTokenizer(MockTokenizer.WHITESPACE, true); + tokenStream.setReader(new StringReader("mykeyword")); BytesRef payload = new BytesRef("Surface keyword|friggin payload|10"); TokenStream suggestTokenStream = new ByteTermAttrToCharTermAttrFilter(new CompletionTokenStream(tokenStream, payload, new CompletionTokenStream.ToFiniteStrings() { @Override @@ -63,7 +65,8 @@ public class CompletionTokenStreamTest extends ElasticsearchTokenStreamTestCase Builder builder = new SynonymMap.Builder(true); builder.add(new CharsRef("mykeyword"), new CharsRef("mysynonym"), true); - MockTokenizer tokenizer = new MockTokenizer(new StringReader("mykeyword"), MockTokenizer.WHITESPACE, true); + Tokenizer tokenizer = new MockTokenizer(MockTokenizer.WHITESPACE, true); + tokenizer.setReader(new StringReader("mykeyword")); SynonymFilter filter = new SynonymFilter(tokenizer, builder.build(), true); BytesRef payload = new BytesRef("Surface keyword|friggin payload|10"); @@ -87,7 +90,8 @@ public class CompletionTokenStreamTest extends ElasticsearchTokenStreamTestCase valueBuilder.append(i+1); valueBuilder.append(" "); } - 
MockTokenizer tokenizer = new MockTokenizer(new StringReader(valueBuilder.toString()), MockTokenizer.WHITESPACE, true); + MockTokenizer tokenizer = new MockTokenizer(MockTokenizer.WHITESPACE, true); + tokenizer.setReader(new StringReader(valueBuilder.toString())); SynonymFilter filter = new SynonymFilter(tokenizer, builder.build(), true); TokenStream suggestTokenStream = new CompletionTokenStream(filter, new BytesRef("Surface keyword|friggin payload|10"), new CompletionTokenStream.ToFiniteStrings() { @@ -126,7 +130,8 @@ public class CompletionTokenStreamTest extends ElasticsearchTokenStreamTestCase valueBuilder.append(i+1); valueBuilder.append(" "); } - MockTokenizer tokenizer = new MockTokenizer(new StringReader(valueBuilder.toString()), MockTokenizer.WHITESPACE, true); + MockTokenizer tokenizer = new MockTokenizer(MockTokenizer.WHITESPACE, true); + tokenizer.setReader(new StringReader(valueBuilder.toString())); SynonymFilter filter = new SynonymFilter(tokenizer, builder.build(), true); TokenStream suggestTokenStream = new CompletionTokenStream(filter, new BytesRef("Surface keyword|friggin payload|10"), new CompletionTokenStream.ToFiniteStrings() { @@ -145,9 +150,10 @@ public class CompletionTokenStreamTest extends ElasticsearchTokenStreamTestCase @Test public void testSuggestTokenFilterProperlyDelegateInputStream() throws Exception { - TokenStream tokenStream = new MockTokenizer(new StringReader("mykeyword"), MockTokenizer.WHITESPACE, true); + Tokenizer tokenizer = new MockTokenizer(MockTokenizer.WHITESPACE, true); + tokenizer.setReader(new StringReader("mykeyword")); BytesRef payload = new BytesRef("Surface keyword|friggin payload|10"); - TokenStream suggestTokenStream = new ByteTermAttrToCharTermAttrFilter(new CompletionTokenStream(tokenStream, payload, new CompletionTokenStream.ToFiniteStrings() { + TokenStream suggestTokenStream = new ByteTermAttrToCharTermAttrFilter(new CompletionTokenStream(tokenizer, payload, new CompletionTokenStream.ToFiniteStrings() { @Override public Set toFiniteStrings(TokenStream stream) throws IOException { return suggester.toFiniteStrings(suggester.getTokenStreamToAutomaton(), stream); diff --git a/src/test/java/org/elasticsearch/search/suggest/completion/AnalyzingCompletionLookupProviderV1.java b/src/test/java/org/elasticsearch/search/suggest/completion/AnalyzingCompletionLookupProviderV1.java index a774966ce76..4b459342fac 100644 --- a/src/test/java/org/elasticsearch/search/suggest/completion/AnalyzingCompletionLookupProviderV1.java +++ b/src/test/java/org/elasticsearch/search/suggest/completion/AnalyzingCompletionLookupProviderV1.java @@ -21,19 +21,26 @@ package org.elasticsearch.search.suggest.completion; import com.carrotsearch.hppc.ObjectLongOpenHashMap; import org.apache.lucene.analysis.TokenStream; -import org.apache.lucene.codecs.*; -import org.apache.lucene.index.FieldInfo; +import org.apache.lucene.codecs.CodecUtil; +import org.apache.lucene.codecs.FieldsConsumer; +import org.apache.lucene.index.*; +import org.apache.lucene.search.DocIdSetIterator; import org.apache.lucene.search.suggest.Lookup; import org.apache.lucene.search.suggest.analyzing.XAnalyzingSuggester; import org.apache.lucene.search.suggest.analyzing.XFuzzySuggester; import org.apache.lucene.store.IndexInput; import org.apache.lucene.store.IndexOutput; +import org.apache.lucene.util.Accountable; +import org.apache.lucene.util.Accountables; import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.IOUtils; import org.apache.lucene.util.IntsRef; import 
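// The CompletionTokenStreamTest hunks above show Lucene 5's two-step tokenizer setup: a
// Tokenizer is constructed without a Reader and receives its input via setReader():
//
//     Tokenizer tokenizer = new MockTokenizer(MockTokenizer.WHITESPACE, true);
//     tokenizer.setReader(new StringReader("mykeyword"));
//     // the stream is then consumed as usual (reset / incrementToken / end / close)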
org.apache.lucene.util.automaton.Automaton; -import org.apache.lucene.util.fst.*; +import org.apache.lucene.util.fst.ByteSequenceOutputs; +import org.apache.lucene.util.fst.FST; +import org.apache.lucene.util.fst.PairOutputs; import org.apache.lucene.util.fst.PairOutputs.Pair; +import org.apache.lucene.util.fst.PositiveIntOutputs; import org.elasticsearch.common.regex.Regex; import org.elasticsearch.index.mapper.core.CompletionFieldMapper; import org.elasticsearch.search.suggest.completion.AnalyzingCompletionLookupProvider.AnalyzingSuggestHolder; @@ -42,7 +49,11 @@ import org.elasticsearch.search.suggest.completion.Completion090PostingsFormat.L import org.elasticsearch.search.suggest.context.ContextMapping.ContextQuery; import java.io.IOException; -import java.util.*; +import java.util.HashMap; +import java.util.Map; +import java.util.Set; +import java.util.TreeMap; + /** * This is an older implementation of the AnalyzingCompletionLookupProvider class * We use this to test for backwards compatibility in our tests, namely @@ -83,7 +94,7 @@ public class AnalyzingCompletionLookupProviderV1 extends CompletionLookupProvide int options = preserveSep ? XAnalyzingSuggester.PRESERVE_SEP : 0; // needs to fixed in the suggester first before it can be supported //options |= exactFirst ? XAnalyzingSuggester.EXACT_FIRST : 0; - prototype = new XAnalyzingSuggester(null,null, null, options, maxSurfaceFormsPerAnalyzedForm, maxGraphExpansions, preservePositionIncrements, + prototype = new XAnalyzingSuggester(null, null, null, options, maxSurfaceFormsPerAnalyzedForm, maxGraphExpansions, preservePositionIncrements, null, false, 1, SEP_LABEL, PAYLOAD_SEP, END_BYTE, XAnalyzingSuggester.HOLE_CHARACTER); } @@ -94,9 +105,10 @@ public class AnalyzingCompletionLookupProviderV1 extends CompletionLookupProvide @Override public FieldsConsumer consumer(final IndexOutput output) throws IOException { + // TODO write index header? 
CodecUtil.writeHeader(output, CODEC_NAME, CODEC_VERSION); return new FieldsConsumer() { - private Map fieldOffsets = new HashMap<>(); + private Map fieldOffsets = new HashMap<>(); @Override public void close() throws IOException { @@ -106,108 +118,80 @@ public class AnalyzingCompletionLookupProviderV1 extends CompletionLookupProvide */ long pointer = output.getFilePointer(); output.writeVInt(fieldOffsets.size()); - for (Map.Entry entry : fieldOffsets.entrySet()) { - output.writeString(entry.getKey().name); + for (Map.Entry entry : fieldOffsets.entrySet()) { + output.writeString(entry.getKey()); output.writeVLong(entry.getValue()); } output.writeLong(pointer); - output.flush(); } finally { IOUtils.close(output); } } @Override - public TermsConsumer addField(final FieldInfo field) throws IOException { - - return new TermsConsumer() { - final XAnalyzingSuggester.XBuilder builder = new XAnalyzingSuggester.XBuilder(maxSurfaceFormsPerAnalyzedForm, hasPayloads, PAYLOAD_SEP); - final CompletionPostingsConsumer postingsConsumer = new CompletionPostingsConsumer(AnalyzingCompletionLookupProviderV1.this, builder); - - @Override - public PostingsConsumer startTerm(BytesRef text) throws IOException { - builder.startTerm(text); - return postingsConsumer; + public void write(Fields fields) throws IOException { + for (String field : fields) { + Terms terms = fields.terms(field); + if (terms == null) { + continue; } - - @Override - public Comparator getComparator() throws IOException { - return BytesRef.getUTF8SortedAsUnicodeComparator(); + TermsEnum termsEnum = terms.iterator(null); + DocsAndPositionsEnum docsEnum = null; + final SuggestPayload spare = new SuggestPayload(); + int maxAnalyzedPathsForOneInput = 0; + final XAnalyzingSuggester.XBuilder builder = new XAnalyzingSuggester.XBuilder(maxSurfaceFormsPerAnalyzedForm, hasPayloads, XAnalyzingSuggester.PAYLOAD_SEP); + int docCount = 0; + while (true) { + BytesRef term = termsEnum.next(); + if (term == null) { + break; + } + docsEnum = termsEnum.docsAndPositions(null, docsEnum, DocsAndPositionsEnum.FLAG_PAYLOADS); + builder.startTerm(term); + int docFreq = 0; + while (docsEnum.nextDoc() != DocIdSetIterator.NO_MORE_DOCS) { + for (int i = 0; i < docsEnum.freq(); i++) { + final int position = docsEnum.nextPosition(); + AnalyzingCompletionLookupProviderV1.this.parsePayload(docsEnum.getPayload(), spare); + builder.addSurface(spare.surfaceForm.get(), spare.payload.get(), spare.weight); + // multi fields have the same surface form so we sum up here + maxAnalyzedPathsForOneInput = Math.max(maxAnalyzedPathsForOneInput, position + 1); + } + docFreq++; + docCount = Math.max(docCount, docsEnum.docID() + 1); + } + builder.finishTerm(docFreq); } - - @Override - public void finishTerm(BytesRef text, TermStats stats) throws IOException { - builder.finishTerm(stats.docFreq); // use doc freq as a fallback - } - - @Override - public void finish(long sumTotalTermFreq, long sumDocFreq, int docCount) throws IOException { - /* - * Here we are done processing the field and we can - * buid the FST and write it to disk. - */ - FST> build = builder.build(); - assert build != null || docCount == 0 : "the FST is null but docCount is != 0 actual value: [" + docCount + "]"; + /* + * Here we are done processing the field and we can + * buid the FST and write it to disk. 
+ */ + FST> build = builder.build(); + assert build != null || docCount == 0 : "the FST is null but docCount is != 0 actual value: [" + docCount + "]"; /* * it's possible that the FST is null if we have 2 segments that get merged * and all docs that have a value in this field are deleted. This will cause * a consumer to be created but it doesn't consume any values causing the FSTBuilder * to return null. */ - if (build != null) { - fieldOffsets.put(field, output.getFilePointer()); - build.save(output); + if (build != null) { + fieldOffsets.put(field, output.getFilePointer()); + build.save(output); /* write some more meta-info */ - output.writeVInt(postingsConsumer.getMaxAnalyzedPathsForOneInput()); - output.writeVInt(maxSurfaceFormsPerAnalyzedForm); - output.writeInt(maxGraphExpansions); // can be negative - int options = 0; - options |= preserveSep ? SERIALIZE_PRESERVE_SEPERATORS : 0; - options |= hasPayloads ? SERIALIZE_HAS_PAYLOADS : 0; - options |= preservePositionIncrements ? SERIALIZE_PRESERVE_POSITION_INCREMENTS : 0; - output.writeVInt(options); - } + output.writeVInt(maxAnalyzedPathsForOneInput); + output.writeVInt(maxSurfaceFormsPerAnalyzedForm); + output.writeInt(maxGraphExpansions); // can be negative + int options = 0; + options |= preserveSep ? SERIALIZE_PRESERVE_SEPERATORS : 0; + options |= hasPayloads ? SERIALIZE_HAS_PAYLOADS : 0; + options |= preservePositionIncrements ? SERIALIZE_PRESERVE_POSITION_INCREMENTS : 0; + output.writeVInt(options); } - }; + } } }; } - private static final class CompletionPostingsConsumer extends PostingsConsumer { - private final SuggestPayload spare = new SuggestPayload(); - private AnalyzingCompletionLookupProviderV1 analyzingSuggestLookupProvider; - private XAnalyzingSuggester.XBuilder builder; - private int maxAnalyzedPathsForOneInput = 0; - - public CompletionPostingsConsumer(AnalyzingCompletionLookupProviderV1 analyzingSuggestLookupProvider, XAnalyzingSuggester.XBuilder builder) { - this.analyzingSuggestLookupProvider = analyzingSuggestLookupProvider; - this.builder = builder; - } - - @Override - public void startDoc(int docID, int freq) throws IOException { - } - - @Override - public void addPosition(int position, BytesRef payload, int startOffset, int endOffset) throws IOException { - analyzingSuggestLookupProvider.parsePayload(payload, spare); - builder.addSurface(spare.surfaceForm.get(), spare.payload.get(), spare.weight); - // multi fields have the same surface form so we sum up here - maxAnalyzedPathsForOneInput = Math.max(maxAnalyzedPathsForOneInput, position + 1); - } - - @Override - public void finishDoc() throws IOException { - } - - public int getMaxAnalyzedPathsForOneInput() { - return maxAnalyzedPathsForOneInput; - } - } - - ; - - @Override public LookupFactory load(IndexInput input) throws IOException { CodecUtil.checkHeader(input, CODEC_NAME, CODEC_VERSION, CODEC_VERSION); @@ -253,7 +237,7 @@ public class AnalyzingCompletionLookupProviderV1 extends CompletionLookupProvide XAnalyzingSuggester suggester; if (suggestionContext.isFuzzy()) { - suggester = new XFuzzySuggester(mapper.indexAnalyzer(),queryPrefix, mapper.searchAnalyzer(), flags, + suggester = new XFuzzySuggester(mapper.indexAnalyzer(), queryPrefix, mapper.searchAnalyzer(), flags, analyzingSuggestHolder.maxSurfaceFormsPerAnalyzedForm, analyzingSuggestHolder.maxGraphExpansions, suggestionContext.getFuzzyEditDistance(), suggestionContext.isFuzzyTranspositions(), suggestionContext.getFuzzyPrefixLength(), suggestionContext.getFuzzyMinLength(), false, @@ -273,7 +257,7 @@ 
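// The AnalyzingCompletionLookupProviderV1 rewrite above replaces the Lucene 4.x codec
// callbacks (addField(FieldInfo) returning a TermsConsumer that was fed
// startTerm/startDoc/addPosition) with Lucene 5's write(Fields), where the consumer
// iterates terms and postings itself. The new write path, reduced to a skeleton:
//
//     public void write(Fields fields) throws IOException {
//         for (String field : fields) {
//             Terms terms = fields.terms(field);
//             if (terms == null) {
//                 continue;
//             }
//             TermsEnum termsEnum = terms.iterator(null);
//             DocsAndPositionsEnum postings = null;
//             for (BytesRef term = termsEnum.next(); term != null; term = termsEnum.next()) {
//                 postings = termsEnum.docsAndPositions(null, postings, DocsAndPositionsEnum.FLAG_PAYLOADS);
//                 while (postings.nextDoc() != DocIdSetIterator.NO_MORE_DOCS) {
//                     for (int i = 0; i < postings.freq(); i++) {
//                         int position = postings.nextPosition();
//                         BytesRef payload = postings.getPayload();
//                         // feed surface form / payload / weight to the suggester builder
//                     }
//                 }
//             }
//         }
//     }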
public class AnalyzingCompletionLookupProviderV1 extends CompletionLookupProvide public CompletionStats stats(String... fields) { long sizeInBytes = 0; ObjectLongOpenHashMap completionFields = null; - if (fields != null && fields.length > 0) { + if (fields != null && fields.length > 0) { completionFields = new ObjectLongOpenHashMap<>(fields.length); } @@ -293,6 +277,7 @@ public class AnalyzingCompletionLookupProviderV1 extends CompletionLookupProvide return new CompletionStats(sizeInBytes, completionFields); } + @Override AnalyzingSuggestHolder getAnalyzingSuggestHolder(CompletionFieldMapper mapper) { return lookupMap.get(mapper.names().indexName()); @@ -302,6 +287,11 @@ public class AnalyzingCompletionLookupProviderV1 extends CompletionLookupProvide public long ramBytesUsed() { return ramBytesUsed; } + + @Override + public Iterable getChildResources() { + return Accountables.namedAccountables("field", lookupMap); + } }; } diff --git a/src/test/java/org/elasticsearch/search/suggest/completion/CompletionPostingsFormatTest.java b/src/test/java/org/elasticsearch/search/suggest/completion/CompletionPostingsFormatTest.java index 850de7b7763..e7d11ffe0f8 100644 --- a/src/test/java/org/elasticsearch/search/suggest/completion/CompletionPostingsFormatTest.java +++ b/src/test/java/org/elasticsearch/search/suggest/completion/CompletionPostingsFormatTest.java @@ -21,17 +21,19 @@ package org.elasticsearch.search.suggest.completion; import com.google.common.collect.Lists; import org.apache.lucene.analysis.standard.StandardAnalyzer; -import org.apache.lucene.codecs.*; +import org.apache.lucene.codecs.Codec; +import org.apache.lucene.codecs.FieldsConsumer; +import org.apache.lucene.codecs.FilterCodec; +import org.apache.lucene.codecs.PostingsFormat; import org.apache.lucene.document.Document; import org.apache.lucene.index.*; -import org.apache.lucene.index.FieldInfo.DocValuesType; -import org.apache.lucene.index.FieldInfo.IndexOptions; import org.apache.lucene.search.suggest.InputIterator; import org.apache.lucene.search.suggest.Lookup; import org.apache.lucene.search.suggest.Lookup.LookupResult; import org.apache.lucene.search.suggest.analyzing.AnalyzingSuggester; import org.apache.lucene.search.suggest.analyzing.XAnalyzingSuggester; import org.apache.lucene.store.*; +import org.apache.lucene.util.Bits; import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.LineFileDocs; import org.elasticsearch.index.analysis.NamedAnalyzer; @@ -49,10 +51,7 @@ import org.junit.Test; import java.io.IOException; import java.lang.reflect.Field; -import java.util.Comparator; -import java.util.HashMap; -import java.util.List; -import java.util.Set; +import java.util.*; import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.is; @@ -72,7 +71,7 @@ public class CompletionPostingsFormatTest extends ElasticsearchTestCase { IndexInput input = dir.openInput("foo.txt", IOContext.DEFAULT); LookupFactory load = currentProvider.load(input); PostingsFormatProvider format = new PreBuiltPostingsFormatProvider(new Elasticsearch090PostingsFormat()); - NamedAnalyzer analyzer = new NamedAnalyzer("foo", new StandardAnalyzer(TEST_VERSION_CURRENT)); + NamedAnalyzer analyzer = new NamedAnalyzer("foo", new StandardAnalyzer()); Lookup lookup = load.getLookup(new CompletionFieldMapper(new Names("foo"), analyzer, analyzer, format, null, true, true, true, Integer.MAX_VALUE, AbstractFieldMapper.MultiFields.empty(), null, ContextMapping.EMPTY_MAPPING), new CompletionSuggestionContext(null)); List result = 
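// getChildResources(), added above, itemizes a ramBytesUsed() estimate;
// Accountables.namedAccountables("field", lookupMap) wraps each map entry as a named
// Accountable. A caller could walk the breakdown like this (names illustrative):
//
//     for (Accountable child : lookupFactory.getChildResources()) {
//         System.out.println(child + " -> " + child.ramBytesUsed() + " bytes");
//     }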
lookup.lookup("ge", false, 10); assertThat(result.get(0).key.toString(), equalTo("Generator - Foo Fighters")); @@ -91,7 +90,7 @@ public class CompletionPostingsFormatTest extends ElasticsearchTestCase { IndexInput input = dir.openInput("foo.txt", IOContext.DEFAULT); LookupFactory load = currentProvider.load(input); PostingsFormatProvider format = new PreBuiltPostingsFormatProvider(new Elasticsearch090PostingsFormat()); - NamedAnalyzer analyzer = new NamedAnalyzer("foo", new StandardAnalyzer(TEST_VERSION_CURRENT)); + NamedAnalyzer analyzer = new NamedAnalyzer("foo", new StandardAnalyzer()); AnalyzingCompletionLookupProvider.AnalyzingSuggestHolder analyzingSuggestHolder = load.getAnalyzingSuggestHolder(new CompletionFieldMapper(new Names("foo"), analyzer, analyzer, format, null, true, true, true, Integer.MAX_VALUE, AbstractFieldMapper.MultiFields.empty(), null, ContextMapping.EMPTY_MAPPING)); assertThat(analyzingSuggestHolder.sepLabel, is(AnalyzingCompletionLookupProviderV1.SEP_LABEL)); assertThat(analyzingSuggestHolder.payloadSep, is(AnalyzingCompletionLookupProviderV1.PAYLOAD_SEP)); @@ -109,7 +108,7 @@ public class CompletionPostingsFormatTest extends ElasticsearchTestCase { IndexInput input = dir.openInput("foo.txt", IOContext.DEFAULT); LookupFactory load = currentProvider.load(input); PostingsFormatProvider format = new PreBuiltPostingsFormatProvider(new Elasticsearch090PostingsFormat()); - NamedAnalyzer analyzer = new NamedAnalyzer("foo", new StandardAnalyzer(TEST_VERSION_CURRENT)); + NamedAnalyzer analyzer = new NamedAnalyzer("foo", new StandardAnalyzer()); AnalyzingCompletionLookupProvider.AnalyzingSuggestHolder analyzingSuggestHolder = load.getAnalyzingSuggestHolder(new CompletionFieldMapper(new Names("foo"), analyzer, analyzer, format, null, true, true, true, Integer.MAX_VALUE, AbstractFieldMapper.MultiFields.empty(), null, ContextMapping.EMPTY_MAPPING)); assertThat(analyzingSuggestHolder.sepLabel, is(XAnalyzingSuggester.SEP_LABEL)); assertThat(analyzingSuggestHolder.payloadSep, is(XAnalyzingSuggester.PAYLOAD_SEP)); @@ -125,8 +124,8 @@ public class CompletionPostingsFormatTest extends ElasticsearchTestCase { final boolean usePayloads = getRandom().nextBoolean(); final int options = preserveSeparators ? 
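// The removed getComparator() overrides above are possible because InputIterator /
// BytesRefIterator no longer declare a comparator in Lucene 5; an iterator only has to
// hand back terms via next(), as the surviving override does (weight bookkeeping elided):
//
//     @Override
//     public BytesRef next() throws IOException {
//         return index < titles.length ? new BytesRef(titles[index++]) : null;
//     }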
AnalyzingSuggester.PRESERVE_SEP : 0; - XAnalyzingSuggester reference = new XAnalyzingSuggester(new StandardAnalyzer(TEST_VERSION_CURRENT), null, new StandardAnalyzer( - TEST_VERSION_CURRENT), options, 256, -1, preservePositionIncrements, null, false, 1, XAnalyzingSuggester.SEP_LABEL, XAnalyzingSuggester.PAYLOAD_SEP, XAnalyzingSuggester.END_BYTE, XAnalyzingSuggester.HOLE_CHARACTER); + XAnalyzingSuggester reference = new XAnalyzingSuggester(new StandardAnalyzer(), null, new StandardAnalyzer(), + options, 256, -1, preservePositionIncrements, null, false, 1, XAnalyzingSuggester.SEP_LABEL, XAnalyzingSuggester.PAYLOAD_SEP, XAnalyzingSuggester.END_BYTE, XAnalyzingSuggester.HOLE_CHARACTER); LineFileDocs docs = new LineFileDocs(getRandom()); int num = scaledRandomIntBetween(150, 300); final String[] titles = new String[num]; @@ -143,11 +142,6 @@ public class CompletionPostingsFormatTest extends ElasticsearchTestCase { int index = 0; long currentWeight = -1; - @Override - public Comparator getComparator() { - return null; - } - @Override public BytesRef next() throws IOException { if (index < titles.length) { @@ -191,11 +185,6 @@ public class CompletionPostingsFormatTest extends ElasticsearchTestCase { return primaryIter.weight(); } - @Override - public Comparator getComparator() { - return primaryIter.getComparator(); - } - @Override public BytesRef next() throws IOException { return primaryIter.next(); @@ -227,7 +216,7 @@ public class CompletionPostingsFormatTest extends ElasticsearchTestCase { reference.build(iter); PostingsFormatProvider provider = new PreBuiltPostingsFormatProvider(new Elasticsearch090PostingsFormat()); - NamedAnalyzer namedAnalzyer = new NamedAnalyzer("foo", new StandardAnalyzer(TEST_VERSION_CURRENT)); + NamedAnalyzer namedAnalzyer = new NamedAnalyzer("foo", new StandardAnalyzer()); final CompletionFieldMapper mapper = new CompletionFieldMapper(new Names("foo"), namedAnalzyer, namedAnalzyer, provider, null, usePayloads, preserveSeparators, preservePositionIncrements, Integer.MAX_VALUE, AbstractFieldMapper.MultiFields.empty(), null, ContextMapping.EMPTY_MAPPING); Lookup buildAnalyzingLookup = buildAnalyzingLookup(mapper, titles, titles, weights); @@ -273,7 +262,7 @@ public class CompletionPostingsFormatTest extends ElasticsearchTestCase { return mapper.postingsFormatProvider().get(); } }; - IndexWriterConfig indexWriterConfig = new IndexWriterConfig(TEST_VERSION_CURRENT, mapper.indexAnalyzer()); + IndexWriterConfig indexWriterConfig = new IndexWriterConfig(mapper.indexAnalyzer()); indexWriterConfig.setCodec(filterCodec); IndexWriter writer = new IndexWriter(dir, indexWriterConfig); @@ -292,7 +281,7 @@ public class CompletionPostingsFormatTest extends ElasticsearchTestCase { DirectoryReader reader = DirectoryReader.open(writer, true); assertThat(reader.leaves().size(), equalTo(1)); assertThat(reader.leaves().get(0).reader().numDocs(), equalTo(weights.length)); - AtomicReaderContext atomicReaderContext = reader.leaves().get(0); + LeafReaderContext atomicReaderContext = reader.leaves().get(0); Terms luceneTerms = atomicReaderContext.reader().terms(mapper.name()); Lookup lookup = ((Completion090PostingsFormat.CompletionTerms) luceneTerms).getLookup(mapper, new CompletionSuggestionContext(null)); reader.close(); @@ -300,24 +289,35 @@ public class CompletionPostingsFormatTest extends ElasticsearchTestCase { dir.close(); return lookup; } - @Test public void testNoDocs() throws IOException { AnalyzingCompletionLookupProvider provider = new AnalyzingCompletionLookupProvider(true, false, 
true, true); RAMDirectory dir = new RAMDirectory(); IndexOutput output = dir.createOutput("foo.txt", IOContext.DEFAULT); FieldsConsumer consumer = provider.consumer(output); - FieldInfo fieldInfo = new FieldInfo("foo", true, 1, false, true, true, IndexOptions.DOCS_AND_FREQS_AND_POSITIONS, - DocValuesType.SORTED, DocValuesType.BINARY, -1, new HashMap()); - TermsConsumer addField = consumer.addField(fieldInfo); - addField.finish(0, 0, 0); + consumer.write(new Fields() { + @Override + public Iterator iterator() { + return Arrays.asList("foo").iterator(); + } + + @Override + public Terms terms(String field) throws IOException { + return null; + } + + @Override + public int size() { + return 1; + } + }); consumer.close(); output.close(); IndexInput input = dir.openInput("foo.txt", IOContext.DEFAULT); LookupFactory load = provider.load(input); PostingsFormatProvider format = new PreBuiltPostingsFormatProvider(new Elasticsearch090PostingsFormat()); - NamedAnalyzer analyzer = new NamedAnalyzer("foo", new StandardAnalyzer(TEST_VERSION_CURRENT)); + NamedAnalyzer analyzer = new NamedAnalyzer("foo", new StandardAnalyzer()); assertNull(load.getLookup(new CompletionFieldMapper(new Names("foo"), analyzer, analyzer, format, null, true, true, true, Integer.MAX_VALUE, AbstractFieldMapper.MultiFields.empty(), null, ContextMapping.EMPTY_MAPPING), new CompletionSuggestionContext(null))); dir.close(); } @@ -326,25 +326,199 @@ public class CompletionPostingsFormatTest extends ElasticsearchTestCase { private void writeData(Directory dir, Completion090PostingsFormat.CompletionLookupProvider provider) throws IOException { IndexOutput output = dir.createOutput("foo.txt", IOContext.DEFAULT); FieldsConsumer consumer = provider.consumer(output); - FieldInfo fieldInfo = new FieldInfo("foo", true, 1, false, true, true, IndexOptions.DOCS_AND_FREQS_AND_POSITIONS, - DocValuesType.SORTED, DocValuesType.BINARY, -1, new HashMap()); - TermsConsumer addField = consumer.addField(fieldInfo); + final List terms = new ArrayList<>(); + terms.add(new TermPosAndPayload("foofightersgenerator", 256 - 2, provider.buildPayload(new BytesRef("Generator - Foo Fighters"), 9, new BytesRef("id:10")))); + terms.add(new TermPosAndPayload("generator", 256 - 1, provider.buildPayload(new BytesRef("Generator - Foo Fighters"), 9, new BytesRef("id:10")))); + Fields fields = new Fields() { + @Override + public Iterator iterator() { + return Arrays.asList("foo").iterator(); + } - PostingsConsumer postingsConsumer = addField.startTerm(new BytesRef("foofightersgenerator")); - postingsConsumer.startDoc(0, 1); - postingsConsumer.addPosition(256 - 2, provider.buildPayload(new BytesRef("Generator - Foo Fighters"), 9, new BytesRef("id:10")), 0, - 1); - postingsConsumer.finishDoc(); - addField.finishTerm(new BytesRef("foofightersgenerator"), new TermStats(1, 1)); - addField.startTerm(new BytesRef("generator")); - postingsConsumer.startDoc(0, 1); - postingsConsumer.addPosition(256 - 1, provider.buildPayload(new BytesRef("Generator - Foo Fighters"), 9, new BytesRef("id:10")), 0, - 1); - postingsConsumer.finishDoc(); - addField.finishTerm(new BytesRef("generator"), new TermStats(1, 1)); - addField.finish(1, 1, 1); + @Override + public Terms terms(String field) throws IOException { + if (field.equals("foo")) { + return new Terms() { + @Override + public TermsEnum iterator(TermsEnum reuse) throws IOException { + final Iterator iterator = terms.iterator(); + return new TermsEnum() { + private TermPosAndPayload current = null; + @Override + public SeekStatus 
seekCeil(BytesRef text) throws IOException { + throw new UnsupportedOperationException(); + } + + @Override + public void seekExact(long ord) throws IOException { + throw new UnsupportedOperationException(); + } + + @Override + public BytesRef term() throws IOException { + return current == null ? null : current.term; + } + + @Override + public long ord() throws IOException { + throw new UnsupportedOperationException(); + } + + @Override + public int docFreq() throws IOException { + return current == null ? 0 : 1; + } + + @Override + public long totalTermFreq() throws IOException { + throw new UnsupportedOperationException(); + } + + @Override + public DocsEnum docs(Bits liveDocs, DocsEnum reuse, int flags) throws IOException { + throw new UnsupportedOperationException(); + } + + @Override + public DocsAndPositionsEnum docsAndPositions(Bits liveDocs, DocsAndPositionsEnum reuse, int flags) throws IOException { + final TermPosAndPayload data = current; + return new DocsAndPositionsEnum() { + boolean done = false; + @Override + public int nextPosition() throws IOException { + return current.pos; + } + + @Override + public int startOffset() throws IOException { + return 0; + } + + @Override + public int endOffset() throws IOException { + return 0; + } + + @Override + public BytesRef getPayload() throws IOException { + return current.payload; + } + + @Override + public int freq() throws IOException { + return 1; + } + + @Override + public int docID() { + if (done) { + return NO_MORE_DOCS; + } + return 0; + } + + @Override + public int nextDoc() throws IOException { + if (done) { + return NO_MORE_DOCS; + } + done = true; + return 0; + } + + @Override + public int advance(int target) throws IOException { + if (done) { + return NO_MORE_DOCS; + } + done = true; + return 0; + } + + @Override + public long cost() { + return 0; + } + }; + } + + @Override + public BytesRef next() throws IOException { + if (iterator.hasNext()) { + current = iterator.next(); + return current.term; + } + current = null; + return null; + } + }; + } + + @Override + public long size() throws IOException { + throw new UnsupportedOperationException(); + } + + @Override + public long getSumTotalTermFreq() throws IOException { + throw new UnsupportedOperationException(); + } + + @Override + public long getSumDocFreq() throws IOException { + throw new UnsupportedOperationException(); + } + + @Override + public int getDocCount() throws IOException { + throw new UnsupportedOperationException(); + } + + @Override + public boolean hasFreqs() { + throw new UnsupportedOperationException(); + } + + @Override + public boolean hasOffsets() { + throw new UnsupportedOperationException(); + } + + @Override + public boolean hasPositions() { + throw new UnsupportedOperationException(); + } + + @Override + public boolean hasPayloads() { + throw new UnsupportedOperationException(); + } + }; + } + return null; + } + + @Override + public int size() { + return 0; + } + }; + consumer.write(fields); consumer.close(); output.close(); } + + private static class TermPosAndPayload { + final BytesRef term; + final int pos; + final BytesRef payload; + + + private TermPosAndPayload(String term, int pos, BytesRef payload) { + this.term = new BytesRef(term); + this.pos = pos; + this.payload = payload; + } + } } diff --git a/src/test/java/org/elasticsearch/search/suggest/phrase/NoisyChannelSpellCheckerTests.java b/src/test/java/org/elasticsearch/search/suggest/phrase/NoisyChannelSpellCheckerTests.java index 82899b404c9..c4d4b48e28c 100644 --- 
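// writeData above now feeds the consumer a hand-rolled Fields/Terms/TermsEnum
// implementation carrying two fixed terms with their positions and payloads, instead of
// driving the removed TermsConsumer/PostingsConsumer callbacks; the Terms methods the
// consumer never pulls are left throwing UnsupportedOperationException.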
a/src/test/java/org/elasticsearch/search/suggest/phrase/NoisyChannelSpellCheckerTests.java +++ b/src/test/java/org/elasticsearch/search/suggest/phrase/NoisyChannelSpellCheckerTests.java @@ -65,11 +65,11 @@ public class NoisyChannelSpellCheckerTests extends ElasticsearchTestCase{ mapping.put("body_ngram", new Analyzer() { @Override - protected TokenStreamComponents createComponents(String fieldName, Reader reader) { - Tokenizer t = new StandardTokenizer(Version.LUCENE_41, reader); + protected TokenStreamComponents createComponents(String fieldName) { + Tokenizer t = new StandardTokenizer(); ShingleFilter tf = new ShingleFilter(t, 2, 3); tf.setOutputUnigrams(false); - return new TokenStreamComponents(t, new LowerCaseFilter(Version.LUCENE_41, tf)); + return new TokenStreamComponents(t, new LowerCaseFilter(tf)); } }); @@ -77,15 +77,15 @@ public class NoisyChannelSpellCheckerTests extends ElasticsearchTestCase{ mapping.put("body", new Analyzer() { @Override - protected TokenStreamComponents createComponents(String fieldName, Reader reader) { - Tokenizer t = new StandardTokenizer(Version.LUCENE_41, reader); - return new TokenStreamComponents(t, new LowerCaseFilter(Version.LUCENE_41, t)); + protected TokenStreamComponents createComponents(String fieldName) { + Tokenizer t = new StandardTokenizer(); + return new TokenStreamComponents(t, new LowerCaseFilter(t)); } }); - PerFieldAnalyzerWrapper wrapper = new PerFieldAnalyzerWrapper(new WhitespaceAnalyzer(Version.LUCENE_41), mapping); + PerFieldAnalyzerWrapper wrapper = new PerFieldAnalyzerWrapper(new WhitespaceAnalyzer(), mapping); - IndexWriterConfig conf = new IndexWriterConfig(Version.LUCENE_41, wrapper); + IndexWriterConfig conf = new IndexWriterConfig(wrapper); IndexWriter writer = new IndexWriter(dir, conf); BufferedReader reader = new BufferedReader(new InputStreamReader(NoisyChannelSpellCheckerTests.class.getResourceAsStream("/config/names.txt"), Charsets.UTF_8)); String line = null; @@ -156,11 +156,11 @@ public class NoisyChannelSpellCheckerTests extends ElasticsearchTestCase{ Analyzer analyzer = new Analyzer() { @Override - protected TokenStreamComponents createComponents(String fieldName, Reader reader) { - Tokenizer t = new StandardTokenizer(Version.LUCENE_41, reader); - TokenFilter filter = new LowerCaseFilter(Version.LUCENE_41, t); + protected TokenStreamComponents createComponents(String fieldName) { + Tokenizer t = new StandardTokenizer(); + TokenFilter filter = new LowerCaseFilter(t); try { - SolrSynonymParser parser = new SolrSynonymParser(true, false, new WhitespaceAnalyzer(Version.LUCENE_41)); + SolrSynonymParser parser = new SolrSynonymParser(true, false, new WhitespaceAnalyzer()); ((SolrSynonymParser) parser).parse(new StringReader("usa => usa, america, american\nursa => usa, america, american")); filter = new SynonymFilter(filter, parser.build(), true); } catch (Exception e) { @@ -198,11 +198,11 @@ public class NoisyChannelSpellCheckerTests extends ElasticsearchTestCase{ mapping.put("body_ngram", new Analyzer() { @Override - protected TokenStreamComponents createComponents(String fieldName, Reader reader) { - Tokenizer t = new StandardTokenizer(Version.LUCENE_41, reader); + protected TokenStreamComponents createComponents(String fieldName) { + Tokenizer t = new StandardTokenizer(); ShingleFilter tf = new ShingleFilter(t, 2, 3); tf.setOutputUnigrams(false); - return new TokenStreamComponents(t, new LowerCaseFilter(Version.LUCENE_41, tf)); + return new TokenStreamComponents(t, new LowerCaseFilter(tf)); } }); @@ -210,24 +210,24 
@@ public class NoisyChannelSpellCheckerTests extends ElasticsearchTestCase{ mapping.put("body", new Analyzer() { @Override - protected TokenStreamComponents createComponents(String fieldName, Reader reader) { - Tokenizer t = new StandardTokenizer(Version.LUCENE_41, reader); - return new TokenStreamComponents(t, new LowerCaseFilter(Version.LUCENE_41, t)); + protected TokenStreamComponents createComponents(String fieldName) { + Tokenizer t = new StandardTokenizer(); + return new TokenStreamComponents(t, new LowerCaseFilter(t)); } }); mapping.put("body_reverse", new Analyzer() { @Override - protected TokenStreamComponents createComponents(String fieldName, Reader reader) { - Tokenizer t = new StandardTokenizer(Version.LUCENE_41, reader); - return new TokenStreamComponents(t, new ReverseStringFilter(Version.LUCENE_41, new LowerCaseFilter(Version.LUCENE_41, t))); + protected TokenStreamComponents createComponents(String fieldName) { + Tokenizer t = new StandardTokenizer(); + return new TokenStreamComponents(t, new ReverseStringFilter(new LowerCaseFilter(t))); } }); - PerFieldAnalyzerWrapper wrapper = new PerFieldAnalyzerWrapper(new WhitespaceAnalyzer(Version.LUCENE_41), mapping); + PerFieldAnalyzerWrapper wrapper = new PerFieldAnalyzerWrapper(new WhitespaceAnalyzer(), mapping); - IndexWriterConfig conf = new IndexWriterConfig(Version.LUCENE_41, wrapper); + IndexWriterConfig conf = new IndexWriterConfig(wrapper); IndexWriter writer = new IndexWriter(dir, conf); BufferedReader reader = new BufferedReader(new InputStreamReader(NoisyChannelSpellCheckerTests.class.getResourceAsStream("/config/names.txt"), Charsets.UTF_8)); String line = null; @@ -290,11 +290,11 @@ public class NoisyChannelSpellCheckerTests extends ElasticsearchTestCase{ mapping.put("body_ngram", new Analyzer() { @Override - protected TokenStreamComponents createComponents(String fieldName, Reader reader) { - Tokenizer t = new StandardTokenizer(Version.LUCENE_41, reader); + protected TokenStreamComponents createComponents(String fieldName) { + Tokenizer t = new StandardTokenizer(); ShingleFilter tf = new ShingleFilter(t, 2, 3); tf.setOutputUnigrams(false); - return new TokenStreamComponents(t, new LowerCaseFilter(Version.LUCENE_41, tf)); + return new TokenStreamComponents(t, new LowerCaseFilter(tf)); } }); @@ -302,15 +302,15 @@ public class NoisyChannelSpellCheckerTests extends ElasticsearchTestCase{ mapping.put("body", new Analyzer() { @Override - protected TokenStreamComponents createComponents(String fieldName, Reader reader) { - Tokenizer t = new StandardTokenizer(Version.LUCENE_41, reader); - return new TokenStreamComponents(t, new LowerCaseFilter(Version.LUCENE_41, t)); + protected TokenStreamComponents createComponents(String fieldName) { + Tokenizer t = new StandardTokenizer(); + return new TokenStreamComponents(t, new LowerCaseFilter(t)); } }); - PerFieldAnalyzerWrapper wrapper = new PerFieldAnalyzerWrapper(new WhitespaceAnalyzer(Version.LUCENE_41), mapping); + PerFieldAnalyzerWrapper wrapper = new PerFieldAnalyzerWrapper(new WhitespaceAnalyzer(), mapping); - IndexWriterConfig conf = new IndexWriterConfig(Version.LUCENE_41, wrapper); + IndexWriterConfig conf = new IndexWriterConfig(wrapper); IndexWriter writer = new IndexWriter(dir, conf); BufferedReader reader = new BufferedReader(new InputStreamReader(NoisyChannelSpellCheckerTests.class.getResourceAsStream("/config/names.txt"), Charsets.UTF_8)); String line = null; @@ -365,11 +365,11 @@ public class NoisyChannelSpellCheckerTests extends ElasticsearchTestCase{ Analyzer 
analyzer = new Analyzer() { @Override - protected TokenStreamComponents createComponents(String fieldName, Reader reader) { - Tokenizer t = new StandardTokenizer(Version.LUCENE_41, reader); - TokenFilter filter = new LowerCaseFilter(Version.LUCENE_41, t); + protected TokenStreamComponents createComponents(String fieldName) { + Tokenizer t = new StandardTokenizer(); + TokenFilter filter = new LowerCaseFilter(t); try { - SolrSynonymParser parser = new SolrSynonymParser(true, false, new WhitespaceAnalyzer(Version.LUCENE_41)); + SolrSynonymParser parser = new SolrSynonymParser(true, false, new WhitespaceAnalyzer()); ((SolrSynonymParser) parser).parse(new StringReader("usa => usa, america, american\nursa => usa, america, american")); filter = new SynonymFilter(filter, parser.build(), true); } catch (Exception e) {
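The recurring change in the hunks above is the Lucene 5.0 analysis API migration: Analyzer.createComponents() lost its Reader parameter, the analysis components lost their Version arguments, and likewise IndexWriterConfig(Version, Analyzer) became IndexWriterConfig(Analyzer). A minimal before/after sketch of the pattern (illustration only, not part of the patch; the class name LowercaseAnalyzer is hypothetical):

    // Lucene 4.x (removed above):
    //   protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
    //       Tokenizer t = new StandardTokenizer(Version.LUCENE_41, reader);
    //       return new TokenStreamComponents(t, new LowerCaseFilter(Version.LUCENE_41, t));
    //   }
    //
    // Lucene 5.0: no Version argument, no Reader parameter; the Reader is supplied
    // later, when Analyzer.tokenStream(field, reader) is invoked.
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.Tokenizer;
    import org.apache.lucene.analysis.core.LowerCaseFilter;
    import org.apache.lucene.analysis.standard.StandardTokenizer;

    class LowercaseAnalyzer extends Analyzer {
        @Override
        protected TokenStreamComponents createComponents(String fieldName) {
            Tokenizer t = new StandardTokenizer();                        // no Version, no Reader
            return new TokenStreamComponents(t, new LowerCaseFilter(t)); // no Version
        }
    }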
diff --git a/src/test/java/org/elasticsearch/test/ElasticsearchIntegrationTest.java b/src/test/java/org/elasticsearch/test/ElasticsearchIntegrationTest.java index e765c35b0ba..64fc2b52a3b 100644 --- a/src/test/java/org/elasticsearch/test/ElasticsearchIntegrationTest.java +++ b/src/test/java/org/elasticsearch/test/ElasticsearchIntegrationTest.java @@ -117,6 +117,7 @@ import org.joda.time.DateTimeZone; import org.junit.*; import java.io.File; +import java.io.FileInputStream; import java.io.IOException; import java.lang.annotation.ElementType; import java.lang.annotation.Retention; @@ -1857,7 +1858,9 @@ public abstract class ElasticsearchIntegrationTest extends ElasticsearchTestCase protected Settings prepareBackwardsDataDir(File backwardsIndex) throws IOException { File indexDir = newTempDir(); File dataDir = new File(indexDir, "data"); - TestUtil.unzip(backwardsIndex, indexDir); + try (FileInputStream stream = new FileInputStream(backwardsIndex)) { + TestUtil.unzip(stream, indexDir.toPath()); + } assertTrue(dataDir.exists()); String[] list = dataDir.list(); if (list == null || list.length > 1) { diff --git a/src/test/java/org/elasticsearch/test/ElasticsearchTestCase.java b/src/test/java/org/elasticsearch/test/ElasticsearchTestCase.java index ea368d84eda..8ae868cc328 100644 --- a/src/test/java/org/elasticsearch/test/ElasticsearchTestCase.java +++ b/src/test/java/org/elasticsearch/test/ElasticsearchTestCase.java @@ -23,11 +23,10 @@ import com.carrotsearch.randomizedtesting.annotations.*; import com.carrotsearch.randomizedtesting.annotations.ThreadLeakScope.Scope; import com.google.common.base.Predicate; import com.google.common.collect.ImmutableList; -import org.apache.lucene.search.FieldCache; import org.apache.lucene.store.MockDirectoryWrapper; import org.apache.lucene.util.AbstractRandomizedTest; -import org.apache.lucene.util.LuceneTestCase; import org.apache.lucene.util.TimeUnits; +import org.apache.lucene.uninverting.UninvertingReader; import org.elasticsearch.Version; import org.elasticsearch.client.Requests; import org.elasticsearch.cluster.metadata.IndexMetaData; @@ -101,19 +100,11 @@ public abstract class ElasticsearchTestCase extends AbstractRandomizedTest { } - @Before - public void cleanFieldCache() { - FieldCache.DEFAULT.purgeAllCaches(); - } - @After public void ensureNoFieldCacheUse() { - // We use the lucene comparators, and by default they work on field cache. - // However, given the way that we use them, field cache should NEVER get loaded.
- if (getClass().getAnnotation(UsesLuceneFieldCacheOnPurpose.class) == null) { - FieldCache.CacheEntry[] entries = FieldCache.DEFAULT.getCacheEntries(); - assertEquals("fieldcache must never be used, got=" + Arrays.toString(entries), 0, entries.length); - } + // field cache should NEVER get loaded. + String[] entries = UninvertingReader.getUninvertedStats(); + assertEquals("fieldcache must never be used, got=" + Arrays.toString(entries), 0, entries.length); } /** @@ -266,7 +257,7 @@ public abstract class ElasticsearchTestCase extends AbstractRandomizedTest { } public static boolean maybeDocValues() { - return LuceneTestCase.defaultCodecSupportsSortedSet() && randomBoolean(); + return randomBoolean(); } private static final List SORTED_VERSIONS; @@ -494,15 +485,6 @@ public abstract class ElasticsearchTestCase extends AbstractRandomizedTest { int version(); } - /** - * Most tests don't use {@link FieldCache} but some of them might do. - */ - @Retention(RetentionPolicy.RUNTIME) - @Target({ElementType.TYPE}) - @Ignore - public @interface UsesLuceneFieldCacheOnPurpose { - } - /** * Returns a global compatibility version that is set via the * {@value #TESTS_COMPATIBILITY} or {@value #TESTS_BACKWARDS_COMPATIBILITY_VERSION} system property. diff --git a/src/test/java/org/elasticsearch/test/ExternalTestCluster.java b/src/test/java/org/elasticsearch/test/ExternalTestCluster.java index d31bede6bdb..10b66c9d137 100644 --- a/src/test/java/org/elasticsearch/test/ExternalTestCluster.java +++ b/src/test/java/org/elasticsearch/test/ExternalTestCluster.java @@ -154,7 +154,7 @@ public final class ExternalTestCluster extends TestCluster { assertThat("Fielddata size must be 0 on node: " + stats.getNode(), stats.getIndices().getFieldData().getMemorySizeInBytes(), equalTo(0l)); assertThat("Filter cache size must be 0 on node: " + stats.getNode(), stats.getIndices().getFilterCache().getMemorySizeInBytes(), equalTo(0l)); - assertThat("FixedBitSet cache size must be 0 on node: " + stats.getNode(), stats.getIndices().getSegments().getFixedBitSetMemoryInBytes(), equalTo(0l)); + assertThat("FixedBitSet cache size must be 0 on node: " + stats.getNode(), stats.getIndices().getSegments().getBitsetMemoryInBytes(), equalTo(0l)); } } } diff --git a/src/test/java/org/elasticsearch/test/InternalTestCluster.java b/src/test/java/org/elasticsearch/test/InternalTestCluster.java index 1a8088fa6c0..cc028e632f9 100644 --- a/src/test/java/org/elasticsearch/test/InternalTestCluster.java +++ b/src/test/java/org/elasticsearch/test/InternalTestCluster.java @@ -1687,7 +1687,7 @@ public final class InternalTestCluster extends TestCluster { NodeStats stats = nodeService.stats(CommonStatsFlags.ALL, false, false, false, false, false, false, false, false, false); assertThat("Fielddata size must be 0 on node: " + stats.getNode(), stats.getIndices().getFieldData().getMemorySizeInBytes(), equalTo(0l)); assertThat("Filter cache size must be 0 on node: " + stats.getNode(), stats.getIndices().getFilterCache().getMemorySizeInBytes(), equalTo(0l)); - assertThat("FixedBitSet cache size must be 0 on node: " + stats.getNode(), stats.getIndices().getSegments().getFixedBitSetMemoryInBytes(), equalTo(0l)); + assertThat("FixedBitSet cache size must be 0 on node: " + stats.getNode(), stats.getIndices().getSegments().getBitsetMemoryInBytes(), equalTo(0l)); } } } diff --git a/src/test/java/org/elasticsearch/test/TestSearchContext.java b/src/test/java/org/elasticsearch/test/TestSearchContext.java index f4a020a1e49..5d8fe7e183e 100644 ---
a/src/test/java/org/elasticsearch/test/TestSearchContext.java +++ b/src/test/java/org/elasticsearch/test/TestSearchContext.java @@ -28,8 +28,8 @@ import org.elasticsearch.action.search.SearchType; import org.elasticsearch.cache.recycler.PageCacheRecycler; import org.elasticsearch.common.util.BigArrays; import org.elasticsearch.index.analysis.AnalysisService; +import org.elasticsearch.index.cache.bitset.BitsetFilterCache; import org.elasticsearch.index.cache.filter.FilterCache; -import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilterCache; import org.elasticsearch.index.fielddata.IndexFieldDataService; import org.elasticsearch.index.mapper.FieldMapper; import org.elasticsearch.index.mapper.FieldMappers; @@ -69,7 +69,7 @@ public class TestSearchContext extends SearchContext { final IndexService indexService; final FilterCache filterCache; final IndexFieldDataService indexFieldDataService; - final FixedBitSetFilterCache fixedBitSetFilterCache; + final BitsetFilterCache fixedBitSetFilterCache; final ThreadPool threadPool; ContextIndexSearcher searcher; @@ -83,7 +83,7 @@ public class TestSearchContext extends SearchContext { this.indexService = indexService; this.filterCache = indexService.cache().filter(); this.indexFieldDataService = indexService.fieldData(); - this.fixedBitSetFilterCache = indexService.fixedBitSetFilterCache(); + this.fixedBitSetFilterCache = indexService.bitsetFilterCache(); this.threadPool = threadPool; } @@ -315,7 +315,7 @@ public class TestSearchContext extends SearchContext { } @Override - public FixedBitSetFilterCache fixedBitSetFilterCache() { + public BitsetFilterCache bitsetFilterCache() { return fixedBitSetFilterCache; } diff --git a/src/test/java/org/elasticsearch/test/cache/recycler/MockBigArrays.java b/src/test/java/org/elasticsearch/test/cache/recycler/MockBigArrays.java index 44a85c86b39..89f4c22713a 100644 --- a/src/test/java/org/elasticsearch/test/cache/recycler/MockBigArrays.java +++ b/src/test/java/org/elasticsearch/test/cache/recycler/MockBigArrays.java @@ -24,6 +24,8 @@ import com.carrotsearch.randomizedtesting.SeedUtils; import com.google.common.base.Predicate; import com.google.common.collect.Maps; import com.google.common.collect.Sets; +import org.apache.lucene.util.Accountable; +import org.apache.lucene.util.Accountables; import org.apache.lucene.util.BytesRef; import org.elasticsearch.cache.recycler.PageCacheRecycler; import org.elasticsearch.common.inject.Inject; @@ -330,6 +332,10 @@ public class MockBigArrays extends BigArrays { in.fill(fromIndex, toIndex, value); } + @Override + public Iterable getChildResources() { + return Collections.singleton(Accountables.namedAccountable("delegate", in)); + } } private class IntArrayWrapper extends AbstractArrayWrapper implements IntArray { @@ -370,7 +376,11 @@ public class MockBigArrays extends BigArrays { public void fill(long fromIndex, long toIndex, int value) { in.fill(fromIndex, toIndex, value); } - + + @Override + public Iterable getChildResources() { + return Collections.singleton(Accountables.namedAccountable("delegate", in)); + } } private class LongArrayWrapper extends AbstractArrayWrapper implements LongArray { @@ -411,6 +421,11 @@ public class MockBigArrays extends BigArrays { public void fill(long fromIndex, long toIndex, long value) { in.fill(fromIndex, toIndex, value); } + + @Override + public Iterable getChildResources() { + return Collections.singleton(Accountables.namedAccountable("delegate", in)); + } } @@ -453,6 +468,10 @@ public class MockBigArrays extends BigArrays { 
in.fill(fromIndex, toIndex, value); } + @Override + public Iterable getChildResources() { + return Collections.singleton(Accountables.namedAccountable("delegate", in)); + } } private class DoubleArrayWrapper extends AbstractArrayWrapper implements DoubleArray { @@ -494,6 +513,10 @@ public class MockBigArrays extends BigArrays { in.fill(fromIndex, toIndex, value); } + @Override + public Iterable getChildResources() { + return Collections.singleton(Accountables.namedAccountable("delegate", in)); + } } private class ObjectArrayWrapper extends AbstractArrayWrapper implements ObjectArray { @@ -525,6 +548,10 @@ public class MockBigArrays extends BigArrays { // will be cleared anyway } + @Override + public Iterable getChildResources() { + return Collections.singleton(Accountables.namedAccountable("delegate", in)); + } } }
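The getChildResources() additions above implement Lucene 5.0's detailed memory-accounting API: each mock array wrapper reports its delegate as a named child, so per-resource memory usage can be rendered as a tree. A minimal sketch of the pattern (not part of the patch; TrackedArray is a hypothetical class, and the Iterable-returning signature follows the snapshot API shown in the diff):

    import java.util.Collections;
    import org.apache.lucene.util.Accountable;
    import org.apache.lucene.util.Accountables;

    class TrackedArray implements Accountable {
        private final Accountable in; // the wrapped delegate

        TrackedArray(Accountable in) { this.in = in; }

        @Override
        public long ramBytesUsed() {
            return in.ramBytesUsed(); // the wrapper adds no accountable memory of its own
        }

        @Override
        public Iterable<Accountable> getChildResources() {
            // report the delegate as a named child; the accounting tree can then be
            // rendered for debugging, e.g. with Accountables.toString(this)
            return Collections.singleton(Accountables.namedAccountable("delegate", in));
        }
    }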
diff --git a/src/test/java/org/elasticsearch/test/engine/ThrowingAtomicReaderWrapper.java b/src/test/java/org/elasticsearch/test/engine/ThrowingLeafReaderWrapper.java similarity index 95% rename from src/test/java/org/elasticsearch/test/engine/ThrowingAtomicReaderWrapper.java rename to src/test/java/org/elasticsearch/test/engine/ThrowingLeafReaderWrapper.java index cedb9de0089..39018ec4cae 100644 --- a/src/test/java/org/elasticsearch/test/engine/ThrowingAtomicReaderWrapper.java +++ b/src/test/java/org/elasticsearch/test/engine/ThrowingLeafReaderWrapper.java @@ -27,16 +27,16 @@ import org.apache.lucene.util.automaton.CompiledAutomaton; import java.io.IOException; /** - * An FilterAtomicReader that allows to throw exceptions if certain methods + * A FilterLeafReader that allows throwing exceptions if certain methods * are called on it. This allows testing parts of the system under certain * error conditions that would otherwise not be possible. */ -public class ThrowingAtomicReaderWrapper extends FilterAtomicReader { +public class ThrowingLeafReaderWrapper extends FilterLeafReader { private final Thrower thrower; /** - * Flags passed to {@link Thrower#maybeThrow(org.elasticsearch.test.engine.ThrowingAtomicReaderWrapper.Flags)} + * Flags passed to {@link Thrower#maybeThrow(org.elasticsearch.test.engine.ThrowingLeafReaderWrapper.Flags)} * when the corresponding method is called.
*/ public enum Flags { @@ -52,7 +52,7 @@ public class ThrowingAtomicReaderWrapper extends FilterAtomicReader { /** * A callback interface that allows to throw certain exceptions for - * methods called on the IndexReader that is wrapped by {@link ThrowingAtomicReaderWrapper} + * methods called on the IndexReader that is wrapped by {@link ThrowingLeafReaderWrapper} */ public static interface Thrower { /** @@ -68,7 +68,7 @@ public class ThrowingAtomicReaderWrapper extends FilterAtomicReader { public boolean wrapTerms(String field); } - public ThrowingAtomicReaderWrapper(AtomicReader in, Thrower thrower) { + public ThrowingLeafReaderWrapper(LeafReader in, Thrower thrower) { super(in); this.thrower = thrower; } diff --git a/src/test/java/org/elasticsearch/test/store/MockFSDirectoryService.java b/src/test/java/org/elasticsearch/test/store/MockFSDirectoryService.java index 23008349252..5cde4649112 100644 --- a/src/test/java/org/elasticsearch/test/store/MockFSDirectoryService.java +++ b/src/test/java/org/elasticsearch/test/store/MockFSDirectoryService.java @@ -119,32 +119,37 @@ public class MockFSDirectoryService extends FsDirectoryService { } public void checkIndex(Store store, ShardId shardId) throws IndexShardException { - try { - Directory dir = store.directory(); - if (!Lucene.indexExists(dir)) { - return; - } - if (IndexWriter.isLocked(dir)) { - AbstractRandomizedTest.checkIndexFailed = true; - throw new IllegalStateException("IndexWriter is still open on shard " + shardId); - } - CheckIndex checkIndex = new CheckIndex(dir); - BytesStreamOutput os = new BytesStreamOutput(); - PrintStream out = new PrintStream(os, false, Charsets.UTF_8.name()); - checkIndex.setInfoStream(out); - out.flush(); - CheckIndex.Status status = checkIndex.checkIndex(); - if (!status.clean) { - AbstractRandomizedTest.checkIndexFailed = true; - logger.warn("check index [failure]\n{}", new String(os.bytes().toBytes(), Charsets.UTF_8)); - throw new IndexShardException(shardId, "index check failure"); - } else { - if (logger.isDebugEnabled()) { - logger.debug("check index [success]\n{}", new String(os.bytes().toBytes(), Charsets.UTF_8)); + if (store.tryIncRef()) { + try { + Directory dir = store.directory(); + if (!Lucene.indexExists(dir)) { + return; } + if (IndexWriter.isLocked(dir)) { + AbstractRandomizedTest.checkIndexFailed = true; + throw new IllegalStateException("IndexWriter is still open on shard " + shardId); + } + try (CheckIndex checkIndex = new CheckIndex(dir)) { + BytesStreamOutput os = new BytesStreamOutput(); + PrintStream out = new PrintStream(os, false, Charsets.UTF_8.name()); + checkIndex.setInfoStream(out); + out.flush(); + CheckIndex.Status status = checkIndex.checkIndex(); + if (!status.clean) { + AbstractRandomizedTest.checkIndexFailed = true; + logger.warn("check index [failure]\n{}", new String(os.bytes().toBytes(), Charsets.UTF_8)); + throw new IndexShardException(shardId, "index check failure"); + } else { + if (logger.isDebugEnabled()) { + logger.debug("check index [success]\n{}", new String(os.bytes().toBytes(), Charsets.UTF_8)); + } + } + } + } catch (Exception e) { + logger.warn("failed to check index", e); + } finally { + store.decRef(); } - } catch (Exception e) { - logger.warn("failed to check index", e); } }
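The checkIndex() rewrite above is one of the atomic-commit safety changes: the check now runs only if a reference on the store can be taken, and the reference is always released, so a shard store that is closed concurrently is skipped rather than accessed after close. A skeleton of the guard (illustration only; checkIndexBody() is a hypothetical stand-in for the CheckIndex logic shown above):

    public void checkIndex(Store store, ShardId shardId) throws IndexShardException {
        if (store.tryIncRef()) {                 // skip entirely if the store is already closed
            try {
                checkIndexBody(store.directory(), shardId); // the CheckIndex logic shown above
            } catch (Exception e) {
                logger.warn("failed to check index", e);
            } finally {
                store.decRef();                  // always release the reference we took
            }
        }
    }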
diff --git a/src/test/java/org/elasticsearch/test/store/MockRamDirectoryService.java b/src/test/java/org/elasticsearch/test/store/MockRamDirectoryService.java index 3730b0a8e2c..e8c3ac35411 100644 --- a/src/test/java/org/elasticsearch/test/store/MockRamDirectoryService.java +++
b/src/test/java/org/elasticsearch/test/store/MockRamDirectoryService.java @@ -53,14 +53,4 @@ public class MockRamDirectoryService extends AbstractIndexShardComponent impleme public long throttleTimeInNanos() { return delegateService.throttleTimeInNanos(); } - - @Override - public void renameFile(Directory dir, String from, String to) throws IOException { - delegateService.renameFile(dir, from, to); - } - - @Override - public void fullDelete(Directory dir) throws IOException { - delegateService.fullDelete(dir); - } } diff --git a/src/test/resources/org/elasticsearch/rest/action/admin/indices/upgrade/index-0.20.zip b/src/test/resources/org/elasticsearch/rest/action/admin/indices/upgrade/index-0.20.zip deleted file mode 100644 index 59cd2470e3f0c415cdc74db33f1b842135530f27..0000000000000000000000000000000000000000 GIT binary patch [base85-encoded binary content of the deleted zip fixture omitted]
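Finally, a note on the single-posting TermsEnum stub at the top of this section: every term it exposes reports exactly one posting (doc 0, freq 1) whose position and payload come from the TermPosAndPayload snapshot captured when docsAndPositions() was called. A hypothetical consumer, to make that contract concrete (not part of the patch; SinglePostingWalker is an illustrative name):

    import java.io.IOException;
    import org.apache.lucene.index.DocsAndPositionsEnum;
    import org.apache.lucene.index.TermsEnum;
    import org.apache.lucene.search.DocIdSetIterator;
    import org.apache.lucene.util.BytesRef;

    class SinglePostingWalker {
        static void walk(TermsEnum termsEnum) throws IOException {
            while (termsEnum.next() != null) {                  // advances the enum's 'current'
                DocsAndPositionsEnum postings = termsEnum.docsAndPositions(null, null);
                if (postings.nextDoc() != 0) {
                    throw new AssertionError("expected the single document, doc 0");
                }
                int freq = postings.freq();                     // always 1 for this stub
                int position = postings.nextPosition();         // the recorded position
                BytesRef payload = postings.getPayload();       // the recorded payload
                if (postings.nextDoc() != DocIdSetIterator.NO_MORE_DOCS) {
                    throw new AssertionError("stub must expose exactly one posting");
                }
            }
        }
    }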