LUCENE-1290: Deprecate org.apache.lucene.search.Hits, Hit and HitIterator.

git-svn-id: https://svn.apache.org/repos/asf/lucene/java/trunk@659626 13f79535-47bb-0310-9956-ffa450edef68
This commit is contained in:
Michael Busch 2008-05-23 18:57:55 +00:00
parent eb0596c721
commit 08a2eb4665
68 changed files with 1417 additions and 1277 deletions

View File

@ -66,6 +66,10 @@ API Changes
8. LUCENE-852: Let the SpellChecker caller specify IndexWriter mergeFactor
and RAM buffer size. (Otis Gospodnetic)
9. LUCENE-1290: Deprecate org.apache.lucene.search.Hits, Hit and HitIterator
and remove all references to these classes from the core. Also update demos
and tutorials. (Michael Busch)
Bug fixes

View File

@ -339,14 +339,23 @@ documents are interpreted: finding the end of words and removing useless words l
the results from the <span class="codefrag">QueryParser</span> which is passed to
the searcher. Note that it's also possible to programmatically construct a rich <span class="codefrag">Query</span> object without using the query
parser. The query parser just enables decoding the <a href="queryparsersyntax.html">Lucene query
syntax</a> into the corresponding <span class="codefrag">Query</span> object. The searcher results are
returned in a collection of Documents called <span class="codefrag">Hits</span> which is then iterated through and
displayed to the user.
syntax</a> into the corresponding <span class="codefrag">Query</span> object. Search can be executed in
two different ways:
<ul>
<li>Streaming: A <span class="codefrag">HitCollector</span> subclass
simply prints out the document ID and score for each matching document.</li>
<li>Paging: Using a <span class="codefrag">TopDocCollector</span>
the search results are printed in pages, sorted by score (i.e. relevance).</li>
</ul>
</p>
</div>
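The paging mode described above depends on collecting only the top-scoring hits rather than every match. A rough, Lucene-free sketch of that idea, using a bounded min-heap the way a top-docs collector typically works internally (the class and method names here are illustrative, not Lucene API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

public class TopNSketch {
    /** Keeps only the n highest-scoring hits, conceptually like TopDocCollector. */
    static List<Integer> topDocsByScore(float[] scores, int n) {
        // Min-heap of doc IDs ordered by score: the root is the weakest
        // hit currently kept, so it is evicted when a better hit arrives.
        PriorityQueue<Integer> heap =
            new PriorityQueue<>((a, b) -> Float.compare(scores[a], scores[b]));
        for (int doc = 0; doc < scores.length; doc++) {
            heap.offer(doc);
            if (heap.size() > n) {
                heap.poll();  // drop the current weakest hit
            }
        }
        // Drain the heap into descending-score order for display.
        List<Integer> result = new ArrayList<>();
        while (!heap.isEmpty()) {
            result.add(0, heap.poll());
        }
        return result;
    }

    public static void main(String[] args) {
        float[] scores = {0.2f, 0.9f, 0.5f, 0.7f, 0.1f};
        System.out.println(topDocsByScore(scores, 3));  // prints [1, 3, 2]
    }
}
```

The bounded heap is what makes paging cheap: memory stays proportional to the page budget, not to the total number of matches.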
<a name="N100ED"></a><a name="The Web example..."></a>
<a name="N100FB"></a><a name="The Web example..."></a>
<h2 class="boxed">The Web example...</h2>
<div class="section">
<p>

View File

@ -95,10 +95,10 @@ endobj
>>
endobj
20 0 obj
<< /Length 2152 /Filter [ /ASCII85Decode /FlateDecode ]
<< /Length 2312 /Filter [ /ASCII85Decode /FlateDecode ]
>>
stream
Gatm=hf%7-&:X@\Tsl:4.n^TLCC^U&F_NdV6b=0`L1Ln&`7Sm;d3YV7hI"$mAB@>E;Z&3_Yt[e[hL0VBh4=C[[J4=@pp'=Z%K(WVUn6m4;&$ho%F+@nIRrBiEG)u[Z?Wi^/8mnr2j[PU$a5]`i1KJFP%!E@*d(LFkO7_1QB`*K,qE,<pm3MEe1+[O@ukFTeRlqs/DK`,;.mQ&m6b(@q]#:Lq8ZY:2V9tp?.s/n8]pLrgtXlm8Yb>=6`m@?=/QG:Hn%oBJUYLX859lseS0;jokC,/Z7^1)k'XC+UZM&)mG[7c%Oc>GR+P#h1?[!PAZ"q"mr*O2n4]44d.N12REW3g3e"IOqQs-_X-FH]Hk\P,>gdK\:mY<2,_ACAh7bk=esTuCKq<fF(N<>"Q/Lj4<[*W?D/N&Mj!S_PX)n_%!*&5(0<7*4]\&3\a\$$M3NM:I6:9Xi&po6=Nbos3;V15fjSBg'TIjg#*UK8J_oU]qJs-LG+@A0UoqCOcne8(NWmqrs<8u9X/$9gc7QkWefXa/68k9hqkS#TVqp'JOL2#iu]p4j(Gh*5T9'l=@p_>Q"*k&bW3S%P=XN[aG<d;5=kP7)<0B@L!\0R(=QC+3BW^LF.Bf`\4imOMK2M$;>UOFhG#=m08aGGVe@Zeu+."MYXgp!S,9M=@kXuDS6WrH7.+f%bf,&,A&-*k*ib2OKL;(]SA4L_aZrVfj`b4:n=6-'GH^>+,gDGb_Sg^C4XC$\<te7e@ZBl[[R[n&`ai+*8fnt=^p(_sWp64iMKCccNTRle,V192lkE"7/pV\&hU$@Ln<'ThpDM#pW9?eE(5_e`=d3=6L#]q4Y`KV#E,5I6n/LBY5$4M)KX$t>)i>(CJU.bk&I"lk.p+&:=@5):Jm3PQKJ5H]J81rfb+Fm2^V/""m$koe"A'QUd_'#YW&JHJClMs:aDcfj&n_@'d`<kXi`+<'FY<a@13rMA[(C#T;Vq51lE^-aPJ*(Ok9l'm0OE8$=/oEakU>sFu_9,^/MN[-GUI<[S`@A!nB43gg6[8OmrT#N6a/MRE\9;8@dJXWe4VV'J8O@.eQn3Wo.)]H-f=tR!gUFZP5-a<iEJ/Q\i+18&/f:+CE:W1gjM&fO3k3&IHCT2iWEJYJH8lS/5o0c5[9HZ(4MUp5/.d8=3)d/?-k0QfM_:*U)pN*5&\&W2nk,89Kc,:7/^OFlD9X>R6BR98!Eg\9]ch.T-8)knVdj.$h*4UC/c([%6]q^*lbKM(ag.8J/BPHN3IUm<hV<7(4R41:W.tQ(p*(Wg>TH@c9MhEtMFGrF@WObuM<adqWLZI]D(BEJ*X@<R2-)mBP>tY>1*Al%i<M5MU8`_K%Hq=\TFUG7BfWa*%7I&Nn9_GX'H9D36pI8_3&Y?BXa53-nUBm6Uech+6q:#hOpeF"V-NUpmd$QJ&nNu.<pPN7i_Q36Bih.g+i9K;f/-,YlPZh[HC&KkEMj#rbiAbC<,R.!:=Ou7%GsP6Q0QbhKG2S"9]1[FeO'mi'l&Ra80EGVZE$./.CjVh>fkK["Xg[k]SFMB9XS.V3%'ZU\[MA/TPZiuf0oB?=T,lft?1pm.'\i#Jg!*W-Q`D>gN6Y5!Z$[0s&j^jTh#fk7V2,GjkY,S*0nu,BDQ[_;SX?07p]>/i+;`']isr'<*iO>*elK,Io1[F@a/A(mh5%d]!82pl9&QSG+NgL3[",lr4`.cH7K@:9RIjr0Vs.$RFirpX(.:GaorG>Oa':RE@uH.oJnn@]2$XPZ?5.eVB$QKWp\XnLj-r:m4&<`sgm,_^J?AY"]NU_X.AUiD&=K3sn6S,4+5H;:d_,Ea<ai/>9%Fr^2P[mpd//D8/n1B_c%Q+6a9VjnlL?MK1BIWM*bd29VipX%?_>Y8XueXa">CD*!U8*bM2kgrEKCm76T=Tbc>[&Bhn46`qk#j)9@(Od5ChTG4hdt1?I>5O+EPsj+unKW9<YJJ4/PGY?$QqiJT]o]R\f$p\Gk2`<d]DQ`s2)`5gh(CRlA.A
&Ii,ud"(Z5J`2Y.P>[4sUFi+k``>/d8GHML,Qn+(Bo)am_[f*nNC!m4EhE-o<^4e+6&EjpU!5POd,'u55HTb)7AnZj$JGZ8Zru;.L[+-F)XWeo=i6o\^W&Bn[I6rH7<>n,CCZd%fXrs)JGTjPq@Um~>
Gatm=>B?8n'RnB3i%<&EK0b;t+$.k/S$kE7STa%acH^#DA<:Z9(qC9tao;-"J:SpR`"+r#-5P`7kPCb<..<H<^FV$e?F\PjrdVG6*S[iN+Fp*6Y<LjR5(R'u>H<[c([mY+M`]g:X%1A+Ki:W)nB\K\8O'E]LK_llp%MH1o0S;aK&nP1rg,.Kcmi1GB9-jPeRlYk/D9S_<[KrlbtD7(LMQe@qSue=3nW*VD;&n*=iqg1hcsAEUt5'N?)hAW<W=2,LV-t#oGQLQE['PIqW5%7qh<nL1/$#`o>*&eQA`AbF%<:97u87A`)Kq$.=\poM12s\\F,%K'4FL\gDpVK1?E\/r,O<0^J[0CPnoF.e8^/oUtoWgk"-^AEebFBIJ2I"'?(HLSo@a\i2Asl_9j2Q7ntOVp!pEZgrT1_Q0oati0A?'o4#uq:2#E[A0J53S/$)gTtY:IOh&:Ga#5Pl@bYUNjQ\i%5T@P,44pYFp&fB"1JT:dJ.1OYe;'/r-r-j/X%$S5;Me=bYIL_m)3&l@_T''kV1b(!MB7#.RGXNHW?;"\e6,r<6"Gt>/]<ZFUgGI4qIMHDRUc;=_=GN8@"l'_;kB$Hl@k[f4e%,@^lk_[^F<#ZfJK/R+gL9.'UumenBj4.J:jWRN(B#H@'N781R(.Z6>[Q;B9ji`,24!X,1SmmQcRXd7%(DpWXqai0kK:(KWGq#7@kFPoUimb\hN-?E>%gNYKm_dj0p0!+nlE2>oMFJ.5fhlHlD[Vo+pS'1n95->TRu+S>/F6QcK?B1>.\WgscgOPg&?KOISZXlQCjFM@:SP"pCa7?m<M8p_UAWa"U\.>)p0$\m>Vc#GELrj%jJ=nmOoR&C2U(RR'^=8IMb>D23I-XC+.fkL5:!^k50gRe*M!prDK7TBEF$U,F#!-5k9YU)_*2SjH.q:kB6g-#hj7AO(\*V'knQ=l=uf.]_@J8LIQTJZ[X(W;39"7rdU7<qVrnDY;b.ilJFT`gZntNt5:/8:?8$7V]OV2(3/E[5_(21j7j.TQ('&*k^Z0-!R-$.`Be<AV1Uo!4XE_3Y`h\.Y!s&Ri58a&2jm.5SVu\L(/]'9<?bo$:3j39Yibu`iJiH.1-d^RRQ_&gtF9`@nN-`S7QL_'`^(.nY/:&>(Om[cr(\o8;O:?Z#mEI\rgV1L?,BqDas9Tm<e'o.9#Kcj"rN4PQt_T15$aP%U4Km(:t8OD`S%So-!#E/to3"GU_VdZ>V-N#l-I)U=?N?UHh8kW^10[8?k2,d_PS3^tA!hajou!VY$rs$&geA5]Wk*rN(0!5ZhMia8Q=,WXDfR"V/Fa9(nn3";hYW1"r!CM<u^-.2<CT+O6obTTcEJjU;'0%Y]X$KiSF:lo&.53\lmJ%bmu'e(9s+jf&9*`h"/??`lm^;"_1RH)+'1AjF=Zo$KE'r!Ngjn\Q!d"=CXFm.(ltjd-Vc?%N6HB+JGmi!5t*0]Qp^VjU\XSB),ZWG4h15dJG1hXmH#FT;k<=./"IY*Ie5M]6O6;fl%=BYuV6&@V<<Ou'n^#=t8RL3g;^^c)V?n$4Y2)8d1s9Ebk?m*:i!McD\#5nAFQG_^Fp/%JMn[f`q*Yfpob7q`a1T2]\'Un:3YoH)We55(h3nXSN*Asb:f1eCBD8j-IIUH&#MBT@<i$p2BJ6E)tAnXQ]iZCLJRc2boJqYG)-?i:D`JOh(pdo6r+;F+FcYA?t'DOU[:2D@gNZ&[jh.+/60FBnD=YCl\+QqRp&.h&(H4=Q1[,&feJVdd>oJIZ%f!8-VhUUn88/b2"hrr`qc4`]0V`]KQ>2@/C$cG<!6gR^oo/O,-P4YXHnk*a,HJD/sI<0sWdk8<&H!IZenP`bR4*=^ECgBl6YM(uE>42!^3X"+M"o>2!r35Z=2PSc+Sp@Rlc+LGd4AcTQLNTpC+brVFB;,LG&]&;tBEdoU'Zf2Qe-dj5iN-k,gh\_$?%b+JCmb*>1e<%(;`k/8pi#Dh!j%NZmK:"f,#m=X\pe\a>*inV;riZ>%
a7MoQ0,$1TSNT3#&>jH&1fjBMW;DRX:5<320Yj&%5o!4H[-GH*:<j_'M,0cMK9ebV9\AnpDr,U36EJ8*-fUic_9``80;7[G]i5pN,fX7X0T^L>DN*+-VF=rV\HSjC4LbV3I(W\NSpb,%O.0UZA@^A30a3onMRh9f^45bZ5eCA`A)02`*Ao1@JC$ejTF=Q&4lcq$Bfl\n!Q"EUhlC#>\)Y95+pr4-S#tPjrNd-+D%/mA<$s3dh0qd4-g4XP/8,g?V=j[kY)C3&baKo'Z@iOk]:V9E24*@JU-;eV22=s*=C6cjNr'@#b^c#~>
endstream
endobj
21 0 obj
@ -233,7 +233,7 @@ endobj
17 0 obj
<<
/S /GoTo
/D [21 0 R /XYZ 85.0 335.866 null]
/D [21 0 R /XYZ 85.0 290.266 null]
>>
endobj
22 0 obj
@ -244,39 +244,39 @@ endobj
xref
0 34
0000000000 65535 f
0000008534 00000 n
0000008606 00000 n
0000008698 00000 n
0000008694 00000 n
0000008766 00000 n
0000008858 00000 n
0000000015 00000 n
0000000071 00000 n
0000000777 00000 n
0000000897 00000 n
0000000950 00000 n
0000008832 00000 n
0000008992 00000 n
0000001085 00000 n
0000008895 00000 n
0000009055 00000 n
0000001221 00000 n
0000008961 00000 n
0000009121 00000 n
0000001358 00000 n
0000009027 00000 n
0000009187 00000 n
0000001495 00000 n
0000009091 00000 n
0000009251 00000 n
0000001632 00000 n
0000004447 00000 n
0000004555 00000 n
0000006800 00000 n
0000009157 00000 n
0000006908 00000 n
0000007081 00000 n
0000007316 00000 n
0000007482 00000 n
0000007677 00000 n
0000007872 00000 n
0000007985 00000 n
0000008095 00000 n
0000008203 00000 n
0000008309 00000 n
0000008425 00000 n
0000006960 00000 n
0000009317 00000 n
0000007068 00000 n
0000007241 00000 n
0000007476 00000 n
0000007642 00000 n
0000007837 00000 n
0000008032 00000 n
0000008145 00000 n
0000008255 00000 n
0000008363 00000 n
0000008469 00000 n
0000008585 00000 n
trailer
<<
/Size 34
@ -284,5 +284,5 @@ trailer
/Info 4 0 R
>>
startxref
9208
9368
%%EOF

View File

@ -494,9 +494,9 @@ document.write("Last Published: " + document.lastModified);
, beginning the scoring process.
</p>
<p>Once inside the Searcher, a
<a href="api/org/apache/lucene/search/Hits.html">Hits</a>
object is constructed, which handles the scoring and caching of the search results.
The Hits constructor stores references to three or four important objects:
<a href="api/org/apache/lucene/search/HitCollector.html">HitCollector</a>
is used for the scoring and sorting of the search results.
These important objects are involved in a search:
<ol>
<li>The
@ -521,12 +521,11 @@ document.write("Last Published: " + document.lastModified);
</ol>
</p>
<p>Now that the Hits object has been initialized, it begins the process of identifying documents that
match the query by calling getMoreDocs method. Assuming we are not sorting (since sorting doesn't
<p> Assuming we are not sorting (since sorting doesn't
affect the raw Lucene score),
we call on the "expert" search method of the Searcher, passing in our
we call one of the search methods of the Searcher, passing in the
<a href="api/org/apache/lucene/search/Weight.html">Weight</a>
object,
object created by Searcher.createWeight(Query),
<a href="api/org/apache/lucene/search/Filter.html">Filter</a>
and the number of results we want. This method
returns a

View File

@ -213,10 +213,10 @@ endobj
>>
endobj
40 0 obj
<< /Length 2130 /Filter [ /ASCII85Decode /FlateDecode ]
<< /Length 2003 /Filter [ /ASCII85Decode /FlateDecode ]
>>
stream
Gatm=99\*g%)2U?BQ?["ZXH88eOq9n9t/Aoh<*)OJX0u=M-_@)3:<t'-!;!3Odf&mPM*$FMZiMMp],?,Y#d:-X"Q!im*sA)<U(d[M5!Q>+,&JA,<nGE_l9tF]jEp.Y9%&*%eeBglg;j^(V/fQG(t^ELE\Zretr!4"&94JC&ZJ)jRkSt&6[7\[@9A$$6fTfkiUjrft*cH$VGY$M.Orr,`tgWb":C?LImS8P8l^.4VJWqaQ]"2o.%9UWe@Z&gJ;iWZHm'Z?MMQDs#rfc:Jiein;2N=H(Bf@M`oE>9%(ZNSaL]PEVDP:e&68$cEVlpiMCT&%8KbWL'ED=qg*3j#7!V8FL0a*`I9Y2'qi[6-VZt/S:-#:lFGia.#?.uK/c1W9S(K]iDl6j:eBh:kg04u%=X"NK)@SGrPH@VB3fOPic@S4S<UnUBG`6iYs6t#@H:S>?FM&9Oc/89Jj0Mo-C#[PPcVG,N;3#;X%B6?9=fO^T@m"6RF%"W2`^R6i-I,p?9!Q&!H>7,Q^1Yir=OEWk2PT[iH7L%8#L[E]QsbN]6Z7rs7YHi$XhguFjX6Q;D*FsM*M8:Lsl<cUd5up,ka\II9:s;:3!#`QWDaj-q:0T:MfBuqS*,O5o?7k\^;p5/)Go#Q:V:e%t7[465IU5Y%gO^3GF2C9qr<3h"/,5m4EU34m2j"f,9928h/IG./sWS'C`)J>I&kZgX5bYKS'7LJ+S.nF,N2=eI)R#^^1j/N15,G_W.Lp[b@"eL1R02Hg8:s1ej\#kk'C#7&J$!/`fKDCmqM4!edWalag.)>m%YrFN:Cr(,sn5`\88aE$LU-(n@=5"NslC&s"][eVJfWK&uVdL7=n4NCfLpRnkM?Au%0s7EO:Xbo^>o_k+\@lrCV30Z2*g(m?a#g"ldn++$rJe_<12'V(&hO90C=)SWAdIWc7a[SRV/VOq`eOqtKbi=osU>A`[A8??WB/kIAPfSa*qreK`;I=4VG%1[f^Cq!d<#\4lnI=9aUqVJAK,9*#7P4At2/Z4RpP?*k1>e.(3!s#A^9-Fj7'339A=mcU#LD31*Y[ua?-&CB36;(OB'fJS*IUdAc/(XHlde7KS>7uk^&$)(.qUD^:TW#+LX&mcdR4s#]rlMPBOqj-p=UKh6PtJHLe;!OGjX3>(]hqMl_Qn3b[3S7`*3.TPRpsVu+TuE7Y#U2/doQ'>1c/^sUZSWMoHU_k%?Q,$@ofa1`Ldf-43E*j2FPa(6)gP3][WRPTql^9K2o-6e^Q;[3>IM%gk`)1aS&kRD_r08`0>F#SV+V2K>rXDN@)7/mcG#3<g\R/X6+.Ikf(1:a[A#*9$ZIs:607k4r6]G_dk6"h;+78bVk3(d:id479?(LrZ.9-<MZ8]LY24\IGj#t.n!TVlONEW%PpN]CmQO'ZC;)NotuBtf`4\[MhQFN7<K\g9t+=YW-[S>O=.6cN)m:!:N%:\R%<IQ!qEUVJbSB;7ZFA@m/;C)rO;Nd!<O>O(5>D[)\4f*)!kJ2URU!`fP40-g?bS[>.*Gr^'Xa?0FUA?TEYb*NAGX=j4$gASB@/8EWa5=\o./I>rXh`#OahqA9CbncVW!W43i!RB3d2N/:26R.FCn#$[83PQIWFjod:#9q:[AtCF-'BV3[JZl#TM/^^`b3am[L;1n9/cA#Pd,(X.lq(%oa9CLJ^X2dfV59H=>)-nh7m>N[(+daB+1*.ss))PC,56Mc_KmW.6ak.m:O_*)A(`E?>&[hPV9?p$\mm'g*iS21CUD2i_bZ[lih<+DkEEeTn4V05c'].LB_[Z,Ri]WNdsj]!+m\eCk\^$p*&-K&(XN/BNkAgC>C3YTus0#\>p:eTOciq':[A;LWn[C\Q+%_p_X=&i#1I@qjpdBYru[p0pd(Nm20Iff[7(qgFN0unG]=c=2U6T5Pm:"!(nF*,EX4hgQY"9j_<Yo*?DHg+r8P=?STqp"+;V/bS4U#>jr,>Gc#/kLfP7;"PafjQg/EHo("F-.8^pN>6\^()(k8ct<r
-[X_CIp2oq6Jr=@hX:C:JeB^H6LYd.*-TqtTp#W%[8U2!e]j5;#o4mODgERu,gUV.8NNF8"p)NfbLcDf*_sW.nB#7tWcLJX#8(&r$qQ#tlP;a_rN4q%Q<1m##P.CS9)~>
Gatm<968iG&AII3n.pAj5mY3X9.)A?DUZ8"2M\jKLkI$!'YsdB*VJ##,Xl\r!OCP'G\hFI!k\?sm^ES:R*T*bBA[@S>Nt$(ZeS'bQR%Fe)o^Y)dkF+N-`FMAI@@j:c^nF^*Y.fYoB"3b(]!n4Hc*][5`$S.o8WM=R@k9lZon)S#5q;0ZJ^>bX6G82,#V<qDU,0+k'VKX-kZ+tDRc@'6l*F8PITb!PG#]<-$WI+-8-]Ccm9G#<GDtB8Spu&T&bV?hoJF)n;fO.m/V\k8(!..=)pr+?HCr\4jZC8o#qWXmXKn0pHSBCbk&cqF+=#]NOK9)c;??O#0.")AoLWY7mh5hZht6.@5=2bTuiS_9ge60rI?5`\`!?59(ja0,X?XF"._ZoX#S:qP0Sb@4S1/2p_B\E$I`h`$Nn^]G0)UuY8CCKZB7mPj&A*t_pO2=UVN#cV?LssI%1Z^r(]X;WVf)J@eCCB`1MOlRfm%Zdd7G>`dVC>`*nOTag"?d)l*`EnHa"4^qHW&>?Vj81&1%X`Q=n6oja45]6_'$H3`rOH9XR-IJrt!0^gZ$[C)`.aBsE@OAQdgr-ou+Pf>+EK+Z-8Q=%&JS>b!X-YuYCU>/Q=C`N:H3]<8C&W3t8^!aIR4\04>@PI];QQ%PW[@QBI.k9pX.hun>K@&.Pn923_jRr0hs#5NG'G`-s@Cr2l(9/_>=nB(2(&PJ8mpcuj,)jCdpE^UcHShd)$oWVV0G7MI%=EiL9s7g0X0rHf8-S`EHO"8R9#&d<!M,$?O.TT*[eA?1?+U4F3?-OlY8"PPF=l^6eb)5`@(.)mAAQ&W:d/?mMk1Zb:mXd8^A,Me$<Bf#dX)D36h?VCQb9>?P[I0.6EPg"47l%<Lm`NnB,CK30CP;;/-r&f<.BMB[Sd_Prd7Ed"D-%qSYQGl%:=M5Nl(]1#)]n&\qSG8g5=Qki@e4Fj*W>?^3-<"j'W[5X7_T\&&>&<ACKXqn&c&>J6j]TC04kq2B#n*.q4[qO^/.k=:[e<JLHq$qOm?-RK.\*<Ui.rb#<Xi/h"*V9[C]/Z/8Xep<j;#W<NH+e;bE^oS:sj0loH99[Fb[g"#CH^9?l7W.-u_^1<L\Wf9'@fS1h6q[B;&o2gJ_HEXs.Kk'5<ed^<dj&!3;h_`%6CGrKqG-1)[S[O9NZCqG+D@KsD`tW7SCOlU6+*5kmVN?ssK/4<g@6oh\*0aV6,KRcZ)E(Qs3Fj%9MB39TTqmhX>$A0?1`u-YmCAJK`k^[GT'W,hL4UXZ^,_D$kUoADPaonm$.sk_QU]%iW!NaU]=+C*6.;Jck8/<B]8D/\P4a&:`7\,&pNDgM-6EB+\^8bg]:$J8KK5)6pJ>lRW(@gHIp$$85jR;`fs#]tEAJ0'J`L80)ZA;BZYh]0pa-.(W)@#RmW<e5g6",`*^D9TC.0_V%YgX_K2+Mkc)^iJDZU3nA^+Ub+5Z%8@Hqk+\&3m(c5W[M_[ml&O4A:[#)9s"b<0BiXrgdiFW!?-)\esk5poV&7(PW1-(6RtkNIt]$KX`mN4m2=+3XsK-G,bV;9m(lI:h<^@SLSa2]'iBP9D]"ae/uR,[,J389#-:M3!72VpVV!c^*;$h.<=u=iHjD`HYi3n%H.&%@lF7:9Kn;40ECMKsdQXqfNPh`11b)Tt22#N5M-")uI>qY8n1><#_UE)l+O]n4SB\i*+s\Wj\^H,]rM-o?N+D`UB1iNc%8G[g1?o&JY:\G?T2#EQ4#?F\.(1VPJ=]Z8l'r6I>S-*081u!eB\iHhr<=p*W_hADQq?dt_3O*d(Xf=Wa@o7dSf>i+4;t\rK@%LiAIE`h.u"_e*V?nh^[2XKhfhW?GOdVt/e-b?GF"7+b:4)XjhbHkaNT"jqA'+Mq03nd&gkcCU*qAN!@#")i/QG'DW#Z65-9E93/@mQN2qZ:MU9$d23"CrMcLBfAYUh#oI(Uql_g8_W^e+95Miq3iteh/4&L5.TaVSo;5)Lqp<)mM^ab[feJRYEAN7W<VcTqQ]$F
~>
endstream
endobj
41 0 obj
@ -484,39 +484,39 @@ endobj
xref
0 63
0000000000 65535 f
0000017942 00000 n
0000018035 00000 n
0000018127 00000 n
0000017815 00000 n
0000017908 00000 n
0000018000 00000 n
0000000015 00000 n
0000000071 00000 n
0000001068 00000 n
0000001188 00000 n
0000001297 00000 n
0000018250 00000 n
0000018123 00000 n
0000001432 00000 n
0000018313 00000 n
0000018186 00000 n
0000001569 00000 n
0000018379 00000 n
0000018252 00000 n
0000001706 00000 n
0000018445 00000 n
0000018318 00000 n
0000001843 00000 n
0000018509 00000 n
0000018382 00000 n
0000001979 00000 n
0000018575 00000 n
0000018448 00000 n
0000002116 00000 n
0000018641 00000 n
0000018514 00000 n
0000002253 00000 n
0000018705 00000 n
0000018578 00000 n
0000002389 00000 n
0000018771 00000 n
0000018644 00000 n
0000002526 00000 n
0000018837 00000 n
0000018710 00000 n
0000002663 00000 n
0000018901 00000 n
0000018774 00000 n
0000002799 00000 n
0000018967 00000 n
0000018840 00000 n
0000002935 00000 n
0000019033 00000 n
0000018906 00000 n
0000003072 00000 n
0000005698 00000 n
0000005806 00000 n
@ -524,28 +524,28 @@ xref
0000008243 00000 n
0000010669 00000 n
0000010777 00000 n
0000013000 00000 n
0000013108 00000 n
0000014526 00000 n
0000019098 00000 n
0000014634 00000 n
0000014797 00000 n
0000014985 00000 n
0000015205 00000 n
0000015404 00000 n
0000015715 00000 n
0000015919 00000 n
0000016112 00000 n
0000016327 00000 n
0000016648 00000 n
0000016828 00000 n
0000017013 00000 n
0000017230 00000 n
0000017386 00000 n
0000017499 00000 n
0000017609 00000 n
0000017717 00000 n
0000017833 00000 n
0000012873 00000 n
0000012981 00000 n
0000014399 00000 n
0000018971 00000 n
0000014507 00000 n
0000014670 00000 n
0000014858 00000 n
0000015078 00000 n
0000015277 00000 n
0000015588 00000 n
0000015792 00000 n
0000015985 00000 n
0000016200 00000 n
0000016521 00000 n
0000016701 00000 n
0000016886 00000 n
0000017103 00000 n
0000017259 00000 n
0000017372 00000 n
0000017482 00000 n
0000017590 00000 n
0000017706 00000 n
trailer
<<
/Size 63
@ -553,5 +553,5 @@ trailer
/Info 4 0 R
>>
startxref
19149
19022
%%EOF

View File

@ -17,22 +17,24 @@ package org.apache.lucene.demo;
* limitations under the License.
*/
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.Date;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.FilterIndexReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.HitCollector;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.Searcher;
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.Date;
import org.apache.lucene.search.TopDocCollector;
/** Simple command-line based search demo. */
public class SearchFiles {
@ -60,7 +62,8 @@ public class SearchFiles {
/** Simple command-line based search demo. */
public static void main(String[] args) throws Exception {
String usage =
"Usage: java org.apache.lucene.demo.SearchFiles [-index dir] [-field f] [-repeat n] [-queries file] [-raw] [-norms field]";
"Usage:\tjava org.apache.lucene.demo.SearchFiles [-index dir] [-field f] [-repeat n] [-queries file] [-raw] [-norms field] [-paging hitsPerPage]";
usage += "\n\tSpecify 'false' for hitsPerPage to use streaming instead of paging search.";
if (args.length > 0 && ("-h".equals(args[0]) || "-help".equals(args[0]))) {
System.out.println(usage);
System.exit(0);
@ -72,6 +75,8 @@ public class SearchFiles {
int repeat = 0;
boolean raw = false;
String normsField = null;
boolean paging = true;
int hitsPerPage = 10;
for (int i = 0; i < args.length; i++) {
if ("-index".equals(args[i])) {
@ -91,6 +96,16 @@ public class SearchFiles {
} else if ("-norms".equals(args[i])) {
normsField = args[i+1];
i++;
} else if ("-paging".equals(args[i])) {
if (args[i+1].equals("false")) {
paging = false;
} else {
hitsPerPage = Integer.parseInt(args[i+1]);
if (hitsPerPage == 0) {
paging = false;
}
}
i++;
}
}
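The `-paging` handling above folds two signals (the literal `false` and a page size of 0) into streaming mode. A standalone sketch of that decision, with a hypothetical helper name:

```java
public class PagingOption {
    /** Returns the page size, or -1 to signal streaming mode. */
    static int parsePaging(String value) {
        if ("false".equals(value)) {
            return -1;  // explicit streaming request
        }
        int hitsPerPage = Integer.parseInt(value);
        return hitsPerPage == 0 ? -1 : hitsPerPage;  // 0 also means streaming
    }

    public static void main(String[] args) {
        System.out.println(parsePaging("false"));  // prints -1
        System.out.println(parsePaging("10"));     // prints 10
    }
}
```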
@ -125,53 +140,149 @@ public class SearchFiles {
Query query = parser.parse(line);
System.out.println("Searching for: " + query.toString(field));
Hits hits = searcher.search(query);
if (repeat > 0) { // repeat & time as benchmark
Date start = new Date();
for (int i = 0; i < repeat; i++) {
hits = searcher.search(query);
searcher.search(query, null, 100);
}
Date end = new Date();
System.out.println("Time: "+(end.getTime()-start.getTime())+"ms");
}
System.out.println(hits.length() + " total matching documents");
final int HITS_PER_PAGE = 10;
for (int start = 0; start < hits.length(); start += HITS_PER_PAGE) {
int end = Math.min(hits.length(), start + HITS_PER_PAGE);
for (int i = start; i < end; i++) {
if (raw) { // output raw format
System.out.println("doc="+hits.id(i)+" score="+hits.score(i));
continue;
}
Document doc = hits.doc(i);
String path = doc.get("path");
if (path != null) {
System.out.println((i+1) + ". " + path);
String title = doc.get("title");
if (title != null) {
System.out.println(" Title: " + doc.get("title"));
}
} else {
System.out.println((i+1) + ". " + "No path for this document");
}
}
if (queries != null) // non-interactive
break;
if (hits.length() > end) {
System.out.println("more (y/n) ? ");
line = in.readLine();
if (line.length() == 0 || line.charAt(0) == 'n')
break;
}
if (paging) {
doPagingSearch(in, searcher, query, hitsPerPage, raw, queries == null);
} else {
doStreamingSearch(searcher, query);
}
}
reader.close();
}
/**
* This method uses a custom HitCollector implementation which simply prints out
* the docId and score of every matching document.
*
* This simulates the streaming search use case, where all hits are supposed to
* be processed, regardless of their relevance.
*/
public static void doStreamingSearch(final Searcher searcher, Query query) throws IOException {
HitCollector streamingHitCollector = new HitCollector() {
// simply print docId and score of every matching document
public void collect(int doc, float score) {
System.out.println("doc="+doc+" score="+score);
}
};
searcher.search(query, streamingHitCollector);
}
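The anonymous HitCollector above illustrates a push-style API: the searcher drives iteration and hands each matching document to a callback. A Lucene-free sketch of the same pattern (the `Collector` interface here is a stand-in, not the real `HitCollector`):

```java
import java.util.ArrayList;
import java.util.List;

public class StreamingSketch {
    /** Stand-in for HitCollector: one callback per matching document. */
    interface Collector {
        void collect(int doc, float score);
    }

    /** Toy "searcher" that streams every hit to the collector, unsorted. */
    static void search(float[] scores, Collector collector) {
        for (int doc = 0; doc < scores.length; doc++) {
            if (scores[doc] > 0f) {  // treat a positive score as a match
                collector.collect(doc, scores[doc]);
            }
        }
    }

    public static void main(String[] args) {
        List<String> out = new ArrayList<>();
        // Inline implementation, like the anonymous HitCollector in the demo.
        search(new float[]{0.4f, 0f, 0.8f}, (doc, score) -> out.add("doc=" + doc));
        System.out.println(out);  // prints [doc=0, doc=2]
    }
}
```

Because nothing is buffered or sorted, this style handles every hit in document order with constant memory, which is why the demo recommends it when all matches must be processed.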
/**
* This demonstrates a typical paging search scenario, where the search engine presents
* pages of size n to the user. The user can then go to the next page if interested in
* the next hits.
*
 * When the query is executed for the first time, only enough results are collected
 * to fill 5 result pages. If the user wants to page beyond this limit, the query
 * is executed a second time and all hits are collected.
*
*/
public static void doPagingSearch(BufferedReader in, Searcher searcher, Query query,
int hitsPerPage, boolean raw, boolean interactive) throws IOException {
// Collect enough docs to show 5 pages
TopDocCollector collector = new TopDocCollector(5 * hitsPerPage);
searcher.search(query, collector);
ScoreDoc[] hits = collector.topDocs().scoreDocs;
int numTotalHits = collector.getTotalHits();
System.out.println(numTotalHits + " total matching documents");
int start = 0;
int end = Math.min(numTotalHits, hitsPerPage);
while (true) {
if (end > hits.length) {
System.out.println("Only results 1 - " + hits.length +" of " + numTotalHits + " total matching documents collected.");
System.out.println("Collect more (y/n) ?");
String line = in.readLine();
if (line.length() == 0 || line.charAt(0) == 'n') {
break;
}
collector = new TopDocCollector(numTotalHits);
searcher.search(query, collector);
hits = collector.topDocs().scoreDocs;
}
end = Math.min(hits.length, start + hitsPerPage);
for (int i = start; i < end; i++) {
if (raw) { // output raw format
System.out.println("doc="+hits[i].doc+" score="+hits[i].score);
continue;
}
Document doc = searcher.doc(hits[i].doc);
String path = doc.get("path");
if (path != null) {
System.out.println((i+1) + ". " + path);
String title = doc.get("title");
if (title != null) {
System.out.println(" Title: " + doc.get("title"));
}
} else {
System.out.println((i+1) + ". " + "No path for this document");
}
}
if (!interactive) {
break;
}
if (numTotalHits >= end) {
boolean quit = false;
while (true) {
System.out.print("Press ");
if (start - hitsPerPage >= 0) {
System.out.print("(p)revious page, ");
}
if (start + hitsPerPage < numTotalHits) {
System.out.print("(n)ext page, ");
}
System.out.println("(q)uit or enter number to jump to a page.");
String line = in.readLine();
if (line.length() == 0 || line.charAt(0)=='q') {
quit = true;
break;
}
if (line.charAt(0) == 'p') {
start = Math.max(0, start - hitsPerPage);
break;
} else if (line.charAt(0) == 'n') {
if (start + hitsPerPage < numTotalHits) {
start+=hitsPerPage;
}
break;
} else {
int page = Integer.parseInt(line);
if ((page - 1) * hitsPerPage < numTotalHits) {
start = (page - 1) * hitsPerPage;
break;
} else {
System.out.println("No such page");
}
}
}
if (quit) break;
end = Math.min(numTotalHits, start + hitsPerPage);
}
}
}
}
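The interactive loop in doPagingSearch interleaves console I/O with the start/end bookkeeping. The navigation rules alone can be sketched separately (a hypothetical class, using the same arithmetic as the demo's previous/next/jump branches):

```java
public class PageWindow {
    final int start;
    final int end;

    PageWindow(int start, int numTotalHits, int hitsPerPage) {
        this.start = start;
        this.end = Math.min(numTotalHits, start + hitsPerPage);
    }

    /** 'p' in the demo: step back one page, never below the first hit. */
    PageWindow previous(int numTotalHits, int hitsPerPage) {
        return new PageWindow(Math.max(0, start - hitsPerPage), numTotalHits, hitsPerPage);
    }

    /** 'n' in the demo: advance only while more hits remain. */
    PageWindow next(int numTotalHits, int hitsPerPage) {
        int s = start + hitsPerPage < numTotalHits ? start + hitsPerPage : start;
        return new PageWindow(s, numTotalHits, hitsPerPage);
    }

    /** Numeric input in the demo: jump to a 1-based page if it exists. */
    PageWindow jump(int page, int numTotalHits, int hitsPerPage) {
        int s = (page - 1) * hitsPerPage;
        return s < numTotalHits ? new PageWindow(s, numTotalHits, hitsPerPage) : this;
    }
}
```

With 23 total hits and 10 per page, the window advances 0–10, 10–20, 20–23 and then refuses to move past the last page, matching the demo's behavior.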

View File

@ -18,7 +18,7 @@ package org.apache.lucene.document;
*/
import java.util.*; // for javadoc
import org.apache.lucene.search.Hits; // for javadoc
import org.apache.lucene.search.ScoreDoc; // for javadoc
import org.apache.lucene.search.Searcher; // for javadoc
import org.apache.lucene.index.IndexReader; // for javadoc
@ -32,7 +32,7 @@ import org.apache.lucene.index.IndexReader; // for javadoc
*
* <p>Note that fields which are <i>not</i> {@link Fieldable#isStored() stored} are
* <i>not</i> available in documents retrieved from the index, e.g. with {@link
* Hits#doc(int)}, {@link Searcher#doc(int)} or {@link
* ScoreDoc#doc}, {@link Searcher#doc(int)} or {@link
* IndexReader#document(int)}.
*/

View File

@ -26,6 +26,7 @@ import org.apache.lucene.index.CorruptIndexException;
* Wrapper used by {@link HitIterator} to provide a lazily loaded hit
* from {@link Hits}.
*
* @deprecated Hits will be removed in Lucene 3.0. Use {@link TopDocCollector} and {@link TopDocs} instead.
* @author Jeremy Rayner
*/
public class Hit implements java.io.Serializable {

View File

@ -25,6 +25,7 @@ import java.util.NoSuchElementException;
* {@link Hits#iterator()} returns an instance of this class. Calls to {@link #next()}
* return a {@link Hit} instance.
*
* @deprecated Hits will be removed in Lucene 3.0. Use {@link TopDocCollector} and {@link TopDocs} instead.
* @author Jeremy Rayner
*/
public class HitIterator implements Iterator {
@ -76,3 +77,4 @@ public class HitIterator implements Iterator {
}
}

View File

@ -38,6 +38,19 @@ import org.apache.lucene.index.CorruptIndexException;
* {@link java.util.ConcurrentModificationException ConcurrentModificationException}
* is thrown when accessing hit <code>n</code> &ge; current_{@link #length()}
* (but <code>n</code> &lt; {@link #length()}_at_start).
*
* @deprecated Hits will be removed in Lucene 3.0. <p>
 * Instead, e.g., {@link TopDocCollector} and {@link TopDocs} can be used:<br>
* <pre>
* TopDocCollector collector = new TopDocCollector(hitsPerPage);
* searcher.search(query, collector);
* ScoreDoc[] hits = collector.topDocs().scoreDocs;
* for (int i = 0; i < hits.length; i++) {
* int docId = hits[i].doc;
* Document d = searcher.doc(docId);
* // do something with current hit
 * ...
 * }
 * </pre>
*/
public final class Hits {
private Weight weight;

View File

@ -33,6 +33,8 @@ public abstract class Searcher implements Searchable {
/** Returns the documents matching <code>query</code>.
* @throws BooleanQuery.TooManyClauses
* @deprecated Hits will be removed in Lucene 3.0. Use
 * {@link #search(Query, Filter, int)} instead.
*/
public final Hits search(Query query) throws IOException {
return search(query, (Filter)null);
@ -41,6 +43,8 @@ public abstract class Searcher implements Searchable {
/** Returns the documents matching <code>query</code> and
* <code>filter</code>.
* @throws BooleanQuery.TooManyClauses
* @deprecated Hits will be removed in Lucene 3.0. Use
 * {@link #search(Query, Filter, int)} instead.
*/
public Hits search(Query query, Filter filter) throws IOException {
return new Hits(this, query, filter);
@ -49,6 +53,8 @@ public abstract class Searcher implements Searchable {
/** Returns documents matching <code>query</code> sorted by
* <code>sort</code>.
* @throws BooleanQuery.TooManyClauses
* @deprecated Hits will be removed in Lucene 3.0. Use
 * {@link #search(Query, Filter, int, Sort)} instead.
*/
public Hits search(Query query, Sort sort)
throws IOException {
@ -58,13 +64,15 @@ public abstract class Searcher implements Searchable {
/** Returns documents matching <code>query</code> and <code>filter</code>,
* sorted by <code>sort</code>.
* @throws BooleanQuery.TooManyClauses
* @deprecated Hits will be removed in Lucene 3.0. Use
 * {@link #search(Query, Filter, int, Sort)} instead.
*/
public Hits search(Query query, Filter filter, Sort sort)
throws IOException {
return new Hits(this, query, filter, sort);
}
/** Expert: Low-level search implementation with arbitrary sorting. Finds
/** Search implementation with arbitrary sorting. Finds
* the top <code>n</code> hits for <code>query</code>, applying
* <code>filter</code> if non-null, and sorting the hits by the criteria in
* <code>sort</code>.
@ -105,7 +113,7 @@ public abstract class Searcher implements Searchable {
*
* <p>Applications should only use this if they need <i>all</i> of the
* matching documents. The high-level search API ({@link
* Searcher#search(Query)}) is usually more efficient, as it skips
 * Searcher#search(Query, Filter, int)}) is usually more efficient, as it skips
* non-high-scoring hits.
*
* @param query to match documents
@ -118,13 +126,9 @@ public abstract class Searcher implements Searchable {
search(createWeight(query), filter, results);
}
/** Expert: Low-level search implementation. Finds the top <code>n</code>
/** Finds the top <code>n</code>
* hits for <code>query</code>, applying <code>filter</code> if non-null.
*
* <p>Called by {@link Hits}.
*
* <p>Applications should usually call {@link Searcher#search(Query)} or
* {@link Searcher#search(Query,Filter)} instead.
* @throws BooleanQuery.TooManyClauses
*/
public TopDocs search(Query query, Filter filter, int n)

View File

@ -118,10 +118,14 @@ the searcher. Note that it's also possible to programmatically construct a rich
href="api/org/apache/lucene/search/Query.html">Query</a></code> object without using the query
parser. The query parser just enables decoding the <a href="queryparsersyntax.html">Lucene query
syntax</a> into the corresponding <code><a
href="api/org/apache/lucene/search/Query.html">Query</a></code> object. The searcher results are
returned in a collection of Documents called <code><a
href="api/org/apache/lucene/search/Hits.html">Hits</a></code> which is then iterated through and
displayed to the user.
href="api/org/apache/lucene/search/Query.html">Query</a></code> object. Search can be executed in
two different ways:
<ul>
<li>Streaming: A <code><a href="api/org/apache/lucene/search/HitCollector.html">HitCollector</a></code> subclass
simply prints out the document ID and score for each matching document.</li>
<li>Paging: Using a <code><a href="api/org/apache/lucene/search/TopDocCollector.html">TopDocCollector</a></code>
the search results are printed in pages, sorted by score (i.e. relevance).</li>
</ul>
</p>
</section>
@ -137,3 +141,4 @@ displayed to the user.
</body>
</document>

View File

@ -207,9 +207,9 @@
, beginning the scoring process.
</p>
<p>Once inside the Searcher, a
<a href="api/org/apache/lucene/search/Hits.html">Hits</a>
object is constructed, which handles the scoring and caching of the search results.
The Hits constructor stores references to three or four important objects:
<a href="api/org/apache/lucene/search/HitCollector.html">HitCollector</a>
is used for the scoring and sorting of the search results.
These important objects are involved in a search:
<ol>
<li>The
<a href="api/org/apache/lucene/search/Weight.html">Weight</a>
@ -228,12 +228,11 @@
</li>
</ol>
</p>
<p>Now that the Hits object has been initialized, it begins the process of identifying documents that
match the query by calling getMoreDocs method. Assuming we are not sorting (since sorting doesn't
<p> Assuming we are not sorting (since sorting doesn't
affect the raw Lucene score),
we call on the "expert" search method of the Searcher, passing in our
we call one of the search methods of the Searcher, passing in the
<a href="api/org/apache/lucene/search/Weight.html">Weight</a>
object,
object created by Searcher.createWeight(Query),
<a href="api/org/apache/lucene/search/Filter.html">Filter</a>
and the number of results we want. This method
returns a
@ -288,4 +287,4 @@
</section>
</section>
</body>
</document>
</document>

View File

@ -59,7 +59,6 @@ class SearchTest {
// "\"a c\"",
"\"a c e\"",
};
Hits hits = null;
QueryParser parser = new QueryParser("contents", analyzer);
parser.setPhraseSlop(4);
@ -72,12 +71,12 @@ class SearchTest {
//DateFilter filter = DateFilter.Before("modified", Time(1997,00,01));
//System.out.println(filter);
hits = searcher.search(query);
ScoreDoc[] hits = searcher.search(query, null, docs.length).scoreDocs;
System.out.println(hits.length() + " total results");
for (int i = 0 ; i < hits.length() && i < 10; i++) {
Document d = hits.doc(i);
System.out.println(i + " " + hits.score(i)
System.out.println(hits.length + " total results");
for (int i = 0 ; i < hits.length && i < 10; i++) {
Document d = searcher.doc(hits[i].doc);
System.out.println(i + " " + hits[i].score
// + " " + DateField.stringToDate(d.get("modified"))
+ " " + d.get("contents"));
}

View File

@ -19,12 +19,18 @@ package org.apache.lucene;
import java.io.IOException;
import org.apache.lucene.store.*;
import org.apache.lucene.document.*;
import org.apache.lucene.analysis.*;
import org.apache.lucene.index.*;
import org.apache.lucene.search.*;
import org.apache.lucene.queryParser.*;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.SimpleAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.Searcher;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
class SearchTestForDuplicates {
@ -52,15 +58,15 @@ class SearchTestForDuplicates {
// try a search without OR
Searcher searcher = new IndexSearcher(directory);
Hits hits = null;
ScoreDoc[] hits = null;
QueryParser parser = new QueryParser(PRIORITY_FIELD, analyzer);
Query query = parser.parse(HIGH_PRIORITY);
System.out.println("Query: " + query.toString(PRIORITY_FIELD));
hits = searcher.search(query);
printHits(hits);
hits = searcher.search(query, null, 1000).scoreDocs;
printHits(hits, searcher);
searcher.close();
@ -73,8 +79,8 @@ class SearchTestForDuplicates {
query = parser.parse(HIGH_PRIORITY + " OR " + MED_PRIORITY);
System.out.println("Query: " + query.toString(PRIORITY_FIELD));
hits = searcher.search(query);
printHits(hits);
hits = searcher.search(query, null, 1000).scoreDocs;
printHits(hits, searcher);
searcher.close();
@ -84,11 +90,11 @@ class SearchTestForDuplicates {
}
}
private static void printHits( Hits hits ) throws IOException {
System.out.println(hits.length() + " total results\n");
for (int i = 0 ; i < hits.length(); i++) {
private static void printHits( ScoreDoc[] hits, Searcher searcher) throws IOException {
System.out.println(hits.length + " total results\n");
for (int i = 0 ; i < hits.length; i++) {
if ( i < 10 || (i > 94 && i < 105) ) {
Document d = hits.doc(i);
Document d = searcher.doc(hits[i].doc);
System.out.println(i + " " + d.get(ID_FIELD));
}
}

View File

@ -17,7 +17,8 @@ package org.apache.lucene;
* limitations under the License.
*/
import org.apache.lucene.util.LuceneTestCase;
import java.io.IOException;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
@ -25,13 +26,12 @@ import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.queryParser.ParseException;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
import java.io.IOException;
import org.apache.lucene.util.LuceneTestCase;
/**
* A very simple demo used in the API documentation (src/java/overview.html).
@ -64,11 +64,11 @@ public class TestDemo extends LuceneTestCase {
// Parse a simple query that searches for "text":
QueryParser parser = new QueryParser("fieldname", analyzer);
Query query = parser.parse("text");
Hits hits = isearcher.search(query);
assertEquals(1, hits.length());
ScoreDoc[] hits = isearcher.search(query, null, 1000).scoreDocs;
assertEquals(1, hits.length);
// Iterate through the results:
for (int i = 0; i < hits.length(); i++) {
Document hitDoc = hits.doc(i);
for (int i = 0; i < hits.length; i++) {
Document hitDoc = isearcher.doc(hits[i].doc);
assertEquals("This is the text to be indexed.", hitDoc.get("fieldname"));
}
isearcher.close();

View File

@ -35,6 +35,8 @@ import java.util.NoSuchElementException;
/**
* This test intentionally not put in the search package in order
* to test HitIterator and Hit package protection.
*
* @deprecated Hits will be removed in Lucene 3.0
*/
public class TestHitIterator extends LuceneTestCase {
public void testIterator() throws Exception {

View File

@ -108,7 +108,7 @@ public class TestSearch extends LuceneTestCase {
"\"a c\"",
"\"a c e\"",
};
Hits hits = null;
ScoreDoc[] hits = null;
QueryParser parser = new QueryParser("contents", analyzer);
parser.setPhraseSlop(4);
@ -121,12 +121,12 @@ public class TestSearch extends LuceneTestCase {
//DateFilter filter = DateFilter.Before("modified", Time(1997,00,01));
//System.out.println(filter);
hits = searcher.search(query);
hits = searcher.search(query, null, 1000).scoreDocs;
out.println(hits.length() + " total results");
for (int i = 0 ; i < hits.length() && i < 10; i++) {
Document d = hits.doc(i);
out.println(i + " " + hits.score(i)
out.println(hits.length + " total results");
for (int i = 0 ; i < hits.length && i < 10; i++) {
Document d = searcher.doc(hits[i].doc);
out.println(i + " " + hits[i].score
// + " " + DateField.stringToDate(d.get("modified"))
+ " " + d.get("contents"));
}

View File

@ -101,16 +101,15 @@ public class TestSearchForDuplicates extends LuceneTestCase {
// try a search without OR
Searcher searcher = new IndexSearcher(directory);
Hits hits = null;
QueryParser parser = new QueryParser(PRIORITY_FIELD, analyzer);
Query query = parser.parse(HIGH_PRIORITY);
out.println("Query: " + query.toString(PRIORITY_FIELD));
hits = searcher.search(query);
printHits(out, hits);
checkHits(hits, MAX_DOCS);
ScoreDoc[] hits = searcher.search(query, null, MAX_DOCS).scoreDocs;
printHits(out, hits, searcher);
checkHits(hits, MAX_DOCS, searcher);
searcher.close();
@ -123,29 +122,29 @@ public class TestSearchForDuplicates extends LuceneTestCase {
query = parser.parse(HIGH_PRIORITY + " OR " + MED_PRIORITY);
out.println("Query: " + query.toString(PRIORITY_FIELD));
hits = searcher.search(query);
printHits(out, hits);
checkHits(hits, MAX_DOCS);
hits = searcher.search(query, null, MAX_DOCS).scoreDocs;
printHits(out, hits, searcher);
checkHits(hits, MAX_DOCS, searcher);
searcher.close();
}
private void printHits(PrintWriter out, Hits hits ) throws IOException {
out.println(hits.length() + " total results\n");
for (int i = 0 ; i < hits.length(); i++) {
private void printHits(PrintWriter out, ScoreDoc[] hits, Searcher searcher ) throws IOException {
out.println(hits.length + " total results\n");
for (int i = 0 ; i < hits.length; i++) {
if ( i < 10 || (i > 94 && i < 105) ) {
Document d = hits.doc(i);
Document d = searcher.doc(hits[i].doc);
out.println(i + " " + d.get(ID_FIELD));
}
}
}
private void checkHits(Hits hits, int expectedCount) throws IOException {
assertEquals("total results", expectedCount, hits.length());
for (int i = 0 ; i < hits.length(); i++) {
private void checkHits(ScoreDoc[] hits, int expectedCount, Searcher searcher) throws IOException {
assertEquals("total results", expectedCount, hits.length);
for (int i = 0 ; i < hits.length; i++) {
if ( i < 10 || (i > 94 && i < 105) ) {
Document d = hits.doc(i);
Document d = searcher.doc(hits[i].doc);
assertEquals("check " + i, String.valueOf(i), d.get(ID_FIELD));
}
}

View File

@ -115,11 +115,11 @@ class ThreadSafetyTest {
throws Exception {
System.out.println("Searching for " + n);
QueryParser parser = new QueryParser("contents", ANALYZER);
Hits hits =
searcher.search(parser.parse(English.intToEnglish(n)));
System.out.println("Search for " + n + ": total=" + hits.length());
for (int j = 0; j < Math.min(3, hits.length()); j++) {
System.out.println("Hit for " + n + ": " + hits.doc(j).get("id"));
ScoreDoc[] hits =
searcher.search(parser.parse(English.intToEnglish(n)), null, 1000).scoreDocs;
System.out.println("Search for " + n + ": total=" + hits.length);
for (int j = 0; j < Math.min(3, hits.length); j++) {
System.out.println("Hit for " + n + ": " + searcher.doc(hits[j].doc).get("id"));
}
}
}

View File

@ -17,18 +17,18 @@ package org.apache.lucene.analysis;
* limitations under the License.
*/
import org.apache.lucene.util.LuceneTestCase;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermDocs;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.Hits;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.LuceneTestCase;
public class TestKeywordAnalyzer extends LuceneTestCase {
@ -59,10 +59,10 @@ public class TestKeywordAnalyzer extends LuceneTestCase {
QueryParser queryParser = new QueryParser("description", analyzer);
Query query = queryParser.parse("partnum:Q36 AND SPACE");
Hits hits = searcher.search(query);
ScoreDoc[] hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals("Q36 kept as-is",
"+partnum:Q36 +space", query.toString("description"));
assertEquals("doc found!", 1, hits.length());
assertEquals("doc found!", 1, hits.length);
}
public void testMutipleDocument() throws Exception {

View File

@ -1,18 +1,15 @@
package org.apache.lucene.document;
import org.apache.lucene.util.LuceneTestCase;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.Searcher;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.LuceneTestCase;
/**
* Licensed to the Apache Software Foundation (ASF) under one or more
@ -170,10 +167,10 @@ public class TestDocument extends LuceneTestCase
Query query = new TermQuery(new Term("keyword", "test1"));
// ensure that queries return expected results without DateFilter first
Hits hits = searcher.search(query);
assertEquals(1, hits.length());
ScoreDoc[] hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(1, hits.length);
doAssert(hits.doc(0), true);
doAssert(searcher.doc(hits[0].doc), true);
searcher.close();
}
@ -244,11 +241,11 @@ public class TestDocument extends LuceneTestCase
Query query = new TermQuery(new Term("keyword", "test"));
// ensure that queries return expected results without DateFilter first
Hits hits = searcher.search(query);
assertEquals(3, hits.length());
ScoreDoc[] hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(3, hits.length);
int result = 0;
for(int i=0;i<3;i++) {
Document doc2 = hits.doc(i);
Document doc2 = searcher.doc(hits[i].doc);
Field f = doc2.getField("id");
if (f.stringValue().equals("id1"))
result |= 1;

View File

@ -17,29 +17,27 @@ package org.apache.lucene.index;
* limitations under the License.
*/
import org.apache.lucene.util.LuceneTestCase;
import java.util.Arrays;
import java.util.List;
import java.util.Enumeration;
import java.util.zip.ZipFile;
import java.util.zip.ZipEntry;
import java.io.OutputStream;
import java.io.InputStream;
import java.io.FileOutputStream;
import java.io.BufferedOutputStream;
import java.io.IOException;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Arrays;
import java.util.Enumeration;
import java.util.List;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.Hits;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.LuceneTestCase;
import org.apache.lucene.util._TestUtil;
/*
@ -180,12 +178,12 @@ public class TestBackwardsCompatibility extends LuceneTestCase
}
}
private void testHits(Hits hits, int expectedCount, IndexReader reader) throws IOException {
final int hitCount = hits.length();
private void testHits(ScoreDoc[] hits, int expectedCount, IndexReader reader) throws IOException {
final int hitCount = hits.length;
assertEquals("wrong number of hits", expectedCount, hitCount);
for(int i=0;i<hitCount;i++) {
hits.doc(i);
reader.getTermFreqVectors(hits.id(i));
reader.document(hits[i].doc);
reader.getTermFreqVectors(hits[i].doc);
}
}
@ -224,11 +222,11 @@ public class TestBackwardsCompatibility extends LuceneTestCase
assertEquals(7, i);
}
Hits hits = searcher.search(new TermQuery(new Term("content", "aaa")));
ScoreDoc[] hits = searcher.search(new TermQuery(new Term("content", "aaa")), null, 1000).scoreDocs;
// First document should be #21 since it's norm was
// increased:
Document d = hits.doc(0);
Document d = searcher.doc(hits[0].doc);
assertEquals("didn't get the right document first", "21", d.get("id"));
testHits(hits, 34, searcher.getIndexReader());
@ -238,12 +236,12 @@ public class TestBackwardsCompatibility extends LuceneTestCase
!oldName.startsWith("21.") &&
!oldName.startsWith("22.")) {
// Test on indices >= 2.3
hits = searcher.search(new TermQuery(new Term("utf8", "\u0000")));
assertEquals(34, hits.length());
hits = searcher.search(new TermQuery(new Term("utf8", "Lu\uD834\uDD1Ece\uD834\uDD60ne")));
assertEquals(34, hits.length());
hits = searcher.search(new TermQuery(new Term("utf8", "ab\ud917\udc17cd")));
assertEquals(34, hits.length());
hits = searcher.search(new TermQuery(new Term("utf8", "\u0000")), null, 1000).scoreDocs;
assertEquals(34, hits.length);
hits = searcher.search(new TermQuery(new Term("utf8", "Lu\uD834\uDD1Ece\uD834\uDD60ne")), null, 1000).scoreDocs;
assertEquals(34, hits.length);
hits = searcher.search(new TermQuery(new Term("utf8", "ab\ud917\udc17cd")), null, 1000).scoreDocs;
assertEquals(34, hits.length);
}
searcher.close();
@ -272,8 +270,8 @@ public class TestBackwardsCompatibility extends LuceneTestCase
// make sure searching sees right # hits
IndexSearcher searcher = new IndexSearcher(dir);
Hits hits = searcher.search(new TermQuery(new Term("content", "aaa")));
Document d = hits.doc(0);
ScoreDoc[] hits = searcher.search(new TermQuery(new Term("content", "aaa")), null, 1000).scoreDocs;
Document d = searcher.doc(hits[0].doc);
assertEquals("wrong first document", "21", d.get("id"));
testHits(hits, 44, searcher.getIndexReader());
searcher.close();
@ -289,9 +287,9 @@ public class TestBackwardsCompatibility extends LuceneTestCase
// make sure they "took":
searcher = new IndexSearcher(dir);
hits = searcher.search(new TermQuery(new Term("content", "aaa")));
assertEquals("wrong number of hits", 43, hits.length());
d = hits.doc(0);
hits = searcher.search(new TermQuery(new Term("content", "aaa")), null, 1000).scoreDocs;
assertEquals("wrong number of hits", 43, hits.length);
d = searcher.doc(hits[0].doc);
assertEquals("wrong first document", "22", d.get("id"));
testHits(hits, 43, searcher.getIndexReader());
searcher.close();
@ -302,9 +300,9 @@ public class TestBackwardsCompatibility extends LuceneTestCase
writer.close();
searcher = new IndexSearcher(dir);
hits = searcher.search(new TermQuery(new Term("content", "aaa")));
assertEquals("wrong number of hits", 43, hits.length());
d = hits.doc(0);
hits = searcher.search(new TermQuery(new Term("content", "aaa")), null, 1000).scoreDocs;
assertEquals("wrong number of hits", 43, hits.length);
d = searcher.doc(hits[0].doc);
testHits(hits, 43, searcher.getIndexReader());
assertEquals("wrong first document", "22", d.get("id"));
searcher.close();
@ -322,9 +320,9 @@ public class TestBackwardsCompatibility extends LuceneTestCase
// make sure searching sees right # hits
IndexSearcher searcher = new IndexSearcher(dir);
Hits hits = searcher.search(new TermQuery(new Term("content", "aaa")));
assertEquals("wrong number of hits", 34, hits.length());
Document d = hits.doc(0);
ScoreDoc[] hits = searcher.search(new TermQuery(new Term("content", "aaa")), null, 1000).scoreDocs;
assertEquals("wrong number of hits", 34, hits.length);
Document d = searcher.doc(hits[0].doc);
assertEquals("wrong first document", "21", d.get("id"));
searcher.close();
@ -339,9 +337,9 @@ public class TestBackwardsCompatibility extends LuceneTestCase
// make sure they "took":
searcher = new IndexSearcher(dir);
hits = searcher.search(new TermQuery(new Term("content", "aaa")));
assertEquals("wrong number of hits", 33, hits.length());
d = hits.doc(0);
hits = searcher.search(new TermQuery(new Term("content", "aaa")), null, 1000).scoreDocs;
assertEquals("wrong number of hits", 33, hits.length);
d = searcher.doc(hits[0].doc);
assertEquals("wrong first document", "22", d.get("id"));
testHits(hits, 33, searcher.getIndexReader());
searcher.close();
@ -352,9 +350,9 @@ public class TestBackwardsCompatibility extends LuceneTestCase
writer.close();
searcher = new IndexSearcher(dir);
hits = searcher.search(new TermQuery(new Term("content", "aaa")));
assertEquals("wrong number of hits", 33, hits.length());
d = hits.doc(0);
hits = searcher.search(new TermQuery(new Term("content", "aaa")), null, 1000).scoreDocs;
assertEquals("wrong number of hits", 33, hits.length);
d = searcher.doc(hits[0].doc);
assertEquals("wrong first document", "22", d.get("id"));
testHits(hits, 33, searcher.getIndexReader());
searcher.close();

View File

@ -17,23 +17,22 @@ package org.apache.lucene.index;
* limitations under the License.
*/
import org.apache.lucene.util.LuceneTestCase;
import java.io.IOException;
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;
import java.util.Set;
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import java.util.List;
import java.util.Iterator;
import java.util.Set;
import java.util.HashSet;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.LuceneTestCase;
/*
Verify we can read the pre-2.1 file format, do searches
@ -440,8 +439,8 @@ public class TestDeletionPolicy extends LuceneTestCase
reader.deleteDocument(3*i+1);
reader.setNorm(4*i+1, "content", 2.0F);
IndexSearcher searcher = new IndexSearcher(reader);
Hits hits = searcher.search(query);
assertEquals(16*(1+i), hits.length());
ScoreDoc[] hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(16*(1+i), hits.length);
// this is a commit when autoCommit=false:
reader.close();
searcher.close();
@ -457,8 +456,8 @@ public class TestDeletionPolicy extends LuceneTestCase
assertEquals(2*(N+2)-1, policy.numOnCommit);
IndexSearcher searcher = new IndexSearcher(dir);
Hits hits = searcher.search(query);
assertEquals(176, hits.length());
ScoreDoc[] hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(176, hits.length);
// Simplistic check: just verify only the past N segments_N's still
// exist, and, I can open a reader on each:
@ -476,7 +475,7 @@ public class TestDeletionPolicy extends LuceneTestCase
// autoCommit false case:
if (!autoCommit) {
searcher = new IndexSearcher(reader);
hits = searcher.search(query);
hits = searcher.search(query, null, 1000).scoreDocs;
if (i > 1) {
if (i % 2 == 0) {
expectedCount += 1;
@ -484,7 +483,7 @@ public class TestDeletionPolicy extends LuceneTestCase
expectedCount -= 17;
}
}
assertEquals(expectedCount, hits.length());
assertEquals(expectedCount, hits.length);
searcher.close();
}
reader.close();
@ -543,8 +542,8 @@ public class TestDeletionPolicy extends LuceneTestCase
reader.deleteDocument(3);
reader.setNorm(5, "content", 2.0F);
IndexSearcher searcher = new IndexSearcher(reader);
Hits hits = searcher.search(query);
assertEquals(16, hits.length());
ScoreDoc[] hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(16, hits.length);
// this is a commit when autoCommit=false:
reader.close();
searcher.close();
@ -560,8 +559,8 @@ public class TestDeletionPolicy extends LuceneTestCase
assertEquals(2*(N+1), policy.numOnCommit);
IndexSearcher searcher = new IndexSearcher(dir);
Hits hits = searcher.search(query);
assertEquals(0, hits.length());
ScoreDoc[] hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(0, hits.length);
// Simplistic check: just verify only the past N segments_N's still
// exist, and, I can open a reader on each:
@ -579,8 +578,8 @@ public class TestDeletionPolicy extends LuceneTestCase
// autoCommit false case:
if (!autoCommit) {
searcher = new IndexSearcher(reader);
hits = searcher.search(query);
assertEquals(expectedCount, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(expectedCount, hits.length);
searcher.close();
if (expectedCount == 0) {
expectedCount = 16;

View File

@ -18,25 +18,35 @@ package org.apache.lucene.index;
*/
import org.apache.lucene.util.LuceneTestCase;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.Arrays;
import java.util.Collection;
import java.util.Iterator;
import java.util.Map;
import java.util.Set;
import junit.framework.TestSuite;
import junit.textui.TestRunner;
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader.FieldOption;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.store.*;
import org.apache.lucene.store.AlreadyClosedException;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.LockObtainFailedException;
import org.apache.lucene.store.MockRAMDirectory;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.LuceneTestCase;
import org.apache.lucene.util._TestUtil;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.*;
public class TestIndexReader extends LuceneTestCase
{
/** Main for running test case by itself. */
@ -910,14 +920,14 @@ public class TestIndexReader extends LuceneTestCase
*/
IndexSearcher searcher = new IndexSearcher(newReader);
Hits hits = null;
ScoreDoc[] hits = null;
try {
hits = searcher.search(new TermQuery(searchTerm));
hits = searcher.search(new TermQuery(searchTerm), null, 1000).scoreDocs;
} catch (IOException e) {
e.printStackTrace();
fail(testName + ": exception when searching: " + e);
}
int result2 = hits.length();
int result2 = hits.length;
if (success) {
if (result2 != END_COUNT) {
fail(testName + ": method did not throw exception but hits.length for search on term 'aaa' is " + result2 + " instead of expected " + END_COUNT);

View File

@ -27,6 +27,8 @@ import java.util.List;
import java.util.Random;
import java.util.Set;
import junit.framework.TestCase;
import org.apache.lucene.analysis.KeywordAnalyzer;
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
@ -35,16 +37,14 @@ import org.apache.lucene.document.Field;
import org.apache.lucene.document.Field.Index;
import org.apache.lucene.document.Field.Store;
import org.apache.lucene.index.IndexWriter.MaxFieldLength;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.LuceneTestCase;
import junit.framework.TestCase;
public class TestIndexReaderReopen extends LuceneTestCase {
private File indexDir;
@ -687,9 +687,11 @@ public class TestIndexReaderReopen extends LuceneTestCase {
IndexSearcher searcher = new IndexSearcher(refreshed);
Hits hits = searcher.search(new TermQuery(new Term("field1", "a" + rnd.nextInt(refreshed.maxDoc()))));
if (hits.length() > 0) {
hits.doc(0);
ScoreDoc[] hits = searcher.search(
new TermQuery(new Term("field1", "a" + rnd.nextInt(refreshed.maxDoc()))),
null, 1000).scoreDocs;
if (hits.length > 0) {
searcher.doc(hits[0].doc);
}
// r might have changed because this is not a

View File

@ -39,7 +39,7 @@ import org.apache.lucene.analysis.Token;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.spans.SpanTermQuery;
@ -188,8 +188,8 @@ public class TestIndexWriter extends LuceneTestCase
assertEquals("first docFreq", 57, reader.docFreq(searchTerm));
IndexSearcher searcher = new IndexSearcher(reader);
Hits hits = searcher.search(new TermQuery(searchTerm));
assertEquals("first number of hits", 57, hits.length());
ScoreDoc[] hits = searcher.search(new TermQuery(searchTerm), null, 1000).scoreDocs;
assertEquals("first number of hits", 57, hits.length);
searcher.close();
reader.close();
@ -392,12 +392,12 @@ public class TestIndexWriter extends LuceneTestCase
searcher = new IndexSearcher(reader);
try {
hits = searcher.search(new TermQuery(searchTerm));
hits = searcher.search(new TermQuery(searchTerm), null, END_COUNT).scoreDocs;
} catch (IOException e) {
e.printStackTrace(System.out);
fail(testName + ": exception when searching: " + e);
}
int result2 = hits.length();
int result2 = hits.length;
if (success) {
if (result2 != result) {
fail(testName + ": method did not throw exception but hits.length for search on term 'aaa' is " + result2 + " instead of expected " + result);
@ -1016,8 +1016,8 @@ public class TestIndexWriter extends LuceneTestCase
Term searchTerm = new Term("content", "aaa");
IndexSearcher searcher = new IndexSearcher(dir);
Hits hits = searcher.search(new TermQuery(searchTerm));
assertEquals("first number of hits", 14, hits.length());
ScoreDoc[] hits = searcher.search(new TermQuery(searchTerm), null, 1000).scoreDocs;
assertEquals("first number of hits", 14, hits.length);
searcher.close();
IndexReader reader = IndexReader.open(dir);
@ -1028,8 +1028,8 @@ public class TestIndexWriter extends LuceneTestCase
addDoc(writer);
}
searcher = new IndexSearcher(dir);
hits = searcher.search(new TermQuery(searchTerm));
assertEquals("reader incorrectly sees changes from writer with autoCommit disabled", 14, hits.length());
hits = searcher.search(new TermQuery(searchTerm), null, 1000).scoreDocs;
assertEquals("reader incorrectly sees changes from writer with autoCommit disabled", 14, hits.length);
searcher.close();
assertTrue("reader should have still been current", reader.isCurrent());
}
@ -1039,8 +1039,8 @@ public class TestIndexWriter extends LuceneTestCase
assertFalse("reader should not be current now", reader.isCurrent());
searcher = new IndexSearcher(dir);
hits = searcher.search(new TermQuery(searchTerm));
assertEquals("reader did not see changes after writer was closed", 47, hits.length());
hits = searcher.search(new TermQuery(searchTerm), null, 1000).scoreDocs;
assertEquals("reader did not see changes after writer was closed", 47, hits.length);
searcher.close();
}
@ -1064,8 +1064,8 @@ public class TestIndexWriter extends LuceneTestCase
Term searchTerm = new Term("content", "aaa");
IndexSearcher searcher = new IndexSearcher(dir);
Hits hits = searcher.search(new TermQuery(searchTerm));
assertEquals("first number of hits", 14, hits.length());
ScoreDoc[] hits = searcher.search(new TermQuery(searchTerm), null, 1000).scoreDocs;
assertEquals("first number of hits", 14, hits.length);
searcher.close();
writer = new IndexWriter(dir, false, new WhitespaceAnalyzer(), false, IndexWriter.MaxFieldLength.LIMITED);
@ -1077,8 +1077,8 @@ public class TestIndexWriter extends LuceneTestCase
writer.deleteDocuments(searchTerm);
searcher = new IndexSearcher(dir);
hits = searcher.search(new TermQuery(searchTerm));
assertEquals("reader incorrectly sees changes from writer with autoCommit disabled", 14, hits.length());
hits = searcher.search(new TermQuery(searchTerm), null, 1000).scoreDocs;
assertEquals("reader incorrectly sees changes from writer with autoCommit disabled", 14, hits.length);
searcher.close();
// Now, close the writer:
@ -1087,8 +1087,8 @@ public class TestIndexWriter extends LuceneTestCase
assertNoUnreferencedFiles(dir, "unreferenced files remain after abort()");
searcher = new IndexSearcher(dir);
hits = searcher.search(new TermQuery(searchTerm));
assertEquals("saw changes after writer.abort", 14, hits.length());
hits = searcher.search(new TermQuery(searchTerm), null, 1000).scoreDocs;
assertEquals("saw changes after writer.abort", 14, hits.length);
searcher.close();
// Now make sure we can re-open the index, add docs,
@ -1105,15 +1105,15 @@ public class TestIndexWriter extends LuceneTestCase
addDoc(writer);
}
searcher = new IndexSearcher(dir);
hits = searcher.search(new TermQuery(searchTerm));
assertEquals("reader incorrectly sees changes from writer with autoCommit disabled", 14, hits.length());
hits = searcher.search(new TermQuery(searchTerm), null, 1000).scoreDocs;
assertEquals("reader incorrectly sees changes from writer with autoCommit disabled", 14, hits.length);
searcher.close();
}
writer.close();
searcher = new IndexSearcher(dir);
hits = searcher.search(new TermQuery(searchTerm));
assertEquals("didn't see changes after close", 218, hits.length());
hits = searcher.search(new TermQuery(searchTerm), null, 1000).scoreDocs;
assertEquals("didn't see changes after close", 218, hits.length);
searcher.close();
dir.close();
@ -1437,8 +1437,8 @@ public class TestIndexWriter extends LuceneTestCase
writer.close();
IndexSearcher searcher = new IndexSearcher(dir);
Hits hits = searcher.search(new TermQuery(new Term("field", "aaa")));
assertEquals(300, hits.length());
ScoreDoc[] hits = searcher.search(new TermQuery(new Term("field", "aaa")), null, 1000).scoreDocs;
assertEquals(300, hits.length);
searcher.close();
dir.close();
@ -1463,8 +1463,8 @@ public class TestIndexWriter extends LuceneTestCase
Term searchTerm = new Term("field", "aaa");
IndexSearcher searcher = new IndexSearcher(dir);
Hits hits = searcher.search(new TermQuery(searchTerm));
assertEquals(10, hits.length());
ScoreDoc[] hits = searcher.search(new TermQuery(searchTerm), null, 1000).scoreDocs;
assertEquals(10, hits.length);
searcher.close();
writer = new IndexWriter(dir, new WhitespaceAnalyzer(), true, IndexWriter.MaxFieldLength.LIMITED);
@ -1481,8 +1481,8 @@ public class TestIndexWriter extends LuceneTestCase
}
writer.close();
searcher = new IndexSearcher(dir);
hits = searcher.search(new TermQuery(searchTerm));
assertEquals(27, hits.length());
hits = searcher.search(new TermQuery(searchTerm), null, 1000).scoreDocs;
assertEquals(27, hits.length);
searcher.close();
IndexReader reader = IndexReader.open(dir);
@ -1546,8 +1546,8 @@ public class TestIndexWriter extends LuceneTestCase
writer.close();
Term searchTerm = new Term("content", "aaa");
IndexSearcher searcher = new IndexSearcher(dir);
Hits hits = searcher.search(new TermQuery(searchTerm));
assertEquals("did not get right number of hits", 100, hits.length());
ScoreDoc[] hits = searcher.search(new TermQuery(searchTerm), null, 1000).scoreDocs;
assertEquals("did not get right number of hits", 100, hits.length);
writer.close();
writer = new IndexWriter(dir, new WhitespaceAnalyzer(), true, IndexWriter.MaxFieldLength.LIMITED);
@ -3587,12 +3587,12 @@ public class TestIndexWriter extends LuceneTestCase
pq.add(new Term("field", "a"));
pq.add(new Term("field", "b"));
pq.add(new Term("field", "c"));
Hits hits = s.search(pq);
assertEquals(1, hits.length());
ScoreDoc[] hits = s.search(pq, null, 1000).scoreDocs;
assertEquals(1, hits.length);
Query q = new SpanTermQuery(new Term("field", "a"));
hits = s.search(q);
assertEquals(1, hits.length());
hits = s.search(q, null, 1000).scoreDocs;
assertEquals(1, hits.length);
TermPositions tps = s.getIndexReader().termPositions(new Term("field", "a"));
assertTrue(tps.next());
assertEquals(1, tps.freq());


@ -20,16 +20,15 @@ package org.apache.lucene.index;
import java.io.IOException;
import java.util.Arrays;
import org.apache.lucene.util.LuceneTestCase;
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.MockRAMDirectory;
import org.apache.lucene.util.LuceneTestCase;
public class TestIndexWriterDelete extends LuceneTestCase {
@ -278,7 +277,7 @@ public class TestIndexWriterDelete extends LuceneTestCase {
private int getHitCount(Directory dir, Term term) throws IOException {
IndexSearcher searcher = new IndexSearcher(dir);
int hitCount = searcher.search(new TermQuery(term)).length();
int hitCount = searcher.search(new TermQuery(term), null, 1000).totalHits;
searcher.close();
return hitCount;
}
@ -434,15 +433,15 @@ public class TestIndexWriterDelete extends LuceneTestCase {
}
IndexSearcher searcher = new IndexSearcher(newReader);
Hits hits = null;
ScoreDoc[] hits = null;
try {
hits = searcher.search(new TermQuery(searchTerm));
hits = searcher.search(new TermQuery(searchTerm), null, 1000).scoreDocs;
}
catch (IOException e) {
e.printStackTrace();
fail(testName + ": exception when searching: " + e);
}
int result2 = hits.length();
int result2 = hits.length;
if (success) {
if (x == 0 && result2 != END_COUNT) {
fail(testName


@ -22,14 +22,13 @@ import java.io.IOException;
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.PhraseQuery;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.Searcher;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.IndexInput;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.LuceneTestCase;
/**
@ -82,20 +81,20 @@ public class TestLazyProxSkipping extends LuceneTestCase {
this.searcher = new IndexSearcher(reader);
}
private Hits search() throws IOException {
private ScoreDoc[] search() throws IOException {
// create PhraseQuery "term1 term2" and search
PhraseQuery pq = new PhraseQuery();
pq.add(new Term(this.field, this.term1));
pq.add(new Term(this.field, this.term2));
return this.searcher.search(pq);
return this.searcher.search(pq, null, 1000).scoreDocs;
}
private void performTest(int numHits) throws IOException {
createIndex(numHits);
this.seeksCounter = 0;
Hits hits = search();
ScoreDoc[] hits = search();
// verify that the right number of docs was found
assertEquals(numHits, hits.length());
assertEquals(numHits, hits.length);
// check if the number of calls of seek() does not exceed the number of hits
assertTrue(this.seeksCounter <= numHits + 1);


@ -21,22 +21,21 @@ import java.io.IOException;
import java.util.Arrays;
import java.util.Collection;
import org.apache.lucene.util.LuceneTestCase;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.MapFieldSelector;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.Searcher;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.store.MockRAMDirectory;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.LuceneTestCase;
public class TestParallelReader extends LuceneTestCase {
@ -197,13 +196,13 @@ public class TestParallelReader extends LuceneTestCase {
private void queryTest(Query query) throws IOException {
Hits parallelHits = parallel.search(query);
Hits singleHits = single.search(query);
assertEquals(parallelHits.length(), singleHits.length());
for(int i = 0; i < parallelHits.length(); i++) {
assertEquals(parallelHits.score(i), singleHits.score(i), 0.001f);
Document docParallel = parallelHits.doc(i);
Document docSingle = singleHits.doc(i);
ScoreDoc[] parallelHits = parallel.search(query, null, 1000).scoreDocs;
ScoreDoc[] singleHits = single.search(query, null, 1000).scoreDocs;
assertEquals(parallelHits.length, singleHits.length);
for(int i = 0; i < parallelHits.length; i++) {
assertEquals(parallelHits[i].score, singleHits[i].score, 0.001f);
Document docParallel = parallel.doc(parallelHits[i].doc);
Document docSingle = single.doc(singleHits[i].doc);
assertEquals(docParallel.get("f1"), docSingle.get("f1"));
assertEquals(docParallel.get("f2"), docSingle.get("f2"));
assertEquals(docParallel.get("f3"), docSingle.get("f3"));


@ -17,7 +17,10 @@ package org.apache.lucene.queryParser;
* limitations under the License.
*/
import org.apache.lucene.util.LuceneTestCase;
import java.io.Reader;
import java.util.HashMap;
import java.util.Map;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenStream;
@ -26,17 +29,13 @@ import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
import java.io.Reader;
import java.util.HashMap;
import java.util.Map;
import org.apache.lucene.util.LuceneTestCase;
/**
* Tests QueryParser.
@ -297,8 +296,8 @@ public class TestMultiFieldQueryParser extends LuceneTestCase {
mfqp.setDefaultOperator(QueryParser.Operator.AND);
Query q = mfqp.parse("the footest");
IndexSearcher is = new IndexSearcher(ramDir);
Hits hits = is.search(q);
assertEquals(1, hits.length());
ScoreDoc[] hits = is.search(q, null, 1000).scoreDocs;
assertEquals(1, hits.length);
is.close();
}


@ -17,9 +17,23 @@ package org.apache.lucene.queryParser;
* limitations under the License.
*/
import org.apache.lucene.util.LuceneTestCase;
import org.apache.lucene.analysis.*;
import java.io.IOException;
import java.io.Reader;
import java.text.DateFormat;
import java.util.Calendar;
import java.util.Date;
import java.util.Locale;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.KeywordAnalyzer;
import org.apache.lucene.analysis.LowerCaseTokenizer;
import org.apache.lucene.analysis.SimpleAnalyzer;
import org.apache.lucene.analysis.StopAnalyzer;
import org.apache.lucene.analysis.StopFilter;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.DateField;
import org.apache.lucene.document.DateTools;
@ -27,15 +41,20 @@ import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.*;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.ConstantScoreRangeQuery;
import org.apache.lucene.search.FuzzyQuery;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.PhraseQuery;
import org.apache.lucene.search.PrefixQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.RangeQuery;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.WildcardQuery;
import org.apache.lucene.store.RAMDirectory;
import java.io.IOException;
import java.io.Reader;
import java.text.DateFormat;
import java.util.Calendar;
import java.util.Date;
import java.util.Locale;
import org.apache.lucene.util.LuceneTestCase;
/**
* Tests QueryParser.
@ -887,8 +906,8 @@ public class TestQueryParser extends LuceneTestCase {
QueryParser qp = new QueryParser("date", new WhitespaceAnalyzer());
qp.setLocale(Locale.ENGLISH);
Query q = qp.parse(query);
Hits hits = is.search(q);
assertEquals(expected, hits.length());
ScoreDoc[] hits = is.search(q, null, 1000).scoreDocs;
assertEquals(expected, hits.length);
}
private static void addDateDoc(String content, int year, int month,


@ -123,7 +123,7 @@ public class CheckHits {
QueryUtils.check(query,(IndexSearcher)searcher);
}
Hits hits = searcher.search(query);
ScoreDoc[] hits = searcher.search(query, null, 1000).scoreDocs;
Set correct = new TreeSet();
for (int i = 0; i < results.length; i++) {
@ -131,8 +131,8 @@ public class CheckHits {
}
Set actual = new TreeSet();
for (int i = 0; i < hits.length(); i++) {
actual.add(new Integer(hits.id(i)));
for (int i = 0; i < hits.length; i++) {
actual.add(new Integer(hits[i].doc));
}
TestCase.assertEquals(query.toString(defaultFieldName), correct, actual);
@ -141,11 +141,11 @@ public class CheckHits {
}
/** Tests that a Hits has an expected order of documents */
public static void checkDocIds(String mes, int[] results, Hits hits)
public static void checkDocIds(String mes, int[] results, ScoreDoc[] hits)
throws IOException {
TestCase.assertEquals(mes + " nr of hits", results.length, hits.length());
TestCase.assertEquals(mes + " nr of hits", results.length, hits.length);
for (int i = 0; i < results.length; i++) {
TestCase.assertEquals(mes + " doc nrs for hit " + i, results[i], hits.id(i));
TestCase.assertEquals(mes + " doc nrs for hit " + i, results[i], hits[i].doc);
}
}
@ -154,8 +154,8 @@ public class CheckHits {
*/
public static void checkHitsQuery(
Query query,
Hits hits1,
Hits hits2,
ScoreDoc[] hits1,
ScoreDoc[] hits2,
int[] results)
throws IOException {
@ -164,33 +164,33 @@ public class CheckHits {
checkEqual(query, hits1, hits2);
}
public static void checkEqual(Query query, Hits hits1, Hits hits2) throws IOException {
public static void checkEqual(Query query, ScoreDoc[] hits1, ScoreDoc[] hits2) throws IOException {
final float scoreTolerance = 1.0e-6f;
if (hits1.length() != hits2.length()) {
TestCase.fail("Unequal lengths: hits1="+hits1.length()+",hits2="+hits2.length());
if (hits1.length != hits2.length) {
TestCase.fail("Unequal lengths: hits1="+hits1.length+",hits2="+hits2.length);
}
for (int i = 0; i < hits1.length(); i++) {
if (hits1.id(i) != hits2.id(i)) {
for (int i = 0; i < hits1.length; i++) {
if (hits1[i].doc != hits2[i].doc) {
TestCase.fail("Hit " + i + " docnumbers don't match\n"
+ hits2str(hits1, hits2,0,0)
+ "for query:" + query.toString());
}
if ((hits1.id(i) != hits2.id(i))
|| Math.abs(hits1.score(i) - hits2.score(i)) > scoreTolerance)
if ((hits1[i].doc != hits2[i].doc)
|| Math.abs(hits1[i].score - hits2[i].score) > scoreTolerance)
{
TestCase.fail("Hit " + i + ", doc nrs " + hits1.id(i) + " and " + hits2.id(i)
+ "\nunequal : " + hits1.score(i)
+ "\n and: " + hits2.score(i)
TestCase.fail("Hit " + i + ", doc nrs " + hits1[i].doc + " and " + hits2[i].doc
+ "\nunequal : " + hits1[i].score
+ "\n and: " + hits2[i].score
+ "\nfor query:" + query.toString());
}
}
}
public static String hits2str(Hits hits1, Hits hits2, int start, int end) throws IOException {
public static String hits2str(ScoreDoc[] hits1, ScoreDoc[] hits2, int start, int end) throws IOException {
StringBuffer sb = new StringBuffer();
int len1=hits1==null ? 0 : hits1.length();
int len2=hits2==null ? 0 : hits2.length();
int len1=hits1==null ? 0 : hits1.length;
int len2=hits2==null ? 0 : hits2.length;
if (end<=0) {
end = Math.max(len1,len2);
}
@ -201,13 +201,13 @@ public class CheckHits {
for (int i=start; i<end; i++) {
sb.append("hit=").append(i).append(':');
if (i<len1) {
sb.append(" doc").append(hits1.id(i)).append('=').append(hits1.score(i));
sb.append(" doc").append(hits1[i].doc).append('=').append(hits1[i].score);
} else {
sb.append(" ");
}
sb.append(",\t");
if (i<len2) {
sb.append(" doc").append(hits2.id(i)).append('=').append(hits2.score(i));
sb.append(" doc").append(hits2[i].doc).append('=').append(hits2[i].score);
}
sb.append('\n');
}
@ -377,19 +377,6 @@ public class CheckHits {
new ExplanationAsserter
(q, null, this));
}
public Hits search(Query query, Filter filter) throws IOException {
checkExplanations(query);
return super.search(query,filter);
}
public Hits search(Query query, Sort sort) throws IOException {
checkExplanations(query);
return super.search(query,sort);
}
public Hits search(Query query, Filter filter,
Sort sort) throws IOException {
checkExplanations(query);
return super.search(query,filter,sort);
}
public TopFieldDocs search(Query query,
Filter filter,
int n,
@ -467,3 +454,4 @@ public class CheckHits {
}


@ -74,11 +74,11 @@ public class TestBoolean2 extends LuceneTestCase {
try {
Query query1 = makeQuery(queryText);
BooleanQuery.setAllowDocsOutOfOrder(true);
Hits hits1 = searcher.search(query1);
ScoreDoc[] hits1 = searcher.search(query1, null, 1000).scoreDocs;
Query query2 = makeQuery(queryText); // there should be no need to parse again...
BooleanQuery.setAllowDocsOutOfOrder(false);
Hits hits2 = searcher.search(query2);
ScoreDoc[] hits2 = searcher.search(query2, null, 1000).scoreDocs;
CheckHits.checkHitsQuery(query2, hits1, hits2, expDocNrs);
} finally { // even when a test fails.
@ -173,13 +173,11 @@ public class TestBoolean2 extends LuceneTestCase {
QueryUtils.check(q1,searcher);
Hits hits1 = searcher.search(q1,sort);
if (hits1.length()>0) hits1.id(hits1.length()-1);
ScoreDoc[] hits1 = searcher.search(q1,null, 1000, sort).scoreDocs;
BooleanQuery.setAllowDocsOutOfOrder(true);
Hits hits2 = searcher.search(q1,sort);
if (hits2.length()>0) hits2.id(hits1.length()-1);
tot+=hits2.length();
ScoreDoc[] hits2 = searcher.search(q1,null, 1000, sort).scoreDocs;
tot+=hits2.length;
CheckHits.checkEqual(q1, hits1, hits2);
}


@ -83,11 +83,11 @@ public class TestBooleanMinShouldMatch extends LuceneTestCase {
}
public void verifyNrHits(Query q, int expected) throws Exception {
Hits h = s.search(q);
if (expected != h.length()) {
printHits(getName(), h);
ScoreDoc[] h = s.search(q, null, 1000).scoreDocs;
if (expected != h.length) {
printHits(getName(), h, s);
}
assertEquals("result count", expected, h.length());
assertEquals("result count", expected, h.length);
QueryUtils.check(q,s);
}
@ -375,15 +375,15 @@ public class TestBooleanMinShouldMatch extends LuceneTestCase {
protected void printHits(String test, Hits h) throws Exception {
protected void printHits(String test, ScoreDoc[] h, Searcher searcher) throws Exception {
System.err.println("------- " + test + " -------");
DecimalFormat f = new DecimalFormat("0.000000");
for (int i = 0; i < h.length(); i++) {
Document d = h.doc(i);
float score = h.score(i);
for (int i = 0; i < h.length; i++) {
Document d = searcher.doc(h[i].doc);
float score = h[i].score;
System.err.println("#" + i + ": " + f.format(score) + " - " +
d.get("id") + " - " + d.get("data"));
}


@ -50,7 +50,7 @@ public class TestBooleanOr extends LuceneTestCase {
private int search(Query q) throws IOException {
QueryUtils.check(q,searcher);
return searcher.search(q).length();
return searcher.search(q, null, 1000).totalHits;
}
public void testElements() throws IOException {


@ -64,8 +64,8 @@ public class TestBooleanScorer extends LuceneTestCase
query.add(new TermQuery(new Term(FIELD, "9")), BooleanClause.Occur.MUST_NOT);
IndexSearcher indexSearcher = new IndexSearcher(directory);
Hits hits = indexSearcher.search(query);
assertEquals("Number of matched documents", 2, hits.length());
ScoreDoc[] hits = indexSearcher.search(query, null, 1000).scoreDocs;
assertEquals("Number of matched documents", 2, hits.length);
}
catch (IOException e) {


@ -104,17 +104,17 @@ public class TestConstantScoreRangeQuery extends BaseTestRangeFilter {
IndexReader reader = IndexReader.open(small);
IndexSearcher search = new IndexSearcher(reader);
Hits result;
ScoreDoc[] result;
// some hits match more terms than others, score should be the same
result = search.search(csrq("data","1","6",T,T));
int numHits = result.length();
result = search.search(csrq("data","1","6",T,T), null, 1000).scoreDocs;
int numHits = result.length;
assertEquals("wrong number of results", 6, numHits);
float score = result.score(0);
float score = result[0].score;
for (int i = 1; i < numHits; i++) {
assertEquals("score for " + i +" was not the same",
score, result.score(i));
score, result[i].score);
}
}
@ -148,10 +148,10 @@ public class TestConstantScoreRangeQuery extends BaseTestRangeFilter {
bq.add(q1, BooleanClause.Occur.SHOULD);
bq.add(q2, BooleanClause.Occur.SHOULD);
Hits hits = search.search(bq);
assertEquals(1, hits.id(0));
assertEquals(0, hits.id(1));
assertTrue(hits.score(0) > hits.score(1));
ScoreDoc[] hits = search.search(bq, null, 1000).scoreDocs;
assertEquals(1, hits[0].doc);
assertEquals(0, hits[1].doc);
assertTrue(hits[0].score > hits[1].score);
q1 = csrq("data","A","A",T,T); // matches document #0
q1.setBoost(10f);
@ -160,10 +160,10 @@ public class TestConstantScoreRangeQuery extends BaseTestRangeFilter {
bq.add(q1, BooleanClause.Occur.SHOULD);
bq.add(q2, BooleanClause.Occur.SHOULD);
hits = search.search(bq);
assertEquals(0, hits.id(0));
assertEquals(1, hits.id(1));
assertTrue(hits.score(0) > hits.score(1));
hits = search.search(bq, null, 1000).scoreDocs;
assertEquals(0, hits[0].doc);
assertEquals(1, hits[1].doc);
assertTrue(hits[0].score > hits[1].score);
}
@ -178,8 +178,8 @@ public class TestConstantScoreRangeQuery extends BaseTestRangeFilter {
Query rq = new RangeQuery(new Term("data","1"),new Term("data","4"),T);
Hits expected = search.search(rq);
int numHits = expected.length();
ScoreDoc[] expected = search.search(rq, null, 1000).scoreDocs;
int numHits = expected.length;
// now do a boolean query which also contains a
// ConstantScoreRangeQuery and make sure the order is the same
@ -188,12 +188,12 @@ public class TestConstantScoreRangeQuery extends BaseTestRangeFilter {
q.add(rq, BooleanClause.Occur.MUST);//T, F);
q.add(csrq("data","1","6", T, T), BooleanClause.Occur.MUST);//T, F);
Hits actual = search.search(q);
ScoreDoc[] actual = search.search(q, null, 1000).scoreDocs;
assertEquals("wrong number of hits", numHits, actual.length());
assertEquals("wrong number of hits", numHits, actual.length);
for (int i = 0; i < numHits; i++) {
assertEquals("mismatch in docid for hit#"+i,
expected.id(i), actual.id(i));
expected[i].doc, actual[i].doc);
}
}
@ -218,69 +218,69 @@ public class TestConstantScoreRangeQuery extends BaseTestRangeFilter {
assertEquals("num of docs", numDocs, 1+ maxId - minId);
Hits result;
ScoreDoc[] result;
// test id, bounded on both ends
result = search.search(csrq("id",minIP,maxIP,T,T));
assertEquals("find all", numDocs, result.length());
result = search.search(csrq("id",minIP,maxIP,T,T), null, numDocs).scoreDocs;
assertEquals("find all", numDocs, result.length);
result = search.search(csrq("id",minIP,maxIP,T,F));
assertEquals("all but last", numDocs-1, result.length());
result = search.search(csrq("id",minIP,maxIP,T,F), null, numDocs).scoreDocs;
assertEquals("all but last", numDocs-1, result.length);
result = search.search(csrq("id",minIP,maxIP,F,T));
assertEquals("all but first", numDocs-1, result.length());
result = search.search(csrq("id",minIP,maxIP,F,T), null, numDocs).scoreDocs;
assertEquals("all but first", numDocs-1, result.length);
result = search.search(csrq("id",minIP,maxIP,F,F));
assertEquals("all but ends", numDocs-2, result.length());
result = search.search(csrq("id",minIP,maxIP,F,F), null, numDocs).scoreDocs;
assertEquals("all but ends", numDocs-2, result.length);
result = search.search(csrq("id",medIP,maxIP,T,T));
assertEquals("med and up", 1+ maxId-medId, result.length());
result = search.search(csrq("id",medIP,maxIP,T,T), null, numDocs).scoreDocs;
assertEquals("med and up", 1+ maxId-medId, result.length);
result = search.search(csrq("id",minIP,medIP,T,T));
assertEquals("up to med", 1+ medId-minId, result.length());
result = search.search(csrq("id",minIP,medIP,T,T), null, numDocs).scoreDocs;
assertEquals("up to med", 1+ medId-minId, result.length);
// unbounded id
result = search.search(csrq("id",minIP,null,T,F));
assertEquals("min and up", numDocs, result.length());
result = search.search(csrq("id",minIP,null,T,F), null, numDocs).scoreDocs;
assertEquals("min and up", numDocs, result.length);
result = search.search(csrq("id",null,maxIP,F,T));
assertEquals("max and down", numDocs, result.length());
result = search.search(csrq("id",null,maxIP,F,T), null, numDocs).scoreDocs;
assertEquals("max and down", numDocs, result.length);
result = search.search(csrq("id",minIP,null,F,F));
assertEquals("not min, but up", numDocs-1, result.length());
result = search.search(csrq("id",minIP,null,F,F), null, numDocs).scoreDocs;
assertEquals("not min, but up", numDocs-1, result.length);
result = search.search(csrq("id",null,maxIP,F,F));
assertEquals("not max, but down", numDocs-1, result.length());
result = search.search(csrq("id",null,maxIP,F,F), null, numDocs).scoreDocs;
assertEquals("not max, but down", numDocs-1, result.length);
result = search.search(csrq("id",medIP,maxIP,T,F));
assertEquals("med and up, not max", maxId-medId, result.length());
result = search.search(csrq("id",medIP,maxIP,T,F), null, numDocs).scoreDocs;
assertEquals("med and up, not max", maxId-medId, result.length);
result = search.search(csrq("id",minIP,medIP,F,T));
assertEquals("not min, up to med", medId-minId, result.length());
result = search.search(csrq("id",minIP,medIP,F,T), null, numDocs).scoreDocs;
assertEquals("not min, up to med", medId-minId, result.length);
// very small sets
result = search.search(csrq("id",minIP,minIP,F,F));
assertEquals("min,min,F,F", 0, result.length());
result = search.search(csrq("id",medIP,medIP,F,F));
assertEquals("med,med,F,F", 0, result.length());
result = search.search(csrq("id",maxIP,maxIP,F,F));
assertEquals("max,max,F,F", 0, result.length());
result = search.search(csrq("id",minIP,minIP,F,F), null, numDocs).scoreDocs;
assertEquals("min,min,F,F", 0, result.length);
result = search.search(csrq("id",medIP,medIP,F,F), null, numDocs).scoreDocs;
assertEquals("med,med,F,F", 0, result.length);
result = search.search(csrq("id",maxIP,maxIP,F,F), null, numDocs).scoreDocs;
assertEquals("max,max,F,F", 0, result.length);
result = search.search(csrq("id",minIP,minIP,T,T));
assertEquals("min,min,T,T", 1, result.length());
result = search.search(csrq("id",null,minIP,F,T));
assertEquals("nul,min,F,T", 1, result.length());
result = search.search(csrq("id",minIP,minIP,T,T), null, numDocs).scoreDocs;
assertEquals("min,min,T,T", 1, result.length);
result = search.search(csrq("id",null,minIP,F,T), null, numDocs).scoreDocs;
assertEquals("nul,min,F,T", 1, result.length);
result = search.search(csrq("id",maxIP,maxIP,T,T));
assertEquals("max,max,T,T", 1, result.length());
result = search.search(csrq("id",maxIP,null,T,F));
assertEquals("max,nul,T,T", 1, result.length());
result = search.search(csrq("id",maxIP,maxIP,T,T), null, numDocs).scoreDocs;
assertEquals("max,max,T,T", 1, result.length);
result = search.search(csrq("id",maxIP,null,T,F), null, numDocs).scoreDocs;
assertEquals("max,nul,T,T", 1, result.length);
result = search.search(csrq("id",medIP,medIP,T,T));
assertEquals("med,med,T,T", 1, result.length());
result = search.search(csrq("id",medIP,medIP,T,T), null, numDocs).scoreDocs;
assertEquals("med,med,T,T", 1, result.length);
}
@ -297,53 +297,53 @@ public class TestConstantScoreRangeQuery extends BaseTestRangeFilter {
assertEquals("num of docs", numDocs, 1+ maxId - minId);
Hits result;
ScoreDoc[] result;
Query q = new TermQuery(new Term("body","body"));
// test extremes, bounded on both ends
result = search.search(csrq("rand",minRP,maxRP,T,T));
assertEquals("find all", numDocs, result.length());
result = search.search(csrq("rand",minRP,maxRP,T,T), null, numDocs).scoreDocs;
assertEquals("find all", numDocs, result.length);
result = search.search(csrq("rand",minRP,maxRP,T,F));
assertEquals("all but biggest", numDocs-1, result.length());
result = search.search(csrq("rand",minRP,maxRP,T,F), null, numDocs).scoreDocs;
assertEquals("all but biggest", numDocs-1, result.length);
result = search.search(csrq("rand",minRP,maxRP,F,T));
assertEquals("all but smallest", numDocs-1, result.length());
result = search.search(csrq("rand",minRP,maxRP,F,T), null, numDocs).scoreDocs;
assertEquals("all but smallest", numDocs-1, result.length);
result = search.search(csrq("rand",minRP,maxRP,F,F));
assertEquals("all but extremes", numDocs-2, result.length());
result = search.search(csrq("rand",minRP,maxRP,F,F), null, numDocs).scoreDocs;
assertEquals("all but extremes", numDocs-2, result.length);
// unbounded
result = search.search(csrq("rand",minRP,null,T,F));
assertEquals("smallest and up", numDocs, result.length());
result = search.search(csrq("rand",minRP,null,T,F), null, numDocs).scoreDocs;
assertEquals("smallest and up", numDocs, result.length);
result = search.search(csrq("rand",null,maxRP,F,T));
assertEquals("biggest and down", numDocs, result.length());
result = search.search(csrq("rand",null,maxRP,F,T), null, numDocs).scoreDocs;
assertEquals("biggest and down", numDocs, result.length);
result = search.search(csrq("rand",minRP,null,F,F));
assertEquals("not smallest, but up", numDocs-1, result.length());
result = search.search(csrq("rand",minRP,null,F,F), null, numDocs).scoreDocs;
assertEquals("not smallest, but up", numDocs-1, result.length);
result = search.search(csrq("rand",null,maxRP,F,F));
assertEquals("not biggest, but down", numDocs-1, result.length());
result = search.search(csrq("rand",null,maxRP,F,F), null, numDocs).scoreDocs;
assertEquals("not biggest, but down", numDocs-1, result.length);
// very small sets
result = search.search(csrq("rand",minRP,minRP,F,F));
assertEquals("min,min,F,F", 0, result.length());
result = search.search(csrq("rand",maxRP,maxRP,F,F));
assertEquals("max,max,F,F", 0, result.length());
result = search.search(csrq("rand",minRP,minRP,F,F), null, numDocs).scoreDocs;
assertEquals("min,min,F,F", 0, result.length);
result = search.search(csrq("rand",maxRP,maxRP,F,F), null, numDocs).scoreDocs;
assertEquals("max,max,F,F", 0, result.length);
result = search.search(csrq("rand",minRP,minRP,T,T));
assertEquals("min,min,T,T", 1, result.length());
result = search.search(csrq("rand",null,minRP,F,T));
assertEquals("nul,min,F,T", 1, result.length());
result = search.search(csrq("rand",minRP,minRP,T,T), null, numDocs).scoreDocs;
assertEquals("min,min,T,T", 1, result.length);
result = search.search(csrq("rand",null,minRP,F,T), null, numDocs).scoreDocs;
assertEquals("nul,min,F,T", 1, result.length);
result = search.search(csrq("rand",maxRP,maxRP,T,T));
assertEquals("max,max,T,T", 1, result.length());
result = search.search(csrq("rand",maxRP,null,T,F));
assertEquals("max,nul,T,T", 1, result.length());
result = search.search(csrq("rand",maxRP,maxRP,T,T), null, numDocs).scoreDocs;
assertEquals("max,max,T,T", 1, result.length);
result = search.search(csrq("rand",maxRP,null,T,F), null, numDocs).scoreDocs;
assertEquals("max,nul,T,T", 1, result.length);
}


@ -151,24 +151,24 @@ implements Serializable {
private void matchHits (Searcher searcher, Sort sort)
throws IOException {
// make a query without sorting first
Hits hitsByRank = searcher.search(query);
ScoreDoc[] hitsByRank = searcher.search(query, null, 1000).scoreDocs;
checkHits(hitsByRank, "Sort by rank: "); // check for duplicates
Map resultMap = new TreeMap();
// store hits in TreeMap - TreeMap does not allow duplicates; existing entries are silently overwritten
for(int hitid=0;hitid<hitsByRank.length(); ++hitid) {
for(int hitid=0;hitid<hitsByRank.length; ++hitid) {
resultMap.put(
new Integer(hitsByRank.id(hitid)), // Key: Lucene Document ID
new Integer(hitsByRank[hitid].doc), // Key: Lucene Document ID
new Integer(hitid)); // Value: hits array index
}
// now make a query using the sort criteria
Hits resultSort = searcher.search (query, sort);
ScoreDoc[] resultSort = searcher.search (query, null, 1000, sort).scoreDocs;
checkHits(resultSort, "Sort by custom criteria: "); // check for duplicates
String lf = System.getProperty("line.separator", "\n");
// besides the sorting both sets of hits must be identical
for(int hitid=0;hitid<resultSort.length(); ++hitid) {
Integer idHitDate = new Integer(resultSort.id(hitid)); // document ID from sorted search
for(int hitid=0;hitid<resultSort.length; ++hitid) {
Integer idHitDate = new Integer(resultSort[hitid].doc); // document ID from sorted search
if(!resultMap.containsKey(idHitDate)) {
log("ID "+idHitDate+" not found. Possibly a duplicate.");
}
@ -189,33 +189,24 @@ implements Serializable {
* Check the hits for duplicates.
* @param hits
*/
private void checkHits(Hits hits, String prefix) {
private void checkHits(ScoreDoc[] hits, String prefix) {
if(hits!=null) {
Map idMap = new TreeMap();
for(int docnum=0;docnum<hits.length();++docnum) {
for(int docnum=0;docnum<hits.length;++docnum) {
Integer luceneId = null;
try {
luceneId = new Integer(hits.id(docnum));
if(idMap.containsKey(luceneId)) {
StringBuffer message = new StringBuffer(prefix);
message.append("Duplicate key for hit index = ");
message.append(docnum);
message.append(", previous index = ");
message.append(((Integer)idMap.get(luceneId)).toString());
message.append(", Lucene ID = ");
message.append(luceneId);
log(message.toString());
} else {
idMap.put(luceneId, new Integer(docnum));
}
} catch(IOException ioe) {
luceneId = new Integer(hits[docnum].doc);
if(idMap.containsKey(luceneId)) {
StringBuffer message = new StringBuffer(prefix);
message.append("Error occurred for hit index = ");
message.append("Duplicate key for hit index = ");
message.append(docnum);
message.append(" (");
message.append(ioe.getMessage());
message.append(")");
message.append(", previous index = ");
message.append(((Integer)idMap.get(luceneId)).toString());
message.append(", Lucene ID = ");
message.append(luceneId);
log(message.toString());
} else {
idMap.put(luceneId, new Integer(docnum));
}
}
}


@ -79,28 +79,28 @@ public class TestDateFilter
// search for something that does exists
Query query2 = new TermQuery(new Term("body", "sunny"));
Hits result;
ScoreDoc[] result;
// ensure that queries return expected results without DateFilter first
result = searcher.search(query1);
assertEquals(0, result.length());
result = searcher.search(query1, null, 1000).scoreDocs;
assertEquals(0, result.length);
result = searcher.search(query2);
assertEquals(1, result.length());
result = searcher.search(query2, null, 1000).scoreDocs;
assertEquals(1, result.length);
// run queries with DateFilter
result = searcher.search(query1, df1);
assertEquals(0, result.length());
result = searcher.search(query1, df1, 1000).scoreDocs;
assertEquals(0, result.length);
result = searcher.search(query1, df2);
assertEquals(0, result.length());
result = searcher.search(query1, df2, 1000).scoreDocs;
assertEquals(0, result.length);
result = searcher.search(query2, df1);
assertEquals(1, result.length());
result = searcher.search(query2, df1, 1000).scoreDocs;
assertEquals(1, result.length);
result = searcher.search(query2, df2);
assertEquals(0, result.length());
result = searcher.search(query2, df2, 1000).scoreDocs;
assertEquals(0, result.length);
}
/**
@@ -140,27 +140,27 @@ public class TestDateFilter
// search for something that does exists
Query query2 = new TermQuery(new Term("body", "sunny"));
Hits result;
ScoreDoc[] result;
// ensure that queries return expected results without DateFilter first
result = searcher.search(query1);
assertEquals(0, result.length());
result = searcher.search(query1, null, 1000).scoreDocs;
assertEquals(0, result.length);
result = searcher.search(query2);
assertEquals(1, result.length());
result = searcher.search(query2, null, 1000).scoreDocs;
assertEquals(1, result.length);
// run queries with DateFilter
result = searcher.search(query1, df1);
assertEquals(0, result.length());
result = searcher.search(query1, df1, 1000).scoreDocs;
assertEquals(0, result.length);
result = searcher.search(query1, df2);
assertEquals(0, result.length());
result = searcher.search(query1, df2, 1000).scoreDocs;
assertEquals(0, result.length);
result = searcher.search(query2, df1);
assertEquals(1, result.length());
result = searcher.search(query2, df1, 1000).scoreDocs;
assertEquals(1, result.length);
result = searcher.search(query2, df2);
assertEquals(0, result.length());
result = searcher.search(query2, df2, 1000).scoreDocs;
assertEquals(0, result.length);
}
}


@@ -27,7 +27,6 @@ import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.Sort;
@@ -81,9 +80,9 @@ public class TestDateSort extends TestCase {
// Execute the search and process the search results.
String[] actualOrder = new String[5];
Hits hits = searcher.search(query, sort);
for (int i = 0; i < hits.length(); i++) {
Document document = hits.doc(i);
ScoreDoc[] hits = searcher.search(query, null, 1000, sort).scoreDocs;
for (int i = 0; i < hits.length; i++) {
Document document = searcher.doc(hits[i].doc);
String text = document.get(TEXT_FIELD);
actualOrder[i] = text;
}


@@ -166,19 +166,19 @@ public class TestDisjunctionMaxQuery extends LuceneTestCase{
q.add(tq("hed","elephant"));
QueryUtils.check(q,s);
Hits h = s.search(q);
ScoreDoc[] h = s.search(q, null, 1000).scoreDocs;
try {
assertEquals("all docs should match " + q.toString(),
4, h.length());
4, h.length);
float score = h.score(0);
for (int i = 1; i < h.length(); i++) {
float score = h[0].score;
for (int i = 1; i < h.length; i++) {
assertEquals("score #" + i + " is not the same",
score, h.score(i), SCORE_COMP_THRESH);
score, h[i].score, SCORE_COMP_THRESH);
}
} catch (Error e) {
printHits("testSimpleEqualScores1",h);
printHits("testSimpleEqualScores1",h,s);
throw e;
}
@@ -193,18 +193,18 @@ public class TestDisjunctionMaxQuery extends LuceneTestCase{
QueryUtils.check(q,s);
Hits h = s.search(q);
ScoreDoc[] h = s.search(q, null, 1000).scoreDocs;
try {
assertEquals("3 docs should match " + q.toString(),
3, h.length());
float score = h.score(0);
for (int i = 1; i < h.length(); i++) {
3, h.length);
float score = h[0].score;
for (int i = 1; i < h.length; i++) {
assertEquals("score #" + i + " is not the same",
score, h.score(i), SCORE_COMP_THRESH);
score, h[i].score, SCORE_COMP_THRESH);
}
} catch (Error e) {
printHits("testSimpleEqualScores2",h);
printHits("testSimpleEqualScores2",h, s);
throw e;
}
@@ -220,18 +220,18 @@ public class TestDisjunctionMaxQuery extends LuceneTestCase{
QueryUtils.check(q,s);
Hits h = s.search(q);
ScoreDoc[] h = s.search(q, null, 1000).scoreDocs;
try {
assertEquals("all docs should match " + q.toString(),
4, h.length());
float score = h.score(0);
for (int i = 1; i < h.length(); i++) {
4, h.length);
float score = h[0].score;
for (int i = 1; i < h.length; i++) {
assertEquals("score #" + i + " is not the same",
score, h.score(i), SCORE_COMP_THRESH);
score, h[i].score, SCORE_COMP_THRESH);
}
} catch (Error e) {
printHits("testSimpleEqualScores3",h);
printHits("testSimpleEqualScores3",h, s);
throw e;
}
@@ -245,22 +245,22 @@ public class TestDisjunctionMaxQuery extends LuceneTestCase{
QueryUtils.check(q,s);
Hits h = s.search(q);
ScoreDoc[] h = s.search(q, null, 1000).scoreDocs;
try {
assertEquals("3 docs should match " + q.toString(),
3, h.length());
assertEquals("wrong first", "d2", h.doc(0).get("id"));
float score0 = h.score(0);
float score1 = h.score(1);
float score2 = h.score(2);
3, h.length);
assertEquals("wrong first", "d2", s.doc(h[0].doc).get("id"));
float score0 = h[0].score;
float score1 = h[1].score;
float score2 = h[2].score;
assertTrue("d2 does not have better score then others: " +
score0 + " >? " + score1,
score0 > score1);
assertEquals("d4 and d1 don't have equal scores",
score1, score2, SCORE_COMP_THRESH);
} catch (Error e) {
printHits("testSimpleTiebreaker",h);
printHits("testSimpleTiebreaker",h, s);
throw e;
}
}
@@ -286,18 +286,18 @@ public class TestDisjunctionMaxQuery extends LuceneTestCase{
QueryUtils.check(q,s);
Hits h = s.search(q);
ScoreDoc[] h = s.search(q, null, 1000).scoreDocs;
try {
assertEquals("3 docs should match " + q.toString(),
3, h.length());
float score = h.score(0);
for (int i = 1; i < h.length(); i++) {
3, h.length);
float score = h[0].score;
for (int i = 1; i < h.length; i++) {
assertEquals("score #" + i + " is not the same",
score, h.score(i), SCORE_COMP_THRESH);
score, h[i].score, SCORE_COMP_THRESH);
}
} catch (Error e) {
printHits("testBooleanRequiredEqualScores1",h);
printHits("testBooleanRequiredEqualScores1",h, s);
throw e;
}
}
@@ -321,23 +321,23 @@ public class TestDisjunctionMaxQuery extends LuceneTestCase{
QueryUtils.check(q,s);
Hits h = s.search(q);
ScoreDoc[] h = s.search(q, null, 1000).scoreDocs;
try {
assertEquals("4 docs should match " + q.toString(),
4, h.length());
float score = h.score(0);
for (int i = 1; i < h.length()-1; i++) { /* note: -1 */
4, h.length);
float score = h[0].score;
for (int i = 1; i < h.length-1; i++) { /* note: -1 */
assertEquals("score #" + i + " is not the same",
score, h.score(i), SCORE_COMP_THRESH);
score, h[i].score, SCORE_COMP_THRESH);
}
assertEquals("wrong last", "d1", h.doc(h.length()-1).get("id"));
float score1 = h.score(h.length()-1);
assertEquals("wrong last", "d1", s.doc(h[h.length-1].doc).get("id"));
float score1 = h[h.length-1].score;
assertTrue("d1 does not have worse score then others: " +
score + " >? " + score1,
score > score1);
} catch (Error e) {
printHits("testBooleanOptionalNoTiebreaker",h);
printHits("testBooleanOptionalNoTiebreaker",h, s);
throw e;
}
}
@@ -361,22 +361,22 @@ public class TestDisjunctionMaxQuery extends LuceneTestCase{
QueryUtils.check(q,s);
Hits h = s.search(q);
ScoreDoc[] h = s.search(q, null, 1000).scoreDocs;
try {
assertEquals("4 docs should match " + q.toString(),
4, h.length());
4, h.length);
float score0 = h.score(0);
float score1 = h.score(1);
float score2 = h.score(2);
float score3 = h.score(3);
float score0 = h[0].score;
float score1 = h[1].score;
float score2 = h[2].score;
float score3 = h[3].score;
String doc0 = h.doc(0).get("id");
String doc1 = h.doc(1).get("id");
String doc2 = h.doc(2).get("id");
String doc3 = h.doc(3).get("id");
String doc0 = s.doc(h[0].doc).get("id");
String doc1 = s.doc(h[1].doc).get("id");
String doc2 = s.doc(h[2].doc).get("id");
String doc3 = s.doc(h[3].doc).get("id");
assertTrue("doc0 should be d2 or d4: " + doc0,
doc0.equals("d2") || doc0.equals("d4"));
@@ -395,7 +395,7 @@ public class TestDisjunctionMaxQuery extends LuceneTestCase{
score2 > score3);
} catch (Error e) {
printHits("testBooleanOptionalWithTiebreaker",h);
printHits("testBooleanOptionalWithTiebreaker",h, s);
throw e;
}
@@ -420,22 +420,22 @@ public class TestDisjunctionMaxQuery extends LuceneTestCase{
QueryUtils.check(q,s);
Hits h = s.search(q);
ScoreDoc[] h = s.search(q, null, 1000).scoreDocs;
try {
assertEquals("4 docs should match " + q.toString(),
4, h.length());
4, h.length);
float score0 = h.score(0);
float score1 = h.score(1);
float score2 = h.score(2);
float score3 = h.score(3);
float score0 = h[0].score;
float score1 = h[1].score;
float score2 = h[2].score;
float score3 = h[3].score;
String doc0 = h.doc(0).get("id");
String doc1 = h.doc(1).get("id");
String doc2 = h.doc(2).get("id");
String doc3 = h.doc(3).get("id");
String doc0 = s.doc(h[0].doc).get("id");
String doc1 = s.doc(h[1].doc).get("id");
String doc2 = s.doc(h[2].doc).get("id");
String doc3 = s.doc(h[3].doc).get("id");
assertEquals("doc0 should be d4: ", "d4", doc0);
assertEquals("doc1 should be d3: ", "d3", doc1);
@@ -453,7 +453,7 @@ public class TestDisjunctionMaxQuery extends LuceneTestCase{
score2 > score3);
} catch (Error e) {
printHits("testBooleanOptionalWithTiebreakerAndBoost",h);
printHits("testBooleanOptionalWithTiebreakerAndBoost",h, s);
throw e;
}
}
@@ -476,15 +476,15 @@ public class TestDisjunctionMaxQuery extends LuceneTestCase{
}
protected void printHits(String test, Hits h) throws Exception {
protected void printHits(String test, ScoreDoc[] h, Searcher searcher) throws Exception {
System.err.println("------- " + test + " -------");
DecimalFormat f = new DecimalFormat("0.000000000");
for (int i = 0; i < h.length(); i++) {
Document d = h.doc(i);
float score = h.score(i);
for (int i = 0; i < h.length; i++) {
Document d = searcher.doc(h[i].doc);
float score = h[i].score;
System.err.println("#" + i + ": " + f.format(score) + " - " +
d.get("id"));
}


@@ -101,29 +101,29 @@ extends LuceneTestCase {
public void testFilteredQuery()
throws Exception {
Query filteredquery = new FilteredQuery (query, filter);
Hits hits = searcher.search (filteredquery);
assertEquals (1, hits.length());
assertEquals (1, hits.id(0));
ScoreDoc[] hits = searcher.search (filteredquery, null, 1000).scoreDocs;
assertEquals (1, hits.length);
assertEquals (1, hits[0].doc);
QueryUtils.check(filteredquery,searcher);
hits = searcher.search (filteredquery, new Sort("sorter"));
assertEquals (1, hits.length());
assertEquals (1, hits.id(0));
hits = searcher.search (filteredquery, null, 1000, new Sort("sorter")).scoreDocs;
assertEquals (1, hits.length);
assertEquals (1, hits[0].doc);
filteredquery = new FilteredQuery (new TermQuery (new Term ("field", "one")), filter);
hits = searcher.search (filteredquery);
assertEquals (2, hits.length());
hits = searcher.search (filteredquery, null, 1000).scoreDocs;
assertEquals (2, hits.length);
QueryUtils.check(filteredquery,searcher);
filteredquery = new FilteredQuery (new TermQuery (new Term ("field", "x")), filter);
hits = searcher.search (filteredquery);
assertEquals (1, hits.length());
assertEquals (3, hits.id(0));
hits = searcher.search (filteredquery, null, 1000).scoreDocs;
assertEquals (1, hits.length);
assertEquals (3, hits[0].doc);
QueryUtils.check(filteredquery,searcher);
filteredquery = new FilteredQuery (new TermQuery (new Term ("field", "y")), filter);
hits = searcher.search (filteredquery);
assertEquals (0, hits.length());
hits = searcher.search (filteredquery, null, 1000).scoreDocs;
assertEquals (0, hits.length);
QueryUtils.check(filteredquery,searcher);
// test boost
@@ -163,13 +163,13 @@ extends LuceneTestCase {
* Tests whether the scores of the two queries are the same.
*/
public void assertScoreEquals(Query q1, Query q2) throws Exception {
Hits hits1 = searcher.search (q1);
Hits hits2 = searcher.search (q2);
ScoreDoc[] hits1 = searcher.search (q1, null, 1000).scoreDocs;
ScoreDoc[] hits2 = searcher.search (q2, null, 1000).scoreDocs;
assertEquals(hits1.length(), hits2.length());
assertEquals(hits1.length, hits2.length);
for (int i = 0; i < hits1.length(); i++) {
assertEquals(hits1.score(i), hits2.score(i), 0.0000001f);
for (int i = 0; i < hits1.length; i++) {
assertEquals(hits1[i].score, hits2[i].score, 0.0000001f);
}
}
@@ -181,8 +181,8 @@ extends LuceneTestCase {
new Term("sorter", "b"), new Term("sorter", "d"), true);
Query filteredquery = new FilteredQuery(rq, filter);
Hits hits = searcher.search(filteredquery);
assertEquals(2, hits.length());
ScoreDoc[] hits = searcher.search(filteredquery, null, 1000).scoreDocs;
assertEquals(2, hits.length);
QueryUtils.check(filteredquery,searcher);
}
@@ -194,11 +194,12 @@ extends LuceneTestCase {
query = new FilteredQuery(new MatchAllDocsQuery(),
new SingleDocTestFilter(1));
bq.add(query, BooleanClause.Occur.MUST);
Hits hits = searcher.search(bq);
assertEquals(0, hits.length());
ScoreDoc[] hits = searcher.search(bq, null, 1000).scoreDocs;
assertEquals(0, hits.length);
QueryUtils.check(query,searcher);
}
}


@@ -63,8 +63,8 @@ public class TestFilteredSearch extends TestCase
IndexSearcher indexSearcher = new IndexSearcher(directory);
org.apache.lucene.search.Hits hits = indexSearcher.search(booleanQuery, filter);
assertEquals("Number of matched documents", 1, hits.length());
ScoreDoc[] hits = indexSearcher.search(booleanQuery, filter, 1000).scoreDocs;
assertEquals("Number of matched documents", 1, hits.length);
}
catch (IOException e) {


@@ -49,114 +49,114 @@ public class TestFuzzyQuery extends LuceneTestCase {
IndexSearcher searcher = new IndexSearcher(directory);
FuzzyQuery query = new FuzzyQuery(new Term("field", "aaaaa"), FuzzyQuery.defaultMinSimilarity, 0);
Hits hits = searcher.search(query);
assertEquals(3, hits.length());
ScoreDoc[] hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(3, hits.length);
// same with prefix
query = new FuzzyQuery(new Term("field", "aaaaa"), FuzzyQuery.defaultMinSimilarity, 1);
hits = searcher.search(query);
assertEquals(3, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(3, hits.length);
query = new FuzzyQuery(new Term("field", "aaaaa"), FuzzyQuery.defaultMinSimilarity, 2);
hits = searcher.search(query);
assertEquals(3, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(3, hits.length);
query = new FuzzyQuery(new Term("field", "aaaaa"), FuzzyQuery.defaultMinSimilarity, 3);
hits = searcher.search(query);
assertEquals(3, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(3, hits.length);
query = new FuzzyQuery(new Term("field", "aaaaa"), FuzzyQuery.defaultMinSimilarity, 4);
hits = searcher.search(query);
assertEquals(2, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(2, hits.length);
query = new FuzzyQuery(new Term("field", "aaaaa"), FuzzyQuery.defaultMinSimilarity, 5);
hits = searcher.search(query);
assertEquals(1, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(1, hits.length);
query = new FuzzyQuery(new Term("field", "aaaaa"), FuzzyQuery.defaultMinSimilarity, 6);
hits = searcher.search(query);
assertEquals(1, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(1, hits.length);
// not similar enough:
query = new FuzzyQuery(new Term("field", "xxxxx"), FuzzyQuery.defaultMinSimilarity, 0);
hits = searcher.search(query);
assertEquals(0, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(0, hits.length);
query = new FuzzyQuery(new Term("field", "aaccc"), FuzzyQuery.defaultMinSimilarity, 0); // edit distance to "aaaaa" = 3
hits = searcher.search(query);
assertEquals(0, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(0, hits.length);
// query identical to a word in the index:
query = new FuzzyQuery(new Term("field", "aaaaa"), FuzzyQuery.defaultMinSimilarity, 0);
hits = searcher.search(query);
assertEquals(3, hits.length());
assertEquals(hits.doc(0).get("field"), ("aaaaa"));
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(3, hits.length);
assertEquals(searcher.doc(hits[0].doc).get("field"), ("aaaaa"));
// default allows for up to two edits:
assertEquals(hits.doc(1).get("field"), ("aaaab"));
assertEquals(hits.doc(2).get("field"), ("aaabb"));
assertEquals(searcher.doc(hits[1].doc).get("field"), ("aaaab"));
assertEquals(searcher.doc(hits[2].doc).get("field"), ("aaabb"));
// query similar to a word in the index:
query = new FuzzyQuery(new Term("field", "aaaac"), FuzzyQuery.defaultMinSimilarity, 0);
hits = searcher.search(query);
assertEquals(3, hits.length());
assertEquals(hits.doc(0).get("field"), ("aaaaa"));
assertEquals(hits.doc(1).get("field"), ("aaaab"));
assertEquals(hits.doc(2).get("field"), ("aaabb"));
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(3, hits.length);
assertEquals(searcher.doc(hits[0].doc).get("field"), ("aaaaa"));
assertEquals(searcher.doc(hits[1].doc).get("field"), ("aaaab"));
assertEquals(searcher.doc(hits[2].doc).get("field"), ("aaabb"));
// now with prefix
query = new FuzzyQuery(new Term("field", "aaaac"), FuzzyQuery.defaultMinSimilarity, 1);
hits = searcher.search(query);
assertEquals(3, hits.length());
assertEquals(hits.doc(0).get("field"), ("aaaaa"));
assertEquals(hits.doc(1).get("field"), ("aaaab"));
assertEquals(hits.doc(2).get("field"), ("aaabb"));
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(3, hits.length);
assertEquals(searcher.doc(hits[0].doc).get("field"), ("aaaaa"));
assertEquals(searcher.doc(hits[1].doc).get("field"), ("aaaab"));
assertEquals(searcher.doc(hits[2].doc).get("field"), ("aaabb"));
query = new FuzzyQuery(new Term("field", "aaaac"), FuzzyQuery.defaultMinSimilarity, 2);
hits = searcher.search(query);
assertEquals(3, hits.length());
assertEquals(hits.doc(0).get("field"), ("aaaaa"));
assertEquals(hits.doc(1).get("field"), ("aaaab"));
assertEquals(hits.doc(2).get("field"), ("aaabb"));
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(3, hits.length);
assertEquals(searcher.doc(hits[0].doc).get("field"), ("aaaaa"));
assertEquals(searcher.doc(hits[1].doc).get("field"), ("aaaab"));
assertEquals(searcher.doc(hits[2].doc).get("field"), ("aaabb"));
query = new FuzzyQuery(new Term("field", "aaaac"), FuzzyQuery.defaultMinSimilarity, 3);
hits = searcher.search(query);
assertEquals(3, hits.length());
assertEquals(hits.doc(0).get("field"), ("aaaaa"));
assertEquals(hits.doc(1).get("field"), ("aaaab"));
assertEquals(hits.doc(2).get("field"), ("aaabb"));
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(3, hits.length);
assertEquals(searcher.doc(hits[0].doc).get("field"), ("aaaaa"));
assertEquals(searcher.doc(hits[1].doc).get("field"), ("aaaab"));
assertEquals(searcher.doc(hits[2].doc).get("field"), ("aaabb"));
query = new FuzzyQuery(new Term("field", "aaaac"), FuzzyQuery.defaultMinSimilarity, 4);
hits = searcher.search(query);
assertEquals(2, hits.length());
assertEquals(hits.doc(0).get("field"), ("aaaaa"));
assertEquals(hits.doc(1).get("field"), ("aaaab"));
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(2, hits.length);
assertEquals(searcher.doc(hits[0].doc).get("field"), ("aaaaa"));
assertEquals(searcher.doc(hits[1].doc).get("field"), ("aaaab"));
query = new FuzzyQuery(new Term("field", "aaaac"), FuzzyQuery.defaultMinSimilarity, 5);
hits = searcher.search(query);
assertEquals(0, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(0, hits.length);
query = new FuzzyQuery(new Term("field", "ddddX"), FuzzyQuery.defaultMinSimilarity, 0);
hits = searcher.search(query);
assertEquals(1, hits.length());
assertEquals(hits.doc(0).get("field"), ("ddddd"));
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(1, hits.length);
assertEquals(searcher.doc(hits[0].doc).get("field"), ("ddddd"));
// now with prefix
query = new FuzzyQuery(new Term("field", "ddddX"), FuzzyQuery.defaultMinSimilarity, 1);
hits = searcher.search(query);
assertEquals(1, hits.length());
assertEquals(hits.doc(0).get("field"), ("ddddd"));
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(1, hits.length);
assertEquals(searcher.doc(hits[0].doc).get("field"), ("ddddd"));
query = new FuzzyQuery(new Term("field", "ddddX"), FuzzyQuery.defaultMinSimilarity, 2);
hits = searcher.search(query);
assertEquals(1, hits.length());
assertEquals(hits.doc(0).get("field"), ("ddddd"));
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(1, hits.length);
assertEquals(searcher.doc(hits[0].doc).get("field"), ("ddddd"));
query = new FuzzyQuery(new Term("field", "ddddX"), FuzzyQuery.defaultMinSimilarity, 3);
hits = searcher.search(query);
assertEquals(1, hits.length());
assertEquals(hits.doc(0).get("field"), ("ddddd"));
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(1, hits.length);
assertEquals(searcher.doc(hits[0].doc).get("field"), ("ddddd"));
query = new FuzzyQuery(new Term("field", "ddddX"), FuzzyQuery.defaultMinSimilarity, 4);
hits = searcher.search(query);
assertEquals(1, hits.length());
assertEquals(hits.doc(0).get("field"), ("ddddd"));
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(1, hits.length);
assertEquals(searcher.doc(hits[0].doc).get("field"), ("ddddd"));
query = new FuzzyQuery(new Term("field", "ddddX"), FuzzyQuery.defaultMinSimilarity, 5);
hits = searcher.search(query);
assertEquals(0, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(0, hits.length);
// different field = no match:
query = new FuzzyQuery(new Term("anotherfield", "ddddX"), FuzzyQuery.defaultMinSimilarity, 0);
hits = searcher.search(query);
assertEquals(0, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(0, hits.length);
searcher.close();
directory.close();
@@ -174,64 +174,64 @@ public class TestFuzzyQuery extends LuceneTestCase {
FuzzyQuery query;
// not similar enough:
query = new FuzzyQuery(new Term("field", "xxxxx"), FuzzyQuery.defaultMinSimilarity, 0);
Hits hits = searcher.search(query);
assertEquals(0, hits.length());
ScoreDoc[] hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(0, hits.length);
// edit distance to "aaaaaaa" = 3, this matches because the string is longer than
// in testDefaultFuzziness so a bigger difference is allowed:
query = new FuzzyQuery(new Term("field", "aaaaccc"), FuzzyQuery.defaultMinSimilarity, 0);
hits = searcher.search(query);
assertEquals(1, hits.length());
assertEquals(hits.doc(0).get("field"), ("aaaaaaa"));
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(1, hits.length);
assertEquals(searcher.doc(hits[0].doc).get("field"), ("aaaaaaa"));
// now with prefix
query = new FuzzyQuery(new Term("field", "aaaaccc"), FuzzyQuery.defaultMinSimilarity, 1);
hits = searcher.search(query);
assertEquals(1, hits.length());
assertEquals(hits.doc(0).get("field"), ("aaaaaaa"));
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(1, hits.length);
assertEquals(searcher.doc(hits[0].doc).get("field"), ("aaaaaaa"));
query = new FuzzyQuery(new Term("field", "aaaaccc"), FuzzyQuery.defaultMinSimilarity, 4);
hits = searcher.search(query);
assertEquals(1, hits.length());
assertEquals(hits.doc(0).get("field"), ("aaaaaaa"));
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(1, hits.length);
assertEquals(searcher.doc(hits[0].doc).get("field"), ("aaaaaaa"));
query = new FuzzyQuery(new Term("field", "aaaaccc"), FuzzyQuery.defaultMinSimilarity, 5);
hits = searcher.search(query);
assertEquals(0, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(0, hits.length);
// no match, more than half of the characters is wrong:
query = new FuzzyQuery(new Term("field", "aaacccc"), FuzzyQuery.defaultMinSimilarity, 0);
hits = searcher.search(query);
assertEquals(0, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(0, hits.length);
// now with prefix
query = new FuzzyQuery(new Term("field", "aaacccc"), FuzzyQuery.defaultMinSimilarity, 2);
hits = searcher.search(query);
assertEquals(0, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(0, hits.length);
// "student" and "stellent" are indeed similar to "segment" by default:
query = new FuzzyQuery(new Term("field", "student"), FuzzyQuery.defaultMinSimilarity, 0);
hits = searcher.search(query);
assertEquals(1, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(1, hits.length);
query = new FuzzyQuery(new Term("field", "stellent"), FuzzyQuery.defaultMinSimilarity, 0);
hits = searcher.search(query);
assertEquals(1, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(1, hits.length);
// now with prefix
query = new FuzzyQuery(new Term("field", "student"), FuzzyQuery.defaultMinSimilarity, 1);
hits = searcher.search(query);
assertEquals(1, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(1, hits.length);
query = new FuzzyQuery(new Term("field", "stellent"), FuzzyQuery.defaultMinSimilarity, 1);
hits = searcher.search(query);
assertEquals(1, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(1, hits.length);
query = new FuzzyQuery(new Term("field", "student"), FuzzyQuery.defaultMinSimilarity, 2);
hits = searcher.search(query);
assertEquals(0, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(0, hits.length);
query = new FuzzyQuery(new Term("field", "stellent"), FuzzyQuery.defaultMinSimilarity, 2);
hits = searcher.search(query);
assertEquals(0, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(0, hits.length);
// "student" doesn't match anymore thanks to increased minimum similarity:
query = new FuzzyQuery(new Term("field", "student"), 0.6f, 0);
hits = searcher.search(query);
assertEquals(0, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(0, hits.length);
try {
query = new FuzzyQuery(new Term("field", "student"), 1.1f);


@@ -43,27 +43,27 @@ public class TestMatchAllDocsQuery extends LuceneTestCase {
iw.close();
IndexSearcher is = new IndexSearcher(dir);
Hits hits = is.search(new MatchAllDocsQuery());
assertEquals(3, hits.length());
ScoreDoc[] hits = is.search(new MatchAllDocsQuery(), null, 1000).scoreDocs;
assertEquals(3, hits.length);
// some artificial queries to trigger the use of skipTo():
BooleanQuery bq = new BooleanQuery();
bq.add(new MatchAllDocsQuery(), BooleanClause.Occur.MUST);
bq.add(new MatchAllDocsQuery(), BooleanClause.Occur.MUST);
hits = is.search(bq);
assertEquals(3, hits.length());
hits = is.search(bq, null, 1000).scoreDocs;
assertEquals(3, hits.length);
bq = new BooleanQuery();
bq.add(new MatchAllDocsQuery(), BooleanClause.Occur.MUST);
bq.add(new TermQuery(new Term("key", "three")), BooleanClause.Occur.MUST);
hits = is.search(bq);
assertEquals(1, hits.length());
hits = is.search(bq, null, 1000).scoreDocs;
assertEquals(1, hits.length);
// delete a document:
is.getIndexReader().deleteDocument(0);
hits = is.search(new MatchAllDocsQuery());
assertEquals(2, hits.length());
hits = is.search(new MatchAllDocsQuery(), null, 1000).scoreDocs;
assertEquals(2, hits.length);
is.close();
}


@@ -85,11 +85,11 @@ public class TestMultiPhraseQuery extends LuceneTestCase
query2.add((Term[])termsWithPrefix.toArray(new Term[0]));
assertEquals("body:\"strawberry (piccadilly pie pizza)\"", query2.toString());
Hits result;
result = searcher.search(query1);
assertEquals(2, result.length());
result = searcher.search(query2);
assertEquals(0, result.length());
ScoreDoc[] result;
result = searcher.search(query1, null, 1000).scoreDocs;
assertEquals(2, result.length);
result = searcher.search(query2, null, 1000).scoreDocs;
assertEquals(0, result.length);
// search for "blue* pizza":
MultiPhraseQuery query3 = new MultiPhraseQuery();
@@ -105,14 +105,14 @@ public class TestMultiPhraseQuery extends LuceneTestCase
query3.add((Term[])termsWithPrefix.toArray(new Term[0]));
query3.add(new Term("body", "pizza"));
result = searcher.search(query3);
assertEquals(2, result.length()); // blueberry pizza, bluebird pizza
result = searcher.search(query3, null, 1000).scoreDocs;
assertEquals(2, result.length); // blueberry pizza, bluebird pizza
assertEquals("body:\"(blueberry bluebird) pizza\"", query3.toString());
// test slop:
query3.setSlop(1);
result = searcher.search(query3);
assertEquals(3, result.length()); // blueberry pizza, bluebird pizza, bluebird foobar pizza
result = searcher.search(query3, null, 1000).scoreDocs;
assertEquals(3, result.length); // blueberry pizza, bluebird pizza, bluebird foobar pizza
MultiPhraseQuery query4 = new MultiPhraseQuery();
try {
@@ -161,9 +161,9 @@ public class TestMultiPhraseQuery extends LuceneTestCase
q.add(trouble, BooleanClause.Occur.MUST);
// exception will be thrown here without fix
Hits hits = searcher.search(q);
ScoreDoc[] hits = searcher.search(q, null, 1000).scoreDocs;
assertEquals("Wrong number of hits", 2, hits.length());
assertEquals("Wrong number of hits", 2, hits.length);
searcher.close();
}
@@ -186,8 +186,8 @@ public class TestMultiPhraseQuery extends LuceneTestCase
q.add(trouble, BooleanClause.Occur.MUST);
// exception will be thrown here without fix for #35626:
Hits hits = searcher.search(q);
assertEquals("Wrong number of hits", 0, hits.length());
ScoreDoc[] hits = searcher.search(q, null, 1000).scoreDocs;
assertEquals("Wrong number of hits", 0, hits.length);
searcher.close();
}


@@ -114,13 +114,13 @@ public class TestMultiSearcher extends LuceneTestCase
// creating the multiSearcher
Searcher mSearcher = getMultiSearcherInstance(searchers);
// performing the search
Hits hits = mSearcher.search(query);
ScoreDoc[] hits = mSearcher.search(query, null, 1000).scoreDocs;
assertEquals(3, hits.length());
assertEquals(3, hits.length);
// iterating over the hit documents
for (int i = 0; i < hits.length(); i++) {
Document d = hits.doc(i);
for (int i = 0; i < hits.length; i++) {
Document d = mSearcher.doc(hits[i].doc);
}
mSearcher.close();
@@ -143,26 +143,26 @@ public class TestMultiSearcher extends LuceneTestCase
// creating the mulitSearcher
MultiSearcher mSearcher2 = getMultiSearcherInstance(searchers2);
// performing the same search
Hits hits2 = mSearcher2.search(query);
ScoreDoc[] hits2 = mSearcher2.search(query, null, 1000).scoreDocs;
assertEquals(4, hits2.length());
assertEquals(4, hits2.length);
// iterating over the hit documents
for (int i = 0; i < hits2.length(); i++) {
for (int i = 0; i < hits2.length; i++) {
// no exception should happen at this point
Document d = hits2.doc(i);
Document d = mSearcher2.doc(hits2[i].doc);
}
// test the subSearcher() method:
Query subSearcherQuery = parser.parse("id:doc1");
hits2 = mSearcher2.search(subSearcherQuery);
assertEquals(2, hits2.length());
assertEquals(0, mSearcher2.subSearcher(hits2.id(0))); // hit from searchers2[0]
assertEquals(1, mSearcher2.subSearcher(hits2.id(1))); // hit from searchers2[1]
hits2 = mSearcher2.search(subSearcherQuery, null, 1000).scoreDocs;
assertEquals(2, hits2.length);
assertEquals(0, mSearcher2.subSearcher(hits2[0].doc)); // hit from searchers2[0]
assertEquals(1, mSearcher2.subSearcher(hits2[1].doc)); // hit from searchers2[1]
subSearcherQuery = parser.parse("id:doc2");
hits2 = mSearcher2.search(subSearcherQuery);
assertEquals(1, hits2.length());
assertEquals(1, mSearcher2.subSearcher(hits2.id(0))); // hit from searchers2[1]
hits2 = mSearcher2.search(subSearcherQuery, null, 1000).scoreDocs;
assertEquals(1, hits2.length);
assertEquals(1, mSearcher2.subSearcher(hits2[0].doc)); // hit from searchers2[1]
mSearcher2.close();
//--------------------------------------------------------------------
@@ -188,13 +188,13 @@ public class TestMultiSearcher extends LuceneTestCase
// creating the mulitSearcher
Searcher mSearcher3 = getMultiSearcherInstance(searchers3);
// performing the same search
Hits hits3 = mSearcher3.search(query);
ScoreDoc[] hits3 = mSearcher3.search(query, null, 1000).scoreDocs;
assertEquals(3, hits3.length());
assertEquals(3, hits3.length);
// iterating over the hit documents
for (int i = 0; i < hits3.length(); i++) {
Document d = hits3.doc(i);
for (int i = 0; i < hits3.length; i++) {
Document d = mSearcher3.doc(hits3[i].doc);
}
mSearcher3.close();
indexStoreA.close();
@@ -246,10 +246,10 @@ public class TestMultiSearcher extends LuceneTestCase
MultiSearcher searcher = getMultiSearcherInstance(new Searcher[]{indexSearcher1, indexSearcher2});
assertTrue("searcher is null and it shouldn't be", searcher != null);
Hits hits = searcher.search(query);
ScoreDoc[] hits = searcher.search(query, null, 1000).scoreDocs;
assertTrue("hits is null and it shouldn't be", hits != null);
assertTrue(hits.length() + " does not equal: " + 2, hits.length() == 2);
Document document = searcher.doc(hits.id(0));
assertTrue(hits.length + " does not equal: " + 2, hits.length == 2);
Document document = searcher.doc(hits[0].doc);
assertTrue("document is null and it shouldn't be", document != null);
assertTrue("document.getFields() Size: " + document.getFields().size() + " is not: " + 2, document.getFields().size() == 2);
//Should be one document from each directory
@@ -257,7 +257,7 @@ public class TestMultiSearcher extends LuceneTestCase
Set ftl = new HashSet();
ftl.add("other");
SetBasedFieldSelector fs = new SetBasedFieldSelector(ftl, Collections.EMPTY_SET);
document = searcher.doc(hits.id(0), fs);
document = searcher.doc(hits[0].doc, fs);
assertTrue("document is null and it shouldn't be", document != null);
assertTrue("document.getFields() Size: " + document.getFields().size() + " is not: " + 1, document.getFields().size() == 1);
String value = document.get("contents");
@@ -267,7 +267,7 @@ public class TestMultiSearcher extends LuceneTestCase
ftl.clear();
ftl.add("contents");
fs = new SetBasedFieldSelector(ftl, Collections.EMPTY_SET);
document = searcher.doc(hits.id(1), fs);
document = searcher.doc(hits[1].doc, fs);
value = document.get("contents");
assertTrue("value is null and it shouldn't be", value != null);
value = document.get("other");
@@ -289,7 +289,7 @@ public class TestMultiSearcher extends LuceneTestCase
RAMDirectory ramDirectory1;
IndexSearcher indexSearcher1;
Hits hits;
ScoreDoc[] hits;
ramDirectory1=new MockRAMDirectory();
@@ -299,14 +299,12 @@
indexSearcher1=new IndexSearcher(ramDirectory1);
hits=indexSearcher1.search(query);
hits=indexSearcher1.search(query, null, 1000).scoreDocs;
assertEquals(message, 2, hits.length());
assertEquals(message, 1, hits.score(0), 1e-6); // hits.score(0) is 0.594535 if only a single document is in first index
assertEquals(message, 2, hits.length);
// Store the scores for use later
float[] scores={ hits.score(0), hits.score(1) };
float[] scores={ hits[0].score, hits[1].score };
assertTrue(message, scores[0] > scores[1]);
@@ -331,23 +329,23 @@
Searcher searcher=getMultiSearcherInstance(new Searcher[] { indexSearcher1, indexSearcher2 });
hits=searcher.search(query);
hits=searcher.search(query, null, 1000).scoreDocs;
assertEquals(message, 2, hits.length());
assertEquals(message, 2, hits.length);
// The scores should be the same (within reason)
assertEquals(message, scores[0], hits.score(0), 1e-6); // This will be a document from ramDirectory1
assertEquals(message, scores[1], hits.score(1), 1e-6); // This will be a document from ramDirectory2
assertEquals(message, scores[0], hits[0].score, 1e-6); // This will be a document from ramDirectory1
assertEquals(message, scores[1], hits[1].score, 1e-6); // This will be a document from ramDirectory2
// Adding a Sort.RELEVANCE object should not change anything
hits=searcher.search(query, Sort.RELEVANCE);
hits=searcher.search(query, null, 1000, Sort.RELEVANCE).scoreDocs;
assertEquals(message, 2, hits.length());
assertEquals(message, 2, hits.length);
assertEquals(message, scores[0], hits.score(0), 1e-6); // This will be a document from ramDirectory1
assertEquals(message, scores[1], hits.score(1), 1e-6); // This will be a document from ramDirectory2
assertEquals(message, scores[0], hits[0].score, 1e-6); // This will be a document from ramDirectory1
assertEquals(message, scores[1], hits[1].score, 1e-6); // This will be a document from ramDirectory2
searcher.close();
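The rewrites above all apply the same mechanical migration away from the deprecated `Hits` class. A minimal, self-contained sketch of that pattern, assuming the Lucene 2.3-era API used throughout this diff (the class name `HitsMigrationDemo` and the field values are illustrative, not part of the commit):

```java
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.store.RAMDirectory;

public class HitsMigrationDemo {
  public static void main(String[] args) throws Exception {
    // build a tiny in-memory index of three documents
    RAMDirectory dir = new RAMDirectory();
    IndexWriter writer = new IndexWriter(dir, new WhitespaceAnalyzer(), true);
    String[] contents = { "apple banana", "apple", "cherry" };
    for (int i = 0; i < contents.length; i++) {
      Document doc = new Document();
      doc.add(new Field("field", contents[i], Field.Store.YES, Field.Index.TOKENIZED));
      writer.addDocument(doc);
    }
    writer.close();

    IndexSearcher searcher = new IndexSearcher(dir);
    TermQuery query = new TermQuery(new Term("field", "apple"));

    // deprecated style being removed by this commit:
    //   Hits hits = searcher.search(query);
    //   for (int i = 0; i < hits.length(); i++) {
    //     Document d = hits.doc(i);
    //     float score = hits.score(i);
    //   }

    // replacement: ask for the top n results explicitly, then walk the array
    ScoreDoc[] hits = searcher.search(query, null, 1000).scoreDocs;
    System.out.println("hits=" + hits.length);
    for (int i = 0; i < hits.length; i++) {
      Document d = searcher.doc(hits[i].doc); // was hits.doc(i)
      float score = hits[i].score;            // was hits.score(i)
      System.out.println(d.get("field") + " score=" + score);
    }
    searcher.close();
  }
}
```

Note how each accessor moves: `hits.length()` becomes `hits.length`, `hits.doc(i)` becomes `searcher.doc(hits[i].doc)`, `hits.id(i)` becomes `hits[i].doc`, and `hits.score(i)` becomes `hits[i].score`.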
@@ -90,17 +90,17 @@ public class TestMultiSearcherRanking extends LuceneTestCase {
if(verbose) System.out.println("Query: " + queryStr);
QueryParser queryParser = new QueryParser(FIELD_NAME, new StandardAnalyzer());
Query query = queryParser.parse(queryStr);
Hits multiSearcherHits = multiSearcher.search(query);
Hits singleSearcherHits = singleSearcher.search(query);
assertEquals(multiSearcherHits.length(), singleSearcherHits.length());
for (int i = 0; i < multiSearcherHits.length(); i++) {
Document docMulti = multiSearcherHits.doc(i);
Document docSingle = singleSearcherHits.doc(i);
ScoreDoc[] multiSearcherHits = multiSearcher.search(query, null, 1000).scoreDocs;
ScoreDoc[] singleSearcherHits = singleSearcher.search(query, null, 1000).scoreDocs;
assertEquals(multiSearcherHits.length, singleSearcherHits.length);
for (int i = 0; i < multiSearcherHits.length; i++) {
Document docMulti = multiSearcher.doc(multiSearcherHits[i].doc);
Document docSingle = singleSearcher.doc(singleSearcherHits[i].doc);
if(verbose) System.out.println("Multi: " + docMulti.get(FIELD_NAME) + " score="
+ multiSearcherHits.score(i));
+ multiSearcherHits[i].score);
if(verbose) System.out.println("Single: " + docSingle.get(FIELD_NAME) + " score="
+ singleSearcherHits.score(i));
assertEquals(multiSearcherHits.score(i), singleSearcherHits.score(i),
+ singleSearcherHits[i].score);
assertEquals(multiSearcherHits[i].score, singleSearcherHits[i].score,
0.001f);
assertEquals(docMulti.get(FIELD_NAME), docSingle.get(FIELD_NAME));
}
@@ -51,7 +51,7 @@ public class TestNot extends LuceneTestCase {
QueryParser parser = new QueryParser("field", new SimpleAnalyzer());
Query query = parser.parse("a NOT b");
//System.out.println(query);
Hits hits = searcher.search(query);
assertEquals(0, hits.length());
ScoreDoc[] hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(0, hits.length);
}
}
@@ -95,11 +95,11 @@ public class TestPhrasePrefixQuery
query1.add((Term[])termsWithPrefix.toArray(new Term[0]));
query2.add((Term[])termsWithPrefix.toArray(new Term[0]));
Hits result;
result = searcher.search(query1);
assertEquals(2, result.length());
ScoreDoc[] result;
result = searcher.search(query1, null, 1000).scoreDocs;
assertEquals(2, result.length);
result = searcher.search(query2);
assertEquals(0, result.length());
result = searcher.search(query2, null, 1000).scoreDocs;
assertEquals(0, result.length);
}
}
@@ -91,8 +91,8 @@ public class TestPhraseQuery extends LuceneTestCase {
query.setSlop(2);
query.add(new Term("field", "one"));
query.add(new Term("field", "five"));
Hits hits = searcher.search(query);
assertEquals(0, hits.length());
ScoreDoc[] hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(0, hits.length);
QueryUtils.check(query,searcher);
}
@@ -100,8 +100,8 @@ public class TestPhraseQuery extends LuceneTestCase {
query.setSlop(3);
query.add(new Term("field", "one"));
query.add(new Term("field", "five"));
Hits hits = searcher.search(query);
assertEquals(1, hits.length());
ScoreDoc[] hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(1, hits.length);
QueryUtils.check(query,searcher);
}
@@ -112,16 +112,16 @@ public class TestPhraseQuery extends LuceneTestCase {
// slop is zero by default
query.add(new Term("field", "four"));
query.add(new Term("field", "five"));
Hits hits = searcher.search(query);
assertEquals("exact match", 1, hits.length());
ScoreDoc[] hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals("exact match", 1, hits.length);
QueryUtils.check(query,searcher);
query = new PhraseQuery();
query.add(new Term("field", "two"));
query.add(new Term("field", "one"));
hits = searcher.search(query);
assertEquals("reverse not exact", 0, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals("reverse not exact", 0, hits.length);
QueryUtils.check(query,searcher);
}
@@ -130,8 +130,8 @@ public class TestPhraseQuery extends LuceneTestCase {
query.setSlop(1);
query.add(new Term("field", "one"));
query.add(new Term("field", "two"));
Hits hits = searcher.search(query);
assertEquals("in order", 1, hits.length());
ScoreDoc[] hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals("in order", 1, hits.length);
QueryUtils.check(query,searcher);
@@ -141,8 +141,8 @@ public class TestPhraseQuery extends LuceneTestCase {
query.setSlop(1);
query.add(new Term("field", "two"));
query.add(new Term("field", "one"));
hits = searcher.search(query);
assertEquals("reversed, slop not 2 or more", 0, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals("reversed, slop not 2 or more", 0, hits.length);
QueryUtils.check(query,searcher);
}
@@ -153,8 +153,8 @@ public class TestPhraseQuery extends LuceneTestCase {
query.setSlop(2); // must be at least two for reverse order match
query.add(new Term("field", "two"));
query.add(new Term("field", "one"));
Hits hits = searcher.search(query);
assertEquals("just sloppy enough", 1, hits.length());
ScoreDoc[] hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals("just sloppy enough", 1, hits.length);
QueryUtils.check(query,searcher);
@@ -162,8 +162,8 @@ public class TestPhraseQuery extends LuceneTestCase {
query.setSlop(2);
query.add(new Term("field", "three"));
query.add(new Term("field", "one"));
hits = searcher.search(query);
assertEquals("not sloppy enough", 0, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals("not sloppy enough", 0, hits.length);
QueryUtils.check(query,searcher);
}
@@ -177,8 +177,8 @@ public class TestPhraseQuery extends LuceneTestCase {
query.add(new Term("field", "one"));
query.add(new Term("field", "three"));
query.add(new Term("field", "five"));
Hits hits = searcher.search(query);
assertEquals("two total moves", 1, hits.length());
ScoreDoc[] hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals("two total moves", 1, hits.length);
QueryUtils.check(query,searcher);
@@ -187,14 +187,14 @@ public class TestPhraseQuery extends LuceneTestCase {
query.add(new Term("field", "five"));
query.add(new Term("field", "three"));
query.add(new Term("field", "one"));
hits = searcher.search(query);
assertEquals("slop of 5 not close enough", 0, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals("slop of 5 not close enough", 0, hits.length);
QueryUtils.check(query,searcher);
query.setSlop(6);
hits = searcher.search(query);
assertEquals("slop of 6 just right", 1, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals("slop of 6 just right", 1, hits.length);
QueryUtils.check(query,searcher);
}
@@ -215,8 +215,8 @@ public class TestPhraseQuery extends LuceneTestCase {
PhraseQuery query = new PhraseQuery();
query.add(new Term("field","stop"));
query.add(new Term("field","words"));
Hits hits = searcher.search(query);
assertEquals(1, hits.length());
ScoreDoc[] hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(1, hits.length);
QueryUtils.check(query,searcher);
@@ -224,8 +224,8 @@ public class TestPhraseQuery extends LuceneTestCase {
query = new PhraseQuery();
query.add(new Term("field", "words"));
query.add(new Term("field", "here"));
hits = searcher.search(query);
assertEquals(1, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(1, hits.length);
QueryUtils.check(query,searcher);
@@ -254,8 +254,8 @@ public class TestPhraseQuery extends LuceneTestCase {
PhraseQuery phraseQuery = new PhraseQuery();
phraseQuery.add(new Term("source", "marketing"));
phraseQuery.add(new Term("source", "info"));
Hits hits = searcher.search(phraseQuery);
assertEquals(2, hits.length());
ScoreDoc[] hits = searcher.search(phraseQuery, null, 1000).scoreDocs;
assertEquals(2, hits.length);
QueryUtils.check(phraseQuery,searcher);
@@ -263,8 +263,8 @@ public class TestPhraseQuery extends LuceneTestCase {
BooleanQuery booleanQuery = new BooleanQuery();
booleanQuery.add(termQuery, BooleanClause.Occur.MUST);
booleanQuery.add(phraseQuery, BooleanClause.Occur.MUST);
hits = searcher.search(booleanQuery);
assertEquals(1, hits.length());
hits = searcher.search(booleanQuery, null, 1000).scoreDocs;
assertEquals(1, hits.length);
QueryUtils.check(termQuery,searcher);
@@ -294,23 +294,23 @@ public class TestPhraseQuery extends LuceneTestCase {
phraseQuery.add(new Term("contents","map"));
phraseQuery.add(new Term("contents","entry"));
hits = searcher.search(termQuery);
assertEquals(3, hits.length());
hits = searcher.search(phraseQuery);
assertEquals(2, hits.length());
hits = searcher.search(termQuery, null, 1000).scoreDocs;
assertEquals(3, hits.length);
hits = searcher.search(phraseQuery, null, 1000).scoreDocs;
assertEquals(2, hits.length);
booleanQuery = new BooleanQuery();
booleanQuery.add(termQuery, BooleanClause.Occur.MUST);
booleanQuery.add(phraseQuery, BooleanClause.Occur.MUST);
hits = searcher.search(booleanQuery);
assertEquals(2, hits.length());
hits = searcher.search(booleanQuery, null, 1000).scoreDocs;
assertEquals(2, hits.length);
booleanQuery = new BooleanQuery();
booleanQuery.add(phraseQuery, BooleanClause.Occur.MUST);
booleanQuery.add(termQuery, BooleanClause.Occur.MUST);
hits = searcher.search(booleanQuery);
assertEquals(2, hits.length());
hits = searcher.search(booleanQuery, null, 1000).scoreDocs;
assertEquals(2, hits.length);
QueryUtils.check(booleanQuery,searcher);
@@ -343,16 +343,16 @@ public class TestPhraseQuery extends LuceneTestCase {
query.add(new Term("field", "firstname"));
query.add(new Term("field", "lastname"));
query.setSlop(Integer.MAX_VALUE);
Hits hits = searcher.search(query);
assertEquals(3, hits.length());
ScoreDoc[] hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(3, hits.length);
// Make sure that those matches where the terms appear closer to
// each other get a higher score:
assertEquals(0.71, hits.score(0), 0.01);
assertEquals(0, hits.id(0));
assertEquals(0.44, hits.score(1), 0.01);
assertEquals(1, hits.id(1));
assertEquals(0.31, hits.score(2), 0.01);
assertEquals(2, hits.id(2));
assertEquals(0.71, hits[0].score, 0.01);
assertEquals(0, hits[0].doc);
assertEquals(0.44, hits[1].score, 0.01);
assertEquals(1, hits[1].doc);
assertEquals(0.31, hits[2].score, 0.01);
assertEquals(2, hits[2].doc);
QueryUtils.check(query,searcher);
}
@@ -363,14 +363,14 @@ public class TestPhraseQuery extends LuceneTestCase {
query.add(new Term("repeated", "part"));
query.setSlop(100);
Hits hits = searcher.search(query);
assertEquals("slop of 100 just right", 1, hits.length());
ScoreDoc[] hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals("slop of 100 just right", 1, hits.length);
QueryUtils.check(query,searcher);
query.setSlop(99);
hits = searcher.search(query);
assertEquals("slop of 99 not enough", 0, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals("slop of 99 not enough", 0, hits.length);
QueryUtils.check(query,searcher);
}
@@ -382,8 +382,8 @@ public class TestPhraseQuery extends LuceneTestCase {
query.add(new Term("nonexist", "found"));
query.setSlop(2); // would be found this way
Hits hits = searcher.search(query);
assertEquals("phrase without repetitions exists in 2 docs", 2, hits.length());
ScoreDoc[] hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals("phrase without repetitions exists in 2 docs", 2, hits.length);
QueryUtils.check(query,searcher);
// phrase with repetitions that exists in 2 docs
@@ -393,8 +393,8 @@ public class TestPhraseQuery extends LuceneTestCase {
query.add(new Term("nonexist", "exist"));
query.setSlop(1); // would be found
hits = searcher.search(query);
assertEquals("phrase with repetitions exists in two docs", 2, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals("phrase with repetitions exists in two docs", 2, hits.length);
QueryUtils.check(query,searcher);
// phrase I with repetitions that does not exist in any doc
@@ -404,8 +404,8 @@ public class TestPhraseQuery extends LuceneTestCase {
query.add(new Term("nonexist", "phrase"));
query.setSlop(1000); // would not be found no matter how high the slop is
hits = searcher.search(query);
assertEquals("nonexisting phrase with repetitions does not exist in any doc", 0, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals("nonexisting phrase with repetitions does not exist in any doc", 0, hits.length);
QueryUtils.check(query,searcher);
// phrase II with repetitions that does not exist in any doc
@@ -416,8 +416,8 @@ public class TestPhraseQuery extends LuceneTestCase {
query.add(new Term("nonexist", "exist"));
query.setSlop(1000); // would not be found no matter how high the slop is
hits = searcher.search(query);
assertEquals("nonexisting phrase with repetitions does not exist in any doc", 0, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals("nonexisting phrase with repetitions does not exist in any doc", 0, hits.length);
QueryUtils.check(query,searcher);
}
@@ -437,17 +437,17 @@ public class TestPhraseQuery extends LuceneTestCase {
query.setSlop(0); // to use exact phrase scorer
query.add(new Term("field", "two"));
query.add(new Term("field", "three"));
Hits hits = searcher.search(query);
assertEquals("phrase found with exact phrase scorer", 1, hits.length());
float score0 = hits.score(0);
ScoreDoc[] hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals("phrase found with exact phrase scorer", 1, hits.length);
float score0 = hits[0].score;
//System.out.println("(exact) field: two three: "+score0);
QueryUtils.check(query,searcher);
// search on non-palindrome, find phrase with slop 2, though no slop required here.
query.setSlop(2); // to use sloppy scorer
hits = searcher.search(query);
assertEquals("just sloppy enough", 1, hits.length());
float score1 = hits.score(0);
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals("just sloppy enough", 1, hits.length);
float score1 = hits[0].score;
//System.out.println("(sloppy) field: two three: "+score1);
assertEquals("exact scorer and sloppy scorer score the same when slop does not matter",score0, score1, SCORE_COMP_THRESH);
QueryUtils.check(query,searcher);
@@ -457,9 +457,9 @@ public class TestPhraseQuery extends LuceneTestCase {
query.setSlop(2); // must be at least two for both ordered and reversed to match
query.add(new Term("palindrome", "two"));
query.add(new Term("palindrome", "three"));
hits = searcher.search(query);
assertEquals("just sloppy enough", 1, hits.length());
float score2 = hits.score(0);
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals("just sloppy enough", 1, hits.length);
float score2 = hits[0].score;
//System.out.println("palindrome: two three: "+score2);
QueryUtils.check(query,searcher);
@@ -471,9 +471,9 @@ public class TestPhraseQuery extends LuceneTestCase {
query.setSlop(2); // must be at least two for both ordered and reversed to match
query.add(new Term("palindrome", "three"));
query.add(new Term("palindrome", "two"));
hits = searcher.search(query);
assertEquals("just sloppy enough", 1, hits.length());
float score3 = hits.score(0);
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals("just sloppy enough", 1, hits.length);
float score3 = hits[0].score;
//System.out.println("palindrome: three two: "+score3);
QueryUtils.check(query,searcher);
@@ -498,17 +498,17 @@ public class TestPhraseQuery extends LuceneTestCase {
query.add(new Term("field", "one"));
query.add(new Term("field", "two"));
query.add(new Term("field", "three"));
Hits hits = searcher.search(query);
assertEquals("phrase found with exact phrase scorer", 1, hits.length());
float score0 = hits.score(0);
ScoreDoc[] hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals("phrase found with exact phrase scorer", 1, hits.length);
float score0 = hits[0].score;
//System.out.println("(exact) field: one two three: "+score0);
QueryUtils.check(query,searcher);
// search on non-palindrome, find phrase with slop 3, though no slop required here.
query.setSlop(4); // to use sloppy scorer
hits = searcher.search(query);
assertEquals("just sloppy enough", 1, hits.length());
float score1 = hits.score(0);
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals("just sloppy enough", 1, hits.length);
float score1 = hits[0].score;
//System.out.println("(sloppy) field: one two three: "+score1);
assertEquals("exact scorer and sloppy scorer score the same when slop does not matter",score0, score1, SCORE_COMP_THRESH);
QueryUtils.check(query,searcher);
@@ -519,9 +519,9 @@ public class TestPhraseQuery extends LuceneTestCase {
query.add(new Term("palindrome", "one"));
query.add(new Term("palindrome", "two"));
query.add(new Term("palindrome", "three"));
hits = searcher.search(query);
assertEquals("just sloppy enough", 1, hits.length());
float score2 = hits.score(0);
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals("just sloppy enough", 1, hits.length);
float score2 = hits[0].score;
//System.out.println("palindrome: one two three: "+score2);
QueryUtils.check(query,searcher);
@@ -534,9 +534,9 @@ public class TestPhraseQuery extends LuceneTestCase {
query.add(new Term("palindrome", "three"));
query.add(new Term("palindrome", "two"));
query.add(new Term("palindrome", "one"));
hits = searcher.search(query);
assertEquals("just sloppy enough", 1, hits.length());
float score3 = hits.score(0);
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals("just sloppy enough", 1, hits.length);
float score3 = hits[0].score;
//System.out.println("palindrome: three two one: "+score3);
QueryUtils.check(query,searcher);
@@ -17,25 +17,20 @@ package org.apache.lucene.search;
* limitations under the License.
*/
import org.apache.lucene.index.Term;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.PhraseQuery;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.store.RAMDirectory;
import java.io.Reader;
import java.io.StringReader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.StopAnalyzer;
import org.apache.lucene.analysis.StopFilter;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import java.io.Reader;
import java.io.StringReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.LuceneTestCase;
/**
@@ -76,85 +71,85 @@ public class TestPositionIncrement extends LuceneTestCase {
IndexSearcher searcher = new IndexSearcher(store);
PhraseQuery q;
Hits hits;
ScoreDoc[] hits;
q = new PhraseQuery();
q.add(new Term("field", "1"));
q.add(new Term("field", "2"));
hits = searcher.search(q);
assertEquals(0, hits.length());
hits = searcher.search(q, null, 1000).scoreDocs;
assertEquals(0, hits.length);
// same as previous, just specify positions explicitly.
q = new PhraseQuery();
q.add(new Term("field", "1"),0);
q.add(new Term("field", "2"),1);
hits = searcher.search(q);
assertEquals(0, hits.length());
hits = searcher.search(q, null, 1000).scoreDocs;
assertEquals(0, hits.length);
// specifying correct positions should find the phrase.
q = new PhraseQuery();
q.add(new Term("field", "1"),0);
q.add(new Term("field", "2"),2);
hits = searcher.search(q);
assertEquals(1, hits.length());
hits = searcher.search(q, null, 1000).scoreDocs;
assertEquals(1, hits.length);
q = new PhraseQuery();
q.add(new Term("field", "2"));
q.add(new Term("field", "3"));
hits = searcher.search(q);
assertEquals(1, hits.length());
hits = searcher.search(q, null, 1000).scoreDocs;
assertEquals(1, hits.length);
q = new PhraseQuery();
q.add(new Term("field", "3"));
q.add(new Term("field", "4"));
hits = searcher.search(q);
assertEquals(0, hits.length());
hits = searcher.search(q, null, 1000).scoreDocs;
assertEquals(0, hits.length);
// phrase query would find it when correct positions are specified.
q = new PhraseQuery();
q.add(new Term("field", "3"),0);
q.add(new Term("field", "4"),0);
hits = searcher.search(q);
assertEquals(1, hits.length());
hits = searcher.search(q, null, 1000).scoreDocs;
assertEquals(1, hits.length);
// phrase query should fail for a non-existing searched term
// even if another searched term exists in the same searched position.
q = new PhraseQuery();
q.add(new Term("field", "3"),0);
q.add(new Term("field", "9"),0);
hits = searcher.search(q);
assertEquals(0, hits.length());
hits = searcher.search(q, null, 1000).scoreDocs;
assertEquals(0, hits.length);
// multi-phrase query should succeed for a non-existing searched term
// because another searched term exists in the same searched position.
MultiPhraseQuery mq = new MultiPhraseQuery();
mq.add(new Term[]{new Term("field", "3"),new Term("field", "9")},0);
hits = searcher.search(mq);
assertEquals(1, hits.length());
hits = searcher.search(mq, null, 1000).scoreDocs;
assertEquals(1, hits.length);
q = new PhraseQuery();
q.add(new Term("field", "2"));
q.add(new Term("field", "4"));
hits = searcher.search(q);
assertEquals(1, hits.length());
hits = searcher.search(q, null, 1000).scoreDocs;
assertEquals(1, hits.length);
q = new PhraseQuery();
q.add(new Term("field", "3"));
q.add(new Term("field", "5"));
hits = searcher.search(q);
assertEquals(1, hits.length());
hits = searcher.search(q, null, 1000).scoreDocs;
assertEquals(1, hits.length);
q = new PhraseQuery();
q.add(new Term("field", "4"));
q.add(new Term("field", "5"));
hits = searcher.search(q);
assertEquals(1, hits.length());
hits = searcher.search(q, null, 1000).scoreDocs;
assertEquals(1, hits.length);
q = new PhraseQuery();
q.add(new Term("field", "2"));
q.add(new Term("field", "5"));
hits = searcher.search(q);
assertEquals(0, hits.length());
hits = searcher.search(q, null, 1000).scoreDocs;
assertEquals(0, hits.length);
// analyzer to introduce stopwords and increment gaps
Analyzer stpa = new Analyzer() {
@@ -168,19 +163,19 @@ public class TestPositionIncrement extends LuceneTestCase {
// should not find "1 2" because there is a gap of 1 in the index
QueryParser qp = new QueryParser("field",stpa);
q = (PhraseQuery) qp.parse("\"1 2\"");
hits = searcher.search(q);
assertEquals(0, hits.length());
hits = searcher.search(q, null, 1000).scoreDocs;
assertEquals(0, hits.length);
// omitted stop word cannot help because stop filter swallows the increments.
q = (PhraseQuery) qp.parse("\"1 stop 2\"");
hits = searcher.search(q);
assertEquals(0, hits.length());
hits = searcher.search(q, null, 1000).scoreDocs;
assertEquals(0, hits.length);
// query parser alone won't help, because stop filter swallows the increments.
qp.setEnablePositionIncrements(true);
q = (PhraseQuery) qp.parse("\"1 stop 2\"");
hits = searcher.search(q);
assertEquals(0, hits.length());
hits = searcher.search(q, null, 1000).scoreDocs;
assertEquals(0, hits.length);
boolean dflt = StopFilter.getEnablePositionIncrementsDefault();
try {
@@ -188,14 +183,14 @@ public class TestPositionIncrement extends LuceneTestCase {
qp.setEnablePositionIncrements(false);
StopFilter.setEnablePositionIncrementsDefault(true);
q = (PhraseQuery) qp.parse("\"1 stop 2\"");
hits = searcher.search(q);
assertEquals(0, hits.length());
hits = searcher.search(q, null, 1000).scoreDocs;
assertEquals(0, hits.length);
// when both qp and stopFilter propagate increments, we should find the doc.
qp.setEnablePositionIncrements(true);
q = (PhraseQuery) qp.parse("\"1 stop 2\"");
hits = searcher.search(q);
assertEquals(1, hits.length());
hits = searcher.search(q, null, 1000).scoreDocs;
assertEquals(1, hits.length);
} finally {
StopFilter.setEnablePositionIncrementsDefault(dflt);
}
@@ -51,55 +51,55 @@ public class TestPrefixFilter extends LuceneTestCase {
PrefixFilter filter = new PrefixFilter(new Term("category", "/Computers"));
Query query = new ConstantScoreQuery(filter);
IndexSearcher searcher = new IndexSearcher(directory);
Hits hits = searcher.search(query);
assertEquals(4, hits.length());
ScoreDoc[] hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(4, hits.length);
// test middle of values
filter = new PrefixFilter(new Term("category", "/Computers/Mac"));
query = new ConstantScoreQuery(filter);
hits = searcher.search(query);
assertEquals(2, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(2, hits.length);
// test start of values
filter = new PrefixFilter(new Term("category", "/Computers/Linux"));
query = new ConstantScoreQuery(filter);
hits = searcher.search(query);
assertEquals(1, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(1, hits.length);
// test end of values
filter = new PrefixFilter(new Term("category", "/Computers/Windows"));
query = new ConstantScoreQuery(filter);
hits = searcher.search(query);
assertEquals(1, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(1, hits.length);
// test non-existent
filter = new PrefixFilter(new Term("category", "/Computers/ObsoleteOS"));
query = new ConstantScoreQuery(filter);
hits = searcher.search(query);
assertEquals(0, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(0, hits.length);
// test non-existent, before values
filter = new PrefixFilter(new Term("category", "/Computers/AAA"));
query = new ConstantScoreQuery(filter);
hits = searcher.search(query);
assertEquals(0, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(0, hits.length);
// test non-existent, after values
filter = new PrefixFilter(new Term("category", "/Computers/ZZZ"));
query = new ConstantScoreQuery(filter);
hits = searcher.search(query);
assertEquals(0, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(0, hits.length);
// test zero length prefix
filter = new PrefixFilter(new Term("category", ""));
query = new ConstantScoreQuery(filter);
hits = searcher.search(query);
assertEquals(4, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(4, hits.length);
// test non-existent field
filter = new PrefixFilter(new Term("nonexistantfield", "/Computers"));
query = new ConstantScoreQuery(filter);
hits = searcher.search(query);
assertEquals(0, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(0, hits.length);
}
}
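One detail these rewrites make explicit: the deprecated `Hits` fetched more results lazily as the caller iterated, whereas `search(Query, Filter, int n)` collects at most `n` results up front, which is why the updated tests pass an explicit cap (`1000`, or `numDocs`) everywhere. A sketch of the filtered and sorted variants that appear in this diff, under the same Lucene 2.3-era API assumption (class name and field values are illustrative):

```java
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.ConstantScoreQuery;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.PrefixFilter;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.store.RAMDirectory;

public class ExplicitTopNDemo {
  public static void main(String[] args) throws Exception {
    RAMDirectory dir = new RAMDirectory();
    IndexWriter writer = new IndexWriter(dir, new WhitespaceAnalyzer(), true);
    String[] categories = { "/Computers/Mac", "/Computers/Windows", "/Music" };
    for (int i = 0; i < categories.length; i++) {
      Document doc = new Document();
      doc.add(new Field("category", categories[i], Field.Store.YES, Field.Index.UN_TOKENIZED));
      doc.add(new Field("body", "body", Field.Store.NO, Field.Index.TOKENIZED));
      writer.addDocument(doc);
    }
    writer.close();

    IndexSearcher searcher = new IndexSearcher(dir);
    Query q = new TermQuery(new Term("body", "body"));

    // filtered variant: search(query, filter) returned Hits; now the caller
    // also passes the maximum number of results to collect
    PrefixFilter filter = new PrefixFilter(new Term("category", "/Computers"));
    ScoreDoc[] hits = searcher.search(q, filter, 1000).scoreDocs;
    System.out.println("filtered=" + hits.length);

    // the same filter expressed as a query, as TestPrefixFilter does
    hits = searcher.search(new ConstantScoreQuery(filter), null, 1000).scoreDocs;
    System.out.println("constantScore=" + hits.length);

    // sorted variant: search(query, Sort.RELEVANCE) becomes
    hits = searcher.search(q, null, 1000, Sort.RELEVANCE).scoreDocs;
    System.out.println("sorted=" + hits.length);
    searcher.close();
  }
}
```

Callers that genuinely need every match can pass the document count (as the `RangeFilter` tests below do with `numDocs`) or use a `HitCollector` instead of a capped top-N search.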
@@ -47,11 +47,11 @@ public class TestPrefixQuery extends LuceneTestCase {
PrefixQuery query = new PrefixQuery(new Term("category", "/Computers"));
IndexSearcher searcher = new IndexSearcher(directory);
Hits hits = searcher.search(query);
assertEquals("All documents in /Computers category and below", 3, hits.length());
ScoreDoc[] hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals("All documents in /Computers category and below", 3, hits.length);
query = new PrefixQuery(new Term("category", "/Computers/Mac"));
hits = searcher.search(query);
assertEquals("One in /Computers/Mac", 1, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals("One in /Computers/Mac", 1, hits.length);
}
}
@@ -55,70 +55,70 @@ public class TestRangeFilter extends BaseTestRangeFilter {
assertEquals("num of docs", numDocs, 1+ maxId - minId);
Hits result;
ScoreDoc[] result;
Query q = new TermQuery(new Term("body","body"));
// test id, bounded on both ends
result = search.search(q,new RangeFilter("id",minIP,maxIP,T,T));
assertEquals("find all", numDocs, result.length());
result = search.search(q,new RangeFilter("id",minIP,maxIP,T,T), numDocs).scoreDocs;
assertEquals("find all", numDocs, result.length);
result = search.search(q,new RangeFilter("id",minIP,maxIP,T,F));
assertEquals("all but last", numDocs-1, result.length());
result = search.search(q,new RangeFilter("id",minIP,maxIP,T,F), numDocs).scoreDocs;
assertEquals("all but last", numDocs-1, result.length);
result = search.search(q,new RangeFilter("id",minIP,maxIP,F,T));
assertEquals("all but first", numDocs-1, result.length());
result = search.search(q,new RangeFilter("id",minIP,maxIP,F,T), numDocs).scoreDocs;
assertEquals("all but first", numDocs-1, result.length);
result = search.search(q,new RangeFilter("id",minIP,maxIP,F,F));
assertEquals("all but ends", numDocs-2, result.length());
result = search.search(q,new RangeFilter("id",minIP,maxIP,F,F), numDocs).scoreDocs;
assertEquals("all but ends", numDocs-2, result.length);
result = search.search(q,new RangeFilter("id",medIP,maxIP,T,T));
assertEquals("med and up", 1+ maxId-medId, result.length());
result = search.search(q,new RangeFilter("id",medIP,maxIP,T,T), numDocs).scoreDocs;
assertEquals("med and up", 1+ maxId-medId, result.length);
result = search.search(q,new RangeFilter("id",minIP,medIP,T,T));
assertEquals("up to med", 1+ medId-minId, result.length());
result = search.search(q,new RangeFilter("id",minIP,medIP,T,T), numDocs).scoreDocs;
assertEquals("up to med", 1+ medId-minId, result.length);
// unbounded id
result = search.search(q,new RangeFilter("id",minIP,null,T,F));
assertEquals("min and up", numDocs, result.length());
result = search.search(q,new RangeFilter("id",minIP,null,T,F), numDocs).scoreDocs;
assertEquals("min and up", numDocs, result.length);
result = search.search(q,new RangeFilter("id",null,maxIP,F,T));
assertEquals("max and down", numDocs, result.length());
result = search.search(q,new RangeFilter("id",null,maxIP,F,T), numDocs).scoreDocs;
assertEquals("max and down", numDocs, result.length);
result = search.search(q,new RangeFilter("id",minIP,null,F,F));
assertEquals("not min, but up", numDocs-1, result.length());
result = search.search(q,new RangeFilter("id",minIP,null,F,F), numDocs).scoreDocs;
assertEquals("not min, but up", numDocs-1, result.length);
result = search.search(q,new RangeFilter("id",null,maxIP,F,F));
assertEquals("not max, but down", numDocs-1, result.length());
result = search.search(q,new RangeFilter("id",null,maxIP,F,F), numDocs).scoreDocs;
assertEquals("not max, but down", numDocs-1, result.length);
result = search.search(q,new RangeFilter("id",medIP,maxIP,T,F));
assertEquals("med and up, not max", maxId-medId, result.length());
result = search.search(q,new RangeFilter("id",medIP,maxIP,T,F), numDocs).scoreDocs;
assertEquals("med and up, not max", maxId-medId, result.length);
result = search.search(q,new RangeFilter("id",minIP,medIP,F,T));
assertEquals("not min, up to med", medId-minId, result.length());
result = search.search(q,new RangeFilter("id",minIP,medIP,F,T), numDocs).scoreDocs;
assertEquals("not min, up to med", medId-minId, result.length);
// very small sets
result = search.search(q,new RangeFilter("id",minIP,minIP,F,F));
assertEquals("min,min,F,F", 0, result.length());
result = search.search(q,new RangeFilter("id",medIP,medIP,F,F));
assertEquals("med,med,F,F", 0, result.length());
result = search.search(q,new RangeFilter("id",maxIP,maxIP,F,F));
assertEquals("max,max,F,F", 0, result.length());
result = search.search(q,new RangeFilter("id",minIP,minIP,F,F), numDocs).scoreDocs;
assertEquals("min,min,F,F", 0, result.length);
result = search.search(q,new RangeFilter("id",medIP,medIP,F,F), numDocs).scoreDocs;
assertEquals("med,med,F,F", 0, result.length);
result = search.search(q,new RangeFilter("id",maxIP,maxIP,F,F), numDocs).scoreDocs;
assertEquals("max,max,F,F", 0, result.length);
result = search.search(q,new RangeFilter("id",minIP,minIP,T,T));
assertEquals("min,min,T,T", 1, result.length());
result = search.search(q,new RangeFilter("id",null,minIP,F,T));
assertEquals("nul,min,F,T", 1, result.length());
result = search.search(q,new RangeFilter("id",minIP,minIP,T,T), numDocs).scoreDocs;
assertEquals("min,min,T,T", 1, result.length);
result = search.search(q,new RangeFilter("id",null,minIP,F,T), numDocs).scoreDocs;
assertEquals("nul,min,F,T", 1, result.length);
result = search.search(q,new RangeFilter("id",maxIP,maxIP,T,T));
assertEquals("max,max,T,T", 1, result.length());
result = search.search(q,new RangeFilter("id",maxIP,null,T,F));
assertEquals("max,nul,T,T", 1, result.length());
result = search.search(q,new RangeFilter("id",maxIP,maxIP,T,T), numDocs).scoreDocs;
assertEquals("max,max,T,T", 1, result.length);
result = search.search(q,new RangeFilter("id",maxIP,null,T,F), numDocs).scoreDocs;
assertEquals("max,nul,T,T", 1, result.length);
result = search.search(q,new RangeFilter("id",medIP,medIP,T,T));
assertEquals("med,med,T,T", 1, result.length());
result = search.search(q,new RangeFilter("id",medIP,medIP,T,T), numDocs).scoreDocs;
assertEquals("med,med,T,T", 1, result.length);
}
@@ -134,53 +134,53 @@ public class TestRangeFilter extends BaseTestRangeFilter {
assertEquals("num of docs", numDocs, 1+ maxId - minId);
Hits result;
ScoreDoc[] result;
Query q = new TermQuery(new Term("body","body"));
// test extremes, bounded on both ends
result = search.search(q,new RangeFilter("rand",minRP,maxRP,T,T));
assertEquals("find all", numDocs, result.length());
result = search.search(q,new RangeFilter("rand",minRP,maxRP,T,T), numDocs).scoreDocs;
assertEquals("find all", numDocs, result.length);
result = search.search(q,new RangeFilter("rand",minRP,maxRP,T,F));
assertEquals("all but biggest", numDocs-1, result.length());
result = search.search(q,new RangeFilter("rand",minRP,maxRP,T,F), numDocs).scoreDocs;
assertEquals("all but biggest", numDocs-1, result.length);
result = search.search(q,new RangeFilter("rand",minRP,maxRP,F,T));
assertEquals("all but smallest", numDocs-1, result.length());
result = search.search(q,new RangeFilter("rand",minRP,maxRP,F,T), numDocs).scoreDocs;
assertEquals("all but smallest", numDocs-1, result.length);
result = search.search(q,new RangeFilter("rand",minRP,maxRP,F,F));
assertEquals("all but extremes", numDocs-2, result.length());
result = search.search(q,new RangeFilter("rand",minRP,maxRP,F,F), numDocs).scoreDocs;
assertEquals("all but extremes", numDocs-2, result.length);
// unbounded
result = search.search(q,new RangeFilter("rand",minRP,null,T,F));
assertEquals("smallest and up", numDocs, result.length());
result = search.search(q,new RangeFilter("rand",minRP,null,T,F), numDocs).scoreDocs;
assertEquals("smallest and up", numDocs, result.length);
result = search.search(q,new RangeFilter("rand",null,maxRP,F,T));
assertEquals("biggest and down", numDocs, result.length());
result = search.search(q,new RangeFilter("rand",null,maxRP,F,T), numDocs).scoreDocs;
assertEquals("biggest and down", numDocs, result.length);
result = search.search(q,new RangeFilter("rand",minRP,null,F,F));
assertEquals("not smallest, but up", numDocs-1, result.length());
result = search.search(q,new RangeFilter("rand",minRP,null,F,F), numDocs).scoreDocs;
assertEquals("not smallest, but up", numDocs-1, result.length);
result = search.search(q,new RangeFilter("rand",null,maxRP,F,F));
assertEquals("not biggest, but down", numDocs-1, result.length());
result = search.search(q,new RangeFilter("rand",null,maxRP,F,F), numDocs).scoreDocs;
assertEquals("not biggest, but down", numDocs-1, result.length);
// very small sets
result = search.search(q,new RangeFilter("rand",minRP,minRP,F,F));
assertEquals("min,min,F,F", 0, result.length());
result = search.search(q,new RangeFilter("rand",maxRP,maxRP,F,F));
assertEquals("max,max,F,F", 0, result.length());
result = search.search(q,new RangeFilter("rand",minRP,minRP,F,F), numDocs).scoreDocs;
assertEquals("min,min,F,F", 0, result.length);
result = search.search(q,new RangeFilter("rand",maxRP,maxRP,F,F), numDocs).scoreDocs;
assertEquals("max,max,F,F", 0, result.length);
result = search.search(q,new RangeFilter("rand",minRP,minRP,T,T));
assertEquals("min,min,T,T", 1, result.length());
result = search.search(q,new RangeFilter("rand",null,minRP,F,T));
assertEquals("nul,min,F,T", 1, result.length());
result = search.search(q,new RangeFilter("rand",minRP,minRP,T,T), numDocs).scoreDocs;
assertEquals("min,min,T,T", 1, result.length);
result = search.search(q,new RangeFilter("rand",null,minRP,F,T), numDocs).scoreDocs;
assertEquals("nul,min,F,T", 1, result.length);
result = search.search(q,new RangeFilter("rand",maxRP,maxRP,T,T));
assertEquals("max,max,T,T", 1, result.length());
result = search.search(q,new RangeFilter("rand",maxRP,null,T,F));
assertEquals("max,nul,T,T", 1, result.length());
result = search.search(q,new RangeFilter("rand",maxRP,maxRP,T,T), numDocs).scoreDocs;
assertEquals("max,max,T,T", 1, result.length);
result = search.search(q,new RangeFilter("rand",maxRP,null,T,F), numDocs).scoreDocs;
assertEquals("max,nul,T,T", 1, result.length);
}

@@ -46,20 +46,20 @@ public class TestRangeQuery extends LuceneTestCase {
false);
initializeIndex(new String[] {"A", "B", "C", "D"});
IndexSearcher searcher = new IndexSearcher(dir);
Hits hits = searcher.search(query);
assertEquals("A,B,C,D, only B in range", 1, hits.length());
ScoreDoc[] hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals("A,B,C,D, only B in range", 1, hits.length);
searcher.close();
initializeIndex(new String[] {"A", "B", "D"});
searcher = new IndexSearcher(dir);
hits = searcher.search(query);
assertEquals("A,B,D, only B in range", 1, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals("A,B,D, only B in range", 1, hits.length);
searcher.close();
addDoc("C");
searcher = new IndexSearcher(dir);
hits = searcher.search(query);
assertEquals("C added, still only B in range", 1, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals("C added, still only B in range", 1, hits.length);
searcher.close();
}
@@ -70,20 +70,20 @@ public class TestRangeQuery extends LuceneTestCase {
initializeIndex(new String[]{"A", "B", "C", "D"});
IndexSearcher searcher = new IndexSearcher(dir);
Hits hits = searcher.search(query);
assertEquals("A,B,C,D - A,B,C in range", 3, hits.length());
ScoreDoc[] hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals("A,B,C,D - A,B,C in range", 3, hits.length);
searcher.close();
initializeIndex(new String[]{"A", "B", "D"});
searcher = new IndexSearcher(dir);
hits = searcher.search(query);
assertEquals("A,B,D - A and B in range", 2, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals("A,B,D - A and B in range", 2, hits.length);
searcher.close();
addDoc("C");
searcher = new IndexSearcher(dir);
hits = searcher.search(query);
assertEquals("C added - A, B, C in range", 3, hits.length());
hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals("C added - A, B, C in range", 3, hits.length);
searcher.close();
}
@@ -158,3 +158,4 @@ public class TestRangeQuery extends LuceneTestCase {
}

@@ -81,9 +81,9 @@ public class TestRemoteCachingWrapperFilter extends LuceneTestCase {
private static void search(Query query, Filter filter, int hitNumber, String typeValue) throws Exception {
Searchable[] searchables = { getRemote() };
Searcher searcher = new MultiSearcher(searchables);
Hits result = searcher.search(query,filter);
assertEquals(1, result.length());
Document document = result.doc(hitNumber);
ScoreDoc[] result = searcher.search(query,filter, 1000).scoreDocs;
assertEquals(1, result.length);
Document document = searcher.doc(result[hitNumber].doc);
assertTrue("document is null and it shouldn't be", document != null);
assertEquals(typeValue, document.get("type"));
assertTrue("document.getFields() Size: " + document.getFields().size() + " is not: " + 3, document.getFields().size() == 3);

@@ -73,10 +73,10 @@ public class TestRemoteSearchable extends LuceneTestCase {
// try to search the published index
Searchable[] searchables = { getRemote() };
Searcher searcher = new MultiSearcher(searchables);
Hits result = searcher.search(query);
ScoreDoc[] result = searcher.search(query, null, 1000).scoreDocs;
assertEquals(1, result.length());
Document document = result.doc(0);
assertEquals(1, result.length);
Document document = searcher.doc(result[0].doc);
assertTrue("document is null and it shouldn't be", document != null);
assertEquals("test text", document.get("test"));
assertTrue("document.getFields() Size: " + document.getFields().size() + " is not: " + 2, document.getFields().size() == 2);
@@ -114,23 +114,23 @@ public class TestRemoteSearchable extends LuceneTestCase {
// try to search the published index
Searchable[] searchables = { getRemote() };
Searcher searcher = new MultiSearcher(searchables);
Hits hits = searcher.search(
ScoreDoc[] hits = searcher.search(
new TermQuery(new Term("test", "text")),
new QueryWrapperFilter(new TermQuery(new Term("test", "test"))));
assertEquals(1, hits.length());
Hits nohits = searcher.search(
new QueryWrapperFilter(new TermQuery(new Term("test", "test"))), 1000).scoreDocs;
assertEquals(1, hits.length);
ScoreDoc[] nohits = searcher.search(
new TermQuery(new Term("test", "text")),
new QueryWrapperFilter(new TermQuery(new Term("test", "non-existent-term"))));
assertEquals(0, nohits.length());
new QueryWrapperFilter(new TermQuery(new Term("test", "non-existent-term"))), 1000).scoreDocs;
assertEquals(0, nohits.length);
}
public void testConstantScoreQuery() throws Exception {
// try to search the published index
Searchable[] searchables = { getRemote() };
Searcher searcher = new MultiSearcher(searchables);
Hits hits = searcher.search(
ScoreDoc[] hits = searcher.search(
new ConstantScoreQuery(new QueryWrapperFilter(
new TermQuery(new Term("test", "test")))));
assertEquals(1, hits.length());
new TermQuery(new Term("test", "test")))), null, 1000).scoreDocs;
assertEquals(1, hits.length);
}
}

@@ -37,6 +37,7 @@ import org.apache.lucene.store.RAMDirectory;
* Test Hits searches with interleaved deletions.
*
* See {@link http://issues.apache.org/jira/browse/LUCENE-1096}.
* @deprecated Hits will be removed in Lucene 3.0
*/
public class TestSearchHitsWithDeletions extends TestCase {

@@ -459,9 +459,9 @@ implements Serializable {
public void testNormalizedScores() throws Exception {
// capture relevancy scores
HashMap scoresX = getScores (full.search (queryX));
HashMap scoresY = getScores (full.search (queryY));
HashMap scoresA = getScores (full.search (queryA));
HashMap scoresX = getScores (full.search (queryX, null, 1000).scoreDocs, full);
HashMap scoresY = getScores (full.search (queryY, null, 1000).scoreDocs, full);
HashMap scoresA = getScores (full.search (queryA, null, 1000).scoreDocs, full);
// we'll test searching locally, remote and multi
MultiSearcher remote = new MultiSearcher (new Searchable[] { getRemote() });
@@ -470,92 +470,92 @@ implements Serializable {
// change sorting and make sure relevancy stays the same
sort = new Sort();
assertSameValues (scoresX, getScores(full.search(queryX,sort)));
assertSameValues (scoresX, getScores(remote.search(queryX,sort)));
assertSameValues (scoresX, getScores(multi.search(queryX,sort)));
assertSameValues (scoresY, getScores(full.search(queryY,sort)));
assertSameValues (scoresY, getScores(remote.search(queryY,sort)));
assertSameValues (scoresY, getScores(multi.search(queryY,sort)));
assertSameValues (scoresA, getScores(full.search(queryA,sort)));
assertSameValues (scoresA, getScores(remote.search(queryA,sort)));
assertSameValues (scoresA, getScores(multi.search(queryA,sort)));
assertSameValues (scoresX, getScores (full.search (queryX, null, 1000, sort).scoreDocs, full));
assertSameValues (scoresX, getScores (remote.search (queryX, null, 1000, sort).scoreDocs, remote));
assertSameValues (scoresX, getScores (multi.search (queryX, null, 1000, sort).scoreDocs, multi));
assertSameValues (scoresY, getScores (full.search (queryY, null, 1000, sort).scoreDocs, full));
assertSameValues (scoresY, getScores (remote.search (queryY, null, 1000, sort).scoreDocs, remote));
assertSameValues (scoresY, getScores (multi.search (queryY, null, 1000, sort).scoreDocs, multi));
assertSameValues (scoresA, getScores (full.search (queryA, null, 1000, sort).scoreDocs, full));
assertSameValues (scoresA, getScores (remote.search (queryA, null, 1000, sort).scoreDocs, remote));
assertSameValues (scoresA, getScores (multi.search (queryA, null, 1000, sort).scoreDocs, multi));
sort.setSort(SortField.FIELD_DOC);
assertSameValues (scoresX, getScores(full.search(queryX,sort)));
assertSameValues (scoresX, getScores(remote.search(queryX,sort)));
assertSameValues (scoresX, getScores(multi.search(queryX,sort)));
assertSameValues (scoresY, getScores(full.search(queryY,sort)));
assertSameValues (scoresY, getScores(remote.search(queryY,sort)));
assertSameValues (scoresY, getScores(multi.search(queryY,sort)));
assertSameValues (scoresA, getScores(full.search(queryA,sort)));
assertSameValues (scoresA, getScores(remote.search(queryA,sort)));
assertSameValues (scoresA, getScores(multi.search(queryA,sort)));
assertSameValues (scoresX, getScores (full.search (queryX, null, 1000, sort).scoreDocs, full));
assertSameValues (scoresX, getScores (remote.search (queryX, null, 1000, sort).scoreDocs, remote));
assertSameValues (scoresX, getScores (multi.search (queryX, null, 1000, sort).scoreDocs, multi));
assertSameValues (scoresY, getScores (full.search (queryY, null, 1000, sort).scoreDocs, full));
assertSameValues (scoresY, getScores (remote.search (queryY, null, 1000, sort).scoreDocs, remote));
assertSameValues (scoresY, getScores (multi.search (queryY, null, 1000, sort).scoreDocs, multi));
assertSameValues (scoresA, getScores (full.search (queryA, null, 1000, sort).scoreDocs, full));
assertSameValues (scoresA, getScores (remote.search (queryA, null, 1000, sort).scoreDocs, remote));
assertSameValues (scoresA, getScores (multi.search (queryA, null, 1000, sort).scoreDocs, multi));
sort.setSort ("int");
assertSameValues (scoresX, getScores(full.search(queryX,sort)));
assertSameValues (scoresX, getScores(remote.search(queryX,sort)));
assertSameValues (scoresX, getScores(multi.search(queryX,sort)));
assertSameValues (scoresY, getScores(full.search(queryY,sort)));
assertSameValues (scoresY, getScores(remote.search(queryY,sort)));
assertSameValues (scoresY, getScores(multi.search(queryY,sort)));
assertSameValues (scoresA, getScores(full.search(queryA,sort)));
assertSameValues (scoresA, getScores(remote.search(queryA,sort)));
assertSameValues (scoresA, getScores(multi.search(queryA,sort)));
assertSameValues (scoresX, getScores (full.search (queryX, null, 1000, sort).scoreDocs, full));
assertSameValues (scoresX, getScores (remote.search (queryX, null, 1000, sort).scoreDocs, remote));
assertSameValues (scoresX, getScores (multi.search (queryX, null, 1000, sort).scoreDocs, multi));
assertSameValues (scoresY, getScores (full.search (queryY, null, 1000, sort).scoreDocs, full));
assertSameValues (scoresY, getScores (remote.search (queryY, null, 1000, sort).scoreDocs, remote));
assertSameValues (scoresY, getScores (multi.search (queryY, null, 1000, sort).scoreDocs, multi));
assertSameValues (scoresA, getScores (full.search (queryA, null, 1000, sort).scoreDocs, full));
assertSameValues (scoresA, getScores (remote.search (queryA, null, 1000, sort).scoreDocs, remote));
assertSameValues (scoresA, getScores (multi.search (queryA, null, 1000, sort).scoreDocs, multi));
sort.setSort ("float");
assertSameValues (scoresX, getScores(full.search(queryX,sort)));
assertSameValues (scoresX, getScores(remote.search(queryX,sort)));
assertSameValues (scoresX, getScores(multi.search(queryX,sort)));
assertSameValues (scoresY, getScores(full.search(queryY,sort)));
assertSameValues (scoresY, getScores(remote.search(queryY,sort)));
assertSameValues (scoresY, getScores(multi.search(queryY,sort)));
assertSameValues (scoresA, getScores(full.search(queryA,sort)));
assertSameValues (scoresA, getScores(remote.search(queryA,sort)));
assertSameValues (scoresA, getScores(multi.search(queryA,sort)));
assertSameValues (scoresX, getScores (full.search (queryX, null, 1000, sort).scoreDocs, full));
assertSameValues (scoresX, getScores (remote.search (queryX, null, 1000, sort).scoreDocs, remote));
assertSameValues (scoresX, getScores (multi.search (queryX, null, 1000, sort).scoreDocs, multi));
assertSameValues (scoresY, getScores (full.search (queryY, null, 1000, sort).scoreDocs, full));
assertSameValues (scoresY, getScores (remote.search (queryY, null, 1000, sort).scoreDocs, remote));
assertSameValues (scoresY, getScores (multi.search (queryY, null, 1000, sort).scoreDocs, multi));
assertSameValues (scoresA, getScores (full.search (queryA, null, 1000, sort).scoreDocs, full));
assertSameValues (scoresA, getScores (remote.search (queryA, null, 1000, sort).scoreDocs, remote));
assertSameValues (scoresA, getScores (multi.search (queryA, null, 1000, sort).scoreDocs, multi));
sort.setSort ("string");
assertSameValues (scoresX, getScores(full.search(queryX,sort)));
assertSameValues (scoresX, getScores(remote.search(queryX,sort)));
assertSameValues (scoresX, getScores(multi.search(queryX,sort)));
assertSameValues (scoresY, getScores(full.search(queryY,sort)));
assertSameValues (scoresY, getScores(remote.search(queryY,sort)));
assertSameValues (scoresY, getScores(multi.search(queryY,sort)));
assertSameValues (scoresA, getScores(full.search(queryA,sort)));
assertSameValues (scoresA, getScores(remote.search(queryA,sort)));
assertSameValues (scoresA, getScores(multi.search(queryA,sort)));
assertSameValues (scoresX, getScores (full.search (queryX, null, 1000, sort).scoreDocs, full));
assertSameValues (scoresX, getScores (remote.search (queryX, null, 1000, sort).scoreDocs, remote));
assertSameValues (scoresX, getScores (multi.search (queryX, null, 1000, sort).scoreDocs, multi));
assertSameValues (scoresY, getScores (full.search (queryY, null, 1000, sort).scoreDocs, full));
assertSameValues (scoresY, getScores (remote.search (queryY, null, 1000, sort).scoreDocs, remote));
assertSameValues (scoresY, getScores (multi.search (queryY, null, 1000, sort).scoreDocs, multi));
assertSameValues (scoresA, getScores (full.search (queryA, null, 1000, sort).scoreDocs, full));
assertSameValues (scoresA, getScores (remote.search (queryA, null, 1000, sort).scoreDocs, remote));
assertSameValues (scoresA, getScores (multi.search (queryA, null, 1000, sort).scoreDocs, multi));
sort.setSort (new String[] {"int","float"});
assertSameValues (scoresX, getScores(full.search(queryX,sort)));
assertSameValues (scoresX, getScores(remote.search(queryX,sort)));
assertSameValues (scoresX, getScores(multi.search(queryX,sort)));
assertSameValues (scoresY, getScores(full.search(queryY,sort)));
assertSameValues (scoresY, getScores(remote.search(queryY,sort)));
assertSameValues (scoresY, getScores(multi.search(queryY,sort)));
assertSameValues (scoresA, getScores(full.search(queryA,sort)));
assertSameValues (scoresA, getScores(remote.search(queryA,sort)));
assertSameValues (scoresA, getScores(multi.search(queryA,sort)));
assertSameValues (scoresX, getScores (full.search (queryX, null, 1000, sort).scoreDocs, full));
assertSameValues (scoresX, getScores (remote.search (queryX, null, 1000, sort).scoreDocs, remote));
assertSameValues (scoresX, getScores (multi.search (queryX, null, 1000, sort).scoreDocs, multi));
assertSameValues (scoresY, getScores (full.search (queryY, null, 1000, sort).scoreDocs, full));
assertSameValues (scoresY, getScores (remote.search (queryY, null, 1000, sort).scoreDocs, remote));
assertSameValues (scoresY, getScores (multi.search (queryY, null, 1000, sort).scoreDocs, multi));
assertSameValues (scoresA, getScores (full.search (queryA, null, 1000, sort).scoreDocs, full));
assertSameValues (scoresA, getScores (remote.search (queryA, null, 1000, sort).scoreDocs, remote));
assertSameValues (scoresA, getScores (multi.search (queryA, null, 1000, sort).scoreDocs, multi));
sort.setSort (new SortField[] { new SortField ("int", true), new SortField (null, SortField.DOC, true) });
assertSameValues (scoresX, getScores(full.search(queryX,sort)));
assertSameValues (scoresX, getScores(remote.search(queryX,sort)));
assertSameValues (scoresX, getScores(multi.search(queryX,sort)));
assertSameValues (scoresY, getScores(full.search(queryY,sort)));
assertSameValues (scoresY, getScores(remote.search(queryY,sort)));
assertSameValues (scoresY, getScores(multi.search(queryY,sort)));
assertSameValues (scoresA, getScores(full.search(queryA,sort)));
assertSameValues (scoresA, getScores(remote.search(queryA,sort)));
assertSameValues (scoresA, getScores(multi.search(queryA,sort)));
assertSameValues (scoresX, getScores (full.search (queryX, null, 1000, sort).scoreDocs, full));
assertSameValues (scoresX, getScores (remote.search (queryX, null, 1000, sort).scoreDocs, remote));
assertSameValues (scoresX, getScores (multi.search (queryX, null, 1000, sort).scoreDocs, multi));
assertSameValues (scoresY, getScores (full.search (queryY, null, 1000, sort).scoreDocs, full));
assertSameValues (scoresY, getScores (remote.search (queryY, null, 1000, sort).scoreDocs, remote));
assertSameValues (scoresY, getScores (multi.search (queryY, null, 1000, sort).scoreDocs, multi));
assertSameValues (scoresA, getScores (full.search (queryA, null, 1000, sort).scoreDocs, full));
assertSameValues (scoresA, getScores (remote.search (queryA, null, 1000, sort).scoreDocs, remote));
assertSameValues (scoresA, getScores (multi.search (queryA, null, 1000, sort).scoreDocs, multi));
sort.setSort (new String[] {"float","string"});
assertSameValues (scoresX, getScores(full.search(queryX,sort)));
assertSameValues (scoresX, getScores(remote.search(queryX,sort)));
assertSameValues (scoresX, getScores(multi.search(queryX,sort)));
assertSameValues (scoresY, getScores(full.search(queryY,sort)));
assertSameValues (scoresY, getScores(remote.search(queryY,sort)));
assertSameValues (scoresY, getScores(multi.search(queryY,sort)));
assertSameValues (scoresA, getScores(full.search(queryA,sort)));
assertSameValues (scoresA, getScores(remote.search(queryA,sort)));
assertSameValues (scoresA, getScores(multi.search(queryA,sort)));
assertSameValues (scoresX, getScores (full.search (queryX, null, 1000, sort).scoreDocs, full));
assertSameValues (scoresX, getScores (remote.search (queryX, null, 1000, sort).scoreDocs, remote));
assertSameValues (scoresX, getScores (multi.search (queryX, null, 1000, sort).scoreDocs, multi));
assertSameValues (scoresY, getScores (full.search (queryY, null, 1000, sort).scoreDocs, full));
assertSameValues (scoresY, getScores (remote.search (queryY, null, 1000, sort).scoreDocs, remote));
assertSameValues (scoresY, getScores (multi.search (queryY, null, 1000, sort).scoreDocs, multi));
assertSameValues (scoresA, getScores (full.search (queryA, null, 1000, sort).scoreDocs, full));
assertSameValues (scoresA, getScores (remote.search (queryA, null, 1000, sort).scoreDocs, remote));
assertSameValues (scoresA, getScores (multi.search (queryA, null, 1000, sort).scoreDocs, multi));
}
@@ -648,11 +648,11 @@ implements Serializable {
// make sure the documents returned by the search match the expected list
private void assertMatches (Searcher searcher, Query query, Sort sort, String expectedResult)
throws IOException {
Hits result = searcher.search (query, sort);
ScoreDoc[] result = searcher.search (query, null, 1000, sort).scoreDocs;
StringBuffer buff = new StringBuffer(10);
int n = result.length();
int n = result.length;
for (int i=0; i<n; ++i) {
Document doc = result.doc(i);
Document doc = searcher.doc(result[i].doc);
String[] v = doc.getValues("tracer");
for (int j=0; j<v.length; ++j) {
buff.append (v[j]);
@@ -664,11 +664,11 @@ implements Serializable {
// make sure the documents returned by the search match the expected list pattern
private void assertMatchesPattern (Searcher searcher, Query query, Sort sort, String pattern)
throws IOException {
Hits result = searcher.search (query, sort);
ScoreDoc[] result = searcher.search (query, null, 1000, sort).scoreDocs;
StringBuffer buff = new StringBuffer(10);
int n = result.length();
int n = result.length;
for (int i=0; i<n; ++i) {
Document doc = result.doc(i);
Document doc = searcher.doc(result[i].doc);
String[] v = doc.getValues("tracer");
for (int j=0; j<v.length; ++j) {
buff.append (v[j]);
@@ -678,15 +678,15 @@ implements Serializable {
assertTrue (Pattern.compile(pattern).matcher(buff.toString()).matches());
}
private HashMap getScores (Hits hits)
private HashMap getScores (ScoreDoc[] hits, Searcher searcher)
throws IOException {
HashMap scoreMap = new HashMap();
int n = hits.length();
int n = hits.length;
for (int i=0; i<n; ++i) {
Document doc = hits.doc(i);
Document doc = searcher.doc(hits[i].doc);
String[] v = doc.getValues("tracer");
assertEquals (v.length, 1);
scoreMap.put (v[0], new Float(hits.score(i)));
scoreMap.put (v[0], new Float(hits[i].score));
}
return scoreMap;
}

@@ -77,12 +77,12 @@ public class TestTermVectors extends LuceneTestCase {
public void testTermVectors() {
Query query = new TermQuery(new Term("field", "seventy"));
try {
Hits hits = searcher.search(query);
assertEquals(100, hits.length());
ScoreDoc[] hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(100, hits.length);
for (int i = 0; i < hits.length(); i++)
for (int i = 0; i < hits.length; i++)
{
TermFreqVector [] vector = searcher.reader.getTermFreqVectors(hits.id(i));
TermFreqVector [] vector = searcher.reader.getTermFreqVectors(hits[i].doc);
assertTrue(vector != null);
assertTrue(vector.length == 1);
}
@@ -125,19 +125,19 @@ public class TestTermVectors extends LuceneTestCase {
public void testTermPositionVectors() {
Query query = new TermQuery(new Term("field", "zero"));
try {
Hits hits = searcher.search(query);
assertEquals(1, hits.length());
ScoreDoc[] hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(1, hits.length);
for (int i = 0; i < hits.length(); i++)
for (int i = 0; i < hits.length; i++)
{
TermFreqVector [] vector = searcher.reader.getTermFreqVectors(hits.id(i));
TermFreqVector [] vector = searcher.reader.getTermFreqVectors(hits[i].doc);
assertTrue(vector != null);
assertTrue(vector.length == 1);
boolean shouldBePosVector = (hits.id(i) % 2 == 0) ? true : false;
boolean shouldBePosVector = (hits[i].doc % 2 == 0) ? true : false;
assertTrue((shouldBePosVector == false) || (shouldBePosVector == true && (vector[0] instanceof TermPositionVector == true)));
boolean shouldBeOffVector = (hits.id(i) % 3 == 0) ? true : false;
boolean shouldBeOffVector = (hits[i].doc % 3 == 0) ? true : false;
assertTrue((shouldBeOffVector == false) || (shouldBeOffVector == true && (vector[0] instanceof TermPositionVector == true)));
if(shouldBePosVector || shouldBeOffVector){
@@ -186,12 +186,12 @@ public class TestTermVectors extends LuceneTestCase {
public void testTermOffsetVectors() {
Query query = new TermQuery(new Term("field", "fifty"));
try {
Hits hits = searcher.search(query);
assertEquals(100, hits.length());
ScoreDoc[] hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(100, hits.length);
for (int i = 0; i < hits.length(); i++)
for (int i = 0; i < hits.length; i++)
{
TermFreqVector [] vector = searcher.reader.getTermFreqVectors(hits.id(i));
TermFreqVector [] vector = searcher.reader.getTermFreqVectors(hits[i].doc);
assertTrue(vector != null);
assertTrue(vector.length == 1);
@@ -279,20 +279,20 @@ public class TestTermVectors extends LuceneTestCase {
//System.out.println("--------");
}
Query query = new TermQuery(new Term("field", "chocolate"));
Hits hits = knownSearcher.search(query);
ScoreDoc[] hits = knownSearcher.search(query, null, 1000).scoreDocs;
//doc 3 should be the first hit b/c it is the shortest match
assertTrue(hits.length() == 3);
float score = hits.score(0);
assertTrue(hits.length == 3);
float score = hits[0].score;
/*System.out.println("Hit 0: " + hits.id(0) + " Score: " + hits.score(0) + " String: " + hits.doc(0).toString());
System.out.println("Explain: " + knownSearcher.explain(query, hits.id(0)));
System.out.println("Hit 1: " + hits.id(1) + " Score: " + hits.score(1) + " String: " + hits.doc(1).toString());
System.out.println("Explain: " + knownSearcher.explain(query, hits.id(1)));
System.out.println("Hit 2: " + hits.id(2) + " Score: " + hits.score(2) + " String: " + hits.doc(2).toString());
System.out.println("Explain: " + knownSearcher.explain(query, hits.id(2)));*/
assertTrue(hits.id(0) == 2);
assertTrue(hits.id(1) == 3);
assertTrue(hits.id(2) == 0);
TermFreqVector vector = knownSearcher.reader.getTermFreqVector(hits.id(1), "field");
assertTrue(hits[0].doc == 2);
assertTrue(hits[1].doc == 3);
assertTrue(hits[2].doc == 0);
TermFreqVector vector = knownSearcher.reader.getTermFreqVector(hits[1].doc, "field");
assertTrue(vector != null);
//System.out.println("Vector: " + vector);
String[] terms = vector.getTerms();
@@ -308,7 +308,7 @@ public class TestTermVectors extends LuceneTestCase {
assertTrue(freqInt.intValue() == freq);
}
SortedTermVectorMapper mapper = new SortedTermVectorMapper(new TermVectorEntryFreqSortedComparator());
knownSearcher.reader.getTermFreqVector(hits.id(1), mapper);
knownSearcher.reader.getTermFreqVector(hits[1].doc, mapper);
SortedSet vectorEntrySet = mapper.getTermVectorEntrySet();
assertTrue("mapper.getTermVectorEntrySet() Size: " + vectorEntrySet.size() + " is not: " + 10, vectorEntrySet.size() == 10);
TermVectorEntry last = null;
@@ -326,7 +326,7 @@ public class TestTermVectors extends LuceneTestCase {
}
FieldSortedTermVectorMapper fieldMapper = new FieldSortedTermVectorMapper(new TermVectorEntryFreqSortedComparator());
knownSearcher.reader.getTermFreqVector(hits.id(1), fieldMapper);
knownSearcher.reader.getTermFreqVector(hits[1].doc, fieldMapper);
Map map = fieldMapper.getFieldToTerms();
assertTrue("map Size: " + map.size() + " is not: " + 2, map.size() == 2);
vectorEntrySet = (SortedSet) map.get("field");
@@ -369,10 +369,10 @@ public class TestTermVectors extends LuceneTestCase {
searcher = new IndexSearcher(directory);
Query query = new TermQuery(new Term("field", "hundred"));
Hits hits = searcher.search(query);
assertEquals(10, hits.length());
for (int i = 0; i < hits.length(); i++) {
TermFreqVector [] vector = searcher.reader.getTermFreqVectors(hits.id(i));
ScoreDoc[] hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(10, hits.length);
for (int i = 0; i < hits.length; i++) {
TermFreqVector [] vector = searcher.reader.getTermFreqVectors(hits[i].doc);
assertTrue(vector != null);
assertTrue(vector.length == 1);
}
@@ -401,10 +401,10 @@ public class TestTermVectors extends LuceneTestCase {
searcher = new IndexSearcher(directory);
Query query = new TermQuery(new Term("field", "one"));
Hits hits = searcher.search(query);
assertEquals(1, hits.length());
ScoreDoc[] hits = searcher.search(query, null, 1000).scoreDocs;
assertEquals(1, hits.length);
TermFreqVector [] vector = searcher.reader.getTermFreqVectors(hits.id(0));
TermFreqVector [] vector = searcher.reader.getTermFreqVectors(hits[0].doc);
assertTrue(vector != null);
assertTrue(vector.length == 1);
TermPositionVector tfv = (TermPositionVector) vector[0];

View File

@@ -86,7 +86,7 @@ public class TestTimeLimitedCollector extends LuceneTestCase {
query = queryParser.parse(qtxt);
// warm the searcher
searcher.search(query);
searcher.search(query, null, 1000);
}

View File

@@ -153,8 +153,8 @@ public class TestWildcard
private void assertMatches(IndexSearcher searcher, Query q, int expectedMatches)
throws IOException {
Hits result = searcher.search(q);
assertEquals(expectedMatches, result.length());
ScoreDoc[] result = searcher.search(q, null, 1000).scoreDocs;
assertEquals(expectedMatches, result.length);
}
/**
@@ -212,8 +212,8 @@ public class TestWildcard
String qtxt = matchAll[i];
Query q = qp.parse(qtxt);
if (dbg) System.out.println("matchAll: qtxt="+qtxt+" q="+q+" "+q.getClass().getName());
Hits hits = searcher.search(q);
assertEquals(docs.length,hits.length());
ScoreDoc[] hits = searcher.search(q, null, 1000).scoreDocs;
assertEquals(docs.length,hits.length);
}
// test queries that must find none
@@ -221,8 +221,8 @@ public class TestWildcard
String qtxt = matchNone[i];
Query q = qp.parse(qtxt);
if (dbg) System.out.println("matchNone: qtxt="+qtxt+" q="+q+" "+q.getClass().getName());
Hits hits = searcher.search(q);
assertEquals(0,hits.length());
ScoreDoc[] hits = searcher.search(q, null, 1000).scoreDocs;
assertEquals(0,hits.length);
}
// test queries that must be prefix queries and must find only one doc
@@ -232,9 +232,9 @@ public class TestWildcard
Query q = qp.parse(qtxt);
if (dbg) System.out.println("match 1 prefix: doc="+docs[i]+" qtxt="+qtxt+" q="+q+" "+q.getClass().getName());
assertEquals(PrefixQuery.class, q.getClass());
Hits hits = searcher.search(q);
assertEquals(1,hits.length());
assertEquals(i,hits.id(0));
ScoreDoc[] hits = searcher.search(q, null, 1000).scoreDocs;
assertEquals(1,hits.length);
assertEquals(i,hits[0].doc);
}
}
@@ -245,9 +245,9 @@ public class TestWildcard
Query q = qp.parse(qtxt);
if (dbg) System.out.println("match 1 wild: doc="+docs[i]+" qtxt="+qtxt+" q="+q+" "+q.getClass().getName());
assertEquals(WildcardQuery.class, q.getClass());
Hits hits = searcher.search(q);
assertEquals(1,hits.length());
assertEquals(i,hits.id(0));
ScoreDoc[] hits = searcher.search(q, null, 1000).scoreDocs;
assertEquals(1,hits.length);
assertEquals(i,hits[0].doc);
}
}

View File

@@ -17,11 +17,9 @@ package org.apache.lucene.search.function;
* limitations under the License.
*/
import java.io.ObjectInputStream.GetField;
import java.util.HashMap;
import org.apache.lucene.index.CorruptIndexException;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.QueryUtils;
@@ -88,13 +86,13 @@ public class TestFieldScoreQuery extends FunctionTestSetup {
Query q = new FieldScoreQuery(field,tp);
log("test: "+q);
QueryUtils.check(q,s);
Hits h = s.search(q);
assertEquals("All docs should be matched!",N_DOCS,h.length());
ScoreDoc[] h = s.search(q, null, 1000).scoreDocs;
assertEquals("All docs should be matched!",N_DOCS,h.length);
String prevID = "ID"+(N_DOCS+1); // greater than all ids of docs in this test
for (int i=0; i<h.length(); i++) {
String resID = h.doc(i).get(ID_FIELD);
log(i+". score="+h.score(i)+" - "+resID);
log(s.explain(q,h.id(i)));
for (int i=0; i<h.length; i++) {
String resID = s.doc(h[i].doc).get(ID_FIELD);
log(i+". score="+h[i].score+" - "+resID);
log(s.explain(q,h[i].doc));
assertTrue("res id "+resID+" should be < prev res id "+prevID, resID.compareTo(prevID)<0);
prevID = resID;
}
@@ -181,8 +179,8 @@ public class TestFieldScoreQuery extends FunctionTestSetup {
boolean warned = false; // print warning once.
for (int i=0; i<10; i++) {
FieldScoreQuery q = new FieldScoreQuery(field,tp);
Hits h = s.search(q);
assertEquals("All docs should be matched!",N_DOCS,h.length());
ScoreDoc[] h = s.search(q, null, 1000).scoreDocs;
assertEquals("All docs should be matched!",N_DOCS,h.length);
try {
if (i==0) {
innerArray = q.valSrc.getValues(s.getIndexReader()).getInnerArray();
@@ -203,8 +201,8 @@ public class TestFieldScoreQuery extends FunctionTestSetup {
// verify new values are reloaded (not reused) for a new reader
s = new IndexSearcher(dir);
FieldScoreQuery q = new FieldScoreQuery(field,tp);
Hits h = s.search(q);
assertEquals("All docs should be matched!",N_DOCS,h.length());
ScoreDoc[] h = s.search(q, null, 1000).scoreDocs;
assertEquals("All docs should be matched!",N_DOCS,h.length);
try {
log("compare: "+innerArray+" to "+q.valSrc.getValues(s.getIndexReader()).getInnerArray());
assertNotSame("cached field values should not be reused if reader has changed!", innerArray, q.valSrc.getValues(s.getIndexReader()).getInnerArray());

View File

@@ -18,7 +18,6 @@ package org.apache.lucene.search.function;
*/
import org.apache.lucene.index.CorruptIndexException;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.QueryUtils;
@@ -77,16 +76,16 @@ public class TestOrdValues extends FunctionTestSetup {
Query q = new ValueSourceQuery(vs);
log("test: "+q);
QueryUtils.check(q,s);
Hits h = s.search(q);
assertEquals("All docs should be matched!",N_DOCS,h.length());
ScoreDoc[] h = s.search(q, null, 1000).scoreDocs;
assertEquals("All docs should be matched!",N_DOCS,h.length);
String prevID = inOrder
? "IE" // greater than all ids of docs in this test ("ID0001", etc.)
: "IC"; // smaller than all ids of docs in this test ("ID0001", etc.)
for (int i=0; i<h.length(); i++) {
String resID = h.doc(i).get(ID_FIELD);
log(i+". score="+h.score(i)+" - "+resID);
log(s.explain(q,h.id(i)));
for (int i=0; i<h.length; i++) {
String resID = s.doc(h[i].doc).get(ID_FIELD);
log(i+". score="+h[i].score+" - "+resID);
log(s.explain(q,h[i].doc));
if (inOrder) {
assertTrue("res id "+resID+" should be < prev res id "+prevID, resID.compareTo(prevID)<0);
} else {
@@ -159,9 +158,9 @@ public class TestOrdValues extends FunctionTestSetup {
vs = new ReverseOrdFieldSource(field);
}
ValueSourceQuery q = new ValueSourceQuery(vs);
Hits h = s.search(q);
ScoreDoc[] h = s.search(q, null, 1000).scoreDocs;
try {
assertEquals("All docs should be matched!",N_DOCS,h.length());
assertEquals("All docs should be matched!",N_DOCS,h.length);
if (i==0) {
innerArray = q.valSrc.getValues(s.getIndexReader()).getInnerArray();
} else {
@@ -178,7 +177,7 @@ public class TestOrdValues extends FunctionTestSetup {
ValueSource vs;
ValueSourceQuery q;
Hits h;
ScoreDoc[] h;
// verify that different values are loaded for a different field
String field2 = INT_FIELD;
@@ -189,8 +188,8 @@ public class TestOrdValues extends FunctionTestSetup {
vs = new ReverseOrdFieldSource(field2);
}
q = new ValueSourceQuery(vs);
h = s.search(q);
assertEquals("All docs should be matched!",N_DOCS,h.length());
h = s.search(q, null, 1000).scoreDocs;
assertEquals("All docs should be matched!",N_DOCS,h.length);
try {
log("compare (should differ): "+innerArray+" to "+q.valSrc.getValues(s.getIndexReader()).getInnerArray());
assertNotSame("different values should be loaded for a different field!", innerArray, q.valSrc.getValues(s.getIndexReader()).getInnerArray());
@@ -209,8 +208,8 @@ public class TestOrdValues extends FunctionTestSetup {
vs = new ReverseOrdFieldSource(field);
}
q = new ValueSourceQuery(vs);
h = s.search(q);
assertEquals("All docs should be matched!",N_DOCS,h.length());
h = s.search(q, null, 1000).scoreDocs;
assertEquals("All docs should be matched!",N_DOCS,h.length);
try {
log("compare (should differ): "+innerArray+" to "+q.valSrc.getValues(s.getIndexReader()).getInnerArray());
assertNotSame("cached field values should not be reused if reader has changed!", innerArray, q.valSrc.getValues(s.getIndexReader()).getInnerArray());

View File

@@ -17,24 +17,24 @@ package org.apache.lucene.store;
* limitations under the License.
*/
import java.io.IOException;
import java.io.File;
import java.util.List;
import java.util.Random;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Iterator;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import java.util.List;
import java.util.Random;
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.search.Hits;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.util._TestUtil;
import org.apache.lucene.util.LuceneTestCase;
import org.apache.lucene.util._TestUtil;
public class TestBufferedIndexInput extends LuceneTestCase {
// Call readByte() repeatedly, past the buffer boundary, and see that it
@@ -184,16 +184,16 @@ public class TestBufferedIndexInput extends LuceneTestCase {
dir.tweakBufferSizes();
IndexSearcher searcher = new IndexSearcher(reader);
Hits hits = searcher.search(new TermQuery(bbb));
ScoreDoc[] hits = searcher.search(new TermQuery(bbb), null, 1000).scoreDocs;
dir.tweakBufferSizes();
assertEquals(35, hits.length());
assertEquals(35, hits.length);
dir.tweakBufferSizes();
hits = searcher.search(new TermQuery(new Term("id", "33")));
hits = searcher.search(new TermQuery(new Term("id", "33")), null, 1000).scoreDocs;
dir.tweakBufferSizes();
assertEquals(1, hits.length());
hits = searcher.search(new TermQuery(aaa));
assertEquals(1, hits.length);
hits = searcher.search(new TermQuery(aaa), null, 1000).scoreDocs;
dir.tweakBufferSizes();
assertEquals(35, hits.length());
assertEquals(35, hits.length);
searcher.close();
reader.close();
} finally {

View File

@@ -17,24 +17,21 @@ package org.apache.lucene.store;
* limitations under the License.
*/
import org.apache.lucene.util.LuceneTestCase;
import java.util.Hashtable;
import java.util.Enumeration;
import java.io.IOException;
import java.io.File;
import java.io.IOException;
import java.util.Enumeration;
import java.util.Hashtable;
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.Hits;
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.util.LuceneTestCase;
public class TestLockFactory extends LuceneTestCase {
@@ -498,9 +495,9 @@ public class TestLockFactory extends LuceneTestCase {
break;
}
if (searcher != null) {
Hits hits = null;
ScoreDoc[] hits = null;
try {
hits = searcher.search(query);
hits = searcher.search(query, null, 1000).scoreDocs;
} catch (IOException e) {
hitException = true;
System.out.println("Stress Test Index Searcher: search hit unexpected exception: " + e.toString());