README.enwiki

Support exists for downloading, parsing, and loading the English
version of Wikipedia (enwiki).

The build file can automatically try to download the most current
enwiki dataset (pages-articles.xml.bz2) from the "latest" directory,
http://download.wikimedia.org/enwiki/latest/. However, this file
doesn't always exist, depending on where Wikipedia is in its dump
process and whether prior dumps have succeeded. If the file doesn't
exist, you can sometimes find an older or in-progress dump by looking
in the dated directories under http://download.wikimedia.org/enwiki/.
For example, as of this writing, there is a pages file in
http://download.wikimedia.org/enwiki/20070402/. You can download such
a file manually and put it in the temp directory. Note that the file
you download will probably have the date in its name, e.g.,
http://download.wikimedia.org/enwiki/20070402/enwiki-20070402-pages-articles.xml.bz2.
After putting it in temp, rename it to
enwiki-latest-pages-articles.xml.bz2.
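
For example, assuming the 20070402 dump mentioned above is still
available (the date, URL, and use of wget here are illustrative only;
any download tool works), the manual download and rename might look
like:

  cd temp
  wget http://download.wikimedia.org/enwiki/20070402/enwiki-20070402-pages-articles.xml.bz2
  mv enwiki-20070402-pages-articles.xml.bz2 enwiki-latest-pages-articles.xml.bz2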

After that, running "ant enwiki" should process the data set and run
a load test. The ant targets get-enwiki, expand-enwiki, and
extract-enwiki can also be used individually to download, decompress,
and extract the dataset (to individual files in work/enwiki),
respectively.
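
For example, from this benchmark directory (assuming ant is on your
path; target behavior follows the descriptions above):

  ant enwiki          # process the data set and run the load test
  ant get-enwiki      # download the dump only
  ant expand-enwiki   # decompress the dump only
  ant extract-enwiki  # extract to individual files in work/enwiki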