README.enwiki

Support exists for downloading, parsing, and loading the English
version of Wikipedia (enwiki).

The build file can automatically try to download the most current
enwiki dataset (pages-articles.xml.bz2) from the "latest" directory,
http://download.wikimedia.org/enwiki/latest/. However, this file
doesn't always exist, depending on where Wikipedia is in the dump
process and whether prior dumps have succeeded. If it doesn't exist,
you can sometimes find an older or in-progress version in the dated
directories under http://download.wikimedia.org/enwiki/. For example,
as of this writing there is a pages-articles file in
http://download.wikimedia.org/enwiki/20070402/. You can download such
a file manually and put it in the temp/ directory. Note that the file
you download will probably have the date in its name, e.g.,
http://download.wikimedia.org/enwiki/20070402/enwiki-20070402-pages-articles.xml.bz2.

If you use EnwikiContentSource, the data will be decompressed on the
fly during the benchmark. If you want to benchmark indexing itself,
you should probably decompress the dump beforehand using the "enwiki"
Ant target, which produces work/enwiki.txt; you can then use
LineDocSource in your benchmark, as shown in the sketches below.
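
For example, a minimal benchmark algorithm (.alg) file that streams
the compressed dump directly might look like the following sketch.
The docs.file path and the task count are placeholders; adjust them
to wherever you saved the dump and how many articles you want.

    # Stream the compressed dump; EnwikiContentSource decompresses it
    # on the fly (placeholder path under temp/).
    content.source=org.apache.lucene.benchmark.byTask.feeds.EnwikiContentSource
    docs.file=temp/enwiki-latest-pages-articles.xml.bz2

    # Build a fresh index from the first 10000 articles.
    ResetSystemErase
    CreateIndex
    { AddDoc } : 10000
    CloseIndex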

After that, running "ant enwiki" should process the data set and run
a load test: the enwiki target downloads, decompresses, and extracts
the dataset (to individual files in work/enwiki) before running the
test.
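
Once work/enwiki.txt exists, point the benchmark at it with
LineDocSource instead; only the two content properties change, and
the indexing tasks stay the same as in the sketch above.

    # Read pre-extracted articles, one per line; no decompression needed.
    content.source=org.apache.lucene.benchmark.byTask.feeds.LineDocSource
    docs.file=work/enwiki.txt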