README.txt

Apache Solr Content Extraction Library (Solr Cell)

Introduction
------------

Apache Solr Extraction provides a means of extracting and indexing content contained in "rich" documents such
as Microsoft Word and Adobe PDF files.  (Each name is a trademark of its respective owner.)  This contrib module
uses Apache Tika to extract content and metadata from the files, which can then be indexed.  For more information,
see http://wiki.apache.org/solr/ExtractingRequestHandler
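
As a rough illustration of the workflow (assuming the module is installed and the ExtractingRequestHandler is
registered at a path such as /update/extract, as described under Getting Started below; the host, core name,
document id and file name here are only placeholders), a rich document can be posted for extraction and indexing
with a request such as:

  curl "http://localhost:8983/solr/collection1/update/extract?literal.id=doc1&commit=true" \
       -F "myfile=@example.pdf"

The literal.id parameter supplies the unique key for the resulting document, and commit=true makes it
searchable immediately.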

Getting Started
---------------
You will need Solr up and running.  Then add the extraction JAR file, plus the Tika dependencies (found in this
module's ./lib folder), to your Solr Home lib directory, as sketched below.  See
http://wiki.apache.org/solr/ExtractingRequestHandler for more details on hooking it in and configuring it.
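
The following is a minimal sketch of that step, assuming a Solr checkout/distribution layout in which the built
extraction JAR ends up under dist/ and the Tika dependencies live in contrib/extraction/lib/, with a Solr Home at
example/solr (all of these paths are illustrative and may differ in your setup):

  # create the Solr Home lib directory if it does not exist yet
  mkdir -p example/solr/lib
  # copy the Solr Cell (extraction) JAR and the Tika dependencies into it
  cp dist/solr-cell-*.jar example/solr/lib/
  cp contrib/extraction/lib/*.jar example/solr/lib/

After restarting Solr, the ExtractingRequestHandler still has to be registered in solrconfig.xml (typically under
a path such as /update/extract); the wiki page above shows the relevant configuration.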