- Fixed spelling a bit.

- Nukes trailing blank spaces.


git-svn-id: https://svn.apache.org/repos/asf/lucene/java/trunk@149766 13f79535-47bb-0310-9956-ffa450edef68
Otis Gospodnetic 2002-06-04 15:29:32 +00:00
parent 241f32309d
commit abefb1b48e
1 changed file with 68 additions and 68 deletions


@@ -21,8 +21,8 @@
 The best reference is <a href="http://www.htdig.org">
 htDig</a>, though it is not quite as sophisticated as
 Lucene, it has a number of features that make it
-desireable. It however is a traditional c-compiled app
-which makes it somewhat unpleasent to install on some
+desirable. It however is a traditional c-compiled app
+which makes it somewhat unpleasant to install on some
 platforms (like Solaris!).
 </p>
 <p>
<p>
@@ -44,10 +44,10 @@
 <section name="Goal and Objectives">
 <p>
 The goal is to provide features to Lucene that allow it
-to be used as a dropin search engine. It should provide
+to be used as a drop-in search engine. It should provide
 many of the features of projects like <a
 href="http://www.htdig.org">htDig</a> while surpassing
-them with unique Lucene features and capabillities such as
+them with unique Lucene features and capabilities such as
 easy installation on and java-supporting platform,
 and support for document fields and field searches. And
 of course, <a href="http://apache.org/LICENSE">
@@ -60,7 +60,7 @@
 </p>
 <ul>
 <li>
-Document Location Independance - meaning mapping
+Document Location Independence - meaning mapping
 real contexts to runtime contexts.
 Essentially, if the document is at
 /var/www/htdocs/mydoc.html, I probably want it
@@ -73,21 +73,21 @@
 many environments than is *remote* indexing (for
 instance http). I would suggest that most folks
 would prefer that general functionality be
-suppored by Lucene instead of having to write
+supported by Lucene instead of having to write
 code for every indexing project. Obviously, if
 what they are doing is *special* they'll have to
-code, but general document indexing accross
+code, but general document indexing across
 web servers would not qualify.
 </li>
 <li>
-Document interperatation abstraction - currently
+Document interpretation abstraction - currently
 one must handle document object construction via
 custom code. A standard interface for plugging
 in format handlers should be supported.
 </li>
 <li>
 Mime and file-extension to document
-interperatation mapping.
+interpretation mapping.
 </li>
 </ul>
 </section>
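The pluggable format-handler interface this hunk calls for could be sketched roughly as below. Note that `DocumentFactory`, `createFields`, and `PlainTextFactory` are hypothetical names for illustration, not part of Lucene's actual API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a standard interface for plugging in format
// handlers; names are illustrative, not real Lucene classes.
interface DocumentFactory {
    /** Turn raw content into field name/value pairs for indexing. */
    Map<String, String> createFields(String rawContent);
}

// A trivial plain-text handler: all content goes into a "contents" field.
class PlainTextFactory implements DocumentFactory {
    public Map<String, String> createFields(String rawContent) {
        Map<String, String> fields = new HashMap<>();
        fields.put("contents", rawContent);
        return fields;
    }
}
```

With such an interface, adding support for a new format would mean writing one new implementation rather than custom construction code per indexing project.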
@@ -128,7 +128,7 @@
 </li>
 <li>
 replacement type - the type of
-replacewith path: relative, url or
+replace with path: relative, URL or
 path.
 </li>
 <li>
@@ -164,7 +164,7 @@
 </li>
 <li>
 IncludeFilter - include only items
-matching filter. (can occur mulitple
+matching filter. (can occur multiple
 times)
 </li>
 <li>
@@ -198,7 +198,7 @@
 it. Command line options override
 the properties file in the case of
 duplicates. There should also be an
-enivironment variable or VM parameter to
+environment variable or VM parameter to
 set this.
 </li>
 </ul>
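The precedence this hunk describes (command-line options overriding the properties file for duplicate keys) might look like the following. The `CrawlerConfig` class and key names are assumptions for illustration only:

```java
import java.util.Properties;

// Sketch of the described precedence: command-line options win over the
// properties file when the same key appears in both. Names are hypothetical.
class CrawlerConfig {
    private final Properties fileProps;
    private final Properties cliProps;

    CrawlerConfig(Properties fileProps, Properties cliProps) {
        this.fileProps = fileProps;
        this.cliProps = cliProps;
    }

    /** Command-line value wins; fall back to the file, then the default. */
    String get(String key, String defaultValue) {
        String v = cliProps.getProperty(key);
        if (v == null) v = fileProps.getProperty(key);
        return v != null ? v : defaultValue;
    }
}
```

The environment variable or VM parameter mentioned in the text would only need to locate the properties file itself (e.g. via `System.getProperty`), so it slots in before this lookup chain.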
@@ -209,7 +209,7 @@
 </p>
 <p>
 This should extend the AbstractCrawler and
-support any addtional options required for a
+support any additional options required for a
 file system index.
 </p>
 <!--</s2>-->
@@ -222,7 +222,7 @@
 </p>
 <ul>
 <li>
-span hosts - Wheter to span hosts or not,
+span hosts - Whether to span hosts or not,
 by default this should be no.
 </li>
 <li>
@@ -237,11 +237,11 @@
 recurse and go to
 /nextcontext/index.html this option says
 to also try /nextcontext to get the dir
-lsiting)
+listing)
 </li>
 <li>
 map extensions -
-(always/default/never/fallback). Wether
+(always/default/never/fallback). Whether
 to always use extension mapping, by
 default (fallback to mime type), NEVER
 or fallback if mime is not available
@@ -258,7 +258,7 @@
 <section name="MIMEMap">
 <p>
 A configurable registry of document types, their
-description, an identifyer, mime-type and file
+description, an identifier, mime-type and file
 extension. This should map both MIME -> factory
 and extension -> factory.
 </p>
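The dual registry this hunk describes (both MIME -> factory and extension -> factory) could be realized minimally as below. The class shape is an assumption, and plain string handler ids stand in for real factory objects:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical MIMEMap sketch: register a document type once, then
// resolve a handler id by either MIME type or file extension.
// Strings stand in for real factory objects to keep the sketch small.
class MimeMap {
    private final Map<String, String> byMime = new HashMap<>();
    private final Map<String, String> byExtension = new HashMap<>();

    void register(String handlerId, String mimeType, String extension) {
        byMime.put(mimeType, handlerId);
        byExtension.put(extension.toLowerCase(), handlerId);
    }

    String byMimeType(String mimeType) { return byMime.get(mimeType); }
    String byFileExtension(String ext) { return byExtension.get(ext.toLowerCase()); }
}
```

Normalizing extensions to lower case on both insert and lookup keeps `file.HTML` and `file.html` resolving to the same handler, which matters for the fallback behavior described earlier.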
@@ -300,13 +300,13 @@
 </section>
 <section name="FieldMapping classes">
 <p>
-A class taht maps standard fields from the
+A class that maps standard fields from the
 DocumentFactories into *fields* in the Document objects
 they create. I suggest that a regular expression system
 or xpath might be the most universal way to do this.
 For instance if perhaps I had an XML factory that
 represented XML elements as fields, I could map content
-from particular fields to ther fields or supress them
+from particular fields to their fields or suppress them
 entirely. We could even make this configurable.
 </p>
 <p>
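A regular-expression flavor of the field mapping this hunk proposes (the text also suggests XPath as an alternative) might be sketched like this; `FieldMapper` and its methods are hypothetical names:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical FieldMapping sketch using regular expressions, one of the
// two mechanisms (regex or XPath) the text suggests. A field whose rule
// never matches is simply absent, which also covers suppression.
class FieldMapper {
    private final Map<String, Pattern> rules = new HashMap<>();

    /** Map content captured by group 1 of the pattern into the named field. */
    void addRule(String fieldName, String regex) {
        rules.put(fieldName, Pattern.compile(regex));
    }

    Map<String, String> extract(String raw) {
        Map<String, String> fields = new HashMap<>();
        for (Map.Entry<String, Pattern> rule : rules.entrySet()) {
            Matcher m = rule.getValue().matcher(raw);
            if (m.find()) {
                fields.put(rule.getKey(), m.group(1));
            }
        }
        return fields;
    }
}
```

Making the rule set configurable, as the text suggests, would just mean loading the name/pattern pairs from a properties file rather than hard-coding them.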
@@ -357,7 +357,7 @@
 While this goes slightly beyond what HTDig provides by
 providing field mapping (where HTDIG is just interested
 in Strings/numbers wherever they are found), it provides
-at least what I would need to use this as a dropin for
+at least what I would need to use this as a drop-in for
 most places I contract at (with the obvious exception of
 a default set of content handlers which would of course
 develop naturally over time).