Commit Graph

63 Commits

Author SHA1 Message Date
Jun Ohtani 533c1084ec Docs: add the predefined language-specific stopword lists to stop-tokenfilter.asciidoc 2014-10-16 13:20:38 +09:00
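As a rough sketch of what those predefined lists look like in use, a `stop` token filter can reference one by name; the index and filter names below are invented, and the curl target assumes a local node:
```
# hypothetical index; uses the predefined French stopword list
curl -XPUT 'localhost:9200/stopword_demo' -d '{
  "settings": {
    "analysis": {
      "filter": {
        "french_stop": {
          "type": "stop",
          "stopwords": "_french_"
        }
      }
    }
  }
}'
```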
sp836490 517caa0c6f Update cjk-bigram-tokenfilter.asciidoc 2014-10-15 11:54:19 +09:00
HenrikOssipoff 1445dd2308 Remove comma in JSON
Closes #7827
2014-09-28 11:08:09 +02:00
Clinton Gormley cb00d4a542 Docs: Removed all the added/deprecated tags from 1.x 2014-09-26 21:04:42 +02:00
Clinton Gormley 091578d117 Update stemmer-tokenfilter.asciidoc
Change the `minimal_english` link to a publicly accessible URL
2014-09-25 20:29:12 +02:00
Sergii Golubev 059d9f757a Docs: bad text wrapping
On the page http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/analysis-synonym-tokenfilter.html

even on a huge monitor the text is wrapped in the following way
```
mapping:
ipod, i-pod, i pod => ipod, i-pod, i pod
mapping:
ipod, i-pod, i pod => ipod
```

So one might think that "mapping:" is not part of the comment but part of the syntax. The lines are less than 80 chars, though, so the problem is probably in the page layout, and there may be other pages in the reference where text is wrapped in an equally undesirable way.

Closes #7739
2014-09-25 19:43:23 +02:00
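For reference, a minimal sketch of the mapping syntax discussed above as it would appear in a `synonym` token filter definition; the index and filter names are made up, and the curl target assumes a local node:
```
# the "=>" rule maps every variant on the left to the single token on the right
curl -XPUT 'localhost:9200/synonym_demo' -d '{
  "settings": {
    "analysis": {
      "filter": {
        "my_synonyms": {
          "type": "synonym",
          "synonyms": ["ipod, i-pod, i pod => ipod"]
        }
      }
    }
  }
}'
```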
Nik Everett 7bcd09a134 [docs] fix typo in language analyzer docs 2014-09-04 09:33:00 +02:00
Robert Muir 395744b0d2 [Analysis] Add missing docs for latvian analysis 2014-09-02 19:22:59 -04:00
Robert Muir 5c7cefa292 Analysis: Add keep_types for filtering by token type 2014-08-15 09:28:12 -04:00
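A sketch of what a `keep_types` filter definition might look like; the token type `<NUM>` is one of the types emitted by the `standard` tokenizer, and the index/filter names are invented:
```
# keeps only tokens whose type is <NUM>, dropping everything else
curl -XPUT 'localhost:9200/keep_types_demo' -d '{
  "settings": {
    "analysis": {
      "filter": {
        "numbers_only": {
          "type": "keep_types",
          "types": ["<NUM>"]
        }
      }
    }
  }
}'
```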
Nik Everett 34426eb8c2 Docs: Fix syntax on lang-analyzer
Some of the language analyzer documentation contained invalid JSON.

Closes #7098
2014-07-30 20:17:27 +02:00
Simon Willnauer 5bfea56457 [DOCS] move all coming tags to added in master 2014-07-23 16:37:19 +02:00
Clinton Gormley 6e70edb0a4 Analysis: Improve Hunspell error messages
The Hunspell service would throw a confusing error message if more than
one affix file was present. This commit distinguishes between the two
error cases: when there are no affix files and when there are too many
affix files.

Also implements lazy dictionary loading, which was referenced in the tests
but had not been implemented.

Closes #6850
2014-07-14 12:13:32 +02:00
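As a rough sketch, a hunspell filter of the kind these error messages apply to might be declared as follows; the names are invented, it assumes an `en_US` dictionary directory under the node's hunspell config path, and the node-level lazy-loading setting is shown only as a comment since it belongs in `elasticsearch.yml`, not in the index settings:
```
# expects <config>/hunspell/en_US/ to contain exactly one .aff file plus .dic files;
# lazy loading would be enabled node-wide via: indices.analysis.hunspell.dictionary.lazy: true
curl -XPUT 'localhost:9200/hunspell_demo' -d '{
  "settings": {
    "analysis": {
      "filter": {
        "en_hunspell": {
          "type": "hunspell",
          "locale": "en_US"
        }
      }
    }
  }
}'
```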
Clinton Gormley e4baa56f4b Docs: Language analyzers
Clarified the use of stem_exclusion and the keyword_marker
token filter

Closes #6613
2014-07-07 10:06:18 +02:00
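A sketch of the pattern being clarified: a language analyzer configured with `stem_exclusion` so that listed words bypass stemming. The analyzer and index names are hypothetical:
```
# words listed in stem_exclusion are protected from the English stemmer
curl -XPUT 'localhost:9200/stem_exclusion_demo' -d '{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_english": {
          "type": "english",
          "stem_exclusion": ["organization", "organizations"]
        }
      }
    }
  }
}'
```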
Clinton Gormley 54790eea10 Update lang-analyzer.asciidoc
Clarified the use of the `stem_exclusion` token filter.

Closes #6613
2014-07-04 17:50:43 +02:00
Jun Ohtani 0c6a859357 Docs: fixed ICU plugin documentation
add ICU Normalization CharFilter to docs

Closes #6711
2014-07-03 15:21:51 +02:00
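A sketch of a custom analyzer using the ICU normalization char filter this commit documents; it assumes the ICU analysis plugin is installed, and the analyzer/index names are invented:
```
# icu_normalizer as a char_filter normalizes the raw text before tokenization
curl -XPUT 'localhost:9200/icu_demo' -d '{
  "settings": {
    "analysis": {
      "analyzer": {
        "normalized_icu": {
          "char_filter": ["icu_normalizer"],
          "tokenizer": "icu_tokenizer"
        }
      }
    }
  }
}'
```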
Mikhail Korobov 955473f475 Docs: unescape regexes in Pattern Tokenizer docs
Currently the regexes in the Pattern Tokenizer docs are escaped (apparently according to Java rules). It is better not to escape them, because JSON escaping should be handled automatically by client libraries, and string escaping depends on the client language used. The default pattern is `\W+`, not `\\W+`.

Closes #6615
2014-07-03 13:34:13 +02:00
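To illustrate the escaping point, here is a sketch of a pattern tokenizer using the default regex `\W+`; the doubled backslash below is JSON string escaping for a single backslash in the actual pattern, which is exactly the distinction the commit draws. The tokenizer and index names are invented:
```
# the JSON value "\\W+" denotes the regex \W+ once the string is parsed
curl -XPUT 'localhost:9200/pattern_demo' -d '{
  "settings": {
    "analysis": {
      "tokenizer": {
        "non_word": {
          "type": "pattern",
          "pattern": "\\W+"
        }
      }
    }
  }
}'
```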
Robert Muir 2935b751e9 Fix doc formatting. Norwegian stemmers and Scandinavian normalizers
were missing commas between entries.
2014-07-03 07:08:33 -04:00
Robert Muir b9a09c2b06 Analysis: Add additional Analyzers, Tokenizers, and TokenFilters from Lucene
Add `irish` analyzer
Add `sorani` analyzer (Kurdish)

Add `classic` tokenizer: specific to English text; tries to recognize hostnames, companies, acronyms, etc.
Add `thai` tokenizer: segments Thai text into words.

Add `classic` tokenfilter: cleans up acronyms and possessives from the classic tokenizer
Add `apostrophe` tokenfilter: removes text after an apostrophe, and the apostrophe itself
Add `german_normalization` tokenfilter: umlaut/sharp S normalization
Add `hindi_normalization` tokenfilter: accounts for Hindi spelling differences
Add `indic_normalization` tokenfilter: accounts for different Unicode representations in Indian languages
Add `sorani_normalization` tokenfilter: normalizes Kurdish text
Add `scandinavian_normalization` tokenfilter: normalizes Norwegian, Danish, and Swedish text
Add `scandinavian_folding` tokenfilter: a much more aggressive form of `scandinavian_normalization`
Add additional languages to the stemmer tokenfilter: `galician`, `minimal_galician`, `irish`, `sorani`, `light_nynorsk`, `minimal_nynorsk`

Add access to the default Thai stopword set "_thai_"

Fix some bugs and broken links in documentation.

Closes #5935
2014-07-03 05:47:49 -04:00
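As one illustration of the building blocks listed above, a custom analyzer combining the `classic` tokenizer with the `classic` and `apostrophe` token filters might look like this; a sketch only, with invented analyzer and index names:
```
# classic tokenizer/filter handle acronyms and possessives;
# apostrophe drops everything after an apostrophe, including the apostrophe
curl -XPUT 'localhost:9200/classic_demo' -d '{
  "settings": {
    "analysis": {
      "analyzer": {
        "classic_english": {
          "tokenizer": "classic",
          "filter": ["classic", "apostrophe", "lowercase"]
        }
      }
    }
  }
}'
```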
Clinton Gormley cf059378d1 Docs: Updated stop token filter docs 2014-06-21 18:42:38 +02:00
Clinton Gormley 69350dc426 Update stemmer-override-tokenfilter.asciidoc 2014-06-18 11:34:20 +02:00
Clinton Gormley f546662e8f Docs: Hunspell tidied
Tidied some formatting
2014-06-11 21:49:02 +02:00
Clinton Gormley 04dacaaf27 Docs: Use the "stemmer" token filter for the english analyzer, to be consistent 2014-06-11 13:47:07 +02:00
Clinton Gormley 8a94b71b75 Docs: Corrected the use of keyword_marker on the lang analyzers 2014-06-11 13:43:02 +02:00
Clinton Gormley 673ef3db3f The StemmerTokenFilter had a number of issues:
* `english` returned the slow snowball English stemmer
* `porter2` returned the snowball Porter stemmer (v1)
* `portuguese` was used twice, preventing the second version from working

Changes:

* `english` now returns the fast PorterStemmer (for indices created from v1.3.0 onwards)
* `porter2` now returns the snowball English stemmer (for indices created from v1.3.0 onwards)
* `light_english` now returns the `kstem` stemmer (`kstem` still works)
* `portuguese_rslp` returns the PortugueseStemmer
* `dutch_kp` is a synonym for `kp`

Tests and docs updated

Fixes #6345
Fixes #6213
Fixes #6330
2014-06-11 12:30:16 +02:00
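A sketch of how the renamed stemmers are selected through the `stemmer` token filter's `language` parameter; per the commit, `english` now selects the fast Porter stemmer and `porter2` the snowball English stemmer for indices created from 1.3.0 onwards. Filter and index names are invented:
```
# light_english maps to the kstem stemmer after this change
curl -XPUT 'localhost:9200/stemmer_demo' -d '{
  "settings": {
    "analysis": {
      "filter": {
        "light_english_stemmer": {
          "type": "stemmer",
          "language": "light_english"
        }
      }
    }
  }
}'
```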
Clinton Gormley e323e577e8 Docs: Fixed bad ref on cjk_width/bigram pages 2014-06-09 23:36:58 +02:00
Clinton Gormley 5e40868f44 Docs: Fixed a bad ref on lang analyzers page 2014-06-09 23:03:12 +02:00
Clinton Gormley 5c5c1da06c Docs: Fixed some errors on the language analyzers page 2014-06-09 22:51:28 +02:00
Clinton Gormley 585b0ef730 Docs: Added custom-analyzer equivalents of all the language analyzers 2014-06-09 22:41:25 +02:00
Clinton Gormley bc402d5f87 Docs: Documented the cjk_width and cjk_bigram token filters 2014-06-09 22:40:58 +02:00
Simon Willnauer 9d5507047f Update Documentation Feature Flags [1.2.0] 2014-05-22 15:06:42 +02:00
Simon Willnauer f79b28375d Add missing coming tag
Relates to #6188
Relates to #5539
2014-05-18 10:54:17 +02:00
Richard Boulton fdb5eb6555 Update keyword-tokenizer.asciidoc 2014-05-07 15:04:07 +02:00
Matthieu Bacconnier 7fd5f18539 Update asciifolding-tokenfilter.asciidoc
Typo
2014-05-06 16:30:09 +02:00
Ali Bozorgkhan f1af845795 [DOCS] Fixed a typo
Close #5963
2014-05-06 10:28:13 +02:00
Robert Muir 8e0a479316 Upgrade to Lucene 4.8
Closes #5932
2014-04-28 06:45:50 -04:00
Clinton Gormley c1e03bf860 Update keyword-repeat-tokenfilter.asciidoc 2014-04-24 16:44:02 +02:00
Kevin Wang 374b633a4b add uppercase token filter
closes #5539
2014-03-26 15:07:43 +07:00
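Presumably used like any other token filter; a minimal sketch with invented names:
```
# uppercases every token emitted by the tokenizer
curl -XPUT 'localhost:9200/uppercase_demo' -d '{
  "settings": {
    "analysis": {
      "analyzer": {
        "shouting": {
          "tokenizer": "standard",
          "filter": ["uppercase"]
        }
      }
    }
  }
}'
```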
bleskes 5d832374dd Update Documentation Feature Flags [1.1.0] 2014-03-25 17:51:30 +01:00
Clinton Gormley 4c34615686 [DOCS] Fixed some bad UTF8 2014-03-19 12:46:06 +01:00
Simon Willnauer 9160516b28 Expose `filler_token` via ShingleTokenFilterFactory
Lucene 4.7 supports a setter for the `filler_token` that is
inserted if there are gaps in the token stream. This change exposes
this setting.

Closes #4307
2014-02-26 22:21:10 +01:00
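A sketch of the newly exposed setting on a `shingle` filter, with invented names; the filler token stands in for positions left empty by, for example, a preceding stopword filter:
```
# "+" replaces removed/missing tokens inside shingles (the default filler is "_")
curl -XPUT 'localhost:9200/shingle_demo' -d '{
  "settings": {
    "analysis": {
      "filter": {
        "my_shingles": {
          "type": "shingle",
          "filler_token": "+"
        }
      }
    }
  }
}'
```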
Nik Everett 5c3f4ceafb Add preserve original token option to ASCIIFolding
Closes #4931
2014-02-14 19:37:00 +01:00
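A sketch of the new option, with invented names: with `preserve_original` enabled the filter emits both the folded token and the original one:
```
# "café" would yield both "cafe" and "café" at the same position
curl -XPUT 'localhost:9200/folding_demo' -d '{
  "settings": {
    "analysis": {
      "filter": {
        "folding": {
          "type": "asciifolding",
          "preserve_original": true
        }
      }
    }
  }
}'
```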
Alexander Reelsen c6155c5142 release [1.0.0.RC1] 2014-01-15 17:02:22 +00:00
Benjamin Vetter ba8e012be9 Referring to stop analyzer for stopword docs #329 2014-01-14 11:53:30 +01:00
Benjamin Vetter 22a96e6a18 Added stopwords: _none_ to the docs #329 2014-01-14 11:53:29 +01:00
Simon Willnauer 7f63ddf94e Default stopwords list should be `_none_` for all but language-specific analyzers
The `standard_html_strip` and `pattern` analyzers support stopwords, which
default to the `english` stopword list. These analyzers should not use
stopwords by default since they are language-neutral.

Closes #4699
2014-01-13 14:44:10 +01:00
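After this change a `pattern` analyzer uses no stopwords unless they are requested explicitly; a sketch of opting back in, with invented names:
```
# stopwords must now be asked for explicitly, e.g. the predefined English list
curl -XPUT 'localhost:9200/pattern_stop_demo' -d '{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_pattern": {
          "type": "pattern",
          "stopwords": "_english_"
        }
      }
    }
  }
}'
```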
Yousef 302c762d5e Wrong link to Token Filter 2013-12-03 10:39:13 +01:00
Lee Hinman 9939e81d88 [DOCS] Fix porter stem filter name in other stemming docs 2013-11-28 22:14:47 -07:00
Lee Hinman fb4e903e35 [DOCS] Fix name of porter stemming token filter 2013-11-28 22:01:19 -07:00
Simon Willnauer 77bc5d5ecf release [1.0.0.Beta1] 2013-11-06 15:32:43 +01:00
Simon Willnauer 9654631186 Change 'standard' analyzer to use an empty stopword list by default.
The 'default' / 'standard' analyzer can be a trappy default since it filters
English stopwords by default. Yet a default should not be dedicated to a certain language,
since Elasticsearch is used in many different scenarios where a standard analysis chain
specialized for English full-text might be rather counterproductive.

This commit changes the 'standard' analyzer to use an empty stopword list for indices
that are created from version 1.0.0.Beta1 onwards, but maintains backwards compatibility
for older indices.

Closes #3775
2013-11-05 21:07:21 +01:00
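For indices created after this change, the old behaviour can still be reproduced explicitly; a minimal sketch with an invented analyzer and index name:
```
# re-enables English stopword filtering on a standard analyzer
curl -XPUT 'localhost:9200/standard_stop_demo' -d '{
  "settings": {
    "analysis": {
      "analyzer": {
        "english_standard": {
          "type": "standard",
          "stopwords": "_english_"
        }
      }
    }
  }
}'
```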