[[analysis-analyzers]]
== Analyzers

Elasticsearch ships with a wide range of built-in analyzers, which can be used
in any index without further configuration:

<<analysis-standard-analyzer,Standard Analyzer>>::

The `standard` analyzer divides text into terms on word boundaries, as defined
by the Unicode Text Segmentation algorithm. It removes most punctuation,
lowercases terms, and supports removing stop words.

<<analysis-simple-analyzer,Simple Analyzer>>::

The `simple` analyzer divides text into terms whenever it encounters a
character which is not a letter. It lowercases all terms.

<<analysis-whitespace-analyzer,Whitespace Analyzer>>::

The `whitespace` analyzer divides text into terms whenever it encounters any
whitespace character. It does not lowercase terms.

<<analysis-stop-analyzer,Stop Analyzer>>::

The `stop` analyzer is like the `simple` analyzer, but also supports removal
of stop words.

<<analysis-keyword-analyzer,Keyword Analyzer>>::

The `keyword` analyzer is a ``noop'' analyzer that accepts whatever text it is
given and outputs the exact same text as a single term.

<<analysis-pattern-analyzer,Pattern Analyzer>>::

The `pattern` analyzer uses a regular expression to split the text into terms.
It supports lower-casing and stop words.

<<analysis-lang-analyzer,Language Analyzers>>::

Elasticsearch provides many language-specific analyzers like `english` or
`french`.

<<analysis-fingerprint-analyzer,Fingerprint Analyzer>>::

The `fingerprint` analyzer is a specialist analyzer which creates a
fingerprint which can be used for duplicate detection.

To see exactly how any of these analyzers breaks a piece of text into terms,
try the `_analyze` API, sketched at the end of this section.

[float]
=== Custom analyzers

If you do not find an analyzer suitable for your needs, you can create a
<<analysis-custom-analyzer,`custom`>> analyzer which combines the appropriate
<<analysis-charfilters,character filters>>,
<<analysis-tokenizers,tokenizer>>, and <<analysis-tokenfilters,token
filters>> (a minimal configuration sketch appears at the end of this section).

include::analyzers/configuring.asciidoc[]

include::analyzers/fingerprint-analyzer.asciidoc[]

include::analyzers/keyword-analyzer.asciidoc[]

include::analyzers/lang-analyzer.asciidoc[]

include::analyzers/pattern-analyzer.asciidoc[]

include::analyzers/simple-analyzer.asciidoc[]

include::analyzers/standard-analyzer.asciidoc[]

include::analyzers/stop-analyzer.asciidoc[]

include::analyzers/whitespace-analyzer.asciidoc[]

include::analyzers/custom-analyzer.asciidoc[]
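As a quick way to experiment with the built-in analyzers listed above, the
`_analyze` API runs an analyzer of your choice over sample text and returns
the terms it produces. A minimal sketch (the sample sentence is illustrative):

[source,console]
--------------------------------------------------
POST _analyze
{
  "analyzer": "standard",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
--------------------------------------------------

With the `standard` analyzer, this sentence comes back as the terms
`[ the, 2, quick, brown, foxes, jumped, over, the, lazy, dog's, bone ]`:
split on word boundaries, lowercased, with most punctuation removed.
Substituting `"analyzer": "whitespace"` or `"analyzer": "keyword"` in the same
request shows how differently each analyzer treats identical input.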
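To make the custom-analyzer combination concrete, here is a minimal sketch of
an index whose `custom` analyzer chains a character filter, a tokenizer, and
two token filters. The index name `my_index` and the analyzer name
`my_custom_analyzer` are illustrative; the building blocks used
(`html_strip`, `standard`, `lowercase`, `asciifolding`) are all built in:

[source,console]
--------------------------------------------------
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_custom_analyzer": {
          "type": "custom",
          "char_filter": [ "html_strip" ],          <1>
          "tokenizer": "standard",                  <2>
          "filter": [ "lowercase", "asciifolding" ] <3>
        }
      }
    }
  }
}
--------------------------------------------------
<1> Strip HTML tags before tokenization.
<2> Split the remaining text into terms on word boundaries.
<3> Lowercase each term, then fold accented characters to their ASCII
    equivalents.

Any field mapped with `"analyzer": "my_custom_analyzer"` would then be
analyzed through this chain at index time.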