David Pilato 560a761b2f Update to Tika 1.10
Release 1.10 - 8/1/2015

  * Tika Config XML can now be used to create composite detectors,
    and exclude detectors that DefaultDetector would otherwise
    have used. This brings support in-line with Parsers. (TIKA-1702)

  * Reverted to legacy sort order of parsers that was
    mistakenly reversed in Tika 1.9 (TIKA-1689).

  * Upgrade to POI 3.13-beta1 (TIKA-1667).

  * Upgrade to PDFBox 1.8.10 (TIKA-1588).

  * MimeTypes now tries to find a registered type with and
    without parameters (TIKA-1692).

  * Added more robust error handling for encoding detection
    of .MSG files (TIKA-1238).

  * Fixed bug in Tika's use of the Jackcess parser that
    prevented reading of v97 Access files (TIKA-1681).

  * Upgrade xerial.org's sqlite-jdbc to 3.8.10.1. NOTE:
    as of Tika 1.9, this jar is "provided." Make sure
    to upgrade your provided jar! (TIKA-1687).

  * Add header/footer extraction to xls (via Aeham Abushwashi)
    (TIKA-1400).

  * Drop the source file name from the embedded file path in
    RecursiveParserWrapper's "X-TIKA:embedded_resource_path"
    (TIKA-1673).

  * Upgraded to Java 7 (TIKA-1536).

  * Non-standards compliant emails are now correctly detected
    as message/rfc822 (TIKA-1602).

  * Added parser for MS Access files via Jackcess. Many thanks
    to Health Market Science, Brian O'Neill and James Ahlborn
    for relicensing Jackcess to Apache v2! (TIKA-1601)

  * GDALParser now correctly sets "nitf" as a supported
    MediaType (TIKA-1664).

  * Added DigestingParser to calculate digest hashes
    and record them in metadata. Integrated with
    tika-app and tika-server (TIKA-1663).

  * Fixed ZipContainerDetector to detect all IPA files
    (TIKA-1659).

Closes #147.

Mapper Attachments Type for Elasticsearch

The mapper attachments plugin adds the attachment type to Elasticsearch using Apache Tika. The attachment type allows you to index fields containing attachments encoded as Base64, for example Microsoft Office formats, OpenDocument formats, ePub, HTML, and so on (the full list of supported formats can be found in the Apache Tika documentation).

In order to install the plugin, run:

bin/plugin install elasticsearch/elasticsearch-mapper-attachments/2.7.0

You need to install a version matching your Elasticsearch version:

Elasticsearch   Attachments Plugin   Docs
master          Build from source    See below
es-1.7          2.7.0                2.7.0
es-1.6          2.6.0                2.6.0
es-1.5          2.5.0                2.5.0
es-1.4          2.4.3                2.4.3
es-1.3          2.3.2                2.3.2
es-1.2          2.2.1                2.2.1
es-1.1          2.0.0                2.0.0
es-1.0          2.0.0                2.0.0
es-0.90         1.9.0                1.9.0

To build a SNAPSHOT version, build the plugin with Maven:

mvn clean install
plugin --install mapper-attachments \
       --url file:target/releases/elasticsearch-mapper-attachments-X.X.X-SNAPSHOT.zip

Using mapper attachments

Using the attachment type is simple: in your mapping JSON, set the type of a field to attachment, for example:

PUT /test/person/_mapping
{
    "person" : {
        "properties" : {
            "my_attachment" : { "type" : "attachment" }
        }
    }
}

In this case, the JSON to index can be:

PUT /test/person/1
{
    "my_attachment" : "... base64 encoded attachment ..."
}

Alternatively, you can use a more elaborate JSON object if the content type, resource name, or language needs to be set explicitly:

PUT /test/person/1
{
    "my_attachment" : {
        "_content_type" : "application/pdf",
        "_name" : "resource/name/of/my.pdf",
        "_language" : "en",
        "_content" : "... base64 encoded attachment ..."
    }
}

The attachment type not only indexes the content of the document, but also automatically adds metadata about the attachment (when available).

The supported metadata fields are:

  • date
  • title
  • name (only available if you set _name, see above)
  • author
  • keywords
  • content_type
  • content_length (the original content length before text extraction, i.e. the file size)
  • language

These can be queried using "dot notation", for example: my_attachment.author.
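
For example, a minimal query sketch against the author metadata of the my_attachment field mapped earlier (the author name here is purely illustrative):

GET /test/person/_search
{
  "query": {
    "match": {
      "my_attachment.author": "John"
    }
  }
}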

Both the metadata and the actual content are mapped as simple core types (string, date, ...), so they can be controlled in the mappings. For example:

PUT /test/person/_mapping
{
    "person" : {
        "properties" : {
            "file" : {
                "type" : "attachment",
                "fields" : {
                    "file" : {"index" : "no"},
                    "title" : {"store" : "yes"},
                    "date" : {"store" : "yes"},
                    "author" : {"analyzer" : "myAnalyzer"},
                    "keywords" : {"store" : "yes"},
                    "content_type" : {"store" : "yes"},
                    "content_length" : {"store" : "yes"},
                    "language" : {"store" : "yes"}
                }
            }
        }
    }
}

In the above example, the actual indexed content is mapped under the fields name file; we chose not to index it, so it is only available in the _all field. The other fields map to their respective metadata names; there is no need to specify a type (like string or date) since it is already known.

Copy To feature

If you want to use the copy_to feature, define it on each sub-field whose content you want copied to another field:

PUT /test/person/_mapping
{
  "person": {
    "properties": {
      "file": {
        "type": "attachment",
        "path": "full",
        "fields": {
          "file": {
            "type": "string",
            "copy_to": "copy"
          }
        }
      },
      "copy": {
        "type": "string"
      }
    }
  }
}

In this example, the extracted content is also copied to the copy field.
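
Once documents are indexed with this mapping, the copied text can be queried directly on the copy field. A minimal sketch (the search terms are illustrative):

GET /test/person/_search
{
  "query": {
    "match": {
      "copy": "king queen"
    }
  }
}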

Querying or accessing metadata

If you need to query on metadata fields, use the attachment field name followed by a dot and the metadata field name. For example:

DELETE /test
PUT /test
PUT /test/person/_mapping
{
  "person": {
    "properties": {
      "file": {
        "type": "attachment",
        "path": "full",
        "fields": {
          "content_type": {
            "type": "string",
            "store": true
          }
        }
      }
    }
  }
}
PUT /test/person/1?refresh=true
{
  "file": "IkdvZCBTYXZlIHRoZSBRdWVlbiIgKGFsdGVybmF0aXZlbHkgIkdvZCBTYXZlIHRoZSBLaW5nIg=="
}
GET /test/person/_search
{
  "fields": [ "file.content_type" ],
  "query": {
    "match": {
      "file.content_type": "text plain"
    }
  }
}

Will give you:

{
   "took": 2,
   "timed_out": false,
   "_shards": {
      "total": 5,
      "successful": 5,
      "failed": 0
   },
   "hits": {
      "total": 1,
      "max_score": 0.16273327,
      "hits": [
         {
            "_index": "test",
            "_type": "person",
            "_id": "1",
            "_score": 0.16273327,
            "fields": {
               "file.content_type": [
                  "text/plain; charset=ISO-8859-1"
               ]
            }
         }
      ]
   }
}

Indexed Characters

By default, 100000 characters are extracted when indexing the content. This default can be changed with the index.mapping.attachment.indexed_chars setting. It can also be provided per indexed document using the _indexed_chars parameter. Set it to -1 to extract all text, but note that the entire extracted text must then fit in memory:

PUT /test/person/1
{
    "my_attachment" : {
        "_indexed_chars" : -1,
        "_content" : "... base64 encoded attachment ..."
    }
}
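
The index-wide default can be changed via the index.mapping.attachment.indexed_chars setting named above. A sketch, assuming the setting is applied at index creation time (the 200000 value is only an example):

PUT /test
{
  "settings": {
    "index.mapping.attachment.indexed_chars": 200000
  }
}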

Metadata parsing error handling

Errors can occur while extracting metadata, for example when parsing dates. Parsing errors are ignored by default, so your document is still indexed.

You can disable this behavior by setting the index.mapping.attachment.ignore_errors setting to false.
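
A sketch of disabling it, assuming the setting is applied when the index is created:

PUT /test
{
  "settings": {
    "index.mapping.attachment.ignore_errors": false
  }
}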

Language Detection

By default, language detection is disabled (false), as it comes with a performance cost. This default can be changed with the index.mapping.attachment.detect_language setting. It can also be enabled per indexed document using the _detect_language parameter.
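
For example, a sketch of enabling detection for a single document via the _detect_language parameter (the parameter name comes from this documentation; the value shown is an assumption):

PUT /test/person/1
{
    "my_attachment" : {
        "_detect_language" : true,
        "_content" : "... base64 encoded attachment ..."
    }
}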

Note that you can force the language using the _language field when sending your document:

{
    "my_attachment" : {
        "_language" : "en",
        "_content" : "... base64 encoded attachment ..."
    }
}

Highlighting attachments

If you want to highlight your attachment content, you will need to set "store": true and "term_vector":"with_positions_offsets" for your attachment field. Here is a full script which does it:

DELETE /test
PUT /test
PUT /test/person/_mapping
{
  "person": {
    "properties": {
      "file": {
        "type": "attachment",
        "path": "full",
        "fields": {
          "file": {
            "type": "string",
            "term_vector":"with_positions_offsets",
            "store": true
          }
        }
      }
    }
  }
}
PUT /test/person/1?refresh=true
{
  "file": "IkdvZCBTYXZlIHRoZSBRdWVlbiIgKGFsdGVybmF0aXZlbHkgIkdvZCBTYXZlIHRoZSBLaW5nIg=="
}
GET /test/person/_search
{
  "fields": [],
  "query": {
    "match": {
      "file": "king queen"
    }
  },
  "highlight": {
    "fields": {
      "file": {
      }
    }
  }
}

It gives back:

{
   "took": 9,
   "timed_out": false,
   "_shards": {
      "total": 1,
      "successful": 1,
      "failed": 0
   },
   "hits": {
      "total": 1,
      "max_score": 0.13561106,
      "hits": [
         {
            "_index": "test",
            "_type": "person",
            "_id": "1",
            "_score": 0.13561106,
            "highlight": {
               "file": [
                  "\"God Save the <em>Queen</em>\" (alternatively \"God Save the <em>King</em>\"\n"
               ]
            }
         }
      ]
   }
}

Stand alone runner

If you want to run some tests within your IDE, you can use the StandaloneRunner class. It accepts the following arguments:

  • -u file://URL/TO/YOUR/DOC
  • --size: set the extracted size (defaults to the mapper attachments size)
  • BASE64-encoded binary content, passed directly as an argument

Example:

StandaloneRunner BASE64Text
StandaloneRunner -u /tmp/mydoc.pdf
StandaloneRunner -u /tmp/mydoc.pdf --size 1000000

It produces something like:

## Extracted text
--------------------- BEGIN -----------------------
This is the extracted text
---------------------- END ------------------------
## Metadata
- author: null
- content_length: null
- content_type: application/pdf
- date: null
- keywords: null
- language: null
- name: null
- title: null

License

This software is licensed under the Apache 2 license, quoted below.

Copyright 2009-2014 Elasticsearch <http://www.elasticsearch.org>

Licensed under the Apache License, Version 2.0 (the "License"); you may not
use this file except in compliance with the License. You may obtain a copy of
the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations under
the License.