Compare commits

..

47 Commits

Author SHA1 Message Date
Magese
f0059e2c78 Resolve QueryParser exception. 2022-01-04 10:36:49 +08:00
Magese
fb4defedb7 Format comments. 2022-01-04 09:59:25 +08:00
Magese
9ed56a0f41 Format comments. 2022-01-04 09:56:41 +08:00
Magese
70d40bc3af Using final. 2022-01-04 09:51:21 +08:00
Magese
ef2dbeb979 Format comments. 2022-01-04 09:48:28 +08:00
Magese
872aac8298 Using final and volatile. 2022-01-04 09:44:12 +08:00
Magese
052e8f476e Package sorting. 2022-01-04 09:36:59 +08:00
Magese
6cc121e98d Remove HitCount Badges. 2021-12-31 17:56:33 +08:00
Magese
32a9369680 Format comments. 2021-12-31 17:51:35 +08:00
Magese
dd7822b6be Optimize sorting algorithm logic. 2021-12-31 17:48:21 +08:00
Magese
6ef4798752 Format code and comments. 2021-12-31 17:43:40 +08:00
Magese
56f23a9027 Format comments. 2021-12-31 17:38:43 +08:00
Magese
5ab517079b Format comments. 2021-12-31 17:36:34 +08:00
Magese
020f83e665 Declare constructor-initialized member variables as final. 2021-12-31 17:35:32 +08:00
Magese
ab50f161e6 Format comments. 2021-12-31 17:33:07 +08:00
Magese
47439fa94b Format comments. 2021-12-31 17:31:28 +08:00
Magese
c938bf1f2b Declare member variables as final; optimize conditional logic. 2021-12-31 17:29:59 +08:00
Magese
92cb2a28d6 Format code and comments. 2021-12-31 17:27:42 +08:00
Magese
f173925dc0 Optimize the word-prefix check logic. 2021-12-31 17:17:47 +08:00
Magese
f9bc7a12fa Format code and comments. 2021-12-31 17:14:07 +08:00
Magese
df29bdc4df Format code. 2021-12-31 17:11:52 +08:00
Magese
3ec8076730 Improve log formatting. 2021-12-31 17:10:19 +08:00
Magese
7149c54de7 Use the volatile keyword for the singleton instance. 2021-12-31 17:02:02 +08:00
Magese
fff131a45a Fix spelling errors. 2021-12-31 16:59:38 +08:00
Magese
0c6560fe61 Upgrade Lucene to 8.5.0. 2021-12-23 17:09:21 +08:00
Magese
bf3df0e58c Remove demo; update configuration. 2021-12-23 14:30:10 +08:00
老王
4180900415
Update README.md 2021-05-26 16:32:30 +08:00
gaozhicheng
eaf58c0f1d Update README.md 2021-03-22 11:49:04 +08:00
gaozhicheng
aba7aeff19 Update README.md 2021-03-22 11:48:33 +08:00
gaozhicheng
a234d047cd Update README.md 2021-03-22 11:46:41 +08:00
gaozhicheng
8d67193774 Update lucene to 8.4.0 2021-03-22 11:41:13 +08:00
gaozhicheng
9b731bb002 Update README.md. 2020-12-30 11:42:31 +08:00
gaozhicheng
8b04070253 Upgrade Lucene to 8.3.1. 2020-12-30 10:59:19 +08:00
gaozhicheng
4dd4a86b4d Update the default main dictionary. 2020-12-30 10:28:45 +08:00
Magese
59b86a341f
Merge pull request #15 from magese/dependabot/maven/junit-junit-4.13.1
Bump junit from 4.11 to 4.13.1
2020-12-14 14:15:11 +08:00
dependabot[bot]
a45434162d
Bump junit from 4.11 to 4.13.1
Bumps [junit](https://github.com/junit-team/junit4) from 4.11 to 4.13.1.
- [Release notes](https://github.com/junit-team/junit4/releases)
- [Changelog](https://github.com/junit-team/junit4/blob/main/doc/ReleaseNotes4.11.md)
- [Commits](https://github.com/junit-team/junit4/compare/r4.11...r4.13.1)

Signed-off-by: dependabot[bot] <support@github.com>
2020-10-13 07:05:33 +00:00
magese
95ba364090 Update README.md 2019-11-12 11:46:03 +08:00
magese
0c8992fd80 Update lucene version to 8.3.0 2019-11-12 11:30:57 +08:00
magese
356d9d9ae9 Add a useMainDict property to control whether the main dictionary is loaded. 2019-11-12 11:29:56 +08:00
magese
cd43d4b9ee Add a use_main_dict config option to control whether the main dictionary is loaded. 2019-11-12 11:27:22 +08:00
magese
5068a58e0f Update jdk version 2019-09-27 14:31:16 +08:00
magese
107b4ced30 Update travis properties 2019-09-27 14:29:12 +08:00
magese
e6817ecba5 Extract method from duplicate code 2019-09-27 09:54:15 +08:00
magese
0916f3f8bd Update lucene to 8.2.0 2019-09-27 09:52:45 +08:00
magese
a3705f5753 Merge remote-tracking branch 'origin/master' 2019-07-11 15:08:40 +08:00
magese
cc9f0d7bea Update lucene to 8.1.1 2019-07-11 13:51:05 +08:00
Magese
7a2515a134
Update README.md
Add HitCount
2019-05-29 08:58:17 +08:00
31 changed files with 1600 additions and 1703 deletions

View File

@@ -1,3 +1,4 @@
 language: java
 jdk:
-  - oraclejdk8
+  - openjdk8

View File

@ -25,7 +25,7 @@
``` ```
4. 配置Solr的`managed-schema`,添加`ik分词器`,示例如下; 4. 配置Solr的`managed-schema`,添加`ik分词器`,示例如下;
```console ```xml
<!-- ik分词器 --> <!-- ik分词器 -->
<fieldType name="text_ik" class="solr.TextField"> <fieldType name="text_ik" class="solr.TextField">
<analyzer type="index"> <analyzer type="index">
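For reference, a complete `text_ik` fieldType definition along the lines of the truncated snippet above might look like the sketch below. The `IKTokenizerFactory` class name and the `useSmart` attribute are assumptions drawn from the project's usual configuration, not shown in this diff:

```xml
<!-- ik tokenizer: fine-grained splitting at index time, smart splitting at query time -->
<fieldType name="text_ik" class="solr.TextField">
    <analyzer type="index">
        <tokenizer class="org.wltea.analyzer.lucene.IKTokenizerFactory" useSmart="false"/>
    </analyzer>
    <analyzer type="query">
        <tokenizer class="org.wltea.analyzer.lucene.IKTokenizerFactory" useSmart="true"/>
    </analyzer>
</fieldType>
```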

View File

@@ -12,8 +12,10 @@ ik-analyzer for solr 7.x-8.x
 <!-- /Badges section end. -->
 ## Introduction
-#### Compatible with the latest Solr 7 & 8
-#### Extends the original IK dictionary
+**Compatible with the latest Solr 7 & 8**
+
+**Extends the original IK dictionary**
+
 | Tokenizer | Dictionary entries | Last updated |
 | :------: | :------: | :------: |
 | ik | 275k | 2012 |
@@ -21,23 +23,27 @@ ik-analyzer for solr 7.x-8.x
 | word | 642k | 2014 |
 | jieba | 584k | 2012 |
 | jcesg | 166k | 2018 |
-| sougou dictionary | 1,152k | 2019 |
-#### Consolidating the dictionaries above yields roughly 1,871k entries;
-#### Dynamic dictionary loading: newly added dictionaries are loaded without restarting the Solr service.
-* The original author of IKAnalyzer is 林良益 <linliangyi2007@gmail.com>; the project site is <http://code.google.com/p/ik-analyzer>
-* The dynamic-loading feature is adapted from a blog post by [@星火燎原智勇](http://www.cnblogs.com/liang1101/articles/6395016.html); GitHub: [@liang68](https://github.com/liang68)
+| sougou dictionary | 1,152k | 2020 |
+
+**Consolidating the dictionaries above yields roughly 1,871k entries;**
+
+**Dynamic dictionary loading: newly added dictionaries are loaded without restarting the Solr service.**
+> <small>To disable the default main dictionary, set `use_main_dict` to `false` in the `IKAnalyzer.cfg.xml` config file.</small>
+> * The original author of IKAnalyzer is 林良益 <linliangyi2007@gmail.com>; the project site is <http://code.google.com/p/ik-analyzer>
+> * The dynamic-loading feature is adapted from a blog post by [@星火燎原智勇](http://www.cnblogs.com/liang1101/articles/6395016.html); GitHub: [@liang68](https://github.com/liang68)
 ## Usage
-* jar download: [![GitHub version](https://img.shields.io/badge/version-8.1.0-519dd9.svg)](https://search.maven.org/remotecontent?filepath=com/github/magese/ik-analyzer/8.1.0/ik-analyzer-8.1.0.jar)
+* jar download: [![GitHub version](https://img.shields.io/badge/version-8.5.0-519dd9.svg)](https://search.maven.org/remotecontent?filepath=com/github/magese/ik-analyzer/8.5.0/ik-analyzer-8.5.0.jar)
 * Release history: [![GitHub version](https://img.shields.io/maven-central/v/com.github.magese/ik-analyzer.svg?style=flat-square)](https://search.maven.org/search?q=g:com.github.magese%20AND%20a:ik-analyzer&core=gav)
-```console
+```xml
 <!-- Maven repository coordinates -->
 <dependency>
     <groupId>com.github.magese</groupId>
     <artifactId>ik-analyzer</artifactId>
-    <version>8.1.0</version>
+    <version>8.5.0</version>
 </dependency>
 ```
@@ -57,7 +63,7 @@ ik-analyzer for solr 7.x-8.x
 ```
 3. Configure Solr's `managed-schema` and add the `ik` tokenizer, for example:
-```console
+```xml
 <!-- ik tokenizer -->
 <fieldType name="text_ik" class="solr.TextField">
     <analyzer type="index">
@@ -75,8 +81,16 @@ ik-analyzer for solr 7.x-8.x
 ![analyzer](./img/analyzer.png)
-5. Notes on the `ik.conf` file:
-```console
+5. Notes on the `IKAnalyzer.cfg.xml` config file:
+
+    | Name | Type | Description | Default |
+    | ------ | ------ | ------ | ------ |
+    | use_main_dict | boolean | whether to use the default main dictionary | true |
+    | ext_dict | String | extension dictionary file names, multiple entries separated by semicolons | ext.dic; |
+    | ext_stopwords | String | stop-word dictionary file names, multiple entries separated by semicolons | stopword.dic; |
+6. Notes on the `ik.conf` file:
+```properties
 files=dynamicdic.txt
 lastupdate=0
 ```
@@ -84,33 +98,43 @@ ik-analyzer for solr 7.x-8.x
 1. `files` is the dynamic-dictionary list; multiple dictionary files may be set, separated by commas; the default dynamic dictionary is `dynamicdic.txt`;
 2. `lastupdate` defaults to `0`; increment it after every change to a dynamic dictionary, otherwise newly added words will not be loaded into memory. <s>`lastupdate` is an `int` and does not support timestamps; if you want timestamps, change the `int` in the source to `long`;</s> `2018-08-23`: `lastUpdate` has been changed to `long` in the source, so timestamps can now be used.
-6. `dynamicdic.txt` is the dynamic dictionary
+7. `dynamicdic.txt` is the dynamic dictionary
    Words configured in this file are loaded into memory without restarting the service.
    Words starting with `#` are treated as comments and are not loaded into memory.
 ## Changelog
-- `2019-05-27:`
+- **2021-12-23:** upgrade Lucene to `8.5.0`
+- **2021-03-22:** upgrade Lucene to `8.4.0`
+- **2020-12-30:**
+    - upgrade Lucene to `8.3.1`
+    - update the dictionaries
+- **2019-11-12:**
+    - upgrade Lucene to `8.3.0`
+    - add the `use_main_dict` option to `IKAnalyzer.cfg.xml` to control whether the default main dictionary is loaded
+- **2019-09-27:** upgrade Lucene to `8.2.0`
+- **2019-07-11:** upgrade Lucene to `8.1.1`
+- **2019-05-27:**
     - upgrade Lucene to `8.1.0`
     - deduplicate repeated entries in the original dictionary
     - add the 2019 Sogou trending-words dictionary, about 20k entries
-- `2019-05-15:` upgrade Lucene to `8.0.0`, with Solr 8 support
-- `2019-03-01:` upgrade Lucene to `7.7.1`
-- `2019-02-15:` upgrade Lucene to `7.7.0`
-- `2018-12-26:`
+- **2019-05-15:** upgrade Lucene to `8.0.0`, with Solr 8 support
+- **2019-03-01:** upgrade Lucene to `7.7.1`
+- **2019-02-15:** upgrade Lucene to `7.7.0`
+- **2018-12-26:**
     - upgrade Lucene to `7.6.0`
     - solr-cloud compatibility: the dynamic-dictionary config file and dynamic dictionaries can be managed by `zookeeper`
     - dynamic dictionaries support comments; lines starting with `#` are treated as comments
-- `2018-12-04:` consolidate and update the dictionary list `magese.dic`
-- `2018-10-10:` upgrade Lucene to `7.5.0`
-- `2018-09-03:` improve comments and log output; drop some Chinese output to avoid mojibake across charsets; the hashcode of the invoked inform method is now printed
-- `2018-08-23: `
+- **2018-12-04:** consolidate and update the dictionary list `magese.dic`
+- **2018-10-10:** upgrade Lucene to `7.5.0`
+- **2018-09-03:** improve comments and log output; drop some Chinese output to avoid mojibake across charsets; the hashcode of the invoked inform method is now printed
+- **2018-08-23:**
     - improve the code comments for dynamic dictionary reloading;
     - change the lastUpdate property in the ik.conf config file to long; timestamps are now supported
-- `2018-08-13:` update the Maven repository address
-- `2018-08-01:` remove the default extension words and stop words
-- `2018-07-23:` upgrade Lucene to `7.4.0`
+- **2018-08-13:** update the Maven repository address
+- **2018-08-01:** remove the default extension words and stop words
+- **2018-07-23:** upgrade Lucene to `7.4.0`
 ## 感谢 Thanks
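The `ik.conf` semantics described above (a `files` list plus a monotonically increasing `lastupdate` value, now a `long` so timestamps work) can be sketched as a small reload check. This is an illustrative sketch only; the class and method names are hypothetical and this is not the project's actual implementation:

```java
import java.io.IOException;
import java.io.StringReader;
import java.io.UncheckedIOException;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Properties;

/** Hypothetical sketch of the ik.conf reload check; not the project's real code. */
public class IkConfSketch {
    // long since 2018-08-23, so plain timestamps also work as lastupdate values
    private long lastUpdate = 0L;

    /** Returns the dictionary files to reload, or an empty list if nothing changed. */
    public List<String> checkForReload(String ikConfContent) {
        Properties props = new Properties();
        try {
            props.load(new StringReader(ikConfContent));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        long stamp = Long.parseLong(props.getProperty("lastupdate", "0").trim());
        if (stamp <= this.lastUpdate) {
            // value did not increase: new words are NOT loaded into memory
            return Collections.emptyList();
        }
        this.lastUpdate = stamp;
        // "files" is a comma-separated list of dynamic dictionaries
        return Arrays.asList(props.getProperty("files", "dynamicdic.txt").split(","));
    }
}
```

A monitor thread could call `checkForReload` periodically and feed any returned file names to the dictionary loader; the point is that a reload happens only when `lastupdate` actually increases.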

pom.xml
View File

@@ -4,7 +4,7 @@
 <groupId>com.github.magese</groupId>
 <artifactId>ik-analyzer</artifactId>
-<version>8.1.0</version>
+<version>8.5.0</version>
 <packaging>jar</packaging>
 <name>ik-analyzer-solr</name>
@@ -13,20 +13,13 @@
 <properties>
     <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
-    <lucene.version>8.1.0</lucene.version>
+    <lucene.version>8.5.0</lucene.version>
     <javac.src.version>1.8</javac.src.version>
     <javac.target.version>1.8</javac.target.version>
     <maven.compiler.plugin.version>3.3</maven.compiler.plugin.version>
 </properties>
 <dependencies>
-    <dependency>
-        <groupId>junit</groupId>
-        <artifactId>junit</artifactId>
-        <version>4.11</version>
-        <scope>test</scope>
-    </dependency>
     <dependency>
         <groupId>org.apache.lucene</groupId>
         <artifactId>lucene-core</artifactId>
@@ -152,4 +145,3 @@
 </profile>
 </profiles>
 </project>

View File

@@ -1,6 +1,6 @@
 /*
- * IK 中文分词  版本 8.1.0
- * IK Analyzer release 8.1.0
+ * IK 中文分词  版本 8.5.0
+ * IK Analyzer release 8.5.0
  *
  * Licensed to the Apache Software Foundation (ASF) under one or more
  * contributor license agreements. See the NOTICE file distributed with
@@ -21,8 +21,8 @@
  * 版权声明 2012,乌龙茶工作室
  * provided by Linliangyi and copyright 2012 by Oolong studio
  *
- * 8.1.0版本 Magese (magese@live.cn) 更新
- * release 8.1.0 update by Magese(magese@live.cn)
+ * 8.5.0版本 Magese (magese@live.cn) 更新
+ * release 8.5.0 update by Magese(magese@live.cn)
  *
  */
 package org.wltea.analyzer.cfg;
@@ -50,6 +50,19 @@ public interface Configuration {
      */
     void setUseSmart(boolean useSmart);
 
+    /**
+     * Get whether the main dictionary is used.
+     *
+     * @return true to load the default main dictionary, false to skip it
+     */
+    boolean useMainDict();
+
+    /**
+     * Set whether the main dictionary is used.
+     *
+     * @param useMainDic true to load the default main dictionary, false to skip it
+     */
+    void setUseMainDict(boolean useMainDic);
 
     /**
      * Get the main dictionary path.
@@ -63,7 +76,7 @@ public interface Configuration {
      *
      * @return String quantifier dictionary path
      */
-    String getQuantifierDicionary();
+    String getQuantifierDictionary();
 
     /**
      * Get the extension dictionary config path.

View File

@@ -1,6 +1,6 @@
 /*
- * IK 中文分词  版本 8.1.0
- * IK Analyzer release 8.1.0
+ * IK 中文分词  版本 8.5.0
+ * IK Analyzer release 8.5.0
  *
  * Licensed to the Apache Software Foundation (ASF) under one or more
  * contributor license agreements. See the NOTICE file distributed with
@@ -21,8 +21,8 @@
  * 版权声明 2012,乌龙茶工作室
  * provided by Linliangyi and copyright 2012 by Oolong studio
  *
- * 8.1.0版本 Magese (magese@live.cn) 更新
- * release 8.1.0 update by Magese(magese@live.cn)
+ * 8.5.0版本 Magese (magese@live.cn) 更新
+ * release 8.5.0 update by Magese(magese@live.cn)
  *
  */
 package org.wltea.analyzer.cfg;
@@ -41,24 +41,28 @@ public class DefaultConfig implements Configuration {
     /*
      * default dictionary paths for the tokenizer
      */
-    private static final String PATH_DIC_MAIN = "dict/magese.dic";
+    private static final String PATH_DIC_MAIN = "dict/main_dic_2020.dic";
     private static final String PATH_DIC_QUANTIFIER = "dict/quantifier.dic";
 
     /*
      * tokenizer config file path
     */
     private static final String FILE_NAME = "IKAnalyzer.cfg.xml";
+    // config property: whether to use the main dictionary
+    private static final String USE_MAIN = "use_main_dict";
     // config property: extension dictionary
     private static final String EXT_DICT = "ext_dict";
     // config property: extension stop-word dictionary
     private static final String EXT_STOP = "ext_stopwords";
 
-    private Properties props;
+    private final Properties props;
 
-    /*
-     * whether to use smart segmentation
-     */
+    // whether to use smart segmentation
     private boolean useSmart;
+    // whether to load the main dictionary
+    private boolean useMainDict = true;
 
     /**
      * Return the singleton.
     *
@@ -100,10 +104,33 @@ public class DefaultConfig implements Configuration {
      *
      * @param useSmart true to use the smart segmentation strategy, false to use fine-grained segmentation
      */
+    @Override
     public void setUseSmart(boolean useSmart) {
         this.useSmart = useSmart;
     }
 
+    /**
+     * Get whether the main dictionary is used.
+     *
+     * @return true to load the default main dictionary, false to skip it
+     */
+    public boolean useMainDict() {
+        String useMainDictCfg = props.getProperty(USE_MAIN);
+        if (useMainDictCfg != null && useMainDictCfg.trim().length() > 0)
+            setUseMainDict(Boolean.parseBoolean(useMainDictCfg));
+        return useMainDict;
+    }
+
+    /**
+     * Set whether the main dictionary is used.
+     *
+     * @param useMainDict true to load the default main dictionary, false to skip it
+     */
+    @Override
+    public void setUseMainDict(boolean useMainDict) {
+        this.useMainDict = useMainDict;
+    }
 
     /**
      * Get the main dictionary path.
     *
@@ -118,7 +145,7 @@ public class DefaultConfig implements Configuration {
      *
      * @return String quantifier dictionary path
      */
-    public String getQuantifierDicionary() {
+    public String getQuantifierDictionary() {
         return PATH_DIC_QUANTIFIER;
     }
@@ -142,7 +169,6 @@ public class DefaultConfig implements Configuration {
         return extDictFiles;
     }
-
     /**
      * Get the extension stop-word dictionary config path.
      *
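The "declare fields `final`" and "use `volatile` for the singleton" commits in this compare apply a standard Java idiom: double-checked locking requires the instance field to be `volatile` so other threads cannot observe a partially constructed object. A minimal sketch with a hypothetical class name (not the project's code) looks like this:

```java
/** Minimal double-checked-locking singleton sketch; class name is hypothetical. */
public class ConfigSingleton {
    // volatile forbids the reordering that could publish a half-built instance
    private static volatile ConfigSingleton instance;

    private ConfigSingleton() {}

    public static ConfigSingleton getInstance() {
        if (instance == null) {                       // first check, lock-free fast path
            synchronized (ConfigSingleton.class) {
                if (instance == null) {               // second check, under the lock
                    instance = new ConfigSingleton();
                }
            }
        }
        return instance;
    }
}
```

Without `volatile`, the pattern is broken under the Java Memory Model even though it usually appears to work, which is presumably why these commits add the keyword.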

View File

@@ -1,6 +1,6 @@
 /*
- * IK 中文分词  版本 8.1.0
- * IK Analyzer release 8.1.0
+ * IK 中文分词  版本 8.5.0
+ * IK Analyzer release 8.5.0
  *
  * Licensed to the Apache Software Foundation (ASF) under one or more
  * contributor license agreements. See the NOTICE file distributed with
@@ -21,23 +21,19 @@
  * 版权声明 2012,乌龙茶工作室
  * provided by Linliangyi and copyright 2012 by Oolong studio
  *
- * 8.1.0版本 Magese (magese@live.cn) 更新
- * release 8.1.0 update by Magese(magese@live.cn)
+ * 8.5.0版本 Magese (magese@live.cn) 更新
+ * release 8.5.0 update by Magese(magese@live.cn)
  *
  */
 package org.wltea.analyzer.core;
 
-import java.io.IOException;
-import java.io.Reader;
-import java.util.HashMap;
-import java.util.HashSet;
-import java.util.LinkedList;
-import java.util.Map;
-import java.util.Set;
-
 import org.wltea.analyzer.cfg.Configuration;
 import org.wltea.analyzer.dic.Dictionary;
+
+import java.io.IOException;
+import java.io.Reader;
+import java.util.*;
 
 /**
  * Tokenizer context state
  */
@@ -66,17 +62,17 @@ class AnalyzeContext {
     // sub-tokenizer lock;
     // a non-empty set means a sub-tokenizer is holding segmentBuff
-    private Set<String> buffLocker;
+    private final Set<String> buffLocker;
 
     // raw tokenization results, before ambiguity resolution
     private QuickSortSet orgLexemes;
 
     // LexemePath position index table
-    private Map<Integer, LexemePath> pathMap;
+    private final Map<Integer, LexemePath> pathMap;
 
     // final tokenization result set
-    private LinkedList<Lexeme> results;
+    private final LinkedList<Lexeme> results;
 
     // tokenizer configuration
-    private Configuration cfg;
+    private final Configuration cfg;
 
     AnalyzeContext(Configuration cfg) {
         this.cfg = cfg;
@@ -254,7 +250,7 @@ class AnalyzeContext {
     */
     void outputToResult() {
         int index = 0;
-        for (; index <= this.cursor; ) {
+        while (index <= this.cursor) {
             // skip non-CJK characters
             if (CharacterUtil.CHAR_USELESS == this.charTypes[index]) {
                 index++;
@@ -353,6 +349,7 @@ class AnalyzeContext {
         if (Lexeme.TYPE_ARABIC == result.getLexemeType()) {
             Lexeme nextLexeme = this.results.peekFirst();
             boolean appendOk = false;
+            if (nextLexeme != null) {
                 if (Lexeme.TYPE_CNUM == nextLexeme.getLexemeType()) {
                     // merge English numeral + Chinese numeral
                     appendOk = result.append(nextLexeme, Lexeme.TYPE_CNUM);
@@ -360,6 +357,7 @@ class AnalyzeContext {
                     // merge English numeral + Chinese quantifier
                     appendOk = result.append(nextLexeme, Lexeme.TYPE_CQUAN);
                 }
+            }
             if (appendOk) {
                 // pop the merged lexeme
                 this.results.pollFirst();

View File

@@ -1,6 +1,6 @@
 /*
- * IK 中文分词  版本 8.1.0
- * IK Analyzer release 8.1.0
+ * IK 中文分词  版本 8.5.0
+ * IK Analyzer release 8.5.0
  *
  * Licensed to the Apache Software Foundation (ASF) under one or more
  * contributor license agreements. See the NOTICE file distributed with
@@ -21,18 +21,18 @@
  * 版权声明 2012,乌龙茶工作室
  * provided by Linliangyi and copyright 2012 by Oolong studio
  *
- * 8.1.0版本 Magese (magese@live.cn) 更新
- * release 8.1.0 update by Magese(magese@live.cn)
+ * 8.5.0版本 Magese (magese@live.cn) 更新
+ * release 8.5.0 update by Magese(magese@live.cn)
  *
  */
 package org.wltea.analyzer.core;
 
-import java.util.LinkedList;
-import java.util.List;
-
 import org.wltea.analyzer.dic.Dictionary;
 import org.wltea.analyzer.dic.Hit;
+
+import java.util.LinkedList;
+import java.util.List;
 
 /**
  * Chinese and Japanese/Korean sub-tokenizer
@@ -42,7 +42,7 @@ class CJKSegmenter implements ISegmenter {
     // sub-tokenizer label
     private static final String SEGMENTER_NAME = "CJK_SEGMENTER";
     // queue of pending tokenization hits
-    private List<Hit> tmpHits;
+    private final List<Hit> tmpHits;
 
     CJKSegmenter() {
@@ -80,17 +80,16 @@ class CJKSegmenter implements ISegmenter {
             // *********************************
             // match the single character at the current cursor position
             Hit singleCharHit = Dictionary.getSingleton().matchInMainDict(context.getSegmentBuff(), context.getCursor(), 1);
-            if (singleCharHit.isMatch()) { // the single character is a word
-                // emit the current lexeme
-                Lexeme newLexeme = new Lexeme(context.getBufferOffset(), context.getCursor(), 1, Lexeme.TYPE_CNWORD);
-                context.addLexeme(newLexeme);
-                // it is also a word prefix
-                if (singleCharHit.isPrefix()) {
-                    // on a prefix match, queue the hit
-                    this.tmpHits.add(singleCharHit);
-                }
-            } else if (singleCharHit.isPrefix()) { // the single character is a word prefix
-                // on a prefix match, queue the hit
-                this.tmpHits.add(singleCharHit);
-            }
+            // the single character is a word on its own
+            if (singleCharHit.isMatch()) {
+                // emit the current lexeme
+                Lexeme newLexeme = new Lexeme(context.getBufferOffset(), context.getCursor(), 1, Lexeme.TYPE_CNWORD);
+                context.addLexeme(newLexeme);
+            }
+            // the single character is a word prefix
+            if (singleCharHit.isPrefix()) {
+                // on a prefix match, queue the hit
+                this.tmpHits.add(singleCharHit);
+            }

View File

@@ -1,6 +1,6 @@
 /*
- * IK 中文分词  版本 8.1.0
- * IK Analyzer release 8.1.0
+ * IK 中文分词  版本 8.5.0
+ * IK Analyzer release 8.5.0
  *
  * Licensed to the Apache Software Foundation (ASF) under one or more
  * contributor license agreements. See the NOTICE file distributed with
@@ -21,8 +21,8 @@
  * 版权声明 2012,乌龙茶工作室
  * provided by Linliangyi and copyright 2012 by Oolong studio
  *
- * 8.1.0版本 Magese (magese@live.cn) 更新
- * release 8.1.0 update by Magese(magese@live.cn)
+ * 8.5.0版本 Magese (magese@live.cn) 更新
+ * release 8.5.0 update by Magese(magese@live.cn)
  *
  */
 package org.wltea.analyzer.core;
@@ -36,7 +36,6 @@ import java.util.List;
 import java.util.Set;
 
 /**
- *
  * Chinese quantifier sub-tokenizer
  */
 class CN_QuantifierSegmenter implements ISegmenter {
@@ -44,14 +43,14 @@ class CN_QuantifierSegmenter implements ISegmenter {
     // sub-tokenizer label
     private static final String SEGMENTER_NAME = "QUAN_SEGMENTER";
 
-    private static Set<Character> ChnNumberChars = new HashSet<>();
+    private static final Set<Character> CHN_NUMBER_CHARS = new HashSet<>();
 
     static {
         // Chinese numerals
-        // Cnum
         String chn_Num = "一二两三四五六七八九十零壹贰叁肆伍陆柒捌玖拾百千万亿拾佰仟萬億兆卅廿";
         char[] ca = chn_Num.toCharArray();
         for (char nChar : ca) {
-            ChnNumberChars.add(nChar);
+            CHN_NUMBER_CHARS.add(nChar);
         }
     }
@@ -68,7 +67,7 @@ class CN_QuantifierSegmenter implements ISegmenter {
     private int nEnd;
 
     // queue of pending quantifier hits
-    private List<Hit> countHits;
+    private final List<Hit> countHits;
 
     CN_QuantifierSegmenter() {
@@ -111,14 +110,14 @@ class CN_QuantifierSegmenter implements ISegmenter {
     private void processCNumber(AnalyzeContext context) {
         if (nStart == -1 && nEnd == -1) { // initial state
             if (CharacterUtil.CHAR_CHINESE == context.getCurrentCharType()
-                    && ChnNumberChars.contains(context.getCurrentChar())) {
+                    && CHN_NUMBER_CHARS.contains(context.getCurrentChar())) {
                 // record the numeral's start and end positions
                 nStart = context.getCursor();
                 nEnd = context.getCursor();
             }
         } else { // processing state
             if (CharacterUtil.CHAR_CHINESE == context.getCurrentCharType()
-                    && ChnNumberChars.contains(context.getCurrentChar())) {
+                    && CHN_NUMBER_CHARS.contains(context.getCurrentChar())) {
                 // record the numeral's end position
                 nEnd = context.getCursor();
             } else {
@@ -144,6 +143,7 @@ class CN_QuantifierSegmenter implements ISegmenter {
     /**
      * Process Chinese quantifiers.
+     *
      * @param context content to process
      */
     private void processCount(AnalyzeContext context) {
@@ -179,21 +179,19 @@ class CN_QuantifierSegmenter implements ISegmenter {
             // *********************************
             // match the single character at the current cursor position
             Hit singleCharHit = Dictionary.getSingleton().matchInQuantifierDict(context.getSegmentBuff(), context.getCursor(), 1);
-            if (singleCharHit.isMatch()) { // the single character is a quantifier
-                // emit the current lexeme
-                Lexeme newLexeme = new Lexeme(context.getBufferOffset(), context.getCursor(), 1, Lexeme.TYPE_COUNT);
-                context.addLexeme(newLexeme);
-                // it is also a word prefix
-                if (singleCharHit.isPrefix()) {
-                    // on a prefix match, queue the hit
-                    this.countHits.add(singleCharHit);
-                }
-            } else if (singleCharHit.isPrefix()) { // the single character is a quantifier prefix
-                // on a prefix match, queue the hit
-                this.countHits.add(singleCharHit);
-            }
+            // the single character is a quantifier on its own
+            if (singleCharHit.isMatch()) {
+                // emit the current lexeme
+                Lexeme newLexeme = new Lexeme(context.getBufferOffset(), context.getCursor(), 1, Lexeme.TYPE_COUNT);
+                context.addLexeme(newLexeme);
+            }
+            // on a prefix match, queue the hit
+            if (singleCharHit.isPrefix()) {
+                this.countHits.add(singleCharHit);
+            }
 
         } else {
             // the input is not a Chinese character
@@ -229,6 +227,7 @@ class CN_QuantifierSegmenter implements ISegmenter {
     /**
      * Add a numeral lexeme to the result set.
+     *
      * @param context context to which the lexeme is added
     */
     private void outputNumLexeme(AnalyzeContext context) {

View File

@@ -1,6 +1,6 @@
 /*
- * IK 中文分词  版本 8.1.0
- * IK Analyzer release 8.1.0
+ * IK 中文分词  版本 8.5.0
+ * IK Analyzer release 8.5.0
  *
  * Licensed to the Apache Software Foundation (ASF) under one or more
  * contributor license agreements. See the NOTICE file distributed with
@@ -21,14 +21,13 @@
  * 版权声明 2012,乌龙茶工作室
  * provided by Linliangyi and copyright 2012 by Oolong studio
  *
- * 8.1.0版本 Magese (magese@live.cn) 更新
- * release 8.1.0 update by Magese(magese@live.cn)
+ * 8.5.0版本 Magese (magese@live.cn) 更新
+ * release 8.5.0 update by Magese(magese@live.cn)
  *
  */
 package org.wltea.analyzer.core;
 
 /**
- *
  * Character-set recognition utility class
  */
 class CharacterUtil {
@@ -46,6 +45,7 @@ class CharacterUtil {
     /**
      * Identify the character type.
+     *
      * @param input character to identify
      * @return int one of the character-type constants defined in CharacterUtil
     */
@@ -85,6 +85,7 @@ class CharacterUtil {
     /**
      * Normalize characters (full-width to half-width, upper case to lower case).
+     *
      * @param input character to convert
      * @return char
     */

View File

@@ -1,6 +1,6 @@
 /*
- * IK 中文分词  版本 8.1.0
- * IK Analyzer release 8.1.0
+ * IK 中文分词  版本 8.5.0
+ * IK Analyzer release 8.5.0
  *
  * Licensed to the Apache Software Foundation (ASF) under one or more
  * contributor license agreements. See the NOTICE file distributed with
@@ -21,8 +21,8 @@
  * 版权声明 2012,乌龙茶工作室
  * provided by Linliangyi and copyright 2012 by Oolong studio
  *
- * 8.1.0版本 Magese (magese@live.cn) 更新
- * release 8.1.0 update by Magese(magese@live.cn)
+ * 8.5.0版本 Magese (magese@live.cn) 更新
+ * release 8.5.0 update by Magese(magese@live.cn)
  *
  */
 package org.wltea.analyzer.core;
@@ -35,9 +35,7 @@ import java.util.TreeSet;
  */
 class IKArbitrator {
 
-    IKArbitrator() {
-
-    }
+    IKArbitrator() {}
 
     /**
      * Tokenization ambiguity resolution

View File

@@ -1,6 +1,6 @@
 /*
- * IK 中文分词  版本 8.1.0
- * IK Analyzer release 8.1.0
+ * IK 中文分词  版本 8.5.0
+ * IK Analyzer release 8.5.0
  *
  * Licensed to the Apache Software Foundation (ASF) under one or more
  * contributor license agreements. See the NOTICE file distributed with
@@ -21,35 +21,45 @@
  * 版权声明 2012,乌龙茶工作室
  * provided by Linliangyi and copyright 2012 by Oolong studio
  *
- * 8.1.0版本 Magese (magese@live.cn) 更新
- * release 8.1.0 update by Magese(magese@live.cn)
+ * 8.5.0版本 Magese (magese@live.cn) 更新
+ * release 8.5.0 update by Magese(magese@live.cn)
  *
  */
 package org.wltea.analyzer.core;
 
-import org.wltea.analyzer.cfg.Configuration;
-import org.wltea.analyzer.cfg.DefaultConfig;
-import org.wltea.analyzer.dic.Dictionary;
-
 import java.io.IOException;
 import java.io.Reader;
 import java.util.ArrayList;
 import java.util.List;
+
+import org.wltea.analyzer.cfg.Configuration;
+import org.wltea.analyzer.cfg.DefaultConfig;
+import org.wltea.analyzer.dic.Dictionary;
 
 /**
  * IK tokenizer main class
  */
 public final class IKSegmenter {
 
-    // character stream reader
+    /**
+     * character stream reader
+     */
     private Reader input;
-    // tokenizer configuration
-    private Configuration cfg;
+    /**
+     * tokenizer configuration
+     */
+    private final Configuration cfg;
-    // tokenizer context
+    /**
+     * tokenizer context
+     */
     private AnalyzeContext context;
-    // sub-tokenizer list
+    /**
+     * sub-tokenizer list
+     */
    private List<ISegmenter> segmenters;
-    // tokenization ambiguity arbitrator
+    /**
+     * tokenization ambiguity arbitrator
+     */
    private IKArbitrator arbitrator;
@@ -58,7 +68,6 @@ public final class IKSegmenter {
     *
     * @param input    reader stream
     * @param useSmart true to use the smart segmentation strategy
-     *                 <p>
      *                 non-smart segmentation: fine-grained, outputs all possible splits
      *                 smart segmentation: merges numerals and quantifiers and resolves ambiguity
     */

View File

@@ -1,6 +1,6 @@
 /*
- * IK 中文分词  版本 8.1.0
- * IK Analyzer release 8.1.0
+ * IK 中文分词  版本 8.5.0
+ * IK Analyzer release 8.5.0
  *
  * Licensed to the Apache Software Foundation (ASF) under one or more
  * contributor license agreements. See the NOTICE file distributed with
@@ -21,21 +21,21 @@
  * 版权声明 2012,乌龙茶工作室
  * provided by Linliangyi and copyright 2012 by Oolong studio
  *
- * 8.1.0版本 Magese (magese@live.cn) 更新
- * release 8.1.0 update by Magese(magese@live.cn)
+ * 8.5.0版本 Magese (magese@live.cn) 更新
+ * release 8.5.0 update by Magese(magese@live.cn)
  *
  */
 package org.wltea.analyzer.core;
 
 /**
- *
  * Sub-tokenizer interface
  */
 interface ISegmenter {
 
     /**
      * Read the next possible lexeme from the analyzer.
+     *
      * @param context tokenization context
     */
     void analyze(AnalyzeContext context);

View File

@ -1,6 +1,6 @@
/* /*
* IK 中文分词 版本 8.1.0 * IK 中文分词 版本 8.5.0
* IK Analyzer release 8.1.0 * IK Analyzer release 8.5.0
* *
* Licensed to the Apache Software Foundation (ASF) under one or more * Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with * contributor license agreements. See the NOTICE file distributed with
@ -21,8 +21,8 @@
* 版权声明 2012乌龙茶工作室 * 版权声明 2012乌龙茶工作室
* provided by Linliangyi and copyright 2012 by Oolong studio * provided by Linliangyi and copyright 2012 by Oolong studio
* *
* 8.1.0版本 Magese (magese@live.cn) 更新 * 8.5.0版本 Magese (magese@live.cn) 更新
* release 8.1.0 update by Magese(magese@live.cn) * release 8.5.0 update by Magese(magese@live.cn)
* *
*/ */
package org.wltea.analyzer.core; package org.wltea.analyzer.core;
@@ -34,14 +34,18 @@ import java.util.Arrays;
*/ */
class LetterSegmenter implements ISegmenter { class LetterSegmenter implements ISegmenter {
//子分词器标签 /**
* 子分词器标签
*/
private static final String SEGMENTER_NAME = "LETTER_SEGMENTER"; private static final String SEGMENTER_NAME = "LETTER_SEGMENTER";
//链接符号 /**
* 链接符号
*/
private static final char[] Letter_Connector = new char[]{'#', '&', '+', '-', '.', '@', '_'}; private static final char[] Letter_Connector = new char[]{'#', '&', '+', '-', '.', '@', '_'};
/**
//数字符号 * 数字符号
*/
private static final char[] Num_Connector = new char[]{',', '.'}; private static final char[] Num_Connector = new char[]{',', '.'};
/* /*
* 词元的开始位置 * 词元的开始位置
* 同时作为子分词器状态标识 * 同时作为子分词器状态标识
@@ -53,22 +57,18 @@ class LetterSegmenter implements ISegmenter {
* end记录的是在词元中最后一个出现的Letter但非Sign_Connector的字符的位置 * end记录的是在词元中最后一个出现的Letter但非Sign_Connector的字符的位置
*/ */
private int end; private int end;
/* /*
* 字母起始位置 * 字母起始位置
*/ */
private int englishStart; private int englishStart;
/* /*
* 字母结束位置 * 字母结束位置
*/ */
private int englishEnd; private int englishEnd;
/* /*
* 阿拉伯数字起始位置 * 阿拉伯数字起始位置
*/ */
private int arabicStart; private int arabicStart;
/* /*
* 阿拉伯数字结束位置 * 阿拉伯数字结束位置
*/ */
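The `start`/`end` fields documented above record an in-progress token's span while the segmenter scans the input. As a minimal, hypothetical sketch of that bookkeeping (not IK's actual code — the real LetterSegmenter also handles connector characters and mixed alphanumerics):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the start/end span-tracking idea: 'start' is -1 while no token
// is open; seeing a letter opens a span, and 'end' advances over the rest.
public class LetterSpanDemo {
    /** Returns the [start, end] index pairs of letter runs in the input. */
    public static List<int[]> letterSpans(String text) {
        List<int[]> spans = new ArrayList<>();
        int start = -1, end = -1;                 // -1 means "no open token"
        for (int i = 0; i < text.length(); i++) {
            if (Character.isLetter(text.charAt(i))) {
                if (start == -1) start = i;       // open a new span
                end = i;                          // extend the span
            } else if (start != -1) {
                spans.add(new int[]{start, end}); // close the span
                start = end = -1;
            }
        }
        if (start != -1) spans.add(new int[]{start, end});
        return spans;
    }

    public static void main(String[] args) {
        for (int[] s : letterSpans("abc 123 de")) {
            System.out.println(s[0] + "-" + s[1]);
        }
    }
}
```

For `"abc 123 de"` this yields the spans 0-2 and 8-9; the digits do not open a letter span, mirroring how the segmenter keeps separate start/end pairs per character class.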


@@ -1,6 +1,6 @@
/* /*
* IK 中文分词 版本 8.1.0 * IK 中文分词 版本 8.5.0
* IK Analyzer release 8.1.0 * IK Analyzer release 8.5.0
* *
* Licensed to the Apache Software Foundation (ASF) under one or more * Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with * contributor license agreements. See the NOTICE file distributed with
@@ -21,8 +21,8 @@
* 版权声明 2012乌龙茶工作室 * 版权声明 2012乌龙茶工作室
* provided by Linliangyi and copyright 2012 by Oolong studio * provided by Linliangyi and copyright 2012 by Oolong studio
* *
* 8.1.0版本 Magese (magese@live.cn) 更新 * 8.5.0版本 Magese (magese@live.cn) 更新
* release 8.1.0 update by Magese(magese@live.cn) * release 8.5.0 update by Magese(magese@live.cn)
* *
*/ */
package org.wltea.analyzer.core; package org.wltea.analyzer.core;
@@ -32,34 +32,61 @@ package org.wltea.analyzer.core;
*/ */
@SuppressWarnings("unused") @SuppressWarnings("unused")
public class Lexeme implements Comparable<Lexeme> { public class Lexeme implements Comparable<Lexeme> {
//英文 /**
* 英文
*/
static final int TYPE_ENGLISH = 1; static final int TYPE_ENGLISH = 1;
//数字 /**
* 数字
*/
static final int TYPE_ARABIC = 2; static final int TYPE_ARABIC = 2;
//英文数字混合 /**
* 英文数字混合
*/
static final int TYPE_LETTER = 3; static final int TYPE_LETTER = 3;
//中文词元 /**
* 中文词元
*/
static final int TYPE_CNWORD = 4; static final int TYPE_CNWORD = 4;
//中文单字 /**
* 中文单字
*/
static final int TYPE_CNCHAR = 64; static final int TYPE_CNCHAR = 64;
//日韩文字 /**
* 日韩文字
*/
static final int TYPE_OTHER_CJK = 8; static final int TYPE_OTHER_CJK = 8;
//中文数词 /**
* 中文数词
*/
static final int TYPE_CNUM = 16; static final int TYPE_CNUM = 16;
//中文量词 /**
* 中文量词
*/
static final int TYPE_COUNT = 32; static final int TYPE_COUNT = 32;
//中文数量词 /**
* 中文数量词
*/
static final int TYPE_CQUAN = 48; static final int TYPE_CQUAN = 48;
/**
//词元的起始位移 * 词元的起始位移
*/
private int offset; private int offset;
//词元的相对起始位置 /**
* 词元的相对起始位置
*/
private int begin; private int begin;
//词元的长度 /**
* 词元的长度
*/
private int length; private int length;
//词元文本 /**
* 词元文本
*/
private String lexemeText; private String lexemeText;
//词元类型 /**
* 词元类型
*/
private int lexemeType; private int lexemeType;
@@ -120,7 +147,7 @@ public class Lexeme implements Comparable<Lexeme>{
// this.length < other.getLength() // this.length < other.getLength()
return Integer.compare(other.getLength(), this.length); return Integer.compare(other.getLength(), this.length);
}else{//this.begin > other.getBegin() } else {
return 1; return 1;
} }
} }
@@ -136,8 +163,10 @@ public class Lexeme implements Comparable<Lexeme>{
int getBegin() { int getBegin() {
return begin; return begin;
} }
/** /**
* 获取词元在文本中的起始位置 * 获取词元在文本中的起始位置
*
* @return int * @return int
*/ */
public int getBeginPosition() { public int getBeginPosition() {
@@ -150,6 +179,7 @@ public class Lexeme implements Comparable<Lexeme>{
/** /**
* 获取词元在文本中的结束位置 * 获取词元在文本中的结束位置
*
* @return int * @return int
*/ */
public int getEndPosition() { public int getEndPosition() {
@@ -158,6 +188,7 @@ public class Lexeme implements Comparable<Lexeme>{
/** /**
* 获取词元的字符长度 * 获取词元的字符长度
*
* @return int * @return int
*/ */
public int getLength() { public int getLength() {
@@ -173,6 +204,7 @@ public class Lexeme implements Comparable<Lexeme>{
/** /**
* 获取词元的文本内容 * 获取词元的文本内容
*
* @return String * @return String
*/ */
public String getLexemeText() { public String getLexemeText() {
@@ -194,6 +226,7 @@ public class Lexeme implements Comparable<Lexeme>{
/** /**
* 获取词元类型 * 获取词元类型
*
* @return int * @return int
*/ */
int getLexemeType() { int getLexemeType() {
@@ -202,6 +235,7 @@ public class Lexeme implements Comparable<Lexeme>{
/** /**
* 获取词元类型标示字符串 * 获取词元类型标示字符串
*
* @return String * @return String
*/ */
public String getLexemeTypeString() { public String getLexemeTypeString() {
@@ -235,7 +269,7 @@ public class Lexeme implements Comparable<Lexeme>{
return "TYPE_CQUAN"; return "TYPE_CQUAN";
default: default:
return "UNKONW"; return "UNKNOWN";
} }
} }
@@ -246,6 +280,7 @@ public class Lexeme implements Comparable<Lexeme>{
/** /**
* 合并两个相邻的词元 * 合并两个相邻的词元
*
* @return boolean 词元是否成功合并 * @return boolean 词元是否成功合并
*/ */
boolean append(Lexeme l, int lexemeType) { boolean append(Lexeme l, int lexemeType) {
@@ -258,9 +293,10 @@ public class Lexeme implements Comparable<Lexeme>{
} }
} }
/** /**
* ToString 方法
* *
* @return 字符串输出
*/ */
public String toString() { public String toString() {
return this.getBeginPosition() + "-" + this.getEndPosition() + return this.getBeginPosition() + "-" + this.getEndPosition() +


@@ -1,6 +1,6 @@
/* /*
* IK 中文分词 版本 8.1.0 * IK 中文分词 版本 8.5.0
* IK Analyzer release 8.1.0 * IK Analyzer release 8.5.0
* *
* Licensed to the Apache Software Foundation (ASF) under one or more * Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with * contributor license agreements. See the NOTICE file distributed with
@@ -21,8 +21,8 @@
* 版权声明 2012乌龙茶工作室 * 版权声明 2012乌龙茶工作室
* provided by Linliangyi and copyright 2012 by Oolong studio * provided by Linliangyi and copyright 2012 by Oolong studio
* *
* 8.1.0版本 Magese (magese@live.cn) 更新 * 8.5.0版本 Magese (magese@live.cn) 更新
* release 8.1.0 update by Magese(magese@live.cn) * release 8.5.0 update by Magese(magese@live.cn)
* *
*/ */
package org.wltea.analyzer.core; package org.wltea.analyzer.core;
@@ -34,11 +34,17 @@ package org.wltea.analyzer.core;
@SuppressWarnings("unused") @SuppressWarnings("unused")
class LexemePath extends QuickSortSet implements Comparable<LexemePath> { class LexemePath extends QuickSortSet implements Comparable<LexemePath> {
//起始位置 /**
* 起始位置
*/
private int pathBegin; private int pathBegin;
//结束 /**
* 结束
*/
private int pathEnd; private int pathEnd;
//词元链的有效字符长度 /**
* 词元链的有效字符长度
*/
private int payloadLength; private int payloadLength;
LexemePath() { LexemePath() {
@@ -100,7 +106,6 @@ class LexemePath extends QuickSortSet implements Comparable<LexemePath> {
/** /**
* 移除尾部的Lexeme * 移除尾部的Lexeme
*
*/ */
void removeTail() { void removeTail() {
Lexeme tail = this.pollLast(); Lexeme tail = this.pollLast();
@@ -117,7 +122,6 @@ class LexemePath extends QuickSortSet implements Comparable<LexemePath> {
/** /**
* 检测词元位置交叉有歧义的切分 * 检测词元位置交叉有歧义的切分
*
*/ */
boolean checkCross(Lexeme lexeme) { boolean checkCross(Lexeme lexeme) {
return (lexeme.getBegin() >= this.pathBegin && lexeme.getBegin() < this.pathEnd) return (lexeme.getBegin() >= this.pathBegin && lexeme.getBegin() < this.pathEnd)
@@ -141,7 +145,6 @@ class LexemePath extends QuickSortSet implements Comparable<LexemePath> {
/** /**
* 获取LexemePath的路径长度 * 获取LexemePath的路径长度
*
*/ */
private int getPathLength() { private int getPathLength() {
return this.pathEnd - this.pathBegin; return this.pathEnd - this.pathBegin;
@@ -150,7 +153,6 @@ class LexemePath extends QuickSortSet implements Comparable<LexemePath> {
/** /**
* X权重词元长度积 * X权重词元长度积
*
*/ */
private int getXWeight() { private int getXWeight() {
int product = 1; int product = 1;
@@ -196,31 +198,36 @@ class LexemePath extends QuickSortSet implements Comparable<LexemePath> {
return -1; return -1;
} else if (this.payloadLength < o.payloadLength) { } else if (this.payloadLength < o.payloadLength) {
return 1; return 1;
} else { }
// 比较词元个数越少越好 // 比较词元个数越少越好
if (this.size() < o.size()) { if (this.size() < o.size()) {
return -1; return -1;
} else if (this.size() > o.size()) { } else if (this.size() > o.size()) {
return 1; return 1;
} else { }
// 路径跨度越大越好 // 路径跨度越大越好
if (this.getPathLength() > o.getPathLength()) { if (this.getPathLength() > o.getPathLength()) {
return -1; return -1;
} else if (this.getPathLength() < o.getPathLength()) { } else if (this.getPathLength() < o.getPathLength()) {
return 1; return 1;
} else { }
// 根据统计学结论逆向切分概率高于正向切分因此位置越靠后的优先 // 根据统计学结论逆向切分概率高于正向切分因此位置越靠后的优先
if (this.pathEnd > o.pathEnd) { if (this.pathEnd > o.pathEnd) {
return -1; return -1;
} else if (pathEnd < o.pathEnd) { } else if (pathEnd < o.pathEnd) {
return 1; return 1;
} else { }
// 词长越平均越好 // 词长越平均越好
if (this.getXWeight() > o.getXWeight()) { if (this.getXWeight() > o.getXWeight()) {
return -1; return -1;
} else if (this.getXWeight() < o.getXWeight()) { } else if (this.getXWeight() < o.getXWeight()) {
return 1; return 1;
} else { }
// 词元位置权重比较 // 词元位置权重比较
if (this.getPWeight() > o.getPWeight()) { if (this.getPWeight() > o.getPWeight()) {
return -1; return -1;
@@ -228,11 +235,6 @@ class LexemePath extends QuickSortSet implements Comparable<LexemePath> {
return 1; return 1;
} }
}
}
}
}
}
return 0; return 0;
} }
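The rewritten `compareTo` above replaces the nested `else` pyramid with sequential guard clauses; because every branch returns, dropping the `else` blocks is behavior-preserving while flattening five levels of indentation. A reduced sketch of the pattern using just the first two criteria (names are illustrative, not IK's API):

```java
// Sketch of the guard-clause refactoring applied to LexemePath.compareTo:
// each criterion either decides the order (return) or falls through to the
// next, so no nesting is needed.
public class TieBreakDemo {
    /** Larger payload wins first; on a tie, fewer items win. */
    public static int compare(int payloadA, int sizeA, int payloadB, int sizeB) {
        if (payloadA > payloadB) return -1;   // more effective text first
        if (payloadA < payloadB) return 1;
        if (sizeA < sizeB) return -1;         // then: fewer lexemes first
        if (sizeA > sizeB) return 1;
        return 0;                             // full tie
    }

    public static void main(String[] args) {
        // equal payload, so the path with fewer items sorts first
        System.out.println(compare(5, 2, 5, 3));
    }
}
```

The real method chains four more criteria the same way (path span, end position, X-weight, P-weight) before returning 0.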


@@ -1,6 +1,6 @@
/* /*
* IK 中文分词 版本 8.1.0 * IK 中文分词 版本 8.5.0
* IK Analyzer release 8.1.0 * IK Analyzer release 8.5.0
* *
* Licensed to the Apache Software Foundation (ASF) under one or more * Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with * contributor license agreements. See the NOTICE file distributed with
@@ -21,21 +21,27 @@
* 版权声明 2012乌龙茶工作室 * 版权声明 2012乌龙茶工作室
* provided by Linliangyi and copyright 2012 by Oolong studio * provided by Linliangyi and copyright 2012 by Oolong studio
* *
* 8.1.0版本 Magese (magese@live.cn) 更新 * 8.2.0版本 Magese (magese@live.cn) 更新
* release 8.1.0 update by Magese(magese@live.cn) * release 8.2.0 update by Magese(magese@live.cn)
* *
*/ */
package org.wltea.analyzer.core; package org.wltea.analyzer.core;
/** /**
* IK分词器专用的Lexem快速排序集合 * IK分词器专用的Lexeme快速排序集合
*/ */
class QuickSortSet { class QuickSortSet {
//链表头 /**
* 链表头
*/
private Cell head; private Cell head;
//链表尾 /**
* 链表尾
*/
private Cell tail; private Cell tail;
//链表的实际大小 /**
* 链表的实际大小
*/
private int size; private int size;
QuickSortSet() { QuickSortSet() {
@@ -53,15 +59,15 @@ class QuickSortSet {
this.size++; this.size++;
} else { } else {
/*if(this.tail.compareTo(newCell) == 0){//词元与尾部词元相同不放入集合 if (this.tail.compareTo(newCell) < 0) {
// 词元接入链表尾部
}else */if(this.tail.compareTo(newCell) < 0){//词元接入链表尾部
this.tail.next = newCell; this.tail.next = newCell;
newCell.prev = this.tail; newCell.prev = this.tail;
this.tail = newCell; this.tail = newCell;
this.size++; this.size++;
}else if(this.head.compareTo(newCell) > 0){//词元接入链表头部 } else if (this.head.compareTo(newCell) > 0) {
// 词元接入链表头部
this.head.prev = newCell; this.head.prev = newCell;
newCell.next = this.head; newCell.next = this.head;
this.head = newCell; this.head = newCell;
@@ -73,9 +79,9 @@ class QuickSortSet {
while (index != null && index.compareTo(newCell) > 0) { while (index != null && index.compareTo(newCell) > 0) {
index = index.prev; index = index.prev;
} }
/*if(index.compareTo(newCell) == 0){//词元与集合中的词元重复不放入集合
}else */if((index != null ? index.compareTo(newCell) : 1) < 0){//词元插入链表中的某个位置 // 词元插入链表中的某个位置
if ((index != null ? index.compareTo(newCell) : 1) < 0) {
newCell.prev = index; newCell.prev = index;
newCell.next = index.next; newCell.next = index.next;
index.next.prev = newCell; index.next.prev = newCell;
@@ -98,6 +104,7 @@ class QuickSortSet {
/** /**
* 取出链表集合的第一个元素 * 取出链表集合的第一个元素
*
* @return Lexeme * @return Lexeme
*/ */
Lexeme pollFirst() { Lexeme pollFirst() {
@@ -129,6 +136,7 @@ class QuickSortSet {
/** /**
* 取出链表集合的最后一个元素 * 取出链表集合的最后一个元素
*
* @return Lexeme * @return Lexeme
*/ */
Lexeme pollLast() { Lexeme pollLast() {
@@ -172,15 +180,15 @@ class QuickSortSet {
} }
/* /*
* IK 中文分词 版本 7.0 * IK 中文分词 版本 8.5.0
* IK Analyzer release 7.0 * IK Analyzer release 8.5.0
* update by Magese(magese@live.cn) * update by Magese(magese@live.cn)
*/ */
@SuppressWarnings("unused") @SuppressWarnings("unused")
class Cell implements Comparable<Cell>{ static class Cell implements Comparable<Cell> {
private Cell prev; private Cell prev;
private Cell next; private Cell next;
private Lexeme lexeme; private final Lexeme lexeme;
Cell(Lexeme lexeme) { Cell(Lexeme lexeme) {
if (lexeme == null) { if (lexeme == null) {
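The insertion logic the diff cleans up (tail append, head prepend, then a backward walk from the tail) is the core of QuickSortSet's sorted doubly linked list. A self-contained sketch with `int` values standing in for lexemes (not the project's class):

```java
// Sketch of QuickSortSet's sorted insertion: try the tail (common case:
// append), then the head, otherwise walk backwards to the insertion point.
public class SortedListDemo {
    static final class Cell {
        Cell prev, next;
        final int value;
        Cell(int value) { this.value = value; }
    }

    private Cell head, tail;
    private int size;

    void addLexeme(int value) {
        Cell cell = new Cell(value);
        if (head == null) {                       // empty list
            head = tail = cell;
        } else if (tail.value <= value) {         // append at the tail
            tail.next = cell; cell.prev = tail; tail = cell;
        } else if (head.value >= value) {         // prepend at the head
            head.prev = cell; cell.next = head; head = cell;
        } else {                                  // walk back from the tail
            Cell index = tail;
            while (index != null && index.value > value) index = index.prev;
            cell.prev = index; cell.next = index.next;
            index.next.prev = cell; index.next = cell;
        }
        size++;
    }

    String dump() {
        StringBuilder sb = new StringBuilder();
        for (Cell c = head; c != null; c = c.next) sb.append(c.value).append(' ');
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        SortedListDemo list = new SortedListDemo();
        for (int v : new int[]{5, 1, 3, 4, 2}) list.addLexeme(v);
        System.out.println(list.dump());
    }
}
```

Because the head and tail cases are checked first, the backward walk can never fall off the front of the list, which is why the commented-out duplicate checks could be deleted cleanly.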


@@ -1,6 +1,6 @@
/* /*
* IK 中文分词 版本 8.1.0 * IK 中文分词 版本 8.5.0
* IK Analyzer release 8.1.0 * IK Analyzer release 8.5.0
* *
* Licensed to the Apache Software Foundation (ASF) under one or more * Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with * contributor license agreements. See the NOTICE file distributed with
@@ -21,8 +21,8 @@
* 版权声明 2012乌龙茶工作室 * 版权声明 2012乌龙茶工作室
* provided by Linliangyi and copyright 2012 by Oolong studio * provided by Linliangyi and copyright 2012 by Oolong studio
* *
* 8.1.0版本 Magese (magese@live.cn) 更新 * 8.5.0版本 Magese (magese@live.cn) 更新
* release 8.1.0 update by Magese(magese@live.cn) * release 8.5.0 update by Magese(magese@live.cn)
* *
*/ */
package org.wltea.analyzer.dic; package org.wltea.analyzer.dic;
@@ -37,24 +37,38 @@ import java.util.Map;
@SuppressWarnings("unused") @SuppressWarnings("unused")
class DictSegment implements Comparable<DictSegment> { class DictSegment implements Comparable<DictSegment> {
//公用字典表存储汉字 /**
* 公用字典表存储汉字
*/
private static final Map<Character, Character> charMap = new HashMap<>(16, 0.95f); private static final Map<Character, Character> charMap = new HashMap<>(16, 0.95f);
//数组大小上限 /**
* 数组大小上限
*/
private static final int ARRAY_LENGTH_LIMIT = 3; private static final int ARRAY_LENGTH_LIMIT = 3;
//Map存储结构 /**
private Map<Character, DictSegment> childrenMap; * Map存储结构
//数组方式存储结构 */
private DictSegment[] childrenArray; private volatile Map<Character, DictSegment> childrenMap;
/**
* 数组方式存储结构
*/
private volatile DictSegment[] childrenArray;
//当前节点上存储的字符 /**
private Character nodeChar; * 当前节点上存储的字符
//当前节点存储的Segment数目 */
//storeSize <=ARRAY_LENGTH_LIMIT 使用数组存储 storeSize >ARRAY_LENGTH_LIMIT ,则使用Map存储 private final Character nodeChar;
/**
* 当前节点存储的Segment数目
* storeSize <=ARRAY_LENGTH_LIMIT 使用数组存储 storeSize >ARRAY_LENGTH_LIMIT ,则使用Map存储
*/
private int storeSize = 0; private int storeSize = 0;
//当前DictSegment状态 ,默认 0 , 1表示从根节点到当前节点的路径表示一个词 /**
* 当前DictSegment状态 ,默认 0 , 1表示从根节点到当前节点的路径表示一个词
*/
private int nodeState = 0; private int nodeState = 0;
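The comment promoted to Javadoc above documents DictSegment's storage rule: up to `ARRAY_LENGTH_LIMIT` (3) children live in an array, beyond that in a `Map`. A greatly simplified sketch of that promotion (child nodes reduced to characters; illustrative only, not the actual DictSegment code):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Sketch of DictSegment's hybrid child storage: a small sorted array up to
// a fixed limit, promoted to a HashMap once the limit is exceeded.
public class HybridChildrenDemo {
    private static final int ARRAY_LENGTH_LIMIT = 3;

    private Character[] childrenArray = new Character[ARRAY_LENGTH_LIMIT];
    private Map<Character, Character> childrenMap;
    private int storeSize;

    void add(char c) {
        if (childrenMap != null) {                // already promoted to map
            if (childrenMap.put(c, c) == null) storeSize++;
            return;
        }
        for (int i = 0; i < storeSize; i++)       // dedupe within the array
            if (childrenArray[i] == c) return;
        if (storeSize < ARRAY_LENGTH_LIMIT) {
            childrenArray[storeSize++] = c;       // array still has room
            Arrays.sort(childrenArray, 0, storeSize);
        } else {                                  // promote array -> map
            childrenMap = new HashMap<>(ARRAY_LENGTH_LIMIT * 2);
            for (int i = 0; i < storeSize; i++) childrenMap.put(childrenArray[i], childrenArray[i]);
            childrenMap.put(c, c);
            childrenArray = null;
            storeSize++;
        }
    }

    boolean usesMap() { return childrenMap != null; }
    int size() { return storeSize; }

    public static void main(String[] args) {
        HybridChildrenDemo node = new HybridChildrenDemo();
        for (char c : "abcd".toCharArray()) node.add(c);
        System.out.println(node.usesMap() + " " + node.size());
    }
}
```

The trade-off: most trie nodes have very few children, so a tiny sorted array is cheaper than a HashMap; only dense nodes pay for the map. Marking both references `volatile`, as the diff does, makes the promotion safely visible to concurrent readers.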


@@ -1,6 +1,6 @@
/* /*
* IK 中文分词 版本 8.1.0 * IK 中文分词 版本 8.5.0
* IK Analyzer release 8.1.0 * IK Analyzer release 8.5.0
* *
* Licensed to the Apache Software Foundation (ASF) under one or more * Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with * contributor license agreements. See the NOTICE file distributed with
@@ -21,20 +21,20 @@
* 版权声明 2012乌龙茶工作室 * 版权声明 2012乌龙茶工作室
* provided by Linliangyi and copyright 2012 by Oolong studio * provided by Linliangyi and copyright 2012 by Oolong studio
* *
* 8.1.0版本 Magese (magese@live.cn) 更新 * 8.5.0版本 Magese (magese@live.cn) 更新
* release 8.1.0 update by Magese(magese@live.cn) * release 8.5.0 update by Magese(magese@live.cn)
* *
*/ */
package org.wltea.analyzer.dic; package org.wltea.analyzer.dic;
import org.wltea.analyzer.cfg.Configuration;
import org.wltea.analyzer.cfg.DefaultConfig;
import java.io.*; import java.io.*;
import java.nio.charset.StandardCharsets; import java.nio.charset.StandardCharsets;
import java.util.Collection; import java.util.Collection;
import java.util.List; import java.util.List;
import org.wltea.analyzer.cfg.Configuration;
import org.wltea.analyzer.cfg.DefaultConfig;
/** /**
* 词典管理类单例模式 * 词典管理类单例模式
*/ */
@@ -44,7 +44,7 @@ public class Dictionary {
/* /*
* 词典单子实例 * 词典单子实例
*/ */
private static Dictionary singleton; private static volatile Dictionary singleton;
/* /*
* 主词典对象 * 主词典对象
@@ -63,7 +63,7 @@ public class Dictionary {
/** /**
* 配置对象 * 配置对象
*/ */
private Configuration cfg; private final Configuration cfg;
/** /**
* 私有构造方法阻止外部直接实例化本类 * 私有构造方法阻止外部直接实例化本类
@@ -226,22 +226,15 @@ public class Dictionary {
private void loadMainDict() { private void loadMainDict() {
// 建立一个主词典实例 // 建立一个主词典实例
_MainDict = new DictSegment((char) 0); _MainDict = new DictSegment((char) 0);
// 获取是否加载主词典
if (cfg.useMainDict()) {
// 读取主词典文件 // 读取主词典文件
InputStream is = this.getClass().getClassLoader().getResourceAsStream(cfg.getMainDictionary()); InputStream is = this.getClass().getClassLoader().getResourceAsStream(cfg.getMainDictionary());
if (is == null) { if (is == null) {
throw new RuntimeException("Main Dictionary not found!!!"); throw new RuntimeException("Main Dictionary not found!!!");
} }
try { try {
BufferedReader br = new BufferedReader(new InputStreamReader(is, StandardCharsets.UTF_8), 512); readDict(is, _MainDict);
String theWord;
do {
theWord = br.readLine();
if (theWord != null && !"".equals(theWord.trim())) {
_MainDict.fillSegment(theWord.trim().toLowerCase().toCharArray());
}
} while (theWord != null);
} catch (IOException ioe) { } catch (IOException ioe) {
System.err.println("Main Dictionary loading exception."); System.err.println("Main Dictionary loading exception.");
ioe.printStackTrace(); ioe.printStackTrace();
@@ -253,6 +246,7 @@ public class Dictionary {
e.printStackTrace(); e.printStackTrace();
} }
} }
}
// 加载扩展词典 // 加载扩展词典
this.loadExtDict(); this.loadExtDict();
} }
@@ -274,17 +268,7 @@ public class Dictionary {
continue; continue;
} }
try { try {
BufferedReader br = new BufferedReader(new InputStreamReader(is, StandardCharsets.UTF_8), 512); readDict(is, _MainDict);
String theWord;
do {
theWord = br.readLine();
if (theWord != null && !"".equals(theWord.trim())) {
// 加载扩展词典数据到主内存词典中
// System.out.println(theWord);
_MainDict.fillSegment(theWord.trim().toLowerCase().toCharArray());
}
} while (theWord != null);
} catch (IOException ioe) { } catch (IOException ioe) {
System.err.println("Extension Dictionary loading exception."); System.err.println("Extension Dictionary loading exception.");
ioe.printStackTrace(); ioe.printStackTrace();
@@ -319,17 +303,7 @@ public class Dictionary {
continue; continue;
} }
try { try {
BufferedReader br = new BufferedReader(new InputStreamReader(is, StandardCharsets.UTF_8), 512); readDict(is, _StopWordDict);
String theWord;
do {
theWord = br.readLine();
if (theWord != null && !"".equals(theWord.trim())) {
// System.out.println(theWord);
// 加载扩展停止词典数据到内存中
_StopWordDict.fillSegment(theWord.trim().toLowerCase().toCharArray());
}
} while (theWord != null);
} catch (IOException ioe) { } catch (IOException ioe) {
System.err.println("Extension Stop word Dictionary loading exception."); System.err.println("Extension Stop word Dictionary loading exception.");
ioe.printStackTrace(); ioe.printStackTrace();
@@ -352,20 +326,12 @@ public class Dictionary {
// 建立一个量词典实例 // 建立一个量词典实例
_QuantifierDict = new DictSegment((char) 0); _QuantifierDict = new DictSegment((char) 0);
// 读取量词词典文件 // 读取量词词典文件
InputStream is = this.getClass().getClassLoader().getResourceAsStream(cfg.getQuantifierDicionary()); InputStream is = this.getClass().getClassLoader().getResourceAsStream(cfg.getQuantifierDictionary());
if (is == null) { if (is == null) {
throw new RuntimeException("Quantifier Dictionary not found!!!"); throw new RuntimeException("Quantifier Dictionary not found!!!");
} }
try { try {
BufferedReader br = new BufferedReader(new InputStreamReader(is, StandardCharsets.UTF_8), 512); readDict(is, _QuantifierDict);
String theWord;
do {
theWord = br.readLine();
if (theWord != null && !"".equals(theWord.trim())) {
_QuantifierDict.fillSegment(theWord.trim().toLowerCase().toCharArray());
}
} while (theWord != null);
} catch (IOException ioe) { } catch (IOException ioe) {
System.err.println("Quantifier Dictionary loading exception."); System.err.println("Quantifier Dictionary loading exception.");
ioe.printStackTrace(); ioe.printStackTrace();
@@ -379,4 +345,21 @@ public class Dictionary {
} }
} }
/**
* 读取词典文件到词典树中
*
* @param is 文件输入流
* @param dictSegment 词典树分段
* @throws IOException 读取异常
*/
private void readDict(InputStream is, DictSegment dictSegment) throws IOException {
BufferedReader br = new BufferedReader(new InputStreamReader(is, StandardCharsets.UTF_8), 512);
String theWord;
do {
theWord = br.readLine();
if (theWord != null && !"".equals(theWord.trim())) {
dictSegment.fillSegment(theWord.trim().toLowerCase().toCharArray());
}
} while (theWord != null);
}
} }
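The new `readDict` above collapses three copy-pasted read loops (main, extension, and stop-word dictionaries) into one helper. A runnable sketch of the same extraction, with a `Set` standing in for `DictSegment` and try-with-resources closing the reader (which the diffed code does not do):

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashSet;
import java.util.Set;

// Sketch of the readDict extraction: one helper reads a word-per-line
// dictionary stream; a Set stands in for the DictSegment trie.
public class ReadDictDemo {
    /** Reads one trimmed, lower-cased word per line, skipping blank lines. */
    static Set<String> readDict(InputStream is) throws IOException {
        Set<String> dict = new LinkedHashSet<>();
        try (BufferedReader br = new BufferedReader(
                new InputStreamReader(is, StandardCharsets.UTF_8), 512)) {
            String theWord;
            while ((theWord = br.readLine()) != null) {
                if (!theWord.trim().isEmpty()) {
                    dict.add(theWord.trim().toLowerCase());
                }
            }
        }
        return dict;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "Hello\n\n  World \n".getBytes(StandardCharsets.UTF_8);
        System.out.println(readDict(new ByteArrayInputStream(data)));
    }
}
```

Deduplicating the loop means a future change (for example, closing the stream or logging) now has one place to live instead of three.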


@@ -1,6 +1,6 @@
/* /*
* IK 中文分词 版本 8.1.0 * IK 中文分词 版本 8.5.0
* IK Analyzer release 8.1.0 * IK Analyzer release 8.5.0
* *
* Licensed to the Apache Software Foundation (ASF) under one or more * Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with * contributor license agreements. See the NOTICE file distributed with
@@ -21,8 +21,8 @@
* 版权声明 2012乌龙茶工作室 * 版权声明 2012乌龙茶工作室
* provided by Linliangyi and copyright 2012 by Oolong studio * provided by Linliangyi and copyright 2012 by Oolong studio
* *
* 8.1.0版本 Magese (magese@live.cn) 更新 * 8.5.0版本 Magese (magese@live.cn) 更新
* release 8.1.0 update by Magese(magese@live.cn) * release 8.5.0 update by Magese(magese@live.cn)
* *
*/ */
package org.wltea.analyzer.dic; package org.wltea.analyzer.dic;
@@ -32,24 +32,33 @@ package org.wltea.analyzer.dic;
*/ */
@SuppressWarnings("unused") @SuppressWarnings("unused")
public class Hit { public class Hit {
//Hit不匹配 /**
* Hit不匹配
*/
private static final int UNMATCH = 0x00000000; private static final int UNMATCH = 0x00000000;
//Hit完全匹配 /**
* Hit完全匹配
*/
private static final int MATCH = 0x00000001; private static final int MATCH = 0x00000001;
//Hit前缀匹配 /**
* Hit前缀匹配
*/
private static final int PREFIX = 0x00000010; private static final int PREFIX = 0x00000010;
//该HIT当前状态默认未匹配 /**
* 该HIT当前状态默认未匹配
*/
private int hitState = UNMATCH; private int hitState = UNMATCH;
/**
//记录词典匹配过程中当前匹配到的词典分支节点 * 记录词典匹配过程中当前匹配到的词典分支节点
*/
private DictSegment matchedDictSegment; private DictSegment matchedDictSegment;
/* /**
* 词段开始位置 * 词段开始位置
*/ */
private int begin; private int begin;
/* /**
* 词段的结束位置 * 词段的结束位置
*/ */
private int end; private int end;
@@ -86,9 +95,7 @@ public class Hit {
public boolean isUnmatch() { public boolean isUnmatch() {
return this.hitState == UNMATCH ; return this.hitState == UNMATCH ;
} }
/**
*
*/
void setUnmatch() { void setUnmatch() {
this.hitState = UNMATCH; this.hitState = UNMATCH;
} }
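In the Hit class above, MATCH (`0x01`) and PREFIX (`0x10`) occupy distinct bits, so a hit can simultaneously be a complete dictionary word and the prefix of a longer entry. A minimal sketch of that flag arithmetic (constants and method names mirror the diff; the class itself is illustrative):

```java
// Sketch of Hit's bit-flag state: MATCH and PREFIX are independent bits
// combined with |, tested with &, so both can be set at once.
public class HitStateDemo {
    static final int UNMATCH = 0x00000000;
    static final int MATCH   = 0x00000001;
    static final int PREFIX  = 0x00000010;

    private int hitState = UNMATCH;

    void setMatch()  { hitState |= MATCH; }
    void setPrefix() { hitState |= PREFIX; }

    boolean isMatch()   { return (hitState & MATCH) > 0; }
    boolean isPrefix()  { return (hitState & PREFIX) > 0; }
    boolean isUnmatch() { return hitState == UNMATCH; }

    public static void main(String[] args) {
        HitStateDemo hit = new HitStateDemo();
        hit.setMatch();
        hit.setPrefix();   // a word can also be a prefix of a longer word
        System.out.println(hit.isMatch() + " " + hit.isPrefix());
    }
}
```

This is why the empty Javadoc stub on `setUnmatch` could simply be deleted: the flag semantics are carried by the constants, not the setter.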


@@ -1,6 +1,6 @@
/* /*
* IK 中文分词 版本 8.1.0 * IK 中文分词 版本 8.5.0
* IK Analyzer release 8.1.0 * IK Analyzer release 8.5.0
* *
* Licensed to the Apache Software Foundation (ASF) under one or more * Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with * contributor license agreements. See the NOTICE file distributed with
@@ -21,8 +21,8 @@
* 版权声明 2012乌龙茶工作室 * 版权声明 2012乌龙茶工作室
* provided by Linliangyi and copyright 2012 by Oolong studio * provided by Linliangyi and copyright 2012 by Oolong studio
* *
* 8.1.0版本 Magese (magese@live.cn) 更新 * 8.5.0版本 Magese (magese@live.cn) 更新
* release 8.1.0 update by Magese(magese@live.cn) * release 8.5.0 update by Magese(magese@live.cn)
* *
*/ */
package org.wltea.analyzer.lucene; package org.wltea.analyzer.lucene;
@@ -36,19 +36,15 @@ import org.apache.lucene.analysis.Tokenizer;
@SuppressWarnings("unused") @SuppressWarnings("unused")
public final class IKAnalyzer extends Analyzer { public final class IKAnalyzer extends Analyzer {
private boolean useSmart; private final boolean useSmart;
private boolean useSmart() { private boolean useSmart() {
return useSmart; return useSmart;
} }
public void setUseSmart(boolean useSmart) {
this.useSmart = useSmart;
}
/** /**
* IK分词器Lucene Analyzer接口实现类 * IK分词器Lucene Analyzer接口实现类
*
* 默认细粒度切分算法 * 默认细粒度切分算法
*/ */
public IKAnalyzer() { public IKAnalyzer() {


@@ -1,6 +1,6 @@
/* /*
* IK 中文分词 版本 8.1.0 * IK 中文分词 版本 8.5.0
* IK Analyzer release 8.1.0 * IK Analyzer release 8.5.0
* *
* Licensed to the Apache Software Foundation (ASF) under one or more * Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with * contributor license agreements. See the NOTICE file distributed with
@@ -21,8 +21,8 @@
* 版权声明 2012乌龙茶工作室 * 版权声明 2012乌龙茶工作室
* provided by Linliangyi and copyright 2012 by Oolong studio * provided by Linliangyi and copyright 2012 by Oolong studio
* *
* 8.1.0版本 Magese (magese@live.cn) 更新 * 8.5.0版本 Magese (magese@live.cn) 更新
* release 8.1.0 update by Magese(magese@live.cn) * release 8.5.0 update by Magese(magese@live.cn)
* *
*/ */
package org.wltea.analyzer.lucene; package org.wltea.analyzer.lucene;
@@ -39,21 +39,30 @@ import java.io.IOException;
/** /**
* IK分词器 Lucene Tokenizer适配器类 * IK分词器 Lucene Tokenizer适配器类
* 兼容Lucene 4.0版本
*/ */
@SuppressWarnings("unused") @SuppressWarnings({"unused", "FinalMethodInFinalClass"})
public final class IKTokenizer extends Tokenizer { public final class IKTokenizer extends Tokenizer {
//IK分词器实现 /**
* IK分词器实现
*/
private IKSegmenter _IKImplement; private IKSegmenter _IKImplement;
//词元文本属性 /**
* 词元文本属性
*/
private CharTermAttribute termAtt; private CharTermAttribute termAtt;
//词元位移属性 /**
* 词元位移属性
*/
private OffsetAttribute offsetAtt; private OffsetAttribute offsetAtt;
//词元分类属性该属性分类参考org.wltea.analyzer.core.Lexeme中的分类常量 /**
* 词元分类属性该属性分类参考org.wltea.analyzer.core.Lexeme中的分类常量
*/
private TypeAttribute typeAtt; private TypeAttribute typeAtt;
//记录最后一个词元的结束位置 /**
* 记录最后一个词元的结束位置
*/
private int endPosition; private int endPosition;
/** /**
@@ -84,7 +93,8 @@ public final class IKTokenizer extends Tokenizer {
_IKImplement = new IKSegmenter(input, useSmart); _IKImplement = new IKSegmenter(input, useSmart);
} }
/* (non-Javadoc) /*
* (non-Javadoc)
* @see org.apache.lucene.analysis.TokenStream#incrementToken() * @see org.apache.lucene.analysis.TokenStream#incrementToken()
*/ */
@Override @Override


@@ -1,6 +1,6 @@
/* /*
* IK 中文分词 版本 8.1.0 * IK 中文分词 版本 8.5.0
* IK Analyzer release 8.1.0 * IK Analyzer release 8.5.0
* *
* Licensed to the Apache Software Foundation (ASF) under one or more * Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with * contributor license agreements. See the NOTICE file distributed with
@@ -21,8 +21,8 @@
* 版权声明 2012乌龙茶工作室 * 版权声明 2012乌龙茶工作室
* provided by Linliangyi and copyright 2012 by Oolong studio * provided by Linliangyi and copyright 2012 by Oolong studio
* *
* 8.1.0版本 Magese (magese@live.cn) 更新 * 8.5.0版本 Magese (magese@live.cn) 更新
* release 8.1.0 update by Magese(magese@live.cn) * release 8.5.0 update by Magese(magese@live.cn)
* *
*/ */
package org.wltea.analyzer.lucene; package org.wltea.analyzer.lucene;
@@ -44,6 +44,8 @@ import java.nio.charset.StandardCharsets;
import java.util.*; import java.util.*;
/** /**
* 分词器工厂类
*
* @author <a href="magese@live.cn">Magese</a> * @author <a href="magese@live.cn">Magese</a>
*/ */
public class IKTokenizerFactory extends TokenizerFactory implements ResourceLoaderAware, UpdateThread.UpdateJob { public class IKTokenizerFactory extends TokenizerFactory implements ResourceLoaderAware, UpdateThread.UpdateJob {
@@ -74,7 +76,7 @@ public class IKTokenizerFactory extends TokenizerFactory implements ResourceLoad
*/ */
@Override @Override
public void inform(ResourceLoader resourceLoader) throws IOException { public void inform(ResourceLoader resourceLoader) throws IOException {
System.out.println(String.format("IKTokenizerFactory " + this.hashCode() + " inform conf: %s", getConf())); System.out.printf("IKTokenizerFactory " + this.hashCode() + " inform conf: %s%n", getConf());
this.loader = resourceLoader; this.loader = resourceLoader;
update(); update();
if ((getConf() != null) && (!getConf().trim().isEmpty())) { if ((getConf() != null) && (!getConf().trim().isEmpty())) {


@@ -1,6 +1,6 @@
/* /*
* IK 中文分词 版本 8.1.0 * IK 中文分词 版本 8.5.0
* IK Analyzer release 8.1.0 * IK Analyzer release 8.5.0
* *
* Licensed to the Apache Software Foundation (ASF) under one or more * Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with * contributor license agreements. See the NOTICE file distributed with
@@ -21,8 +21,8 @@
* 版权声明 2012乌龙茶工作室 * 版权声明 2012乌龙茶工作室
* provided by Linliangyi and copyright 2012 by Oolong studio * provided by Linliangyi and copyright 2012 by Oolong studio
* *
* 8.1.0版本 Magese (magese@live.cn) 更新 * 8.5.0版本 Magese (magese@live.cn) 更新
* release 8.1.0 update by Magese(magese@live.cn) * release 8.5.0 update by Magese(magese@live.cn)
* *
*/ */
package org.wltea.analyzer.lucene; package org.wltea.analyzer.lucene;
@@ -35,7 +35,7 @@ import java.util.Vector;
*/ */
public class UpdateThread implements Runnable { public class UpdateThread implements Runnable {
private static final long INTERVAL = 30000L; // 循环等待时间 private static final long INTERVAL = 30000L; // 循环等待时间
private Vector<UpdateJob> filterFactorys; // 更新任务集合 private final Vector<UpdateJob> filterFactorys; // 更新任务集合
/** /**
* 私有化构造器阻止外部进行实例化 * 私有化构造器阻止外部进行实例化
@@ -51,7 +51,7 @@ public class UpdateThread implements Runnable {
* 静态内部类实现线程安全单例模式 * 静态内部类实现线程安全单例模式
*/ */
private static class Builder { private static class Builder {
private static UpdateThread singleton = new UpdateThread(); private static final UpdateThread singleton = new UpdateThread();
} }
/** /**
@@ -81,6 +81,7 @@ public class UpdateThread implements Runnable {
//noinspection InfiniteLoopStatement //noinspection InfiniteLoopStatement
while (true) { while (true) {
try { try {
//noinspection BusyWait
Thread.sleep(INTERVAL); Thread.sleep(INTERVAL);
} catch (InterruptedException e) { } catch (InterruptedException e) {
e.printStackTrace(); e.printStackTrace();
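This diff touches two singleton idioms: UpdateThread's static holder field gains `final`, and Dictionary's lazily built instance (earlier in the diff) gains `volatile`. Both are sketched below in one illustrative class (not the project's code):

```java
// Sketch of the two thread-safe singleton idioms in the diff:
// the initialization-on-demand holder (UpdateThread.Builder) and a
// volatile field with double-checked locking (Dictionary.singleton).
public class SingletonDemo {
    private SingletonDemo() { }

    /** Holder idiom: the JVM initializes Builder lazily and thread-safely. */
    private static class Builder {
        private static final SingletonDemo singleton = new SingletonDemo();
    }

    public static SingletonDemo getInstance() {
        return Builder.singleton;
    }

    /**
     * Double-checked locking variant; 'volatile' prevents a half-constructed
     * instance from being published to other threads.
     */
    private static volatile SingletonDemo dclInstance;

    public static SingletonDemo getDclInstance() {
        if (dclInstance == null) {
            synchronized (SingletonDemo.class) {
                if (dclInstance == null) dclInstance = new SingletonDemo();
            }
        }
        return dclInstance;
    }

    public static void main(String[] args) {
        System.out.println(getInstance() == getInstance());
        System.out.println(getDclInstance() == getDclInstance());
    }
}
```

Without `volatile`, double-checked locking is broken under the Java Memory Model, which is why the `private static Dictionary singleton` change earlier in this diff matters and is not just cosmetic.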


@@ -1,6 +1,6 @@
/* /*
* IK 中文分词 版本 8.1.0 * IK 中文分词 版本 8.5.0
* IK Analyzer release 8.1.0 * IK Analyzer release 8.5.0
* *
* Licensed to the Apache Software Foundation (ASF) under one or more * Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with * contributor license agreements. See the NOTICE file distributed with
@@ -21,8 +21,8 @@
* 版权声明 2012乌龙茶工作室 * 版权声明 2012乌龙茶工作室
* provided by Linliangyi and copyright 2012 by Oolong studio * provided by Linliangyi and copyright 2012 by Oolong studio
* *
* 8.1.0版本 Magese (magese@live.cn) 更新 * 8.5.0版本 Magese (magese@live.cn) 更新
* release 8.1.0 update by Magese(magese@live.cn) * release 8.5.0 update by Magese(magese@live.cn)
* *
*/ */
package org.wltea.analyzer.query; package org.wltea.analyzer.query;
@@ -46,11 +46,11 @@ import java.util.Stack;
public class IKQueryExpressionParser { public class IKQueryExpressionParser {
private List<Element> elements = new ArrayList<>(); private final List<Element> elements = new ArrayList<>();
private Stack<Query> querys = new Stack<>(); private final Stack<Query> querys = new Stack<>();
private Stack<Element> operates = new Stack<>(); private final Stack<Element> operates = new Stack<>();
/** /**
* 解析查询表达式生成Lucene Query对象 * 解析查询表达式生成Lucene Query对象
@@ -87,263 +87,263 @@ public class IKQueryExpressionParser {
         if (expression == null) {
             return;
         }
-        Element curretElement = null;
+        Element currentElement = null;
         char[] expChars = expression.toCharArray();
         for (char expChar : expChars) {
             switch (expChar) {
                 case '&':
-                    if (curretElement == null) {
-                        curretElement = new Element();
-                        curretElement.type = '&';
-                        curretElement.append(expChar);
-                    } else if (curretElement.type == '&') {
-                        curretElement.append(expChar);
-                        this.elements.add(curretElement);
-                        curretElement = null;
-                    } else if (curretElement.type == '\'') {
-                        curretElement.append(expChar);
+                    if (currentElement == null) {
+                        currentElement = new Element();
+                        currentElement.type = '&';
+                        currentElement.append(expChar);
+                    } else if (currentElement.type == '&') {
+                        currentElement.append(expChar);
+                        this.elements.add(currentElement);
+                        currentElement = null;
+                    } else if (currentElement.type == '\'') {
+                        currentElement.append(expChar);
                     } else {
-                        this.elements.add(curretElement);
-                        curretElement = new Element();
-                        curretElement.type = '&';
-                        curretElement.append(expChar);
+                        this.elements.add(currentElement);
+                        currentElement = new Element();
+                        currentElement.type = '&';
+                        currentElement.append(expChar);
                     }
                     break;
                 case '|':
-                    if (curretElement == null) {
-                        curretElement = new Element();
-                        curretElement.type = '|';
-                        curretElement.append(expChar);
-                    } else if (curretElement.type == '|') {
-                        curretElement.append(expChar);
-                        this.elements.add(curretElement);
-                        curretElement = null;
-                    } else if (curretElement.type == '\'') {
-                        curretElement.append(expChar);
+                    if (currentElement == null) {
+                        currentElement = new Element();
+                        currentElement.type = '|';
+                        currentElement.append(expChar);
+                    } else if (currentElement.type == '|') {
+                        currentElement.append(expChar);
+                        this.elements.add(currentElement);
+                        currentElement = null;
+                    } else if (currentElement.type == '\'') {
+                        currentElement.append(expChar);
                     } else {
-                        this.elements.add(curretElement);
-                        curretElement = new Element();
-                        curretElement.type = '|';
-                        curretElement.append(expChar);
+                        this.elements.add(currentElement);
+                        currentElement = new Element();
+                        currentElement.type = '|';
+                        currentElement.append(expChar);
                     }
                     break;
                 case '-':
-                    if (curretElement != null) {
-                        if (curretElement.type == '\'') {
-                            curretElement.append(expChar);
+                    if (currentElement != null) {
+                        if (currentElement.type == '\'') {
+                            currentElement.append(expChar);
                             continue;
                         } else {
-                            this.elements.add(curretElement);
+                            this.elements.add(currentElement);
                         }
                     }
-                    curretElement = new Element();
-                    curretElement.type = '-';
-                    curretElement.append(expChar);
-                    this.elements.add(curretElement);
-                    curretElement = null;
+                    currentElement = new Element();
+                    currentElement.type = '-';
+                    currentElement.append(expChar);
+                    this.elements.add(currentElement);
+                    currentElement = null;
                     break;
                 case '(':
-                    if (curretElement != null) {
-                        if (curretElement.type == '\'') {
-                            curretElement.append(expChar);
+                    if (currentElement != null) {
+                        if (currentElement.type == '\'') {
+                            currentElement.append(expChar);
                             continue;
                         } else {
-                            this.elements.add(curretElement);
+                            this.elements.add(currentElement);
                         }
                     }
-                    curretElement = new Element();
-                    curretElement.type = '(';
-                    curretElement.append(expChar);
-                    this.elements.add(curretElement);
-                    curretElement = null;
+                    currentElement = new Element();
+                    currentElement.type = '(';
+                    currentElement.append(expChar);
+                    this.elements.add(currentElement);
+                    currentElement = null;
                     break;
                 case ')':
-                    if (curretElement != null) {
-                        if (curretElement.type == '\'') {
-                            curretElement.append(expChar);
+                    if (currentElement != null) {
+                        if (currentElement.type == '\'') {
+                            currentElement.append(expChar);
                             continue;
                         } else {
-                            this.elements.add(curretElement);
+                            this.elements.add(currentElement);
                         }
                     }
-                    curretElement = new Element();
-                    curretElement.type = ')';
-                    curretElement.append(expChar);
-                    this.elements.add(curretElement);
-                    curretElement = null;
+                    currentElement = new Element();
+                    currentElement.type = ')';
+                    currentElement.append(expChar);
+                    this.elements.add(currentElement);
+                    currentElement = null;
                     break;
                 case ':':
-                    if (curretElement != null) {
-                        if (curretElement.type == '\'') {
-                            curretElement.append(expChar);
+                    if (currentElement != null) {
+                        if (currentElement.type == '\'') {
+                            currentElement.append(expChar);
                             continue;
                         } else {
-                            this.elements.add(curretElement);
+                            this.elements.add(currentElement);
                         }
                     }
-                    curretElement = new Element();
-                    curretElement.type = ':';
-                    curretElement.append(expChar);
-                    this.elements.add(curretElement);
-                    curretElement = null;
+                    currentElement = new Element();
+                    currentElement.type = ':';
+                    currentElement.append(expChar);
+                    this.elements.add(currentElement);
+                    currentElement = null;
                     break;
                 case '=':
-                    if (curretElement != null) {
-                        if (curretElement.type == '\'') {
-                            curretElement.append(expChar);
+                    if (currentElement != null) {
+                        if (currentElement.type == '\'') {
+                            currentElement.append(expChar);
                             continue;
                         } else {
-                            this.elements.add(curretElement);
+                            this.elements.add(currentElement);
                         }
                     }
-                    curretElement = new Element();
-                    curretElement.type = '=';
-                    curretElement.append(expChar);
-                    this.elements.add(curretElement);
-                    curretElement = null;
+                    currentElement = new Element();
+                    currentElement.type = '=';
+                    currentElement.append(expChar);
+                    this.elements.add(currentElement);
+                    currentElement = null;
                     break;
                 case ' ':
-                    if (curretElement != null) {
-                        if (curretElement.type == '\'') {
-                            curretElement.append(expChar);
+                    if (currentElement != null) {
+                        if (currentElement.type == '\'') {
+                            currentElement.append(expChar);
                         } else {
-                            this.elements.add(curretElement);
-                            curretElement = null;
+                            this.elements.add(currentElement);
+                            currentElement = null;
                         }
                     }
                     break;
                 case '\'':
-                    if (curretElement == null) {
-                        curretElement = new Element();
-                        curretElement.type = '\'';
-                    } else if (curretElement.type == '\'') {
-                        this.elements.add(curretElement);
-                        curretElement = null;
+                    if (currentElement == null) {
+                        currentElement = new Element();
+                        currentElement.type = '\'';
+                    } else if (currentElement.type == '\'') {
+                        this.elements.add(currentElement);
+                        currentElement = null;
                     } else {
-                        this.elements.add(curretElement);
-                        curretElement = new Element();
-                        curretElement.type = '\'';
+                        this.elements.add(currentElement);
+                        currentElement = new Element();
+                        currentElement.type = '\'';
                     }
                     break;
                 case '[':
-                    if (curretElement != null) {
-                        if (curretElement.type == '\'') {
-                            curretElement.append(expChar);
+                    if (currentElement != null) {
+                        if (currentElement.type == '\'') {
+                            currentElement.append(expChar);
                             continue;
                         } else {
-                            this.elements.add(curretElement);
+                            this.elements.add(currentElement);
                         }
                     }
-                    curretElement = new Element();
-                    curretElement.type = '[';
-                    curretElement.append(expChar);
-                    this.elements.add(curretElement);
-                    curretElement = null;
+                    currentElement = new Element();
+                    currentElement.type = '[';
+                    currentElement.append(expChar);
+                    this.elements.add(currentElement);
+                    currentElement = null;
                     break;
                 case ']':
-                    if (curretElement != null) {
-                        if (curretElement.type == '\'') {
-                            curretElement.append(expChar);
+                    if (currentElement != null) {
+                        if (currentElement.type == '\'') {
+                            currentElement.append(expChar);
                             continue;
                         } else {
-                            this.elements.add(curretElement);
+                            this.elements.add(currentElement);
                         }
                     }
-                    curretElement = new Element();
-                    curretElement.type = ']';
-                    curretElement.append(expChar);
-                    this.elements.add(curretElement);
-                    curretElement = null;
+                    currentElement = new Element();
+                    currentElement.type = ']';
+                    currentElement.append(expChar);
+                    this.elements.add(currentElement);
+                    currentElement = null;
                     break;
                 case '{':
-                    if (curretElement != null) {
-                        if (curretElement.type == '\'') {
-                            curretElement.append(expChar);
+                    if (currentElement != null) {
+                        if (currentElement.type == '\'') {
+                            currentElement.append(expChar);
                             continue;
                         } else {
-                            this.elements.add(curretElement);
+                            this.elements.add(currentElement);
                         }
                     }
-                    curretElement = new Element();
-                    curretElement.type = '{';
-                    curretElement.append(expChar);
-                    this.elements.add(curretElement);
-                    curretElement = null;
+                    currentElement = new Element();
+                    currentElement.type = '{';
+                    currentElement.append(expChar);
+                    this.elements.add(currentElement);
+                    currentElement = null;
                     break;
                 case '}':
-                    if (curretElement != null) {
-                        if (curretElement.type == '\'') {
-                            curretElement.append(expChar);
+                    if (currentElement != null) {
+                        if (currentElement.type == '\'') {
+                            currentElement.append(expChar);
                             continue;
                         } else {
-                            this.elements.add(curretElement);
+                            this.elements.add(currentElement);
                         }
                     }
-                    curretElement = new Element();
-                    curretElement.type = '}';
-                    curretElement.append(expChar);
-                    this.elements.add(curretElement);
-                    curretElement = null;
+                    currentElement = new Element();
+                    currentElement.type = '}';
+                    currentElement.append(expChar);
+                    this.elements.add(currentElement);
+                    currentElement = null;
                    break;
                 case ',':
-                    if (curretElement != null) {
-                        if (curretElement.type == '\'') {
-                            curretElement.append(expChar);
+                    if (currentElement != null) {
+                        if (currentElement.type == '\'') {
+                            currentElement.append(expChar);
                             continue;
                         } else {
-                            this.elements.add(curretElement);
+                            this.elements.add(currentElement);
                         }
                     }
-                    curretElement = new Element();
-                    curretElement.type = ',';
-                    curretElement.append(expChar);
-                    this.elements.add(curretElement);
-                    curretElement = null;
+                    currentElement = new Element();
+                    currentElement.type = ',';
+                    currentElement.append(expChar);
+                    this.elements.add(currentElement);
+                    currentElement = null;
                     break;
                 default:
-                    if (curretElement == null) {
-                        curretElement = new Element();
-                        curretElement.type = 'F';
-                        curretElement.append(expChar);
-                    } else if (curretElement.type == 'F') {
-                        curretElement.append(expChar);
-                    } else if (curretElement.type == '\'') {
-                        curretElement.append(expChar);
+                    if (currentElement == null) {
+                        currentElement = new Element();
+                        currentElement.type = 'F';
+                        currentElement.append(expChar);
+                    } else if (currentElement.type == 'F') {
+                        currentElement.append(expChar);
+                    } else if (currentElement.type == '\'') {
+                        currentElement.append(expChar);
                     } else {
-                        this.elements.add(curretElement);
-                        curretElement = new Element();
-                        curretElement.type = 'F';
-                        curretElement.append(expChar);
+                        this.elements.add(currentElement);
+                        currentElement = new Element();
+                        currentElement.type = 'F';
+                        currentElement.append(expChar);
                     }
             }
         }
-        if (curretElement != null) {
-            this.elements.add(curretElement);
+        if (currentElement != null) {
+            this.elements.add(currentElement);
         }
     }
@@ -673,7 +673,7 @@ public class IKQueryExpressionParser {
      * @author linliangyi
      * May 20, 2010
      */
-    private class Element {
+    private static class Element {
         char type = 0;
         StringBuffer eleTextBuff;
@@ -692,11 +692,9 @@ public class IKQueryExpressionParser {
     public static void main(String[] args) {
         IKQueryExpressionParser parser = new IKQueryExpressionParser();
-        //String ikQueryExp = "newsTitle:'的两款《魔兽世界》插件Bigfoot和月光宝盒'";
         String ikQueryExp = "(id='ABcdRf' && date:{'20010101','20110101'} && keyword:'魔兽中国') || (content:'KSHT-KSH-A001-18' || ulr='www.ik.com') - name:'林良益'";
         Query result = parser.parseExp(ikQueryExp);
         System.out.println(result);
     }
 }
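The `curretElement` → `currentElement` rename above runs through the parser's tokenizing loop, which accumulates characters into typed elements and flushes the pending element whenever a delimiter or quote boundary is reached. A heavily simplified sketch of that accumulate-and-flush pattern (a hypothetical helper handling only quotes, `&`, `|`, and spaces; the real parser covers many more delimiters and merges `&&`/`||` into single elements):

```java
import java.util.ArrayList;
import java.util.List;

public class Main {
    // Simplified element tokenizer: a quoted run becomes one element, '&' and
    // '|' become single-character elements, spaces separate, all other
    // characters accumulate into the current text element.
    static List<String> tokenize(String exp) {
        List<String> elements = new ArrayList<>();
        StringBuilder current = null;
        boolean inQuote = false;
        for (char c : exp.toCharArray()) {
            if (c == '\'') {
                if (inQuote) {                          // closing quote flushes the literal
                    elements.add(current.toString());
                    current = null;
                } else {                                // opening quote flushes pending text
                    if (current != null) elements.add(current.toString());
                    current = new StringBuilder();
                }
                inQuote = !inQuote;
            } else if (!inQuote && (c == '&' || c == '|' || c == ' ')) {
                if (current != null) {                  // delimiter flushes pending text
                    elements.add(current.toString());
                    current = null;
                }
                if (c != ' ') elements.add(String.valueOf(c));
            } else {
                if (current == null) current = new StringBuilder();
                current.append(c);                      // accumulate into current element
            }
        }
        if (current != null) elements.add(current.toString());  // flush the tail
        return elements;
    }

    public static void main(String[] args) {
        System.out.println(tokenize("id='a&b' && name"));  // [id=, a&b, &, &, name]
    }
}
```

Note how `&` inside the quoted literal is kept as text, exactly the behavior the `type == '\''` branches above protect in every `case`.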


@@ -1,6 +1,6 @@
 /*
- * IK 中文分词 版本 8.1.0
- * IK Analyzer release 8.1.0
+ * IK 中文分词 版本 8.5.0
+ * IK Analyzer release 8.5.0
  *
  * Licensed to the Apache Software Foundation (ASF) under one or more
  * contributor license agreements. See the NOTICE file distributed with
@@ -21,8 +21,8 @@
  * 版权声明 2012乌龙茶工作室
  * provided by Linliangyi and copyright 2012 by Oolong studio
  *
- * 8.1.0版本 Magese (magese@live.cn) 更新
- * release 8.1.0 update by Magese(magese@live.cn)
+ * 8.5.0版本 Magese (magese@live.cn) 更新
+ * release 8.5.0 update by Magese(magese@live.cn)
  *
  */
 package org.wltea.analyzer.query;
@@ -45,6 +45,7 @@ import java.util.List;
  *
  * @author linliangyi
  */
+@SuppressWarnings("unused")
 class SWMCQueryBuilder {

     /**
@@ -118,8 +119,8 @@ class SWMCQueryBuilder {
         // 借助lucene queryparser 生成SWMC Query
         QueryParser qp = new QueryParser(fieldName, new StandardAnalyzer());
-        qp.setAutoGeneratePhraseQueries(false);
         qp.setDefaultOperator(QueryParser.AND_OPERATOR);
+        qp.setAutoGeneratePhraseQueries(true);
         if ((shortCount * 1.0f / totalCount) > 0.5f) {
             try {


@ -1,86 +0,0 @@
/*
* IK 中文分词 版本 8.1.0
* IK Analyzer release 8.1.0
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
* 源代码由林良益(linliangyi2005@gmail.com)提供
* 版权声明 2012乌龙茶工作室
* provided by Linliangyi and copyright 2012 by Oolong studio
*
* 8.1.0版本 Magese (magese@live.cn) 更新
* release 8.1.0 update by Magese(magese@live.cn)
*
*/
package org.wltea.analyzer.sample;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;
import org.apache.lucene.analysis.tokenattributes.TypeAttribute;
import org.wltea.analyzer.lucene.IKAnalyzer;
import java.io.IOException;
import java.io.StringReader;
/**
* 使用IKAnalyzer进行分词的演示
* 2012-10-22
*/
public class IKAnalzyerDemo {
public static void main(String[] args) {
//构建IK分词器使用smart分词模式
Analyzer analyzer = new IKAnalyzer(true);
//获取Lucene的TokenStream对象
TokenStream ts = null;
try {
ts = analyzer.tokenStream("myfield", new StringReader("这是一个中文分词的例子你可以直接运行它IKAnalyer can analysis english text too"));
//获取词元位置属性
OffsetAttribute offset = ts.addAttribute(OffsetAttribute.class);
//获取词元文本属性
CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
//获取词元文本属性
TypeAttribute type = ts.addAttribute(TypeAttribute.class);
//重置TokenStream重置StringReader
ts.reset();
//迭代获取分词结果
while (ts.incrementToken()) {
System.out.println(offset.startOffset() + " - " + offset.endOffset() + " : " + term.toString() + " | " + type.type());
}
//关闭TokenStream关闭StringReader
ts.end(); // Perform end-of-stream operations, e.g. set the final offset.
} catch (IOException e) {
e.printStackTrace();
} finally {
//释放TokenStream的所有资源
if (ts != null) {
try {
ts.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
}
}


@ -1,135 +0,0 @@
/*
* IK 中文分词 版本 8.1.0
* IK Analyzer release 8.1.0
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
* 源代码由林良益(linliangyi2005@gmail.com)提供
* 版权声明 2012乌龙茶工作室
* provided by Linliangyi and copyright 2012 by Oolong studio
*
* 8.1.0版本 Magese (magese@live.cn) 更新
* release 8.1.0 update by Magese(magese@live.cn)
*
*/
package org.wltea.analyzer.sample;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.IndexWriterConfig.OpenMode;
import org.apache.lucene.queryparser.classic.ParseException;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.*;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
import org.wltea.analyzer.lucene.IKAnalyzer;
import java.io.IOException;
/**
* 使用IKAnalyzer进行Lucene索引和查询的演示
* 2012-3-2
* <p>
* 以下是结合Lucene4.0 API的写法
*/
public class LuceneIndexAndSearchDemo {
/**
* 模拟
* 创建一个单条记录的索引并对其进行搜索
*
*/
public static void main(String[] args) {
//Lucene Document的域名
String fieldName = "text";
//检索内容
String text = "IK Analyzer是一个结合词典分词和文法分词的中文分词开源工具包。它使用了全新的正向迭代最细粒度切分算法。";
//实例化IKAnalyzer分词器
Analyzer analyzer = new IKAnalyzer(true);
Directory directory = null;
IndexWriter iwriter;
IndexReader ireader = null;
IndexSearcher isearcher;
try {
//建立内存索引对象
directory = new RAMDirectory();
//配置IndexWriterConfig
IndexWriterConfig iwConfig = new IndexWriterConfig(analyzer);
iwConfig.setOpenMode(OpenMode.CREATE_OR_APPEND);
iwriter = new IndexWriter(directory, iwConfig);
//写入索引
Document doc = new Document();
doc.add(new StringField("ID", "10000", Field.Store.YES));
doc.add(new TextField(fieldName, text, Field.Store.YES));
iwriter.addDocument(doc);
iwriter.close();
//搜索过程**********************************
//实例化搜索器
ireader = DirectoryReader.open(directory);
isearcher = new IndexSearcher(ireader);
String keyword = "中文分词工具包";
//使用QueryParser查询分析器构造Query对象
QueryParser qp = new QueryParser(fieldName, analyzer);
qp.setDefaultOperator(QueryParser.AND_OPERATOR);
Query query = qp.parse(keyword);
System.out.println("Query = " + query);
//搜索相似度最高的5条记录
TopDocs topDocs = isearcher.search(query, 5);
long totalHits = topDocs.totalHits.value;
System.out.println("命中:" + totalHits);
//输出结果
ScoreDoc[] scoreDocs = topDocs.scoreDocs;
for (int i = 0; i < totalHits; i++) {
Document targetDoc = isearcher.doc(scoreDocs[i].doc);
System.out.println("内容:" + targetDoc.toString());
}
} catch (ParseException | IOException e) {
e.printStackTrace();
} finally {
if (ireader != null) {
try {
ireader.close();
} catch (IOException e) {
e.printStackTrace();
}
}
if (directory != null) {
try {
directory.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
}
}


@@ -2,10 +2,10 @@
 <!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
 <properties>
     <comment>IK Analyzer 扩展配置</comment>
-    <!--用户可以在这里配置自己的扩展字典 -->
+    <!-- 配置是否加载默认词典 -->
+    <entry key="use_main_dict">true</entry>
+    <!-- 配置自己的扩展字典,多个用分号分隔 -->
     <entry key="ext_dict">ext.dic;</entry>
-    <!--用户可以在这里配置自己的扩展停止词字典-->
+    <!-- 配置自己的扩展停止词字典,多个用分号分隔 -->
     <entry key="ext_stopwords">stopword.dic;</entry>
 </properties>
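The config file above follows the standard `java.util.Properties` XML DTD (its DOCTYPE points at `properties.dtd`), so it can be read with `Properties.loadFromXML`. A sketch under that assumption, with the entries from the diff inlined rather than loading the analyzer's real config from the classpath:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.Properties;

public class Main {
    // Same shape as the config above; inlined here so the sketch is self-contained.
    static final String CFG =
            "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n"
          + "<!DOCTYPE properties SYSTEM \"http://java.sun.com/dtd/properties.dtd\">\n"
          + "<properties>\n"
          + "    <comment>IK Analyzer 扩展配置</comment>\n"
          + "    <entry key=\"use_main_dict\">true</entry>\n"
          + "    <entry key=\"ext_dict\">ext.dic;</entry>\n"
          + "    <entry key=\"ext_stopwords\">stopword.dic;</entry>\n"
          + "</properties>\n";

    static Properties load() throws Exception {
        Properties props = new Properties();
        // loadFromXML recognizes the properties.dtd SYSTEM id locally,
        // so no network access is needed to validate the document.
        try (InputStream in = new ByteArrayInputStream(CFG.getBytes(StandardCharsets.UTF_8))) {
            props.loadFromXML(in);
        }
        return props;
    }

    public static void main(String[] args) throws Exception {
        Properties props = load();
        System.out.println(props.getProperty("use_main_dict"));  // true
        // Dictionary entries are semicolon-separated path lists.
        System.out.println(props.getProperty("ext_dict"));       // ext.dic;
    }
}
```

Whether IK's own config loader uses exactly this API is an assumption; the format itself is what the DOCTYPE declares.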


@@ -1,3 +1,3 @@
-Wed Aug 01 11:21:30 CST 2018
+Wed Aug 01 00:00:00 CST 2021
 files=dynamicdic.txt
 lastupdate=0