Why does my Elasticsearch fuzzy search return nothing? I'm using nGram!

Posted 2016-10-25 22:52

I want to implement fuzzy (substring) matching, so I created the following index:

PUT myidx1
{
  "_all": {
    "enabled": false
  },
  "settings": {
    "analysis": {
      "tokenizer": {
        "my_ngram": {
          "type": "nGram",
          "min_gram": "1",
          "max_gram": "20",
          "token_chars": [
            "letter",
            "digit"
          ]
        }
      },
      "analyzer": {
        "mylike": {
          "tokenizer": "my_ngram",
          "filter": [
            "lowercase"
          ]
        }
      }
    }
  },
  "mapping": {
    "mytype": {
      "dynamic": false,
      "properties": {
        "name": {
          "type": "string",
          "analyzer": "mylike"
        }
      }
    }
  }
}
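A quick sanity check here is to read the mapping back from the cluster; if the request body above was accepted as intended, the name field should come back with "analyzer": "mylike":

GET myidx1/_mapping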

Let's test the analyzer:

POST myidx1/_analyze
{
    "analyzer": "mylike",
    "text": "文档3-aaa111"
}

The result is as follows:

{    "tokens": [       {          "token": "文",          "start_offset": 0,          "end_offset": 1,          "type": "word",          "position": 0       },       {          "token": "文档",          "start_offset": 0,          "end_offset": 2,          "type": "word",          "position": 1       },       {          "token": "文档3",          "start_offset": 0,          "end_offset": 3,          "type": "word",          "position": 2       },       {          "token": "档",          "start_offset": 1,          "end_offset": 2,          "type": "word",          "position": 3       },       {          "token": "档3",          "start_offset": 1,          "end_offset": 3,          "type": "word",          "position": 4       },       {          "token": "3",          "start_offset": 2,          "end_offset": 3,          "type": "word",          "position": 5       },       {          "token": "a",          "start_offset": 4,          "end_offset": 5,          "type": "word",          "position": 6       },       {          "token": "aa",          "start_offset": 4,          "end_offset": 6,          "type": "word",          "position": 7       },       {          "token": "aaa",          "start_offset": 4,          "end_offset": 7,          "type": "word",          "position": 8       },       {          "token": "aaa1",          "start_offset": 4,          "end_offset": 8,          "type": "word",          "position": 9       },       {          "token": "aaa11",          "start_offset": 4,          "end_offset": 9,          "type": "word",          "position": 10       },       {          "token": "aaa111",          "start_offset": 4,          "end_offset": 10,          "type": "word",          "position": 11       },       {          "token": "a",          "start_offset": 5,          "end_offset": 6,          "type": "word",          "position": 12       },       {          "token": "aa",          "start_offset": 5,          "end_offset": 7,          "type": "word",          "position": 13       },       {          "token": "aa1",          "start_offset": 5,          "end_offset": 8,          "type": "word",          "position": 14       },       {          "token": "aa11",          "start_offset": 5,          "end_offset": 9,          "type": "word",          "position": 15       },       {          "token": "aa111",          "start_offset": 5,          "end_offset": 10,          "type": "word",          "position": 16       },       {          "token": "a",          "start_offset": 6,          "end_offset": 7,          "type": "word",          "position": 17       },       {          "token": "a1",          "start_offset": 6,          "end_offset": 8,          "type": "word",          "position": 18       },       {          "token": "a11",          "start_offset": 6,          "end_offset": 9,          "type": "word",          "position": 19       },       {          "token": "a111",          "start_offset": 6,          "end_offset": 10,          "type": "word",          "position": 20       },       {          "token": "1",          "start_offset": 7,          "end_offset": 8,          "type": "word",          "position": 21       },       {          "token": "11",          "start_offset": 7,          "end_offset": 9,          "type": "word",          "position": 22       },       {          "token": "111",          "start_offset": 7,          "end_offset": 10,          "type": "word",          "position": 23       },       {          "token": "1",          "start_offset": 8,          
"end_offset": 9,          "type": "word",          "position": 24       },       {          "token": "11",          "start_offset": 8,          "end_offset": 10,          "type": "word",          "position": 25       },       {          "token": "1",          "start_offset": 9,          "end_offset": 10,          "type": "word",          "position": 26       }    ] }

Doesn't this result prove that searching for "a" should work, and that searching for "1" should match too?
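One caveat, though: since that request names the analyzer explicitly, it only proves what mylike does in isolation; it says nothing about which analyzer the name field was actually mapped with. Analyzing against the field instead exercises the mapping itself:

POST myidx1/_analyze
{
    "field": "name",
    "text": "文档3-aaa111"
}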
So let's test it with real documents. Insert some data:

POST myidx1/mytype/_bulk
{ "index": { "_id": 4 }}
{ "name": "文档3-aaa111" }
{ "index": { "_id": 5 }}
{ "name": "yyy111" }
{ "index": { "_id": 6 }}
{ "name": "yyy111" }
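The tokens that actually landed in the index for a given document can be inspected with the term vectors API (they are computed on the fly if not stored). For document 5 that would be the request below; if the nGram analyzer were really in effect, single-character grams such as "1" should appear among the terms:

GET myidx1/mytype/5/_termvectors?fields=name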

But then the search finds nothing:

GET myidx1/mytype/_search
{
    "query": {
        "match": {
            "name": "1"
        }
    }
}

The result:

{    "took": 10,    "timed_out": false,    "_shards": {       "total": 5,       "successful": 5,       "failed": 0    },    "hits": {       "total": 0,       "max_score": null,       "hits": []    } }

Why? Why on earth is this happening?! I'm about to lose my mind!
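One detail stands out on a re-read of the index definition: the create-index body uses the key "mapping", but the API expects "mappings" (plural), and "_all" likewise belongs inside the type mapping rather than at the top level of the request. Elasticsearch releases of that era (2.x) silently ignored unrecognized top-level keys in the create-index body, so the name field would never have received the mylike analyzer at all; it would instead be dynamically mapped with the standard analyzer, which indexes "yyy111" as the single token "yyy111" and therefore cannot match a query for "1". If that is indeed the cause, recreating the index with the corrected keys should make the query match:

DELETE myidx1

PUT myidx1
{
  "settings": {
    "analysis": {
      "tokenizer": {
        "my_ngram": {
          "type": "nGram",
          "min_gram": "1",
          "max_gram": "20",
          "token_chars": ["letter", "digit"]
        }
      },
      "analyzer": {
        "mylike": {
          "tokenizer": "my_ngram",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "mytype": {
      "_all": { "enabled": false },
      "dynamic": false,
      "properties": {
        "name": {
          "type": "string",
          "analyzer": "mylike"
        }
      }
    }
  }
}

(In practice an nGram index analyzer is also often paired with a plain search_analyzer such as "standard", so the query string itself is not n-grammed; for a single-character query like "1" this makes no difference, but for longer queries it avoids matching on stray sub-grams.)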

Note: this question has been closed by the author or an administrator; it can no longer be edited or answered.
0 answers
