I’m using nGram, too. I use the standard tokenizer and apply nGram only as a token filter. Here is my setup:
{
  "index": {
    "analysis": {
      "analyzer": {
        "my_index_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "mynGram"
          ]
        },
        "my_search_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [
            "standard",
            "lowercase",
            "mynGram"
          ]
        }
      },
      "filter": {
        "mynGram": {
          "type": "nGram",
          "min_gram": 2,
          "max_gram": 50
        }
      }
    }
  }
}
Note that custom analyzers have to be defined under analysis.analyzer so they can be referenced by name from the mapping.
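To use these analyzers, they have to be referenced from a field mapping. Here is a minimal sketch that creates the index with the settings above plus a mapping, assuming Elasticsearch 7.x: the field name my_field is a placeholder, the standard token filter is dropped because 7.0 removed it, the old my_type mapping type is omitted because 7.x mappings are typeless, and index.max_ngram_diff has to be raised because 7.x limits max_gram - min_gram to 1 by default.

# Sketch for Elasticsearch 7.x; my_field is a placeholder field name
curl -X PUT "localhost:9200/my_idx" -H 'Content-Type: application/json' -d'
{
  "settings": {
    "index": {
      "max_ngram_diff": 48
    },
    "analysis": {
      "analyzer": {
        "my_index_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": ["lowercase", "mynGram"]
        },
        "my_search_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": ["lowercase", "mynGram"]
        }
      },
      "filter": {
        "mynGram": {
          "type": "nGram",
          "min_gram": 2,
          "max_gram": 50
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "my_field": {
        "type": "text",
        "analyzer": "my_index_analyzer",
        "search_analyzer": "my_search_analyzer"
      }
    }
  }
}'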
This lets you find word parts of up to 50 characters. Adjust max_gram as needed. German words can get really long, so I set it to a high value.
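You can check what the filter actually emits with the _analyze API; for a long German compound, every lowercased substring between 2 and 50 characters comes back as a token:

# The text is just an example of a long German compound
curl -X GET "localhost:9200/my_idx/_analyze?pretty" -H 'Content-Type: application/json' -d'
{
  "analyzer": "my_index_analyzer",
  "text": "Donaudampfschifffahrt"
}'

Since the search analyzer produces nGrams as well, a match query on the field then finds the document for partial inputs like "dampf".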