So I would end up with ~6 copy fields and ~8 synonym files, which is about
48 field/synonym combinations. Would that be significant in terms of index
size? I guess that depends on the thesaurus size; what would be the best
way to measure this?

Custom parser:
This would take the synonym file name and the field to run the analysis
on. This field need not be a copy field that actually holds data, since we
use it only to get the analysis.
Get the synonyms for the user query as tokens.
Create an edismax query based on the query tokens.
Return the score.
(A rough sketch of such a plugin is below.)
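To make the idea concrete, here is a minimal sketch of what I have in mind.
The class name (SynonymScoreQParserPlugin) and the "f" local param are
placeholders of mine, and for brevity it builds a plain BooleanQuery instead
of going through the edismax parser:

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;
import org.apache.solr.common.params.SolrParams;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.search.QParser;
import org.apache.solr.search.QParserPlugin;
import org.apache.solr.search.SyntaxError;

import java.io.IOException;

// Hypothetical plugin: runs the query text through the analysis chain of a
// synonym-expanding field and turns the emitted tokens into a scoring query.
public class SynonymScoreQParserPlugin extends QParserPlugin {

  @Override
  public QParser createParser(String qstr, SolrParams localParams,
                              SolrParams params, SolrQueryRequest req) {
    return new QParser(qstr, localParams, params, req) {
      @Override
      public Query parse() throws SyntaxError {
        // "f" is an assumed local param naming the analysis-only field
        // whose query chain contains the synonym filter for one thesaurus.
        String field = localParams.get("f");
        Analyzer analyzer =
            req.getSchema().getFieldType(field).getQueryAnalyzer();

        BooleanQuery.Builder bq = new BooleanQuery.Builder();
        try (TokenStream ts = analyzer.tokenStream(field, qstr)) {
          CharTermAttribute termAtt = ts.addAttribute(CharTermAttribute.class);
          ts.reset();
          while (ts.incrementToken()) {
            // Naive version: OR every emitted token. This ignores positions;
            // see the position-aware sketch further down for that part.
            bq.add(new TermQuery(new Term(field, termAtt.toString())),
                   BooleanClause.Occur.SHOULD);
          }
          ts.end();
        } catch (IOException e) {
          throw new SyntaxError("Analysis failed for field " + field);
        }
        return bq.build();
      }
    };
  }
}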

This custom parser would then be invoked from LTR as a feature that returns
a scalar score.
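Roughly, I imagine wiring it up as a SolrFeature in the LTR feature store,
something like the following (assuming the parser is registered under a
name like "synscore" in solrconfig.xml; the store, feature, and field names
are placeholders):

[
  {
    "store": "myFeatureStore",
    "name": "thesaurusAScore",
    "class": "org.apache.solr.ltr.feature.SolrFeature",
    "params": {
      "q": "{!synscore f=text_syn_a}${user_query}"
    }
  }
]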

I am at the stage where I can get the synonyms from the analysis chain;
however, they come back as individual tokens, not phrases. So I am stuck on
how to construct a correct query from the synonym tokens and their
positions.
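For reference, the direction I am exploring for the position handling is to
group tokens that share a position (position increment of 0 marks a synonym
stacked on the previous token) and require a match per position. A rough
sketch, assuming a plain Lucene TokenStream (the class name and field
parameter are mine):

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public final class PositionGroupedQueryBuilder {

  // Collect tokens into per-position groups: a position increment of 0
  // means the token sits at the same position as the previous one, i.e.
  // it is a synonym alternative.
  public static Query build(TokenStream ts, String field) throws IOException {
    CharTermAttribute termAtt = ts.addAttribute(CharTermAttribute.class);
    PositionIncrementAttribute posIncAtt =
        ts.addAttribute(PositionIncrementAttribute.class);

    List<List<String>> positions = new ArrayList<>();
    ts.reset();
    while (ts.incrementToken()) {
      if (posIncAtt.getPositionIncrement() > 0 || positions.isEmpty()) {
        positions.add(new ArrayList<>());
      }
      positions.get(positions.size() - 1).add(termAtt.toString());
    }
    ts.end();

    // MUST match each position, SHOULD match any synonym at that position.
    // Caveat: this still treats a multi-word synonym as stacked single
    // tokens; handling it properly needs the token graph
    // (SynonymGraphFilter, available since Lucene/Solr 6.4).
    BooleanQuery.Builder outer = new BooleanQuery.Builder();
    for (List<String> group : positions) {
      BooleanQuery.Builder inner = new BooleanQuery.Builder();
      for (String term : group) {
        inner.add(new TermQuery(new Term(field, term)),
                  BooleanClause.Occur.SHOULD);
      }
      outer.add(inner.build(), BooleanClause.Occur.MUST);
    }
    return outer.build();
  }
}

This still does not solve the phrase problem, which is exactly where I am
stuck.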

Thank you,
Roopa

On Wed, Feb 14, 2018 at 5:23 AM, Alessandro Benedetti <[EMAIL PROTECTED]>
wrote: