If you have 400GB indices with 1 billion plus records, I'm not surprised to see it take a minute or two. Is that 400GB per index? So your use case is Netflow and you're using daily indices, which means each index is only written to on the day it's generated. Once an index stops receiving writes and search becomes its main purpose, you could use the shrink API to lower its shard count. It may also be worth spending some time on the mapping to improve search performance. And you don't always have to max out your heap to achieve efficiency.
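In case it helps, here's a rough sketch of the shrink flow using the Python client. The index name, target name, and node name are just placeholders for your setup; before shrinking, the source index has to be made read-only and all copies of its shards have to be relocated onto a single node, and the target shard count must be a factor of the source's shard count.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

index = "netflow-2017.01.01"   # placeholder: one of your daily indices
target = index + "-shrunk"     # placeholder: name for the shrunken index

# Step 1: block writes and force all shard copies onto one node.
# "shrink-node" is a placeholder for one of your node names.
es.indices.put_settings(
    index=index,
    body={
        "index.blocks.write": True,
        "index.routing.allocation.require._name": "shrink-node",
    },
)

# Step 2: once relocation has finished, shrink down to fewer primaries.
# The new shard count must divide the old one evenly (e.g. 5 -> 1).
es.indices.shrink(
    index=index,
    target=target,
    body={"settings": {"index.number_of_shards": 1}},
)
```

You'd typically run this from a nightly cron job against yesterday's index, since by then it's guaranteed to be read-only.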