lecko 2017-09-13, 13:40
[quote="lecko, post:1, topic:100361"]
Interestingly enough, the shards in the Default index are still well distributed over 5 nodes.
[/quote]

That is how Elasticsearch does allocation: it balances shard counts across nodes, not disk space (at least for now).

[quote="lecko, post:1, topic:100361"]
what can I check or set to "force" this Secondary index set to create new indexes on different shards ?
[/quote]

You mean different hosts? You could use shard allocation filtering to force that.
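As a minimal sketch, allocation filtering is applied per index via the `_settings` API. The index and node names here are hypothetical; `index.routing.allocation.include._name` is the real setting that restricts which nodes may hold an index's shards:

```shell
# Pin a (hypothetical) index's shards to a subset of nodes by node name.
# Replace the index and node names with your own.
curl -XPUT 'localhost:9200/secondary-2017.09.13/_settings' \
  -H 'Content-Type: application/json' -d '{
  "index.routing.allocation.include._name": "node-3,node-4,node-5"
}'
```

You can also filter on `_ip`, `_host`, or custom node attributes instead of `_name`.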

But perhaps you need to look at this in a different way and reduce your shard count.
5 shards for 30GB is a bit wasteful. I'd look at using just 2 for new indices, and then using the `_shrink` API to reduce the old ones to a single primary each.
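A sketch of the `_shrink` flow, with hypothetical index and node names. Before shrinking, the source index must be made read-only and have a copy of every shard on one node; then the shrink itself creates the new index:

```shell
# Step 1: block writes and relocate a copy of every shard to one node
# ("node-1" is an example name).
curl -XPUT 'localhost:9200/old-index/_settings' \
  -H 'Content-Type: application/json' -d '{
  "index.routing.allocation.require._name": "node-1",
  "index.blocks.write": true
}'

# Step 2: shrink into a new single-primary index (target name is an example).
curl -XPOST 'localhost:9200/old-index/_shrink/old-index-shrunk' \
  -H 'Content-Type: application/json' -d '{
  "settings": { "index.number_of_shards": 1 }
}'
```

Once the shrunken index is green, you can delete the original and (if needed) add an alias pointing the old name at the new index.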
