Yikes!  That is a lot of shards!

> What have I missed?

One plausible cause is one or more [index templates](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html) that are poorly configured.  I'd look there (and at any Logstash/Beats output configuration) and update them to a saner shard count (certainly nothing larger than 12).
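For example, you could inspect and fix a template along these lines (a minimal sketch assuming Elasticsearch on `localhost:9200` and a hypothetical template named `logstash`; older versions use a `template` key instead of `index_patterns`):

```bash
# List all templates and check their number_of_shards settings
curl -XGET 'localhost:9200/_template?pretty'

# Overwrite the hypothetical "logstash" template with a saner shard count
curl -XPUT 'localhost:9200/_template/logstash' -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["logstash-*"],
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1
  }
}'
```

Note that this only affects indices created after the change; existing indices keep the shard count they were created with.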

As for getting the shard count down to a reasonable number, you'll want to look at whether these are primary shards or replica shards.  If they're primary shards, there is a [shrink API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-shrink-index.html) you can use to reduce them.  You could also just delete high-shard indices that are no longer valuable.  If, instead, the problem is simply an extremely high (misconfigured) replica count, you can change the number of replicas for each index to something more reasonable.  Both paths are sketched below.
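A rough sketch of both options (index and node names are hypothetical; shrinking requires the index to be write-blocked, a copy of every shard on one node, and a target shard count that is a factor of the source's):

```bash
# 1) Prepare a hypothetical "logs-2017.01" index for shrinking:
#    relocate a copy of every shard to one node and block writes
curl -XPUT 'localhost:9200/logs-2017.01/_settings' -H 'Content-Type: application/json' -d'
{
  "settings": {
    "index.routing.allocation.require._name": "node-1",
    "index.blocks.write": true
  }
}'

# 2) Shrink it into a new index with a single primary shard
#    (the target count must be a factor of the source shard count)
curl -XPOST 'localhost:9200/logs-2017.01/_shrink/logs-2017.01-shrunk' -H 'Content-Type: application/json' -d'
{
  "settings": {
    "index.number_of_shards": 1
  }
}'

# Or, if replicas are the problem: drop every index matching a
# pattern to a single replica in one call
curl -XPUT 'localhost:9200/logstash-*/_settings' -H 'Content-Type: application/json' -d'
{
  "index": { "number_of_replicas": 1 }
}'
```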

There's also a [cluster allocation explain](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-allocation-explain.html) API which can give you insights into why a shard may not be assigned.
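Called with no body, it explains the first unassigned shard it finds; you can also ask about a specific shard (index name hypothetical):

```bash
# Explain the first unassigned shard the cluster finds
curl -XGET 'localhost:9200/_cluster/allocation/explain?pretty'

# Or explain a specific shard
curl -XGET 'localhost:9200/_cluster/allocation/explain?pretty' -H 'Content-Type: application/json' -d'
{
  "index": "logs-2017.01",
  "shard": 0,
  "primary": true
}'
```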
