Hello Shawn,

Thanks for the pointer to the shard handler tweaks. I see we do have an incorrect setting there: maximumPoolSize is way too high, but that alone doesn't account for the number of threads created. After reducing it, for reasons I can't explain, twice as many threads are created and the node dies.
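For reference, this is the knob I mean, as it can be set on a search handler in solrconfig.xml; the value below is just a placeholder, not our actual setting:

```xml
<requestHandler name="/select" class="solr.SearchHandler">
  <!-- caps the thread pool used for distributed (inter-shard) requests -->
  <shardHandlerFactory class="HttpShardHandlerFactory">
    <int name="maximumPoolSize">20</int>
  </shardHandlerFactory>
</requestHandler>
```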

For a short time there were two identical collections (just for different tests) on the nodes. I have removed one of them, but the number of threads created doesn't change one bit. So it appears the shard handler config has nothing to do with it, or does it?

Regarding memory leaks, of course the first thing that came to mind is that I made an error which only causes trouble on 7.3, but so far it is unreproducible, even when I fully replicate production in a test environment. Since it only leaks on commits, the first suspects were the URPs, and the URPs are the only things I can disable in production without affecting customers. Needless to say, it wasn't the URPs.

But thanks anyway; whenever I have the courage to test it again, I'll enable INFO logging, which is currently disabled. Maybe it will reveal something.
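On 7.3 that's a one-line change in log4j.properties; something like this sketch (appender names may differ in our setup):

```
# flip the root logger back from WARN to INFO
log4j.rootLogger=INFO, file, CONSOLE
```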

If anyone has even the weirdest, most unconventional suggestion on how to reproduce my production memory leak in a controlled test environment, let me know!
