What if we remove the hadoop2 profile, making all of its settings just
hard-coded defaults? (It existed at the time because of Hadoop 1 vs. Hadoop 2,
but we haven't supported Hadoop 1 for a while.)
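Roughly, the profile's properties would just move into the top-level
<properties> block of the parent pom, something like this (version numbers
here are only placeholders, the real values would be whatever the hadoop2
profile sets today):

    <properties>
      <!-- Formerly set by the hadoop2 profile; now the hard-coded default.
           Placeholder value; any other properties the profile sets would
           move here the same way. -->
      <hadoop.version>2.2.0</hadoop.version>
    </properties>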

Then, override those values in the Spark 2.2 profile with Hadoop 2.6, and
specify Java 8 with a plugin so the build will fail if compiled with
Java 7.
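For the Java 8 check, the maven-enforcer-plugin's requireJavaVersion rule
would do it; a sketch of what that could look like (if the pom already has an
enforcer execution, this would just be an extra rule there):

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-enforcer-plugin</artifactId>
      <executions>
        <execution>
          <id>enforce-java</id>
          <goals>
            <goal>enforce</goal>
          </goals>
          <configuration>
            <rules>
              <!-- Fail the build on anything older than Java 8 -->
              <requireJavaVersion>
                <version>[1.8,)</version>
              </requireJavaVersion>
            </rules>
          </configuration>
        </execution>
      </executions>
    </plugin>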

My thought.

On Mon, Jul 17, 2017 at 11:02 AM, dustin vanstee <[EMAIL PROTECTED]>
wrote: