I've been given the go-ahead to get some training, which is great, but first I need to get the cluster up to the version the training is for.

I had some issues when I tried a test upgrade before, but since then I've refreshed the hardware in the cluster and our config is a little different now. Previously we had a RAID 0 array and everything lived in the default paths. For the rebuild we got SSDs, so the OS is now on a RAID 1 array with a separate partition on it for logging, and the data lives on two SSDs mounted as /data1 and /data2.
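
For reference, the mount layout looks roughly like this (device names below are placeholders for illustration, not copied from the actual nodes):

```
# sketch of the layout - devices are illustrative
/dev/md0p1   /        # OS, RAID 1 mirror
/dev/md0p2   /logs    # log partition on the same RAID 1 array
/dev/sdb1    /data1   # SSD 1, Elasticsearch data
/dev/sdc1    /data2   # SSD 2, Elasticsearch data
```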

When I run the pre-upgrade checks on the cluster now, I get:

```
Default path settings are removed
This issue must be resolved to upgrade. Read Documentation
Details: nodes with settings: [node3.mydomain.local]
```

It doesn't seem to flag issues with node 1 or node 2, only node 3. They were all built from the same scripted install two weeks ago, so they all have the same configs, and if I open the elasticsearch.yml on each I can see

```
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: [/data1/elasticsearch, /data2/elasticsearch]
#
# Path to log files:
#
path.logs: /logs/elasticsearch
```
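
One thing I can do to double-check what node 3 actually loaded (in case it is somehow running with something other than this file) is ask the nodes info API for the path settings. Assuming the cluster is still up on 5.x and reachable locally, something like:

```
# show the path settings each node is actually running with
curl -s 'http://localhost:9200/_nodes/settings?pretty&filter_path=nodes.*.name,nodes.*.settings.path'

# or just node 3
curl -s 'http://localhost:9200/_nodes/node3.mydomain.local/settings?pretty&filter_path=nodes.*.settings.path'
```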

From what I read in the referenced documentation,

https://www.elastic.co/guide/en/elasticsearch/reference/6.0/breaking_60_packaging_changes.html#_default_path_settings_are_removed

it seems all I should need to do is specify path.data and path.logs explicitly, which I have, so as far as I can tell everything is set as it should be. But the health check still says I need to fix something on just that node before I can continue.
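
If it helps, my next step was going to be proving the three nodes really are identical outside of elasticsearch.yml as well, since only node 3 is flagged. Roughly this (hostnames are mine, assuming the other two follow the same naming; the env file is /etc/default/elasticsearch on deb installs and /etc/sysconfig/elasticsearch on rpm):

```
# compare the config and the service environment file across the nodes
for n in node1 node2 node3; do
  echo "== $n =="
  ssh "$n.mydomain.local" \
    'md5sum /etc/elasticsearch/elasticsearch.yml /etc/default/elasticsearch /etc/sysconfig/elasticsearch 2>/dev/null'
done
```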

Any insights appreciated.
