I have a cluster of Raspberry Pis that I'm using to experiment with Hadoop. masternode is set to .190, p1 to 191 ... p4 to 194. All nodes are up and running, and start-dfs.sh, stop-all.sh, etc. from the master successfully start and stop the datanodes. However, on start, the datanodes cannot connect back to the master node: they try to reconnect using "hostname/ip_address:9000".
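For reference, this is roughly what I run (the hduser account comes from the log file name below; everything else is just my setup):

# on masternode
start-dfs.sh
# on a datanode, confirm the DataNode process actually came up
ssh p1 jps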
hadoop-hduser-datanode-p1.log reports:
INFO org.apache.hadoop.ipc.Client: Retrying connect to server: masternode/192.168.1.190:9000. Already tried 8 time(s);
masternode is assigned 192.168.1.190 via a reserved DNS entry, keyed by MAC address, on my router. The same goes for the other nodes.
/etc/hosts is empty on the datanodes, and adding entries there doesn't change the behavior.
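These are the entries I added when testing (same addresses as the router reservations):

# /etc/hosts on each datanode -- no effect on the error
192.168.1.190  masternode
192.168.1.191  p1
# p2 and p3 follow the same pattern
192.168.1.194  p4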
All the .xml files (like core-site.xml) use "hdfs://masternode:port". None of them uses "masternode/ip_address:port", so I'm not sure where the IP address is coming from.
<property>
  <name>fs.default.name</name>
  <value>hdfs://masternode:9000/</value>
</property>
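Since the config only ever names the host, I also checked how masternode resolves from a datanode (these are just the checks I ran; resolution happens via the router's DNS):

# on p1: see what masternode resolves to
getent hosts masternode
nslookup masternode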
The workers file just lists the names of the datanode servers:
p1
p2
p3
p4
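I can also do a basic reachability check from a datanode, using the port and address from the log above (nothing Hadoop-specific, just netcat):

# from p1: is the NameNode port reachable by name and by IP?
nc -zv masternode 9000
nc -zv 192.168.1.190 9000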
Any ideas what is appending the IP address to the hostname?