I've set up a small cluster of servers along with a SAN. The servers are running Ubuntu 20.04 LTS.
The vendor's setup instructions (I can't find where I read this anymore) suggested that the iSCSI connections between the SAN and the servers should be (or maybe it was "must be"?) separated from any other Ethernet traffic. Because of this, I've configured two VLANs on our switch -- one for iSCSI traffic and one for Ethernet traffic between the servers (which the SAN is not on).
So far, it seems fine. Suppose the Ethernet VLAN is on 172.16.100.0/24 and the iSCSI VLAN is on 172.16.200.0/24. More specifically, the addresses look something like this:
| machine  | Ethernet     | iSCSI        | Outside Ethernet also? |
|----------|--------------|--------------|------------------------|
| server 1 | 172.16.100.1 | 172.16.200.1 | Yes |
| server 2 | 172.16.100.2 | 172.16.200.2 | Yes |
| server 3 | 172.16.100.3 | 172.16.200.3 | Yes |
| SAN      | N/A          | 172.16.200.4 | No  |
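In case it matters, here is roughly how I'd expect that table to map onto each server's interfaces (just a sketch -- the NIC name eno1 is made up, and I'm assuming both VLANs arrive tagged on a single NIC; the persistent version on Ubuntu 20.04 would live in netplan):

```
# Rough sketch only, shown with server 1's addresses; eno1 is a placeholder name.
sudo ip link add link eno1 name eno1.100 type vlan id 100   # general Ethernet VLAN
sudo ip link add link eno1 name eno1.200 type vlan id 200   # iSCSI VLAN
sudo ip addr add 172.16.100.1/24 dev eno1.100
sudo ip addr add 172.16.200.1/24 dev eno1.200
sudo ip link set eno1.100 up
sudo ip link set eno1.200 up
```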
Not surprisingly, I can ssh between servers using either VLAN. That is, from server 2 to server 1, I can do any of the following:

- ssh 172.16.100.1
- ssh 172.16.200.1
- ssh via the outside-visible IP address
What I'm worried about is whether I should better separate non-iSCSI traffic from the 172.16.200.0/24 subnet with firewall rules, so that port 22 (ssh) is blocked on that VLAN on all servers.
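Concretely, I'm picturing a rule like the following on each server (just a sketch, not tested -- eno1.200 is a placeholder for whatever interface carries the 172.16.200.0/24 VLAN):

```
# Drop inbound ssh on the iSCSI-facing interface only (placeholder name
# eno1.200); the VLAN 100 interface is left alone.
sudo iptables -A INPUT -i eno1.200 -p tcp --dport 22 -j DROP

# Roughly equivalent with ufw on Ubuntu 20.04:
sudo ufw deny in on eno1.200 to any port 22 proto tcp
```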
I'm not concerned about the reverse -- the SAN is only on VLAN 200. It doesn't know VLAN 100 exists, so it won't suddenly send iSCSI traffic down that VLAN.
I'm using the Oracle Cluster File System (OCFS2), which seems to use port 7777 -- perhaps I should block everything else on that VLAN so that only the iSCSI traffic itself and port 7777 get through? Does having ordinary Ethernet traffic on an iSCSI network create problems (lag or errors) I should be aware of?
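If that allow-list approach is the right way to think about it, this is the kind of thing I'd try on each server (again just a sketch with the placeholder interface name eno1.200, and assuming OCFS2's cluster traffic really is on TCP 7777):

```
# Sketch only: lock the iSCSI-facing interface (placeholder eno1.200) down to
# what the storage VLAN actually needs.
# Keep return traffic for the iSCSI sessions the servers initiate to the SAN:
sudo iptables -A INPUT -i eno1.200 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Keep OCFS2 cluster traffic between the servers (TCP 7777 by default, I think):
sudo iptables -A INPUT -i eno1.200 -p tcp --dport 7777 -j ACCEPT
# Drop everything else arriving on that interface, including ssh:
sudo iptables -A INPUT -i eno1.200 -j DROP
```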
Thank you!