I have an ESXi 6.7 host with 6 physical NICs. Those NICs are configured as follows:
vSwitch0: [screenshot of NIC assignments]
vSwitch1: [screenshot of NIC assignments]
The NICs assigned to vSwitch0 are physically connected to a Netgear switch whose ports are untagged for the specific VLAN that I wish to manage ESXi from.
The NICs assigned to vSwitch1 are physically connected to a Netgear switch whose ports are tagged with the VLANs that I want to make available to the virtual machines running on my ESXi host (VLANs 10 and 50). Presently, the virtual machines on my ESXi host are only configured for VLAN 10.
I have been experimenting with Docker lately, so I spun up an Ubuntu Server 22.04 virtual machine to act as my Docker host. I added a Unifi Controller container and have managed to adopt my access points into the controller by using the "set-inform" command from within each access point's CLI.
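For reference, this is the adoption step I ran on each access point, sketched from memory; the AP and controller addresses are placeholders for my actual IPs:

```
# SSH into the access point, then point it at the controller's inform URL
ssh ubnt@<ap-ip>
set-inform http://<controller-ip>:8080/inform
```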
I am looking at expanding my Unifi network, and after some further research, I realized that in order to streamline the device adoption process, I need to get my Unifi Controller onto my default VLAN. Here is where things start to get complicated for me...
For the container to access the default VLAN, I figured I would first need to get my container host connected to the default VLAN. I attempted to achieve this by creating a new vSwitch (vSwitch2). The physical NIC assigned to vSwitch2 is connected to the Netgear switch that is configured only for untagged traffic. My reasoning was that isolating the untagged traffic to this specific VM via a dedicated vSwitch would be safer than giving all of my servers access to the default VLAN.
vSwitch2: [screenshot of NIC assignment]
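For what it's worth, this is roughly the equivalent of what I configured through the UI; it's only a sketch, and vmnic5 and the port group name are placeholders for my actual spare NIC and naming:

```
# Create the new standard vSwitch and attach the spare physical NIC to it
esxcli network vswitch standard add --vswitch-name=vSwitch2
esxcli network vswitch standard uplink add --uplink-name=vmnic5 --vswitch-name=vSwitch2

# Port group for the default VLAN (VLAN ID 0 = untagged traffic)
esxcli network vswitch standard portgroup add --portgroup-name="Default VLAN" --vswitch-name=vSwitch2
esxcli network vswitch standard portgroup set --portgroup-name="Default VLAN" --vlan-id=0
```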
I then added a 2nd NIC to my Docker host.
This 2nd NIC did not receive an IP address via DHCP. I thought I might have a switch configuration issue, so for testing purposes, I tried assigning the 2nd NIC to VLAN 10 and then to VLAN 50. To my surprise, it still did not receive an IP via DHCP. At this point, it seems evident that the 2nd NIC not receiving an IP is the result of something being misconfigured within my Ubuntu Server VM. Before I go down the rabbit hole of making lots of configuration changes, I wanted to ask the following (a couple of rough sketches of what I have in mind are included after the questions):
- If I want to use my Docker host for my Unifi Controller as well as other future containers, would it make sense to connect the host to two separate networks in this case?
- Is it possible to connect the Docker host to more than one network, but ensure the host itself is accessible from only one of those networks? If so, how is this achieved?
- Does my Docker host need to have an IP address on a network in order for a container on that network to be accessed (assuming the container is configured for macvlan networking)?
- Would it make sense for the Unifi Controller to be configured with macvlan networking?
- Was configuring a separate vSwitch (vSwitch2) the right choice for isolating the default VLAN?
- From a security standpoint, do I need to make architecture/network topology changes?
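For context on the 2nd NIC, my working theory is that Ubuntu Server 22.04 simply has no netplan entry for the new interface, so it never requests a DHCP lease. Below is a minimal sketch of the change I'm considering; the interface name ens224 and the filename are assumptions, not necessarily what the VM reports:

```
# Check what the new interface is actually called inside the VM
ip link

# Add a netplan entry for it and apply (ens224 and the filename are assumptions)
sudo tee /etc/netplan/60-second-nic.yaml >/dev/null <<'EOF'
network:
  version: 2
  ethernets:
    ens224:
      dhcp4: true
EOF
sudo netplan apply
```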
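And for the macvlan questions, this is the kind of setup I have in mind for the Unifi Controller; it is purely a sketch, where the subnet, gateway, container address, and parent interface are assumptions about my default VLAN, and <unifi-image> stands in for whatever image I end up running:

```
# Macvlan network bound to the VM interface that sits on vSwitch2 (parent name is an assumption)
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=ens224 \
  unifi_macvlan

# Give the controller its own address on the default VLAN
docker run -d --name unifi --network unifi_macvlan --ip 192.168.1.10 <unifi-image>
```

My understanding is that with macvlan the container gets its own MAC and IP on that VLAN, so traffic reaches it directly rather than through a port published on the host, which is the behavior my third and fourth questions are really about.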