firewalld: interface autonomously changing zones

I have an RHEL8 system serving as a Docker Swarm worker node. It has firewalld enabled, and has a docker zone to which the docker0 and docker_gwbridge interfaces are assigned.

$ cat /etc/firewalld/zones/docker.xml
<?xml version="1.0" encoding="utf-8"?>
<zone version="1.0" target="ACCEPT">
  <short>docker</short>
  <description>zone for docker bridge network interfaces</description>
  <interface name="docker_gwbridge"/>
  <interface name="docker0"/>
</zone>
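For reference, an equivalent zone could also be built with firewall-cmd instead of editing the XML directly. A sketch, assuming firewalld is running and you have root:

```shell
# Create a permanent "docker" zone with target ACCEPT and bind the
# Docker bridge interfaces to it, then reload firewalld to apply.
sudo firewall-cmd --permanent --new-zone=docker
sudo firewall-cmd --permanent --zone=docker --set-target=ACCEPT
sudo firewall-cmd --permanent --zone=docker --change-interface=docker0
sudo firewall-cmd --permanent --zone=docker --change-interface=docker_gwbridge
sudo firewall-cmd --reload
```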

After a reboot, or a firewalld restart or reload, these interfaces appear in the correct zone, per firewall-cmd --get-active-zones.

$ firewall-cmd --get-active-zones
docker
  interfaces: docker_gwbridge docker0
internal
  interfaces: vethb6daacd veth0a3a13c veth3922477 veth1fc2c24 veth35f6f77 veth172d461 vethf457e97 vethed46b94 vethc3293eb vethe6c08de vethb1c5fb6 vethd6bcfd8 eth0

However, after some minutes (usually less than an hour), they move to the internal zone instead, breaking networking in containers.

$ firewall-cmd --get-active-zones
internal
  interfaces: vethb6daacd veth0a3a13c veth3922477 veth1fc2c24 veth35f6f77 veth172d461 vethf457e97 vethed46b94 vethc3293eb vethe6c08de vethb1c5fb6 vethd6bcfd8 eth0 docker_gwbridge docker0 veth5686e56 vetha51060c vethde79c75

A firewall-cmd --reload fixes it again for a little while.
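Until the root cause is found, a small watchdog can at least record when the interfaces jump zones. A sketch: check_zone takes the actual zone as an argument so the logic can be tested without firewalld; in real use you would pass it the output of firewall-cmd --get-zone-of-interface.

```shell
#!/bin/sh
# Warn if an interface is not in the expected firewalld zone.
# Usage: check_zone <interface> <actual-zone>
check_zone() {
  iface=$1
  actual=$2
  expected=docker
  if [ "$actual" != "$expected" ]; then
    echo "WARN: $iface is in zone '$actual', expected '$expected'"
  fi
}

# Real invocation (requires firewalld), e.g. from cron:
#   check_zone docker0 "$(firewall-cmd --get-zone-of-interface=docker0)"
check_zone docker0 internal
```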

This question looked relevant, but these interfaces (if I'm interpreting correctly) are not managed by NetworkManager, so I don't think that's at fault.

$ nmcli device
DEVICE           TYPE      STATE                   CONNECTION
eth0             ethernet  connected               eth0
docker0          bridge    connected (externally)  docker0
docker_gwbridge  bridge    connected (externally)  docker_gwbridge
veth5686e56      ethernet  unmanaged               --
vetha51060c      ethernet  unmanaged               --
vethde79c75      ethernet  unmanaged               --
lo               loopback  unmanaged               --
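That said, "connected (externally)" means NetworkManager is tracking docker0 and docker_gwbridge even though it did not create them. To rule NetworkManager out (this is a hedge, not a confirmed fix), one could pin the zone on the connection profiles themselves; the connection names here are assumed to match the nmcli output above:

```shell
# Bind the NetworkManager connection profiles for the Docker bridges
# to the "docker" firewalld zone, so NM cannot move them elsewhere.
sudo nmcli connection modify docker0 connection.zone docker
sudo nmcli connection modify docker_gwbridge connection.zone docker
```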

$ ls /etc/sysconfig/network-scripts/
ifcfg-eth0

I can't find anything interesting in /var/log/firewalld. I have several other nodes that are theoretically configured the same way where this problem doesn't occur.

I didn't set the nodes up, and I'm not a sysadmin, but I'm trying to figure it out! Any words of wisdom?

Answer:

It looks like a Chef task was the cause of this, with Chef inexplicably configured differently between the working and broken servers.
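For anyone debugging something similar: correlating the timing of chef-client runs with the zone changes can confirm this. The unit name and paths below are assumptions and may differ on your systems:

```shell
# Did chef-client run around the time the zones changed, and does its
# configuration mention firewalld? (unit name and paths are guesses)
journalctl -u chef-client --since "-2 hours" | grep -i firewall
grep -ri firewalld /etc/chef 2>/dev/null
```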
