Iptables and Docker: disable remote access to containers while retaining host and container communication via a proxy

I have recently started migrating a webserver with several apps to a new server, bundling every app in a Docker container. My current setup consists of nginx as a reverse proxy and database servers running on the host itself, with all web applications running in their own Docker containers.

I am now trying to secure the webserver using iptables, as I have been doing for many years. I need to satisfy the following conditions:

  1. the normal firewall for non-Docker services must still work (default DROP policies for both INPUT and OUTPUT, only explicitly allowed ports accessible)
  2. containers (with one exception, see below) must not be accessible from the outside world
  3. containers must be able to reach the host via "host.docker.internal" to access the database servers
  4. containers must be able to access other containers' services via their public (proxied) domain names
  5. a single container must have its port 22 accessible directly from the world (git); a rough sketch of how the containers are published follows this list
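
To make the conditions concrete, here is roughly how two of the containers (Gitea and Drone, both discussed further down) are published; the image names and container-side ports are illustrative placeholders rather than my exact configuration:

# Gitea: web UI published on a high host port for the nginx proxy,
# SSH published straight through on port 22 (condition 5)
docker run -d --name gitea -p 8002:3000 -p 22:22 gitea/gitea

# Drone CI: web UI published on a high host port, meant to be reachable
# only through the nginx proxy on the host (condition 2); env/config flags omitted
docker run -d --name drone -p 8001:80 drone/drone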

So far, I have been able to satisfy only the first three requirements. My simple iptables setup, carried over from the previous, non-Docker environment, looks something like this:

# default policies to drop

-P INPUT DROP
-P FORWARD DROP
-P OUTPUT DROP

# allow internet browsing, NTP, DNS
-A OUTPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A OUTPUT -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -p tcp -m tcp --sport 80 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --sport 443 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p udp -m udp --sport 53 -j ACCEPT
-A INPUT -p tcp -m tcp --sport 53 -j ACCEPT
-A OUTPUT -p udp -m udp --dport 53 -j ACCEPT
-A OUTPUT -p tcp -m tcp --dport 53 -j ACCEPT
-A INPUT -p udp -m udp --dport 123 -j ACCEPT
-A OUTPUT -p udp -m udp --sport 123 -j ACCEPT

# allow services: web server, SSH server on port 16 (I know it's not standard, but I'm used to it), and SSH on port 22 for accessing git
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 16 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
-A OUTPUT -p tcp -m tcp --sport 22 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp -m tcp --sport 16 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp -m tcp --sport 80 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp -m tcp --sport 443 -m state --state RELATED,ESTABLISHED -j ACCEPT

# allow all local communication
-A INPUT -i lo -j ACCEPT
-A OUTPUT -o lo -j ACCEPT

# allow ping
-A FORWARD -p icmp -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A OUTPUT -p icmp -j ACCEPT

I do not claim this is the best way to do things, but it should be fairly secure and it has worked for me for years.
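
(In case the format above is confusing: the rules are written in iptables-save style and loaded with iptables-restore, so a complete, loadable version would just wrap them like this, with the default policies moving onto the chain lines:)

*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT DROP [0:0]
# ... all of the -A rules from above, unchanged ...
COMMIT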

Now to the part I don't understand that well. If I apply this ruleset to my new Docker configuration, it breaks access to the host via the "host.docker.internal" alias. To allow that again, I used:

iptables -A INPUT -i br-+ -j ACCEPT
iptables -A OUTPUT -o br-+ -j ACCEPT

I have seen -i docker0 used elsewhere, but it did not work for me, because on my setup every container sits on its own user-defined network with its own br-* bridge interface, which it uses to reach the host. This worked; so far so good.
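
For reference, the networks and their bridge interfaces can be listed like this (each user-defined Docker network shows up as its own br-<id> bridge on the host):

# list the Docker networks and the corresponding bridge interfaces
docker network ls
ip -o link show type bridge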

Then I noticed that all ports published by Docker (I use 80XX for the web services and then proxy the corresponding domains to those ports via nginx) are reachable from the world. I certainly don't want that, so after some searching I added this:

iptables -I DOCKER-USER -i venet0 -j DROP

(venet0 is the name of my external network interface; this server uses it instead of eth0)
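
A quick way to confirm this from an outside machine is something like the following (the hostname and ports stand in for my real ones):

# run from a machine outside the server
curl -m 5 http://example.com:8001/    # should now time out
nmap -p 8001,8002 example.com         # the published web ports should show as filtered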

This worked, but it broke conditions 4 and 5: containers are no longer able to reach each other via their public domain names, and I can no longer connect to the git server on port 22 (the latter I expected).

By communicating via the public domain name, I mean the following:

I have a Drone CI server running in one container, publishing port 8001 to the host. nginx then proxies https://ci.example.com to this port. The same goes for my Gitea container, which publishes port 8002 and is available at https://git.example.com.

Drone needs to authenticate against the Gitea server via OAuth. To do that, it has to be able to reach Gitea at https://git.example.com:443, not at the internal Docker address http://gitea_container_name:8002. I have several other use cases where this needs to work, but this one illustrates it best.
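
In other words, the test that needs to pass is roughly this, run from inside the Drone container (assuming curl is available in the image; the container name is a placeholder):

# from inside the Drone container, reach Gitea via its public, proxied name
docker exec drone curl -v https://git.example.com/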

I have now spent many hours trying to get this to work, without success: whenever I get one part working, other conditions break, and vice versa. Some of the things I have already tried (most come from other questions here):

# tried to allow the specific published port of the Gitea container, no effect
iptables -I DOCKER-USER -i venet0 -p tcp --sport 8002 -j ACCEPT

# tried this suggestion, no effect
iptables -I DOCKER-USER -i venet0 -p tcp -m conntrack --ctorigdstport 443 --ctdir ORIGINAL -j ACCEPT

# no effect, but I suspected this wouldn't work, since I don't think 443 is the destination port that actually reaches this chain
iptables -I DOCKER-USER -i venet0 -p tcp --dport 443 -j ACCEPT

# no effect
iptables -A INPUT -i docker0 -j ACCEPT
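
(By "no effect" I mean the observable behaviour did not change. To see whether a rule in the DOCKER-USER chain is being matched at all, the packet counters can be watched like this:)

# show the chain with packet/byte counters and rule positions
iptables -L DOCKER-USER -v -n --line-numbers

# zero the counters between tests
iptables -Z DOCKER-USER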

Is a setup like this possible? Can someone more versed in the Docker and firewall world point me in the right direction?

My setup:

  • Debian 11
  • iptables v1.8.7 (nf_tables)
  • Docker 20.10.5

Thanks very much in advance!
