Score:3

How do I ensure a docker network (interface) exists even when it's not running?


I have a container stack (defined in a docker-compose.yml). The stack requires a PostgreSQL database, but I am using a locally running native instance instead of making it part of the stack (to simplify backups and conserve resources, among other things). The PostgreSQL instance is configured to bind and listen on 172.17.0.1, the IP at which the host is reachable from within Docker containers.
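
For reference, the relevant setting looks roughly like this (a sketch; the exact config path and whether you also listen on localhost will vary per setup):

```
# /etc/postgresql/<version>/main/postgresql.conf  (path varies by distro)
# 172.17.0.1 is the default docker0 bridge gateway, i.e. the address at
# which containers on the default bridge network reach the host.
listen_addresses = 'localhost, 172.17.0.1'
```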

However, during system startup, PostgreSQL doesn't bind to that address, and the containers subsequently fail to initialize. If I restart PostgreSQL afterwards, I can see it's bound (via ss), and the containers initialize fine. This is 100% reproducible on every boot.

I think it's because the interface doesn't exist yet, so there's nothing to bind to. Is there a way to "persist" the network (or the interface) so that it can be bound to even while docker hasn't initialized yet?

(I've also tried specifying After=docker.service in the systemd service file for PostgreSQL, with no luck; I think that's because while the Docker daemon has initialized by that point, the container stacks haven't, and so the networks haven't yet been created either. As far as I'm aware, there is no way to tell systemd to "wait until a docker container has started".)
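
For reference, the ordering I tried was along these lines (a sketch as a drop-in; the exact unit name, e.g. postgresql.service, depends on the distro):

```
# /etc/systemd/system/postgresql.service.d/override.conf
[Unit]
# Order PostgreSQL after the Docker daemon. This does NOT wait for any
# container or compose stack to come up, which is presumably why it
# didn't help: the compose networks may still not exist at this point.
After=docker.service
Wants=docker.service
```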

joat:
I worked around a similar issue by using OVS (Open vSwitch) instead of Docker's networking. There's a slight learning curve, but it has more capabilities than the standard Docker networking. This may or may not be useful to you.
Score:1

I have been dealing with the same problem. I found a solution, but you might not like it. If you set listen_addresses in your PostgreSQL configuration to '*', it binds to 0.0.0.0 instead, which accepts traffic from all interfaces. I've confirmed this works on several servers.

Yes, it's a sin to let your DB server bind to your public-facing address. But you should already have UFW blocking external traffic, plus a strictly configured pg_hba.conf that only allows connections to the appropriate databases from the Docker subnet, plus unix-socket-only auth for the postgres superuser account and strong passwords for all other accounts. With all of that in place, I consider it safe enough. It works, at least, and I haven't found any other way to make it work.
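
A minimal sketch of that setup (the database and user names are placeholders; 172.17.0.0/16 is the default Docker bridge subnet, so adjust to your environment):

```
# postgresql.conf: listen on all interfaces
listen_addresses = '*'

# pg_hba.conf: first matching record wins.
# TYPE  DATABASE  USER        ADDRESS         METHOD
local   all       postgres                    peer           # superuser: unix socket only
host    myapp_db  myapp_user  172.17.0.0/16   scram-sha-256  # Docker subnet only
# no other host rules, so all other TCP connections are rejected
```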

Tomáš M.:
Thanks! I'll leave the question open, as this is more of a workaround than an actual solution. FWIW, in the meantime I ended up doing sort of the "opposite" of what you describe: I set the container to `network_mode: host`, so it can connect to PostgreSQL via `localhost`. This also works, but is non-ideal in its own way.
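
For completeness, that compose change looks like this (a sketch; the service and image names are hypothetical):

```yaml
services:
  app:
    image: myapp:latest   # hypothetical image
    # Share the host's network namespace: the app reaches the native
    # PostgreSQL at localhost:5432. Note that `ports:` mappings are
    # ignored in this mode, so the app's ports are exposed directly.
    network_mode: host
```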