Solution
The open files limit can be increased.
… the per-process hard limit is set to 1048576 by default, but it can be changed using the "fs.nr_open" sysctl.
From: HAProxy Management Guide | 5. File-descriptor limitations
I've stumbled upon a useful resource on how to increase fs.nr_open. This is what I ended up doing (on my host machine):
# 0. ssh into my cloud instance
# 1. change to root
sudo su -
# 2. increase the limit
sysctl -w fs.nr_open=2010000
# 3. reload settings from /etc/sysctl.conf (note: this does NOT
#    persist the change made above) and exit the root shell
sysctl -p
exit
# 4. now you are back in the user shell; log out and back in
# from this shell as well for the changes to take effect
exit
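After logging back in, you can double-check that the kernel picked up the new ceiling by reading the value back (standard sysctl/procfs reads, nothing specific to my setup):
sysctl fs.nr_open          # should now report: fs.nr_open = 2010000
cat /proc/sys/fs/nr_open   # same value, straight from procfs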
Note that changes made this way don't persist across reboots. If someone knows how to make them permanent, please let me know and I'll edit my answer.
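A minimal sketch of the commonly used persistence mechanism, assuming a distro that reads /etc/sysctl.d/ at boot (the file name is my own choice, and I haven't verified this on the cloud image in question):
# as root: write the setting to a sysctl drop-in file read at boot
echo "fs.nr_open = 2010000" > /etc/sysctl.d/90-nr-open.conf
# apply all sysctl configuration files right away
sysctl --system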
I also needed to tell Docker that it's okay to open more than 1,048,576 files in the container. I was using a Docker Compose file to define my services. I simply added the following snippet to my docker-compose.yml:
services:
  proxy:
    image: haproxy
    # Begin snippet
    ulimits:
      nofile:
        soft: 2005000
        hard: 2005000
    # End snippet
    # ...
  # ...
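Once the service is up, it's worth confirming the limit actually made it into the container (this assumes the Compose v2 CLI and the service name proxy from the snippet above):
docker compose up -d proxy
docker compose exec proxy sh -c 'ulimit -n'   # should print 2005000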
Last but not least, make sure that the host instance has at least 2 GB of RAM, or you'll run out of memory when you try to run the proxy. If you want to actually handle 1 million concurrent connections with your proxy, you'll need much more memory, somewhere between 20 and 30 GB; feel free to test on your own. If you know how to calculate the limit, feel free to edit my answer or post a comment below!
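As a rough, assumption-laden sanity check of that range (not a measurement): HAProxy allocates up to two buffers per proxied connection, tune.bufsize defaults to 16 kB, and per-connection bookkeeping adds roughly another kilobyte, so a worst-case estimate looks like this:
# worst case: every connection actively transferring, i.e. two 16 kB
# buffers plus ~1 kB of state each (the 1 kB figure is an assumption)
echo $(( 1000000 * (2 * 16 + 1) / 1024 )) # 32226 MB, i.e. roughly 31 GB
Idle connections hold less than that, which is how real-world usage can land in the 20 to 30 GB band.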
Rationale on the number of open files
If we need 2,000,029 open files for 1 million connections (roughly two sockets per proxied connection, one on the client side and one on the server side, plus a few descriptors for listeners and internal use), then let's:
- set the proxy container limit slightly above that: 2,000,029 + 4,971 = 2,005,000
- set the host OS limit slightly above that: 2,005,000 + 5000 = 2,010,000
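For completeness, the connection count itself comes from the HAProxy configuration; the fragment below is only a sketch (the question doesn't show a haproxy.cfg), and HAProxy will compute the matching file-descriptor limit from maxconn on its own:
global
    # HAProxy derives the ulimit-n it needs from maxconn (roughly
    # two FDs per connection plus a few extra) and tries to raise
    # its own limit, bounded by the container's hard limit set above
    maxconn 1000000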