Score:1

uWSGI: block incoming connections when all threads are busy


I have a simple UWSGI app put behind a LB with the following .ini config

[uwsgi]
socket = 0.0.0.0:5071
chdir = src/
wsgi-file = uwsgi.py
processes = 2
threads = 1
protocol = http
plugins = python
exit-on-reload = false
master = true
# Cleanup of temp files
vacuum = true

When all 2x1 threads are busy, the application keeps accepting incoming connections, queueing them until a thread frees up.

This is unwanted behavior in my case: I would like uWSGI to return a 5xx status code instead, so that I do not oversaturate the resources of a single deployment.

Client testing code

Attaching the test client code for the UWSGI application

import threading

import requests

proxy = {
    'http': 'http://localhost:5071'
}

def threaded(fn):
    # Run the decorated function in a background thread
    def wrapper(*args, **kwargs):
        threading.Thread(target=fn, args=args, kwargs=kwargs).start()
    return wrapper

@threaded
def f():
    print('Sending request')
    response = requests.get('http://dummy.site', proxies=proxy)
    print(str(response.status_code) + response.text)

for i in range(5):
    f()

Test (1)

Adding `listen = 2` to the .ini and firing 3 requests simultaneously just prints:

*** uWSGI listen queue of socket "0.0.0.0:5071" (fd: 3) full !!! (3/2) ***

while the third connection still seems to be accepted, queued, and executed later instead of a 5xx error being returned.

Test (2)

Adding `listen = 0` to the .ini and firing 5 requests simultaneously executes two requests at a time. The "queue full" message no longer appears, yet the extra requests are still queued somewhere and executed as threads free up.
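One likely explanation (an assumption, not something stated in the question) is the kernel's handling of the TCP accept backlog: on Linux, a backlog of 0 is typically rounded up to at least one pending connection, so a client can complete its TCP handshake and wait in the kernel's accept queue even though the application never called `accept()`. A minimal sketch with plain sockets (not uWSGI, and assuming Linux behavior):

```python
import socket

# Server socket with a backlog of 0; on Linux the kernel still
# allows at least one connection to finish its TCP handshake
# before the application ever calls accept().
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('127.0.0.1', 0))
server.listen(0)
port = server.getsockname()[1]

# The client connect() succeeds even though accept() is never
# called: the connection simply sits in the kernel's accept queue.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.settimeout(2)
client.connect(('127.0.0.1', port))
print('connected without accept()')

client.close()
server.close()
```

This would explain why requests still "hang somewhere" with `listen = 0`: they are parked in the kernel, not in uWSGI.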

How can I block incoming connections to the UWSGI application when all threads are busy?
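One application-level workaround (not part of the original question, and the limit value is a hypothetical matching `processes` x `threads`) is to reject excess requests from the WSGI app itself with a non-blocking semaphore, returning 503 instead of queueing:

```python
import threading

MAX_CONCURRENT = 2  # assumed to match processes x threads

class LimitConcurrency:
    """WSGI middleware that returns 503 Service Unavailable
    instead of queueing when more than `limit` requests are
    in flight at once."""

    def __init__(self, app, limit=MAX_CONCURRENT):
        self.app = app
        self.slots = threading.Semaphore(limit)

    def __call__(self, environ, start_response):
        # Non-blocking acquire: fail fast instead of waiting
        if not self.slots.acquire(blocking=False):
            start_response('503 Service Unavailable',
                           [('Content-Type', 'text/plain')])
            return [b'Server busy']
        try:
            return self.app(environ, start_response)
        finally:
            # Note: releases when __call__ returns, which assumes the
            # wrapped app returns a fully materialized response body.
            self.slots.release()
```

This does not stop the kernel from accepting the TCP connection, but it does give the load balancer a 5xx it can react to.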

anx
Your configuration shows a different port and listen queue than the logged message. Are you running two instances and checking a different one than you meant to? Also, *almost* every use case works better with at least some small listen backlog; when you're done, check whether your performance metrics really match your expectations.
Constantin
@anx Just a mistake, as I switched ports while writing the question. Regarding the backlog, which option(s) are you particularly referring to?
anx
Is the client you are using to test this maybe **retrying** *after* uwsgi turns down the connection attempt once? Maybe your configuration worked, but your test method did not?
Constantin
@anx It is not retrying at all; it is a simple `requests.get(url, proxies=proxy)` call in Python.
Score:0

This is a truly bizarre request, but if you really want to do this, you can try reducing the listen queue to zero, i.e. `--listen 0`. I haven't tested this and don't know whether zero is even considered a valid value. This is a setting that is normally increased as a site gains traffic, not decreased.

Constantin
With `listen = 0` all that happens is that the output of the queue being full does not show anymore. It still seems that "somewhere" it is hanging until the threads are freed. I have attached my testing client in the question. Thank you!
Michael Hampton
@Constantin It might be that this is just not possible. I could not find anybody else who even tried to do this.
Constantin
Thanks for the feedback. This is actually weird; I wonder why a web server shouldn't be able to limit the number of connections it accepts, especially when these applications are put behind infrastructure built around load balancers, auto-scaling groups, and calculated pre-allocated resources.