
How to scale out WSGI servers


If I had a web application with a Flask/Django backend, I could use a single host to run both the WSGI server and the nginx web server/reverse proxy. Nginx would handle incoming requests: serving static files, caching, SSL termination, etc. It would forward API requests over localhost to the WSGI server, which runs the backend Django API logic. However, it's not clear to me how I would scale this setup out.
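For concreteness, here is roughly what I mean by the single-host setup (the domain, paths, and port 8000 are just placeholders for my setup):

```nginx
# nginx and the WSGI server (e.g. gunicorn on 127.0.0.1:8000) on one host
server {
    listen 443 ssl;
    server_name example.com;                     # placeholder domain

    ssl_certificate     /etc/ssl/example.crt;    # placeholder paths
    ssl_certificate_key /etc/ssl/example.key;

    # serve static files directly from disk
    location /static/ {
        alias /srv/app/static/;
    }

    # forward everything else to the WSGI server over localhost
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```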

If I determined that we needed more backend servers for scaling and redundancy purposes, how would I configure the WSGI servers and nginx? Would I maintain a single nginx host to do load balancing/reverse proxying and have multiple backend WSGI servers?
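In other words, is the intended topology something like this sketch, with a single nginx host fanning out to several WSGI backends (the private IPs are made up)?

```nginx
# one nginx host load-balancing across several WSGI backend hosts
upstream wsgi_backends {
    server 10.0.1.10:8000;   # placeholder backend addresses
    server 10.0.1.11:8000;
    server 10.0.1.12:8000;
    # round-robin by default; least_conn is another option
}

server {
    listen 443 ssl;

    location / {
        proxy_pass http://wsgi_backends;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```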

If I decide to use an AWS load balancer (ALB/ELB) to avoid having a single point of failure, do I still need nginx? If not, how do I defend against slow-client DoS attacks without nginx's request buffering?

If I use AWS ALB/ELB with nginx, haven't I introduced another single point of failure back into the chain?
