I'm running a small web application written in Python, running under uWSGI and served through nginx. One component generates ZIP files for download, which can occasionally be quite large (several GB). It often happens that the connection between nginx and uWSGI breaks and the request is aborted: nginx silently ignores the truncation, while the browser runs into a timeout because it keeps the connection open, expecting the rest of the response. The application sends a correct Content-Length header.
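For context, the download handler follows the usual WSGI streaming pattern; this is a minimal sketch rather than the actual code, and the path and filename are placeholders:

```python
import os

def application(environ, start_response):
    """Stream a pre-built ZIP file with an exact Content-Length."""
    path = "/tmp/export.zip"  # placeholder; the real app builds this elsewhere
    size = os.path.getsize(path)
    start_response("200 OK", [
        ("Content-Type", "application/zip"),
        ("Content-Length", str(size)),
        ("Content-Disposition", 'attachment; filename="export.zip"'),
    ])

    def stream(chunk_size=64 * 1024):
        # Yield the file in chunks so uWSGI writes it out incrementally
        # instead of holding several GB in memory.
        with open(path, "rb") as f:
            while chunk := f.read(chunk_size):
                yield chunk

    return stream()
```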
From the uWSGI log:

```
uwsgi_response_write_body_do(): Broken pipe [core/writer.c line 429] during GET [...]
OSError: write error
SIGPIPE: writing to a closed pipe/socket/fd (probably the client disconnected) on request [...] !!!
```
I've already set `socket-timeout`, `socket-send-timeout` and `socket-write-timeout` to 180 in the uWSGI configuration, to no avail. The nginx configuration includes `uwsgi_read_timeout 180s;` and `uwsgi_buffering off;`. The relevant excerpts are shown below.
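Here is how those settings sit in the two configurations, trimmed to the relevant lines (socket names and paths are placeholders, not my actual layout):

```ini
; uwsgi.ini (excerpt)
[uwsgi]
socket = /run/myapp/uwsgi.sock   ; placeholder path
socket-timeout = 180
socket-send-timeout = 180
socket-write-timeout = 180
```

```nginx
# nginx server block (excerpt)
location / {
    include uwsgi_params;
    uwsgi_pass unix:/run/myapp/uwsgi.sock;  # placeholder path
    uwsgi_read_timeout 180s;
    uwsgi_buffering off;
}
```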
The effect is mostly reproducible: it happens on most attempts, especially with large responses, but never at the same offset. Repeating the request over and over again will sometimes eventually lead to a complete download.