How can I fix this UDP error when using nginx?

I have a problem for which I have not found a solution anywhere.

The problem is that we have deployed proxy servers in front of our game services: haproxy handles the TCP traffic and nginx handles the UDP traffic.

Everything works fine at first, meaning players can connect and play, but at random moments our players are dropped from the servers with a timeout.

The nginx error log shows this kind of error:

2021/11/10 07:14:14 [alert] 42692#42692: *183 shared connection is busy while proxying and sending to client, udp client: xx.xxx.xxx.xx, server: x.x.x.x:xxxxx, upstream: "xx.xx.xxx.xx:xxxxx", bytes from/to client:4992068/6665500, bytes from/to upstream:6666800/4992068

2021/11/10 07:14:14 [alert] 42692#42692: *179 shared connection is busy while proxying and sending to client, udp client: xx.xxx.xxx.xxx, server: x.x.x.x:xxxxx, upstream: "xx.xx.xxx.xx:xxxxx", bytes from/to client:5912472/8583792, bytes from/to upstream:8585092/5912472

2021/11/10 07:14:14 [alert] 42692#42692: *205 shared connection is busy while proxying and sending to client, udp client: xx.xx.xx.xxx, server: x.x.x.x:xxxxx, upstream: "xx.xx.xxx.xx:xxxxx", bytes from/to client:958222/3056834, bytes from/to upstream:3058134/958222

2021/11/10 07:14:14 [alert] 42692#42692: *207 shared connection is busy while proxying and sending to client, udp client: xx.xxx.xxx.xx, server: x.x.x.x:xxxxx, upstream: "xx.xx.xxx.xx:xxxxx", bytes from/to client:692866/3106114, bytes from/to upstream:3107414/692866

2021/11/10 17:01:59 [alert] 42692#42692: *1103 shared connection is busy while proxying and sending to client, udp client: xx.xxx.xxx.xx, server: x.x.x.x:xxxxx, upstream: "xx.xx.xxx.xx:xxxxx", bytes from/to client:44160/1230780, bytes from/to upstream:1232080/44160

2021/11/10 17:01:59 [alert] 42692#42692: *1111 shared connection is busy while proxying and sending to client, udp client: xx.xxx.xxx.xxx, server: x.x.x.x:xxxxx, upstream: "xx.xx.xxx.xx:xxxxx", bytes from/to client:104003/2480683, bytes from/to upstream:2480693/104003

The haproxy config we currently use:

global
    log /dev/log    local0
    log /dev/log    local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

defaults
    mode tcp
    timeout connect 30000ms
    timeout client 30000ms
    timeout server 30000ms

frontend proxy-in
    mode tcp
    bind *:45888
    default_backend proxy-out

backend proxy-out
    mode tcp
    server s1 main_server_ip:45888

The nginx config we currently use:

user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 50000;
    # multi_accept on;
}

http {
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
}

stream {
    upstream backend {
        server main_server_ip:45888;
    }
    server {
        listen 45888 udp reuseport;
        proxy_pass backend;
    }
}
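
For reference, the stream proxy module also has proxy_timeout, proxy_responses and proxy_requests directives that control how nginx handles UDP "sessions". Below is only a sketch of where they would sit in the server block, with placeholder values I have not tested, so it is not a confirmed fix for the alert above:

server {
    listen 45888 udp reuseport;
    proxy_pass backend;

    # Placeholder values, not tested:
    proxy_timeout 10m;      # close the session if idle this long between datagrams
    proxy_responses 1;      # datagrams expected from the upstream per client datagram
    proxy_requests 0;       # 0 = no limit on client datagrams per session
}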

Thanks for any help!
