
Multiple ports on the same server behind the same backend


I’m using HAProxy with my Ceph cluster and I’ve created additional gateways on one server, each listening on a different port. I’ve added them to the backend configuration with their port numbers, but HAProxy seems to ignore them. What am I missing in the configuration?

This is on the server where the gateways are running:

netstat -an|grep :808
tcp        0      0 10.118.199.1:8084       0.0.0.0:*               LISTEN
tcp        0      0 10.118.199.1:8080       0.0.0.0:*               LISTEN
tcp        0      0 10.118.199.1:8081       0.0.0.0:*               LISTEN
tcp        0      0 10.118.199.1:8082       0.0.0.0:*               LISTEN
tcp        0      0 10.118.199.1:8083       0.0.0.0:*               LISTEN
tcp        0      0 10.118.199.1:8080       10.100.112.111:56906    TIME_WAIT


This is my haproxy configuration:

global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     100000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    #option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

listen stats 0.0.0.0:9000
    mode http
    balance
    timeout client 5000
    timeout connect 4000
    timeout server 30000
    stats uri /haproxy_stats
    stats realm HAProxy\ Statistics
    stats auth pw:pw
    stats admin if TRUE

frontend  http *:8080
    mode http
    default_backend             rgw_http

frontend https
    bind *:443 ssl crt /opt/certificate/wildcard.comp.local/PEM/wildcard_comp_local.pem crt /opt/certificate/wildcard.comp.local/PEM/wildcard.compdev.io.pem
    #http-request set-header X-Forwarded-Proto https
    mode http
    default_backend rgw_http
    log global

backend rgw_http
    balance roundrobin
    mode http
    server server01-1 10.118.199.1:8080 check inter 3s
    server server02 10.118.199.2:8080 check inter 3s
    server server03 10.118.199.3:8080 check inter 3s
    server server01-2 10.118.199.1:8081 check inter 3s
    server server01-3 10.118.199.1:8082 check inter 3s
    server server01-4 10.118.199.1:8083 check inter 3s
    server server01-5 10.118.199.1:8084 check inter 3s

This is the stat page:

[Stats page screenshot]
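
The same per-server status can also be pulled in text form over the stats socket declared in the global section (a sketch, assuming socat is installed and the socket allows at least operator-level access):

# Dump the state of every server in the rgw_http backend via the runtime socket
echo "show servers state rgw_http" | socat stdio /var/lib/haproxy/stats

# The raw CSV behind the stats web page; the status column shows UP/DOWN per server
echo "show stat" | socat stdio /var/lib/haproxy/stats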

What am I missing or misunderstanding?

Looks correct; that's the way to do it. Check your firewalls, etc., and make sure HAProxy can actually reach those ports, because it looks like the connections are being rejected. You can verify with a network sniffer, like `tcpdump`, that HAProxy really connects to those ports.
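
For example, a capture like this on the HAProxy host would show whether the health checks and traffic to the extra ports are answered, reset, or silently dropped (a sketch; eth0 is a placeholder for the interface facing the backend network):

# Watch HAProxy's connections to the gateway ports on 10.118.199.1
tcpdump -ni eth0 host 10.118.199.1 and portrange 8080-8084

# A TCP RST in the output points at something rejecting the connection;
# no reply at all usually means a firewall is dropping the packets.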

It turns out it's not enough to run `iptables -L` several times to decide whether the OS firewall is in the way. Even if it returns an empty table, the firewall service can still be running, and then I'm f..d ....
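
In case it helps someone else: on a host that uses firewalld/nftables, the active rules may not show up in the legacy `iptables -L` listing at all, so it's worth checking the service and the nftables ruleset directly (a sketch, assuming a systemd-based distribution):

# Is a host firewall service active at all?
systemctl status firewalld nftables

# On firewalld hosts, list what the active zone actually allows
firewall-cmd --list-all

# nftables rules do not appear in the legacy iptables output
nft list ruleset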
