
Setting up a proxy pass for rest api server not working


I am trying to set up a proxy pass to a REST API that listens on port 30422. Basically, I want sub-domain.example.com to point to port 30422 on the same host. The web server is nginx, and I have already linked a domain to it.

What I have tried so far: I added the new subdomain on Cloudflare (I use Cloudflare as protection), created a config file for the subdomain in the sites-available folder, and symlinked it into the sites-enabled folder.
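Roughly, enabling the new site looked like this (the exact file name and reload command may have differed slightly; this is just the general idea):

sudo ln -s /etc/nginx/sites-available/sub-domain.example.com /etc/nginx/sites-enabled/sub-domain.example.com
sudo nginx -t                  # check the configuration for syntax errors
sudo systemctl reload nginx    # reload nginx so it picks up the new server block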

This was my first attempt, which didn't work:

server {
    listen 80;  # Port on which Nginx will listen for incoming requests
    server_name sub-domain.example.com;  # Your domain name or server IP

    location / {
        proxy_pass http://example.com:30422;  # Address of your REST API server
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Then I tried to curl the subdomain from the VPS itself, and that didn't work. Then I curled the actual VPS IP with port 30422 directly, and that returned the expected response from the REST API, which is hosted on the same VPS.
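For clarity, those two tests were roughly the following (domain and IP substituted):

curl -v http://sub-domain.example.com/   # goes through Cloudflare and the nginx proxy - did not work
curl -v http://x.x.x.x:30422/            # hits the REST API directly - returned the expected response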

So instead I updated my nginx config to use the VPS IP and port as the proxy_pass target:

server {
    listen 80;  # Port on which Nginx will listen for incoming requests
    server_name sub-domain.example.com;  # Your domain name or server IP

    location / {
        proxy_pass http://x.x.x.x:30422;  # Address of your REST API server
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

That, in turn, didn't work either.

I also tried this configuration, but to no avail.

server {
    listen 80;
    server_name sub-domain.example.com;
    location / {
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:30422;
        proxy_redirect off;
    }
}

This is my first attempt at doing this, and I am just trying to learn. I have looked for answers but none of the solutions I tried worked.

Edit: I even tried disabling "Always Use HTTPS" in Cloudflare and then going back to using proxy_set_header. It still doesn't work.
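As far as I understand, a request like the one below should bypass Cloudflare completely and hit nginx on the VPS directly, which would show whether the problem is on the Cloudflare side or in my nginx config (origin IP substituted):

curl -v --resolve sub-domain.example.com:80:x.x.x.x http://sub-domain.example.com/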

This is the output I get when running curl with verbose output against sub-domain.example.com:

*   Trying 104.21.93.82:80...
* TCP_NODELAY set
* Connected to sub-domain.example.com (104.21.93.82) port 80 (#0)
> GET / HTTP/1.1
> Host: sub-domain.example.com
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Date: Mon, 12 Jun 2023 20:40:35 GMT
< Content-Type: application/octet-stream
< Content-Length: 0
< Connection: keep-alive
< CF-Cache-Status: DYNAMIC
< Report-To: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v3?s=bO266eNsXPsKOXRa3nqoNi3wRJPkkYH8a35%2Bvcaq%2B43iQtBIqIk0Lgbf0R8%2BOpCtE6Xc1jYWskuZo6f4XWKa9GRpcCzP5E9NdC6F6kSs9HFLca2uuctXESwAdq%2BO%2FWT9t1EAyuB%2FhRmdGu%2BdWA%3D%3D"}],"group":"cf-nel","max_age":604800}
< NEL: {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
< Server: cloudflare
< CF-RAY: 7d64e8e85b75414c-LHR
< alt-svc: h3=":443"; ma=86400
<
* Connection #0 to host sub-domain.example.com left intact

Edit 2 (requested by @HBruijn):

sudo netstat -tnlp & sudo ss -tnlp

[1] 48297
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      29439/nginx: master
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      660/systemd-resolve
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1377/sshd: /usr/sbi
tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      29439/nginx: master
tcp6       0      0 :::80                   :::*                    LISTEN      29439/nginx: master
tcp6       0      0 :::30422                :::*                    LISTEN      13143/node
tcp6       0      0 :::22                   :::*                    LISTEN      1377/sshd: /usr/sbi
tcp6       0      0 :::443                  :::*                    LISTEN      29439/nginx: master
State      Recv-Q     Send-Q           Local Address:Port            Peer Address:Port     Process
LISTEN     0          511                    0.0.0.0:80                   0.0.0.0:*         users:(("nginx",pid=29441,fd=6),("nginx",pid=29440,fd=6),("nginx",pid=29439,fd=6))
LISTEN     0          4096             127.0.0.53%lo:53                   0.0.0.0:*         users:(("systemd-resolve",pid=660,fd=13))
LISTEN     0          128                    0.0.0.0:22                   0.0.0.0:*         users:(("sshd",pid=1377,fd=3))
LISTEN     0          511                    0.0.0.0:443                  0.0.0.0:*         users:(("nginx",pid=29441,fd=8),("nginx",pid=29440,fd=8),("nginx",pid=29439,fd=8))
LISTEN     0          511                       [::]:80                      [::]:*         users:(("nginx",pid=29441,fd=7),("nginx",pid=29440,fd=7),("nginx",pid=29439,fd=7))
LISTEN     0          511                          *:30422                      *:*         users:(("node",pid=13143,fd=19))
LISTEN     0          128                       [::]:22                      [::]:*         users:(("sshd",pid=1377,fd=4))
LISTEN     0          511                       [::]:443                     [::]:*         users:(("nginx",pid=29441,fd=9),("nginx",pid=29440,fd=9),("nginx",pid=29439,fd=9))

And this is the output of curl -vv against localhost and against the VPS's IP:

curl -vv http://localhost:30422
*   Trying 127.0.0.1:30422...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 30422 (#0)
> GET / HTTP/1.1
> Host: localhost:30422
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 401 Unauthorized
< X-Powered-By: Express
< Content-Type: application/json; charset=utf-8
< Content-Length: 24
< ETag: W/"18-XPDV80vbMk4yY1/PADG4jYM4rSI"
< Date: Tue, 13 Jun 2023 18:37:24 GMT
< Connection: keep-alive
< Keep-Alive: timeout=5
<
* Connection #0 to host localhost left intact
{"error":"Unauthorized"}
curl -vv http://81.xx.xxx.xx:30422
*   Trying 81.xx.xxx.xx:30422...
* TCP_NODELAY set
* Connected to 81.xx.xxx.xx (81.xx.xxx.xx) port 30422 (#0)
> GET / HTTP/1.1
> Host: 81.xx.xxx.xx:30422
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 401 Unauthorized
< X-Powered-By: Express
< Content-Type: application/json; charset=utf-8
< Content-Length: 24
< ETag: W/"18-XPDV80vbMk4yY1/PADG4jYM4rSI"
< Date: Tue, 13 Jun 2023 18:38:20 GMT
< Connection: keep-alive
< Keep-Alive: timeout=5
<
* Connection #0 to host 81.xx.xxx.xx left intact
{"error":"Unauthorized"}
HBruijn:
When `proxy_pass http://127.0.0.1:30422` fails, it suggests that your API is NOT running on port 30422 of the same system where you run nginx. Check whether your API is actually up and running and confirm the correct port (use for example `sudo netstat -tnlp` and `sudo ss -tnlp`); test whether it responds to HTTP requests with `curl -vv http://hostname:30422` and/or `curl -vv http://localhost:30422`.
infamous hvher:
@HBruijn It is running; I have it running in a screen instance. The API is definitely up, because if I curl the VPS IP with the port it returns the expected response. The same works for localhost with the port, as well as 127.0.0.1 with the port. The only thing that doesn't work right now is the subdomain sub-domain.example.com.
infamous hvher:
I also edited the post and added the curl -vv results as well as the sudo netstat results.