Score:1

Nginx: how to select a location and/or an upstream based on the URI?


Problem: I have one or more domains that serve most URIs from a default upstream (a CMS) and a few URIs that require a different upstream (but still too many to list them comfortably, and too different to catch with a regexp).

If all the configuration is the same for the two upstreams (apart from the upstream itself), I think I have solved it in a good-enough way, as follows:

map $host$request_uri $select_upstream_with_or_without_cache {
  default http://upstream_def;

  ~^path1$            http://upstream_special;
  ~^path2$            http://upstream_special;
  ~^path3$            http://upstream_special;
  ~^path4$            http://upstream_special;
  # remember! Here the path is, beside the $host, wildly different even within the same domain
}

[...config...]

server {
    server_name  server-with-same-params.for-different-upstreams.example;
    [...config...]
    location / {
        proxy_pass            $select_upstream_with_or_without_cache;
        proxy_read_timeout    90s;
        proxy_connect_timeout 90s;
        proxy_send_timeout    90s;
        proxy_set_header      Host $host;
        proxy_set_header      X-Real-IP $remote_addr;
        proxy_set_header      X-Forwarded-Proto https;
        proxy_set_header      X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_hide_header      Etag;
        proxy_hide_header      Accept-encoding;
        proxy_hide_header      Via;
    }
}
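
Note that, since proxy_pass gets its target from a variable, nginx evaluates it at runtime: upstream_def and upstream_special must exist as upstream blocks (or a resolver must be configured for runtime lookups). A minimal sketch, with placeholder backend addresses:

upstream upstream_def {
  server 10.0.0.10:8080;    # placeholder address of the default CMS backend
}

upstream upstream_special {
  server 10.0.0.20:8080;    # placeholder address of the special backend
}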

What if, though, I need to pass different parameters to different upstreams? I didn't really find a clean solution. I found one using additional redirects (3xx), but those aren't transparent to the user and the helper paths show up in the browser, even if only for a short time. Any ideas?

What I want to achieve is something like this:

map $host$request_uri $select_upstream_with_or_without_cache {
  default go_to_location_upstream_def;

  ~^path1$            go_to_location_upstream_special;
  ~^path2$            go_to_location_upstream_special;
  ~^path3$            go_to_location_upstream_special;
  ~^path4$            go_to_location_upstream_special;
    # remember! Here the path is, beside the $host, wildly different even within the same domain
}

[...config...]


server {
    server_name  server-with-diff-params.for-different-upstreams.example;
    [...config...]
    
    # how to have two locations, with different configs, that are selected according to the
    # URIs (given that the URIs are very different and not easily caught by regexps)?
    
    # how to do something like the following ?
    location @upstream_def {
        proxy_pass            http://upstream_def;
        [configuration X]
    }

    location @upstream_special {
        proxy_pass            http://upstream_special;
        [configuration Y]
    }
}

gapsf: Use several server and/or location directives. Here are answers: https://www.digitalocean.com/community/tutorials/understanding-nginx-server-and-location-block-selection-algorithms
OP: @gapsf I knew that article, but it doesn't answer my question. I'd like to have a dynamic selector, not an enumeration (that is the obvious case).
Score:1

OK, I found a way; I am sure there may be others, possibly cleaner and more compact.

The gist of it is the following:

  • In an nginx server block it does not seem possible to pick a location according to the URI via a variable, and thus select the appropriate configuration for it.
  • What one can do is create internal nginx servers, for example on the loopback interface (which offers plenty of different IPs), each offering the wanted configuration for a group of URIs.
  • Then, from the main nginx server (the frontend nginx), one can use a map to select the backend (proxy_pass) according to the URI.
  • In this way the proper configuration for each URI is delegated to the additional internal servers. It is as if we add a layer of servers between the frontend nginx and the backend application, only to apply the right configuration.

Once one sees it, it is really "obvious", but how to combine nginx parts is not always immediately clear; it is pretty interesting, though.

I had to sanitize the following, so it may not work straight from copy and paste.

upstream configA {
  server 127.0.0.1:1081;
}

upstream configB {
  server 127.0.0.2:1081;
}

upstream configC {
  server 127.0.0.3:1081;
}
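
# The three "config" upstreams above point at the internal server blocks
# defined further down, each bound to its own loopback address: the address,
# not the name, decides which configuration a request picks up.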

map $host $auth_basic_off_if_host {
  default "dev";

  nginx-testing.example.systems off;
}
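
# The map above is used by the internal servers below: requests arriving with
# the public host name get auth_basic switched off (presumably because the
# frontend has already enforced it), everything else keeps the "dev" realm.
# Note that auth_basic accepts variables only in newer nginx versions.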

map $request_uri $select_backend_based_on_path {
  default http://configC;

  ~^/test/config/a/.* http://configA;
  ~^/test/config/b/.* http://configB;
}
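
# The map above does the actual URI -> configuration selection:
# /test/config/a/... goes to the server with configuration A,
# /test/config/b/... to B, everything else to C. The values are the upstream
# names defined above, so proxy_pass can use this variable without a resolver.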

server {
  listen       *:443 ssl;


  server_name  nginx-testing.example.systems;

  ssl_certificate           <path>;
  ssl_certificate_key       <path>;

  auth_basic                "dev";
  auth_basic_user_file      "/etc/nginx/htpasswd";
  index  index.html index.htm index.php;
  access_log            <appropriate-path-access> combined;
  error_log             <appropriate-path-errors>;


  location /test {
    proxy_pass            $select_backend_based_on_path;
    proxy_read_timeout    90s;
    proxy_connect_timeout 90s;
    proxy_send_timeout    90s;
    proxy_set_header      Host $host;
    proxy_set_header      X-Real-IP $remote_addr;
    proxy_set_header      X-Forwarded-Proto https;
    proxy_set_header      X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_hide_header      Etag;
    proxy_hide_header      Accept-encoding;
  }
}
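
# Internal "configuration" servers follow; each one applies its own settings
# and then proxies to the real backend (or, as configuration C does, answers
# directly). On Linux the whole 127.0.0.0/8 range is usable out of the box;
# other systems may need the extra loopback addresses configured.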

server {
  listen 127.0.0.1:1081;


  server_name           nginx-test-config-a.example.systems;
  auth_basic           "$auth_basic_off_if_host";
  auth_basic_user_file /etc/nginx/htpasswd;


  index  index.html index.htm index.php;
  access_log            <appropriate-path-access> combined;
  error_log             <appropriate-path-errors>;

  location / {
    proxy_pass            http://app-prototype;
    proxy_read_timeout    90s;
    proxy_connect_timeout 90s;
    proxy_send_timeout    90s;
    proxy_set_header      Host $host;
    proxy_set_header      X-Forwarded-Proto https;
    proxy_set_header      X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_hide_header      Etag;
    proxy_hide_header      Accept-encoding;
  }
  add_header X-test "config A" always;
}


server {
  listen 127.0.0.2:1081;


  server_name           nginx-test-config-b.example.systems;
  auth_basic           "$auth_basic_off_if_host";
  auth_basic_user_file /etc/nginx/htpasswd;


  index  index.html index.htm index.php;
  access_log            <appropriate-path-access> combined;
  error_log             <appropriate-path-errors>;

  location / {
    proxy_pass            http://app-prototype-b;
    proxy_read_timeout    90s;
    proxy_connect_timeout 90s;
    proxy_send_timeout    90s;
    proxy_set_header      Host $host;
    proxy_set_header      X-Forwarded-Proto https;
    proxy_set_header      X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_hide_header      Etag;
    proxy_hide_header      Accept-encoding;
  }
  add_header X-test "config B" always;
}

server {
  listen 127.0.0.3:1081;


  server_name           nginx-test-config-c.example.systems;
  auth_basic           "$auth_basic_off_if_host";
  auth_basic_user_file /etc/nginx/htpasswd;


  index  index.html index.htm index.php;
  access_log            <appropriate-path-access> combined;
  error_log             <appropriate-path-errors>;

  location / {
    index     index.html index.htm index.php;
    return 200 "config C";
  }
  add_header X-test "config C" always;
  add_header Content-Type "text/plain";
}
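
With this in place, a request to /test/config/a/... on the public server is mapped to 127.0.0.1:1081, picks up configuration A (its auth_basic handling and the X-test header) and is then proxied on to the application; /test/config/b/... gets configuration B, and every other /test URI ends up at configuration C.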