I have a Node.js-driven site running in a Docker container, and a public-facing proxy site driven by Nginx that forwards traffic to the dockerized Node.js site. Studying the Nginx logs, I see a lot of automated probes for well-known vulnerable paths (exposed config files, admin panels, and so on):
GET /.env
GET /phpmyadmin/index.php
GET /owa/auth/logon.aspx
GET /+CSCOE+/logon.html
GET /ecp/Current/exporttool/microsoft.exchange.ediscovery.exporttool.application
GET /owa/auth/logon.aspx?url=https%3a%2f%2f1%2fecp%2f
GET /core/.env
GET /.vscode/sftp.json
GET /.git/config
GET /info.php
GET /config.json
etc.
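
For reference, the proxying itself is a completely standard reverse-proxy setup; a minimal sketch (the server name and upstream port below are placeholders, not my real values):

server {
    listen 80;
    server_name example.com;  # placeholder

    location / {
        # Forward everything to the dockerized Node.js app
        proxy_pass http://127.0.0.1:3000;  # assumed published container port
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}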
Currently all of those attempts are duly processed and return an HTTP 404 response. However, I'd rather not bother the dockerized site with all those bogus requests, so I have started adding a long list of location directives to the proxy site's Nginx config:
location = /phpmyadmin/index.php {
    return 404;
}
location = /.env {
    return 404;
}
But actually, isn't a proper 404 response too great an honor for them? Perhaps they deserve a more devious response, such as one that is never properly finished, or something else of that nature. It's also tiresome to keep the site config updated with new kinds of paths; regular expressions can shorten the list somewhat, but not by much. A consolidated sketch of what I mean is below.
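
Here is one consolidated variant I've been considering; the pattern list is abbreviated, and 444 is Nginx's non-standard status code that simply closes the connection without sending any response:

# In the http {} context: classify the normalized path ($uri has no query string)
map $uri $block_probe {
    default           0;
    ~*\.env$          1;   # /.env, /core/.env, ...
    ~*^/\.git/        1;   # /.git/config and friends
    ~*^/phpmyadmin/   1;
    ~*^/owa/          1;
    ~*\.php$          1;   # this site serves no PHP at all
}

server {
    # ... existing proxy config ...

    if ($block_probe) {
        return 444;   # drop the connection without any response
    }
}

That at least avoids handing them a well-formed response, though it's not quite the never-finishing tarpit I had in mind, and the pattern list still needs manual upkeep.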
What is considered the most appropriate way to handle these kinds of attacks?