Regarding rejected requests (400), see my answer https://serverfault.com/a/1083820/488604.
One could also try to combine your approaches into a single filter, e.g. the following RE would catch every 400 and 403 response regardless of the URI:
```
failregex = ^<ADDR> \S+ \S+(?: \[\])? "[^"]*" 40[03]\s
```
But this assumes that your pages never generate 400/403 responses by themselves, otherwise you risk false positives.
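To sanity-check such a pattern outside of fail2ban, one can approximate it in Python. This is only a rough sketch: `<ADDR>` is replaced by a simple `\S+` stand-in, and the timestamp is assumed to be already reduced to empty brackets `[]` (fail2ban strips the detected date from the line before matching). The sample log lines are made up:

```python
import re

# Simplified stand-in for fail2ban's <ADDR> tag (assumption, not the real
# expansion, which also handles host names and IPv6 forms):
ADDR = r"(?P<addr>\S+)"

# The failregex from above, with the timestamp already removed to "[]":
failregex = re.compile(r'^' + ADDR + r' \S+ \S+(?: \[\])? "[^"]*" 40[03]\s')

# Made-up access-log lines in common/combined format:
hit  = '192.0.2.1 - - [] "GET /wp-login.php HTTP/1.1" 403 152 "-" "curl/7.68"'
miss = '192.0.2.1 - - [] "GET /index.html HTTP/1.1" 200 512 "-" "Mozilla"'

print(bool(failregex.search(hit)))   # True  - a 403 response matches
print(bool(failregex.search(miss)))  # False - a 200 response does not
```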
A somewhat stricter RE may look like this:
```
failregex = ^<ADDR> \S+ \S+(?: \[\])? (?:"[^"]*" 400|"[A-Z]+ /(?:[^/]+/)*[^/.]+\.(?:php|env)\s[^"]*" 40[34])\s
```
or something like this, with a fast prefilter (useful if you need several different failregex entries):
```
prefregex = ^<ADDR> \S+ \S+(?: \[\])? (?="[^"]*" 40[034]\s)<F-CONTENT>.+</F-CONTENT>$
failregex = ^"[^"]*" 400\s
            ^"[A-Z]+ /(?:[^/]+/)*[^/.]+\.(?:php|env)\s[^"]*" 40[34]\s
```
Both variants would match every 400 response, as well as 403 and 404 responses for URIs with `.php` and `.env` extensions (assuming your pages do not generate such URIs internally either, and that nobody legitimately wants to call forbidden or missing php/env pages).
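The prefilter variant can also be simulated in Python. Roughly speaking, fail2ban matches `prefregex` first and then applies each `failregex` only to the text captured by `<F-CONTENT>...</F-CONTENT>`; below that capture is approximated by a named group, `<ADDR>` is again a simplified stand-in, and the log lines are made up:

```python
import re

ADDR = r"(?P<addr>\S+)"  # simplified stand-in for fail2ban's <ADDR> tag

# <F-CONTENT>.+</F-CONTENT> approximated as a named capture group; the
# lookahead cheaply checks the status code before any failregex runs:
prefregex = re.compile(
    r'^' + ADDR + r' \S+ \S+(?: \[\])? (?="[^"]*" 40[034]\s)(?P<content>.+)$'
)
failregexes = [
    re.compile(r'^"[^"]*" 400\s'),
    re.compile(r'^"[A-Z]+ /(?:[^/]+/)*[^/.]+\.(?:php|env)\s[^"]*" 40[34]\s'),
]

def matches(line):
    m = prefregex.search(line)
    if not m:
        return False  # prefilter rejects non-40x lines early
    content = m.group('content')
    return any(fr.search(content) for fr in failregexes)

print(matches('203.0.113.5 - - [] "GET /admin/config.php HTTP/1.1" 404 153 "-" "-"'))  # True
print(matches('203.0.113.5 - - [] "GET /bad request" 400 150 "-" "-"'))                # True
print(matches('203.0.113.5 - - [] "GET /page.html HTTP/1.1" 404 153 "-" "-"'))         # False
```

Note that the second failregex requires a non-empty file name before the extension, so a request for a bare `/.env` would not match it as written.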
As for fail2ban's resource consumption: strictly speaking, it depends, but it will not be particularly problematic as long as you avoid monitoring the access log here, which is indeed not recommended.
See fail2ban :: wiki :: Best practice for more info (especially the paragraph about "parasitic log-traffic").
BTW, generating a 302 redirect for URIs starting with multiple slashes is strange, not to say it looks like a mistake, at least for URIs that do not exist and can never be served by the server, let alone for the arguments of POST requests etc. Moreover, I don't see where it would be needed.