Score:0

Help making a fail2ban filter

mx flag

I've already written some filters for fail2ban, but just simple things, like:

[Definition]
failregex = ^ .* "GET .*/wp-login.php
ignoreregex =

I don't use WordPress on my server, so this blocks a lot of malicious attempts. I have also created similar filters for phpmyadmin, wp-admin, wp-include, etc.

But I found weird things like this in my access.log:

167.172.145.56 - - [22/Sep/2021:06:44:50 -0700] "GET /wp-login.php HTTP/1.1" 403 9901 "http://cpanel.alebalweb-blog.com/wp-login.php" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:62.0) Gecko/20100101 Firefox/62.0"
167.172.145.56 - - [22/Sep/2021:06:44:50 -0700] "GET /wp-login.php HTTP/1.1" 403 9901 "http://mail.alebalweb-blog.com/wp-login.php" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:62.0) Gecko/20100101 Firefox/62.0"


61.135.15.175 - - [22/Sep/2021:05:45:24 -0700] "GET / HTTP/1.1" 200 26210 "http://webdisk.alebalweb-blog.com/" "Mozilla/5.0 (Linux; Android 10.0; MI 2 Build/O012) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4472.114 Mobile Safari/537.36"    
61.135.15.175 - - [22/Sep/2021:05:45:24 -0700] "GET / HTTP/1.1" 200 26210 "http://webmail.alebalweb-blog.com/" "Mozilla/5.0 (Linux; Android 10.0; MI 2 Build/O012) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4472.114 Mobile Safari/537.36"    
61.135.15.175 - - [22/Sep/2021:05:45:24 -0700] "GET / HTTP/1.1" 200 26210 "http://cpcalendars.alebalweb-blog.com/" "Mozilla/5.0 (Linux; Android 10.0; MI 2 Build/O012) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4472.114 Mobile Safari/537.36"

Those subdomains don't exist.

I tried to create a new filter, inspired by apache-badbots, but I'm not sure it's correct:

[Definition]

varioustoblock = cpanel\.|store\.|webdisk\.|autodiscover\.|app\.|cpcalendars\.|cpcontacts\.|webmail\.|mail\.|fulaifushi\.|surf11818\.|asg\.|owa\.|exchange\.

failregex = ^<HOST> -.*"(GET|POST).*HTTP.*".*(?:%(varioustoblock)s).*"$
ignoreregex =

datepattern = ^[^\[]

Especially for the dots (.): in the past I had problems with dots in fail2ban filters, and the solution was to remove them completely...

But in this case they can't be removed: I can't block everyone who has the word "mail" somewhere in a URL... I need to be sure to block only "mail." as a subdomain.
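To see why the escaped dot matters, here is a quick Python sketch (fail2ban uses Python's re engine; the referer URLs are made up for illustration):

```python
import re

# "\." matches a literal dot, while an unescaped "." matches ANY character.
# So "https?://mail\." only matches "mail." used as a subdomain.
escaped = re.compile(r'https?://mail\.')
loose = re.compile(r'mail.')  # unescaped dot: matches "mail" + any char

referer_bad = 'http://mail.alebalweb-blog.com/wp-login.php'
referer_good = 'http://alebalweb-blog.com/category/mail-and-spam'

assert escaped.search(referer_bad)       # matches the bogus subdomain
assert not escaped.search(referer_good)  # legitimate URL is left alone
assert loose.search(referer_good)        # '.' matches the '-' -> false positive
```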

I'd also like to create a single large filter that catches both non-existent subdomains and attempts to access WordPress or phpMyAdmin, but Python regex is really scary if you've never used it...

Can anyone help me?

(I also thought about removing *.alebalweb-blog.com from the DNS configuration, but I'm not sure it's a good idea, also because I use some subdomains.)

P.S. How worried should I be if someone tries to access subdomains that don't exist on my site?

ru flag
So, here's something to consider: there are a LOT of stale DNS records out there. I still get hits for a site that was deleted 6 years ago on one server — domain, DNS and everything — but spiders still try to hit it. You may want to simply set up a default 'site' in Apache or nginx for any unmatched Host header on your system that just returns a 403 or a 404. In any case, you should be less concerned that someone is hitting nonexistent sites and subdomains, and more concerned that you ONLY serve content for valid hosts on your system and just return 403 or 404 for invalid ones.
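As a sketch, a minimal nginx catch-all block for unmatched Host headers might look like this (assuming nginx; Apache has an equivalent default VirtualHost mechanism):

```nginx
# Hypothetical default server: answers any Host header that no other
# server block matches, and returns 404 for everything.
server {
    listen 80 default_server;
    server_name _;
    return 404;
}
```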
alebal avatar
mx flag
I would prefer to block them. For example, today these arrived: 3721, a-pistefto, zarxar, progylka, tuya, bnksb... which certainly have nothing to do with the current server or past servers; they seem to be looking for an open door to get in. And I checked some IPs — they are often the same ones looking for wp-login or wp-admin... But my filter needs refining, because today, for example, I blocked mail.ru, which, poor thing, has done nothing wrong...
ru flag
Here's another thing to consider: most of those are probably service probers, nothing you really need to be concerned about. My suggestion: just use a standard plain old Fail2Ban filter that watches for 404s and 403s, and extend the ban to an obscene duration instead of narrowing your filters. Case in point: I had sixty thousand hits today for random 5-character hostname strings against a subdomain that operates here. They're completely random, likely service scanners. Attempting to block them all is futile.
Score:0
co flag

A possible filter may look like this:

[Definition]

datepattern = ^\S+ \S+ \S+( \[{DATE}\])

errcode = (?!401)[45]\d\d
allowedsubdomains = www|mail|cpanel
wrongdomains = (?!(?:%(allowedsubdomains)s)?\.)(?:[^\."]+\.){2,}[^\."]+
failregex = ^<ADDR> \S+ \S+ "[A-Z]+ /[^"]*" (?:(?P<err>%(errcode)s)|\d+)(?(err)| \d+ "https?://%(wrongdomains)s")

This matches either any "bad" code specified by errcode, or — thanks to the conditional match — a wrong domain in the referer (e.g. in the case of code 200).

Where:

  • (?:(?P<err>%(errcode)s)|\d+) - matches either one of the specified error codes (like 403, storing it in the named group err) or any other code (like 200);
  • (?(err)...A...|...B...) - conditional expression: matches sub-expression A if err was matched above (A is empty here, so an error code alone is enough), otherwise matches sub-expression B (the wrong-subdomain check);
  • (?!(?:%(allowedsubdomains)s)?\.)(?:[^\."]+\.){2,}[^\."]+ - matches anything except strings starting with an allowed subdomain, thanks to the negative lookahead (?!...), while (?:[^\."]+\.){2,}[^\."]+ matches something like zzz.xxx.yyy.
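The mechanics above can be sketched in plain Python (fail2ban's regex engine is Python's re). Here <ADDR> is replaced by a simple group, the date is assumed to be already stripped by the datepattern, and the hostnames are made up:

```python
import re

# Reconstruct the filter's interpolated failregex by hand.
errcode = r'(?!401)[45]\d\d'
allowedsubdomains = r'www|mail|cpanel'
wrongdomains = r'(?!(?:%s)?\.)(?:[^\."]+\.){2,}[^\."]+' % allowedsubdomains
failregex = re.compile(
    r'^(?P<addr>\S+) \S+ \S+ "[A-Z]+ /[^"]*" '
    r'(?:(?P<err>%s)|\d+)(?(err)| \d+ "https?://%s")' % (errcode, wrongdomains)
)

hit_403 = '167.172.145.56 - - "GET /wp-login.php HTTP/1.1" 403 9901 "http://cpanel.example.com/wp-login.php" "Mozilla/5.0"'
hit_200 = '61.135.15.175 - - "GET / HTTP/1.1" 200 26210 "http://webdisk.example.com/" "Mozilla/5.0"'
ok_200  = '61.135.15.175 - - "GET / HTTP/1.1" 200 26210 "http://www.example.com/" "Mozilla/5.0"'
ok_401  = '10.0.0.1 - - "GET /private HTTP/1.1" 401 5 "-" "Mozilla/5.0"'

assert failregex.match(hit_403)     # bad status code: err branch fires, referer ignored
assert failregex.match(hit_200)     # code 200, but bogus subdomain in the referer
assert not failregex.match(ok_200)  # allowed subdomain -> no ban
assert not failregex.match(ok_401)  # 401 excluded by the (?!401) lookahead
```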

But it would be better to restrict domains on the web-server side and reject any request for an invalid domain there.

In this case the filter could be something like this:

[Definition]

datepattern = ^\S+ \S+ \S+( \[{DATE}\])

errcode = (?!401)[45]\d\d
failregex = ^<ADDR> \S+ \S+ "[A-Z]+ /[^"]*" %(errcode)s\b
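The same simplified filter, sketched in plain Python with made-up log lines (date already stripped, <ADDR> replaced by a simple group):

```python
import re

# Ban on any 4xx/5xx status except 401.
errcode = r'(?!401)[45]\d\d'
failregex = re.compile(r'^(?P<addr>\S+) \S+ \S+ "[A-Z]+ /[^"]*" %s\b' % errcode)

assert failregex.match('167.172.145.56 - - "GET /wp-login.php HTTP/1.1" 403 9901 "-" "UA"')
assert not failregex.match('10.0.0.1 - - "GET /private HTTP/1.1" 401 5 "-" "UA"')
assert not failregex.match('61.135.15.175 - - "GET / HTTP/1.1" 200 26210 "-" "UA"')
```

Real filters are best checked with fail2ban's own test tool, e.g. fail2ban-regex /var/log/apache2/access.log /etc/fail2ban/filter.d/myfilter.conf, which reports how many log lines each failregex matched.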
alebal avatar
mx flag
Could you explain them to me?
sebres avatar
co flag
Sure, I updated my answer
alebal avatar
mx flag
Thanks a lot, now let's take a look ... and try to understand ... thanks