Good day,

I've seen a lot of similar questions, but nothing that matches my situation. I'm not sure if this is the right or best place to ask.

The issue: I have a Python script that uses Selenium to request a page from a website. Despite everything I've tried, I can't work out where the problem lies.
Here's the specific situation:
I have two DigitalOcean droplets running in the same region. DO1 runs my Python script, which uses Selenium's `.get(URL)` to fetch a resource. DO2 runs my SOCKS5 proxy server, set up with `ssh -f -N -D 0.0.0.0:1080 localhost`.
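For reference, this is roughly how the proxy is wired into Selenium on DO1 (a minimal sketch, assuming Chrome; the proxy address is a placeholder for DO2's IP, not my real one):

```python
from selenium import webdriver

# Placeholder for DO2's address; in reality this points at the ssh -D listener
PROXY = "socks5://10.0.0.2:1080"

options = webdriver.ChromeOptions()
options.add_argument(f"--proxy-server={PROXY}")  # route all browser traffic via SOCKS5
options.add_argument("--headless=new")

driver = webdriver.Chrome(options=options)
driver.get("https://www.google.com")
print(driver.title)
driver.quit()
```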
Now, if I run the following:
- DO1 requests https://www.google.com with no proxy: no issues, ~0.8 s per request
- DO1 requests https://www.google.com through the proxy: no issues, ~1.1 s per request
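For completeness, this is roughly how I measure those timings (a sketch; `timed` wraps whichever client is under test, and the stand-in lambda here just echoes the URL so the snippet runs anywhere, where the real script passes `driver.get`):

```python
import time

def timed(fetch, url):
    """Return (result, elapsed seconds) for a single request."""
    start = time.monotonic()
    result = fetch(url)
    elapsed = time.monotonic() - start
    return result, elapsed

# Stand-in fetch for illustration; in the real script this is driver.get
result, elapsed = timed(lambda u: u, "https://www.google.com")
print(f"{elapsed:.3f}s")
```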
Now here's the issue.
When I use DO1 to request another website (https://mywebsite2.ru/) through the proxy, it takes ~3 minutes to respond. It does respond eventually, but it's extremely slow, as if it's being throttled. And it's a brand-new droplet.
For every request, the SSH proxy logs the following message three times: `channel X: open failed: connect failed: Connection timed out`
I've tried creating a proxy on a new droplet in a different region: same effect.
If I use DO1 to request the site directly, without the proxy, it's the same issue: ~3 minutes to respond.
Now, I figured the IPs themselves might be flagged. But if I use DO1 to curl the same website, I get the results right away, and the same goes for curl through the proxy.
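The curl-through-proxy test can be reproduced from Python as well (a sketch, assuming `requests` with SOCKS support installed via `pip install requests[socks]`; `socks5h://` resolves DNS on the proxy side, like the browser does, and the proxy address and User-Agent string are placeholders):

```python
import requests

def curl_like_get(url, proxy="socks5h://10.0.0.2:1080"):
    """Mimic the curl test: SOCKS5 proxy with remote DNS and a minimal curl-style UA."""
    proxies = {"http": proxy, "https": proxy}
    headers = {"User-Agent": "curl/8.5.0"}  # placeholder curl version
    return requests.get(url, proxies=proxies, headers=headers, timeout=30)

resp = curl_like_get("https://mywebsite2.ru/")
print(resp.status_code, len(resp.content))
```

If this responds quickly while Selenium through the same proxy crawls, that points at the browser side rather than the network path.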
Selenium attaches full browser headers to its requests; curl, by default, sends only a minimal User-Agent. I've also tried swapping those headers around.
So I'm not sure how to proceed: (1) I don't know what might be causing the slowdown in the Python script, and (2) I don't know what else I can try or explore to track down the issue.
Hoping someone can point me in the right direction.