Score:1

Transparent HTTPS proxy with squid using SNI

sm flag

Final update: I ended up using nginx, as Squid proved difficult to work with; see the last update at the end for more detail.

What I'm trying to do is set up a transparent HTTPS proxy with Squid using SNI (without decrypting the traffic), but it is not working.

I don't know what I'm doing wrong; I would appreciate your help.

I've tried these solutions: A, B

Events that happened during logging:

HTTP call

curl -v http://example.com
*   Trying 127.0.0.1:80...
* Connected to example.com (127.0.0.1) port 80 (#0)
> GET / HTTP/1.1
> Host: example.com
> User-Agent: curl/7.82.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Age: 512560
< Cache-Control: max-age=604800
< Content-Type: text/html; charset=UTF-8
< Date: Sun, 11 Jun 2023 15:28:56 GMT
< ETag: "3147526947+ident"
< Expires: Sun, 18 Jun 2023 15:28:56 GMT
< Last-Modified: Thu, 17 Oct 2019 07:18:26 GMT
< Server: ECS (bsa/EB17)
< Vary: Accept-Encoding
< X-Cache: HIT
< Content-Length: 1256
< X-Cache: MISS from eb91c8aa314b
< X-Cache-Lookup: MISS from eb91c8aa314b:3128
< Via: 1.1 eb91c8aa314b (squid/5.7)
< Connection: keep-alive

HTTPS call

curl -v https://example.com
*   Trying 127.0.0.1:443...
* Connected to example.com (127.0.0.1) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
*  CAfile: /etc/pki/tls/certs/ca-bundle.crt
*  CApath: none
* TLSv1.0 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):

For HTTPS it gets stuck at the Client Hello.

squid.conf:

acl localnet src 0.0.0.1-0.255.255.255  # RFC 1122 "this" network (LAN)
acl localnet src 10.0.0.0/8     # RFC 1918 local private network (LAN)
acl localnet src 100.64.0.0/10      # RFC 6598 shared address space (CGN)
acl localnet src 169.254.0.0/16     # RFC 3927 link-local (directly plugged) machines
acl localnet src 172.16.0.0/12      # RFC 1918 local private network (LAN)
acl localnet src 192.168.0.0/16     # RFC 1918 local private network (LAN)
acl localnet src fc00::/7           # RFC 4193 local private network range
acl localnet src fe80::/10          # RFC 4291 link-local (directly plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80      # http
acl Safe_ports port 21      # ftp
acl Safe_ports port 443     # https
acl Safe_ports port 70      # gopher
acl Safe_ports port 210     # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280     # http-mgmt
acl Safe_ports port 488     # gss-http
acl Safe_ports port 591     # filemaker
acl Safe_ports port 777     # multiling http

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager

include /etc/squid/conf.d/*.conf

# http_access allow localhost
# http_access deny all
# http_port 3128

coredump_dir /var/spool/squid
refresh_pattern ^ftp:       1440    20% 10080
refresh_pattern ^gopher:    1440    0%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .       0   20% 4320


# ---------------------- Added configs ----------------------

http_access allow all

# always_direct allow all

# Squid kept crashing; after checking the logs and searching, I found the following bug report,
# whose first comment suggests some workarounds, one of which is setting a value for max_filedescriptors:
# https://bugs.launchpad.net/ubuntu-docker-images/+bug/1978272 
max_filedescriptors 1048576

# transparent proxy for http 
http_port 3128 accel vhost allow-direct

# transparent proxy for https 
acl step1 at_step SslBump1
# acl step2 at_step SslBump2
# acl step3 at_step SslBump3

ssl_bump peek step1 
ssl_bump splice all 
# ssl_bump bump 


https_port 3129 intercept ssl-bump cert=/etc/squid/squid.pem
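For reference, an `intercept` port expects connections that were redirected by the local firewall, so Squid can recover the pre-NAT destination from the kernel's NAT table. A sketch of the kind of redirect rules usually paired with it (interface name and ports are assumptions; adjust to your network):

```shell
# Sketch: redirect forwarded client traffic into Squid's intercept ports.
# Assumes Squid runs on the gateway and clients arrive on eth0.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80  -j REDIRECT --to-port 3128
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT --to-port 3129
```

Squid looks up the original destination with getsockopt(SO_ORIGINAL_DST); without a rule like this creating the NAT entry, that lookup fails.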

access.log:

1686498023.786    671 172.17.0.1 TCP_MISS/304 436 GET http://example.com/ - HIER_DIRECT/93.184.216.34 -
1686498025.905      0 172.17.0.1 NONE_NONE/000 0 - error:accept-client-connection - HIER_NONE/- -

cache.log:

2023/06/11 15:40:17 kid1| Set Current Directory to /var/spool/squid
2023/06/11 15:40:17 kid1| Starting Squid Cache version 5.7 for x86_64-pc-linux-gnu...
2023/06/11 15:40:17 kid1| Service Name: squid
2023/06/11 15:40:17 kid1| Process ID 9
2023/06/11 15:40:17 kid1| Process Roles: worker
2023/06/11 15:40:17 kid1| With 1048576 file descriptors available
2023/06/11 15:40:17 kid1| Initializing IP Cache...
2023/06/11 15:40:17 kid1| DNS Socket created at 0.0.0.0, FD 8
2023/06/11 15:40:17 kid1| Adding nameserver 1.1.1.1 from /etc/resolv.conf
2023/06/11 15:40:17 kid1| Adding nameserver 8.8.8.8 from /etc/resolv.conf
2023/06/11 15:40:17 kid1| Adding nameserver 9.9.9.9 from /etc/resolv.conf
2023/06/11 15:40:17 kid1| Adding domain . from /etc/resolv.conf
2023/06/11 15:40:17 kid1| helperOpenServers: Starting 5/32 'security_file_certgen' processes
2023/06/11 15:40:17 kid1| WARNING: no_suid: setuid(0): (1) Operation not permitted
2023/06/11 15:40:17 kid1| WARNING: no_suid: setuid(0): (1) Operation not permitted
2023/06/11 15:40:17 kid1| WARNING: no_suid: setuid(0): (1) Operation not permitted
2023/06/11 15:40:17 kid1| WARNING: no_suid: setuid(0): (1) Operation not permitted
2023/06/11 15:40:17 kid1| WARNING: no_suid: setuid(0): (1) Operation not permitted
2023/06/11 15:40:17 kid1| Logfile: opening log daemon:/var/log/squid/access.log
2023/06/11 15:40:17 kid1| Logfile Daemon: opening log /var/log/squid/access.log
2023/06/11 15:40:17 kid1| WARNING: no_suid: setuid(0): (1) Operation not permitted
2023/06/11 15:40:17 kid1| Local cache digest enabled; rebuild/rewrite every 3600/3600 sec
2023/06/11 15:40:17 kid1| Store logging disabled
2023/06/11 15:40:17 kid1| Swap maxSize 0 + 262144 KB, estimated 20164 objects
2023/06/11 15:40:17 kid1| Target number of buckets: 1008
2023/06/11 15:40:17 kid1| Using 8192 Store buckets
2023/06/11 15:40:17 kid1| Max Mem  size: 262144 KB
2023/06/11 15:40:17 kid1| Max Swap size: 0 KB
2023/06/11 15:40:17 kid1| Using Least Load store dir selection
2023/06/11 15:40:17 kid1| Set Current Directory to /var/spool/squid
2023/06/11 15:40:17 kid1| Finished loading MIME types and icons.
2023/06/11 15:40:17 kid1| HTCP Disabled.
2023/06/11 15:40:17 kid1| WARNING: no_suid: setuid(0): (1) Operation not permitted
2023/06/11 15:40:17 kid1| Pinger socket opened on FD 24
2023/06/11 15:40:17 kid1| Squid plugin modules loaded: 0
2023/06/11 15:40:17 kid1| Adaptation support is off.
2023/06/11 15:40:17 kid1| Accepting reverse-proxy HTTP Socket connections at conn12 local=0.0.0.0:3128 remote=[::] FD 21 flags=9
2023/06/11 15:40:17 kid1| Accepting NAT intercepted SSL bumped HTTPS Socket connections at conn14 local=0.0.0.0:3129 remote=[::] FD 22 flags=41
2023/06/11 15:40:17| WARNING: BCP 177 violation. Detected non-functional IPv6 loopback.
2023/06/11 15:40:17| pinger: Initialising ICMP pinger ...
2023/06/11 15:40:17| pinger: ICMP socket opened.
2023/06/11 15:40:17| pinger: ICMPv6 socket opened
2023/06/11 15:40:18 kid1| storeLateRelease: released 0 objects
2023/06/11 15:40:25 kid1| ERROR: NF getsockopt(ORIGINAL_DST) failed on conn23 local=172.17.0.2:3129 remote=172.17.0.1:57978 FD 14 flags=33: (2) No such file or directory
    listening port: 3129
2023/06/11 15:40:25 kid1| ERROR: NAT/TPROXY lookup failed to locate original IPs on conn23 local=172.17.0.2:3129 remote=172.17.0.1:57978 FD 14 flags=33
    listening port: 3129
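The `NF getsockopt(ORIGINAL_DST) failed` / `NAT/TPROXY lookup failed` errors are consistent with the connection arriving through Docker's port publishing (its DNAT rules or the userland docker-proxy) rather than through a REDIRECT rule visible inside the container's network namespace, so Squid cannot find the NAT entry it needs. One common workaround (a sketch; the image name is a placeholder) is to skip Docker's NAT entirely with host networking:

```shell
# Sketch: host networking avoids Docker's NAT/userland proxy, so firewall
# REDIRECT rules (and their conntrack entries) are visible to Squid.
docker run --network host --name squid my-squid-image
```

Alternatively, the REDIRECT rules can be installed inside the container's own namespace (requires NET_ADMIN).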

Environment:

I'm running squid inside a docker container with published ports 80:3128 and 443:3129

Host: Fedora 37
Docker version: 24.0.2
Docker image: my_own_dockerfile and ghcr.io/b4tman/squid-ssl-bump (tried both)
Squid version: 5.7 (squid-openssl debian package)

squid-ssl-bump image is from b4tman@github

I've tried both images but ended up with the same result; I also tried outside Docker, and the results were the same.

My docker file:


FROM debian:bookworm-slim

RUN apt-get update \
 && apt-get install -y squid-openssl \
 && rm -rf /var/lib/apt/lists/*

RUN /usr/lib/squid/security_file_certgen -c -s /var/spool/squid/ssl_db -M 4MB
RUN touch /run/squid.pid && chmod o=rw /run/squid.pid

USER proxy
 
EXPOSE 3128 3129

CMD ["squid","--foreground"]

Testing:

I've been using /etc/hosts to point a test domain (example.com) at 127.0.0.1 (only for testing purposes), and using curl and Firefox to check the result.
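As an aside, curl can pin a hostname to an address for a single request without touching /etc/hosts, which may make this kind of testing easier (a sketch; example.com stands in for the test domain):

```shell
# Resolve example.com to 127.0.0.1 for this request only;
# the TLS SNI still carries "example.com".
curl -v --resolve example.com:443:127.0.0.1 https://example.com/
```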

To sum it up:

goal: to have Squid act as a transparent HTTPS proxy using SNI (without decrypting traffic).

non-goal: decrypting HTTPS traffic or installing a certificate on the client.

problem: it is not working for HTTPS in my case, and searching for the errors that appear in the logs doesn't help much.

P.S. I have very limited knowledge of networking.

Update 1:

As per a suggestion in the comments that it could be Docker-related, I did more testing outside Docker.

I've tested on Fedora 37 (my main machine) and on a Debian 11 server.

Fedora 37 env:

squid version: 5.8 (squid package on fedora)

Debian 11 env:

squid version: 4.13 (squid-openssl package on debian)

The results were almost the same in both cases, and the goal was not achieved; however, the logs were a bit different compared to Docker.

Here is the conf and log files in fedora:

squid.conf

#
# Recommended minimum configuration:
#

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 0.0.0.1-0.255.255.255  # RFC 1122 "this" network (LAN)
acl localnet src 10.0.0.0/8     # RFC 1918 local private network (LAN)
acl localnet src 100.64.0.0/10      # RFC 6598 shared address space (CGN)
acl localnet src 169.254.0.0/16     # RFC 3927 link-local (directly plugged) machines
acl localnet src 172.16.0.0/12      # RFC 1918 local private network (LAN)
acl localnet src 192.168.0.0/16     # RFC 1918 local private network (LAN)
acl localnet src fc00::/7           # RFC 4193 local private network range
acl localnet src fe80::/10          # RFC 4291 link-local (directly plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80      # http
acl Safe_ports port 21      # ftp
acl Safe_ports port 443     # https
acl Safe_ports port 70      # gopher
acl Safe_ports port 210     # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280     # http-mgmt
acl Safe_ports port 488     # gss-http
acl Safe_ports port 591     # filemaker
acl Safe_ports port 777     # multiling http

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow localhost
http_access deny to_localhost
http_access deny to_linklocal

# For example, to allow access from your local networks, you may uncomment the
# following rule (and/or add rules that match your definition of "local"):
# http_access allow localnet

#http_access deny all

#http_port 3128

coredump_dir /var/spool/squid

refresh_pattern ^ftp:       1440    20% 10080
refresh_pattern ^gopher:    1440    0%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .       0   20% 4320




# ---------------------- Added configs ----------------------

http_access allow all

# always_direct allow all

# Squid kept crashing; after checking the logs and searching, I found the following bug report,
# whose first comment suggests some workarounds, one of which is setting a value for max_filedescriptors:
# https://bugs.launchpad.net/ubuntu-docker-images/+bug/1978272
#max_filedescriptors 1048576

# transparent proxy for http
http_port 80 accel vhost allow-direct

# transparent proxy for https
acl step1 at_step SslBump1
# acl step2 at_step SslBump2
# acl step3 at_step SslBump3

ssl_bump peek step1
ssl_bump splice all


https_port 443 intercept ssl-bump cert=/etc/squid/squid.pem

access.log

1686566705.397    869 127.0.0.1 TCP_MISS/200 1708 GET http://example.com/ - HIER_DIRECT/93.184.216.34 text/html

1686567418.039      0 127.0.0.1 NONE_NONE/000 0 CONNECT 127.0.0.1:443 - HIER_NONE/- -
1686567418.039      0 127.0.0.1 NONE_NONE/000 0 CONNECT 127.0.0.1:443 - HIER_NONE/- -
1686567418.039      0 127.0.0.1 NONE_NONE/000 0 CONNECT 127.0.0.1:443 - HIER_NONE/- -
1686567418.040      0 127.0.0.1 NONE_NONE/000 0 CONNECT 127.0.0.1:443 - HIER_NONE/- -
1686567418.040      0 127.0.0.1 NONE_NONE/000 0 CONNECT 127.0.0.1:443 - HIER_NONE/- -
1686567418.040      0 127.0.0.1 NONE_NONE/000 0 CONNECT 127.0.0.1:443 - HIER_NONE/- -
1686567418.040      0 127.0.0.1 NONE_NONE/000 0 CONNECT 127.0.0.1:443 - HIER_NONE/- -
1686567418.041      0 127.0.0.1 NONE_NONE/000 0 CONNECT 127.0.0.1:443 - HIER_NONE/- -
1686567418.041      0 127.0.0.1 NONE_NONE/000 0 CONNECT 127.0.0.1:443 - HIER_NONE/- -
1686567418.041      0 127.0.0.1 NONE_NONE/000 0 CONNECT 127.0.0.1:443 - HIER_NONE/- -
1686567418.042      0 127.0.0.1 NONE_NONE/000 0 CONNECT 127.0.0.1:443 - HIER_NONE/- -
1686567418.042      0 127.0.0.1 NONE_NONE/000 0 CONNECT 127.0.0.1:443 - HIER_NONE/- -
1686567418.042      0 127.0.0.1 NONE_NONE/000 0 CONNECT 127.0.0.1:443 - HIER_NONE/- -
1686567418.042      0 127.0.0.1 NONE_NONE/000 0 CONNECT 127.0.0.1:443 - HIER_NONE/- -
1686567418.043      0 127.0.0.1 NONE_NONE/000 0 CONNECT 127.0.0.1:443 - HIER_NONE/- -
1686567418.043      0 127.0.0.1 NONE_NONE/000 0 CONNECT 127.0.0.1:443 - HIER_NONE/- -
1686567418.043      0 127.0.0.1 NONE_NONE/000 0 CONNECT 127.0.0.1:443 - HIER_NONE/- -
1686567418.044      0 127.0.0.1 NONE_NONE/000 0 CONNECT 127.0.0.1:443 - HIER_NONE/- -
1686567418.044      0 127.0.0.1 NONE_NONE/000 0 CONNECT 127.0.0.1:443 - HIER_NONE/- -
1686567418.044      0 127.0.0.1 NONE_NONE/000 0 CONNECT 127.0.0.1:443 - HIER_NONE/- -
1686567418.044      0 127.0.0.1 NONE_NONE/000 0 CONNECT 127.0.0.1:443 - HIER_NONE/- -
1686567418.045      0 127.0.0.1 NONE_NONE/000 0 CONNECT 127.0.0.1:443 - HIER_NONE/- -
1686567418.045      0 127.0.0.1 NONE_NONE/000 0 CONNECT 127.0.0.1:443 - HIER_NONE/- -
1686567418.045      0 127.0.0.1 NONE_NONE/000 0 CONNECT 127.0.0.1:443 - HIER_NONE/- -
1686567418.046      0 127.0.0.1 NONE_NONE/000 0 CONNECT 127.0.0.1:443 - HIER_NONE/- -
1686567418.046      0 127.0.0.1 NONE_NONE/000 0 CONNECT 127.0.0.1:443 - HIER_NONE/- -
.........
1686567420.514      0 127.0.0.1 NONE_NONE/000 0 CONNECT 127.0.0.1:443 - HIER_NONE/- -
1686567420.514      0 127.0.0.1 NONE_NONE/000 0 CONNECT 127.0.0.1:443 - HIER_NONE/- -
1686567420.514      0 127.0.0.1 NONE_NONE/000 0 CONNECT 127.0.0.1:443 - HIER_NONE/- -
1686567420.515      0 127.0.0.1 NONE_NONE/000 0 CONNECT 127.0.0.1:443 - HIER_NONE/- -

cache.log

2023/06/12 14:14:10 kid1| Set Current Directory to /var/spool/squid
2023/06/12 14:14:10 kid1| Starting Squid Cache version 5.8 for x86_64-redhat-linux-gnu...
2023/06/12 14:14:10 kid1| Service Name: squid
2023/06/12 14:14:10 kid1| Process ID 20583
2023/06/12 14:14:10 kid1| Process Roles: worker
2023/06/12 14:14:10 kid1| With 16384 file descriptors available
2023/06/12 14:14:10 kid1| Initializing IP Cache...
2023/06/12 14:14:10 kid1| DNS Socket created at [::], FD 8
2023/06/12 14:14:10 kid1| DNS Socket created at 0.0.0.0, FD 9
2023/06/12 14:14:10 kid1| Adding nameserver 127.0.0.53 from /etc/resolv.conf
2023/06/12 14:14:10 kid1| Adding domain . from /etc/resolv.conf
2023/06/12 14:14:10 kid1| helperOpenServers: Starting 5/32 'security_file_certgen' processes
2023/06/12 14:14:10 kid1| Logfile: opening log daemon:/var/log/squid/access.log
2023/06/12 14:14:10 kid1| Logfile Daemon: opening log /var/log/squid/access.log
2023/06/12 14:14:10 kid1| Local cache digest enabled; rebuild/rewrite every 3600/3600 sec
2023/06/12 14:14:10 kid1| Store logging disabled
2023/06/12 14:14:10 kid1| Swap maxSize 0 + 262144 KB, estimated 20164 objects
2023/06/12 14:14:10 kid1| Target number of buckets: 1008
2023/06/12 14:14:10 kid1| Using 8192 Store buckets
2023/06/12 14:14:10 kid1| Max Mem  size: 262144 KB
2023/06/12 14:14:10 kid1| Max Swap size: 0 KB
2023/06/12 14:14:10 kid1| Using Least Load store dir selection
2023/06/12 14:14:10 kid1| Set Current Directory to /var/spool/squid
2023/06/12 14:14:10 kid1| Finished loading MIME types and icons.
2023/06/12 14:14:10 kid1| HTCP Disabled.
2023/06/12 14:14:10 kid1| Squid plugin modules loaded: 0
2023/06/12 14:14:10 kid1| Adaptation support is off.
2023/06/12 14:14:10 kid1| Accepting reverse-proxy HTTP Socket connections at conn13 local=[::]:80 remote=[::] FD 22 flags=9
2023/06/12 14:14:10 kid1| Accepting NAT intercepted SSL bumped HTTPS Socket connections at conn15 local=[::]:443 remote=[::] FD 23 flags=41
2023/06/12 14:14:11 kid1| storeLateRelease: released 0 objects
2023/06/12 14:27:00 kid1| WARNING! Your cache is running out of filedescriptors
    listening port: 443

Testing:

Since Squid was running on the host OS I couldn't use /etc/hosts (to prevent looping), so I tested using curl -x.

HTTP:

curl -x http://127.0.0.1:80 -v http://example.com
*   Trying 127.0.0.1:80...
* Connected to (nil) (127.0.0.1) port 80 (#0)
> GET http://example.com/ HTTP/1.1
> Host: example.com
> User-Agent: curl/7.82.0
> Accept: */*
> Proxy-Connection: Keep-Alive
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Age: 370504
< Cache-Control: max-age=604800
< Content-Type: text/html; charset=UTF-8
< Date: Mon, 12 Jun 2023 10:45:05 GMT
< ETag: "3147526947+ident"
< Expires: Mon, 19 Jun 2023 10:45:05 GMT
< Last-Modified: Thu, 17 Oct 2019 07:18:26 GMT
< Server: ECS (nyb/1D20)
< Vary: Accept-Encoding
< X-Cache: HIT
< Content-Length: 1256
< X-Cache: MISS from fedora
< X-Cache-Lookup: MISS from fedora:80
< Via: 1.1 fedora (squid/5.8)
< Connection: keep-alive

HTTPS:

curl -x https://127.0.0.1:443 -v https://example.com
*   Trying 127.0.0.1:443...
* Connected to (nil) (127.0.0.1) port 443 (#0)
* ALPN, offering http/1.1
*  CAfile: /etc/pki/tls/certs/ca-bundle.crt
*  CApath: none
* TLSv1.0 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* SSL connection timeout
* Operation timed out after 300000 milliseconds with 0 bytes received
* Closing connection 0
curl: (28) SSL connection timeout
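A likely reason this test stalls: `curl -x` speaks the forward-proxy protocol (an HTTP CONNECT, here wrapped in TLS to the proxy), while an `intercept ssl-bump` port expects a raw, NAT-redirected TLS stream. Worse, with Squid spliced on 127.0.0.1:443, a connection whose recovered destination is 127.0.0.1:443 is forwarded straight back into Squid itself, which would explain the flood of `CONNECT 127.0.0.1:443` lines in access.log and the file-descriptor warning in cache.log above. A sketch of a test that avoids both problems (run from a different host; 192.0.2.10 is a placeholder for the proxy machine's non-loopback address):

```shell
# Sketch: send a raw TLS connection (no proxy protocol) to the intercept
# port, keeping the real hostname in the SNI.
curl -v --connect-to example.com:443:192.0.2.10:443 https://example.com/
```

Note that an intercept port may still complain about connections that were not actually NATed (e.g. host-header-forgery checks), so redirecting real client traffic with firewall rules remains the canonical test.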

Update 2

I followed the steps that @tegan-yorek suggested, adding a cache to my squid.conf:

cache_dir ufs /var/squidcache  100 16 256
ipcache_size 2048
fqdncache_size 2048

but a whole new set of errors popped up.
By then I had a working nginx setup that solved my problem, so I didn't chase those errors further.

Final update:

As Squid proved difficult to work with, I ended up looking for alternatives and found nginx's ngx_stream_ssl_preread_module, which did what I needed.

Since I didn't get Squid working, I'll leave this question without an accepted answer. Thank you everybody for helping.
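For readers landing here, the nginx approach reads the SNI from the ClientHello in the stream module and proxies by name, with no decryption. A minimal sketch of such a config (the resolver address is a placeholder; this goes in the top-level `stream` context):

```nginx
# Route TLS connections by SNI without decrypting them.
stream {
    resolver 1.1.1.1;   # placeholder resolver, needed for variable proxy_pass

    server {
        listen 443;
        ssl_preread on;  # peek at the SNI in the ClientHello
        # forward to the host named in the SNI, on the original port
        proxy_pass $ssl_preread_server_name:443;
    }
}
```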

djdomi avatar
za flag
You will not be able to intercept SSL without deploying a certificate.
Alireza Dastyar avatar
sm flag
@djdomi I don't need to decrypt the content; I just need to forward the requests to the domain name specified in the SNI, which is not encrypted.
djdomi avatar
za flag
The http_port does allow HTTPS to be tunneled through it without any specific configuration.
Nikita Kipriyanov avatar
za flag
@djdomi surprisingly it's possible to transparently proxy TLS with Squid without deploying a certificate, it's called "splice": https://wiki.squid-cache.org/Features/SslPeekAndSplice . What you're confusing it with is called "bump" in terms of Squid (also described on that page), which requires a client to have a Squid-controlled CA root certificate which will sign all mimicked end server certificates.
Alireza Dastyar avatar
sm flag
@NikitaKipriyanov do you have any idea why it's not working (based on the errors in the logs, or the config)?
djdomi avatar
za flag
Thanks Nikita, Squid is rather confusing. But if no root certificate is deployed, the request cannot be cached?
Nikita Kipriyanov avatar
za flag
@djdomi With anything other than bump it can't even see where the response ends, so obviously it can't cache. I don't remember whether it is possible to control its behavior when it is doing bump (MITM), but I strongly suspect it doesn't cache in that case either, even though it would be possible.
Nikita Kipriyanov avatar
za flag
@AlirezaDastyar Docker complicates things here for me, and I haven't used Squid for around 5 years. I'll try, but to help I'd need to dedicate some time to setting up an environment and trying it myself... In the meantime, you can verify your setup against [Diladele](https://docs.diladele.com/administrator_guide_develop/install/debian11/index.html), which I used and which certainly works. Also, can you confirm your Squid is listening on 3129 (by e.g. telnetting there)?
djdomi avatar
za flag
lsof -i :3129 should also do it?
Alireza Dastyar avatar
sm flag
@NikitaKipriyanov I thought maybe I had a bad config or something; I didn't mean to inconvenience you by asking you to spend more time. However, I can confirm it does listen on 3129, and in fact if you check the last 2 lines in cache.log and the last line in access.log you can see the request causes some sort of error, and searching those errors doesn't lead anywhere. As for Docker, I've tried outside Docker (on Fedora 37), listening on port 443 directly, with the same result.
Nikita Kipriyanov avatar
za flag
Actually, the last 2 lines made me think it is probably listening *in a container*, and connections are NATed by Docker into the "real" socket inside the container, but TPROXY has trouble working with that. I mean, your Squid config seems fine at first glance; I suspect Docker-related problems.
Alireza Dastyar avatar
sm flag
@NikitaKipriyanov per your suggestion I did more testing outside Docker and added my findings to the question (in the Update 1 section). Those last 2 lines (the errors) are gone, so I think, as you suspected, they were Docker-related, but the problem is still there and I can't proxy HTTPS.
Score:0
jp flag

Try putting your https_port 3129 above your ssl_bump config. Also, I have not worked with Squid inside Docker; I have always had to configure my https_port with intercept, which requires forwarding from port 443.

E.g.:

#https_port 443 cert=/xyz
#https_port 3129 intercept ssl-bump cert=/xyz 
   
ssl_bump peek step1
ssl_bump splice all 
Score:0
jp flag

So after some testing (unfortunately on Ubuntu, not Fedora), I came up with `ssl_bump peek step1` / `ssl_bump splice all`. What I found was that Squid was dropping the connection because CDN-hosted websites were triggering host-header-forgery detection. My workaround was adding a separate cache, then adding ipcache_size and fqdncache_size beneath the cache_dir line. The IP cache is self-explanatory; the FQDN cache stores the domain name for a server's IP address. What happens is that the client machine receives one IP address and the Squid proxy receives a different one, which then triggers the host-header-forgery detection.
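The workaround described above corresponds to a squid.conf fragment along these lines (paths and sizes are illustrative, mirroring the question's Update 2):

```
# separate on-disk cache
cache_dir ufs /var/spool/squid 100 16 256
# enlarge the IP and FQDN caches so the client and the proxy are more
# likely to agree on a CDN host's address (host-header-forgery mitigation)
ipcache_size 2048
fqdncache_size 2048
```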

