I am using HAProxy as my Ingress Controller. It is set up with 2 replicas, meaning traffic reaching the Ingress Controller LB service can be routed to either of 2 pods.
My end user application has 3 replicas. The end user service is configured with sessionAffinity: ClientIP,
meaning that sessions between the Ingress Controller pods and the application pods are sticky.
This causes an issue when the 2 Ingress Controller pods route traffic to 2 different application pods: the end user session is not maintained and the user gets logged out suddenly.
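For reference, the application Service looks roughly like this (a minimal sketch only; the selector and affinity timeout are illustrative assumptions, the name, namespace and port match my Ingress below):

apiVersion: v1
kind: Service
metadata:
  name: nexus
  namespace: cache
spec:
  selector:
    app: nexus            # assumed label, not my exact manifest
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800   # default affinity timeout, shown for illustration
  ports:
  - port: 8080
    targetPort: 8080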

What would be ideal in this case would be a way to keep requests for my specific application pinned to a single Ingress Controller pod.

One option is to introduce sessionAffinity: ClientIP on the Ingress Controller LB service too, but that would affect every application in the cluster.
How can I make sure that, for my specific application, requests persistently hit just one Ingress Controller pod?
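For illustration, this is roughly what that cluster-wide option would look like on the controller's LB Service (a sketch only; the Service name, namespace, selector and ports are assumptions about a typical HAProxy Ingress install, not my actual manifest):

apiVersion: v1
kind: Service
metadata:
  name: haproxy-ingress            # assumed controller Service name
  namespace: haproxy-controller    # assumed namespace
spec:
  type: LoadBalancer
  selector:
    app: haproxy-ingress           # assumed controller pod label
  sessionAffinity: ClientIP        # would pin clients to one controller pod, but for ALL apps
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443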
Current Ingress definition:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    haproxy.org/backend-config-snippet: |
      dynamic-cookie-key Ingress
      cookie INGRESSCOOKIE insert indirect nocache dynamic
    kubernetes.io/ingress.class: haproxy
  name: cache
  namespace: cache
spec:
  rules:
  - host: ac.com
    http:
      paths:
      - backend:
          service:
            name: nexus
            port:
              number: 8080
        path: /
        pathType: Prefix