Score:2

Ensuring at least one ingress-nginx per Kubernetes node

I'm trying to write an autoscaling configuration for ingress-nginx, deployed via its Helm chart.

My goals are:

  • a minimum of 3 replicas (because I have a minimum of 3 nodes)
  • only one nginx per node where possible, but:
  • elasticity: if the autoscaler says we need 4 nginx replicas, allow one node in the cluster to run 2
  • if a fourth node is added, ensure a new nginx gets spawned

  • https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/values.yaml#L326
  • https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/values.yaml#L343
  • https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/values.yaml#L256

I tried playing with the settings below, and various combinations of them, but there's always something off. For example, right now a fourth nginx wants to spawn for some reason, and it can't because of the anti-affinity rule.

Can anyone share some ideas on how to achieve the following?

  • always one nginx per node: if a new node gets created, a new nginx gets created
  • preserve autoscaling: if the HPA wants to spawn a fourth nginx on a 3-node cluster, it should be free to do so
      replicaCount: 3
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app.kubernetes.io/name
                operator: In
                values:
                - ingress-nginx
              - key: app.kubernetes.io/instance
                operator: In
                values:
                - ingress-nginx
              - key: app.kubernetes.io/component
                operator: In
                values:
                - controller
            topologyKey: "kubernetes.io/hostname"

      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app.kubernetes.io/instance: ingress-nginx

      autoscaling:
        enabled: true
        minReplicas: 3
        maxReplicas: 6
        targetCPUUtilizationPercentage: 75
        targetMemoryUtilizationPercentage: 100
mario: If you need only one nginx instance per node, wouldn't it be easier to deploy it as a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/)?

John Smith: is it possible to deploy it as a DaemonSet via the helm chart?

John Smith: just found out https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/values.yaml#L174

John Smith: thanks for the hint, it seems to be working; if you want to post an official reply I'll accept it
Score:1

If you need only one nginx instance per node, wouldn't it be easier to deploy it as a DaemonSet?

As you can read in the official docs:

A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.

And this seems like the right solution in your use case.

As you rightly suggested in your comment, the official ingress-nginx helm chart can be deployed either as a Deployment or as a DaemonSet. This can be done by adjusting its values.yaml file.
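A minimal sketch of the relevant values override, based on the `controller.kind` field exposed in the chart's values.yaml linked in your comment:

```yaml
# values override for the ingress-nginx helm chart
controller:
  # DaemonSet instead of the default Deployment: one controller pod
  # per (matching) node, added and removed as nodes come and go
  kind: DaemonSet
```

With a DaemonSet you no longer need the pod anti-affinity rule or the HPA to get the one-per-node behavior; the scheduler handles it for you as nodes join and leave the cluster.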

I find it worth mentioning that switching between the two modes works without having to uninstall anything first.
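For instance, the switch can be applied in place with a normal upgrade; this is a sketch, where the release name, namespace, and chart repository alias are assumptions:

```shell
# Switch the controller from a Deployment to a DaemonSet in place.
# "ingress-nginx" as release name, namespace, and repo alias are assumptions.
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --reuse-values \
  --set controller.kind=DaemonSet
```

`--reuse-values` keeps the rest of your existing overrides (autoscaling, resources, etc.) while changing only the workload kind.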