Score:0

How to install Consul on Azure Kubernetes with policies enabled?


I have set up Azure Kubernetes Service (AKS) with Azure Policy enabled. I followed the steps in the Consul getting started guide exactly: link

But after the Consul Helm chart is deployed, no pods are created.

When I checked the ReplicaSet, I found the error below.

Events:
  Type     Reason        Age                   From                   Message
  ----     ------        ----                  ----                   -------
  Warning  FailedCreate  23s (x17 over 6m20s)  replicaset-controller  Error creating: admission webhook "validation.gatekeeper.sh" denied the request: [azurepolicy-psp-container-no-privilege-esc-30132221bc21e5b724da] Privilege escalation container is not allowed: controller

How to fix this?

The detailed steps with output:

D:\consul_azure>git clone https://github.com/hashicorp/learn-consul-kubernetes.git
Cloning into 'learn-consul-kubernetes'...
remote: Enumerating objects: 504, done.
remote: Counting objects: 100% (504/504), done.
remote: Compressing objects: 100% (325/325), done.
remote: Total 504 (delta 260), reused 354 (delta 154), pack-reused 0
Receiving objects: 100% (504/504), 87.91 KiB | 173.00 KiB/s, done.
Resolving deltas: 100% (260/260), done.

D:\consul_azure>cd learn-consul-kubernetes/service-mesh/deploy

D:\consul_azure\learn-consul-kubernetes\service-mesh\deploy>helm repo add hashicorp https://helm.releases.hashicorp.com
"hashicorp" already exists with the same configuration, skipping

D:\consul_azure\learn-consul-kubernetes\service-mesh\deploy>helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "aad-pod-identity" chart repository
...Successfully got an update from the "secrets-store-csi-driver" chart repository
...Successfully got an update from the "csi-secrets-store-provider-azure" chart repository
...Successfully got an update from the "hashicorp" chart repository
...Successfully got an update from the "spv-charts" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈

D:\consul_azure\learn-consul-kubernetes\service-mesh\deploy>helm install -f config.yaml consul hashicorp/consul --version "0.32.1"
NAME: consul
LAST DEPLOYED: Mon Aug 16 12:36:55 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
Thank you for installing HashiCorp Consul!

Now that you have deployed Consul, you should look over the docs on using
Consul with Kubernetes available here:

https://www.consul.io/docs/platform/k8s/index.html


Your release is named consul.

To learn more about the release, run:

  $ helm status consul
  $ helm get all consul

D:\consul_azure\learn-consul-kubernetes\service-mesh\deploy>helm status consul
NAME: consul
LAST DEPLOYED: Mon Aug 16 12:36:55 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
Thank you for installing HashiCorp Consul!

Now that you have deployed Consul, you should look over the docs on using
Consul with Kubernetes available here:

https://www.consul.io/docs/platform/k8s/index.html


Your release is named consul.

To learn more about the release, run:

  $ helm status consul
  $ helm get all consul

D:\consul_azure\learn-consul-kubernetes\service-mesh\deploy>kubectl get pods --selector app=consul
No resources found in default namespace.

D:\consul_azure\learn-consul-kubernetes\service-mesh\deploy>kubectl get pods
No resources found in default namespace.

D:\consul_azure\learn-consul-kubernetes\service-mesh\deploy>kubectl get deploy
NAME                                         READY   UP-TO-DATE   AVAILABLE   AGE
consul-connect-injector-webhook-deployment   0/2     0            0           4m53s
consul-controller                            0/1     0            0           4m53s
consul-webhook-cert-manager                  0/1     0            0           4m53s
prometheus-server                            0/1     0            0           4m53s

D:\consul_azure\learn-consul-kubernetes\service-mesh\deploy>kubectl describe deploy consul-controller
Name:                   consul-controller
Namespace:              default
CreationTimestamp:      Mon, 16 Aug 2021 12:37:25 +0530
Labels:                 app=consul
                        app.kubernetes.io/managed-by=Helm
                        chart=consul-helm
                        component=controller
                        heritage=Helm
                        release=consul
Annotations:            deployment.kubernetes.io/revision: 1
                        meta.helm.sh/release-name: consul
                        meta.helm.sh/release-namespace: default
Selector:               app=consul,chart=consul-helm,component=controller,heritage=Helm,release=consul
Replicas:               1 desired | 0 updated | 0 total | 0 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           app=consul
                    chart=consul-helm
                    component=controller
                    heritage=Helm
                    release=consul
  Annotations:      consul.hashicorp.com/connect-inject: false
  Service Account:  consul-controller
  Containers:
   controller:
    Image:      hashicorp/consul-k8s:0.26.0
    Port:       9443/TCP
    Host Port:  0/TCP
    Command:
      /bin/sh
      -ec
      consul-k8s controller \
        -webhook-tls-cert-dir=/tmp/controller-webhook/certs \
        -datacenter=dc1 \
        -enable-leader-election \
        -log-level="info" \

    Limits:
      cpu:     100m
      memory:  50Mi
    Requests:
      cpu:     100m
      memory:  50Mi
    Environment:
      HOST_IP:            (v1:status.hostIP)
      CONSUL_HTTP_ADDR:  http://$(HOST_IP):8500
    Mounts:
      /tmp/controller-webhook/certs from cert (ro)
  Volumes:
   cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  consul-controller-webhook-cert
    Optional:    false
Conditions:
  Type             Status  Reason
  ----             ------  ------
  Progressing      True    NewReplicaSetCreated
  Available        False   MinimumReplicasUnavailable
  ReplicaFailure   True    FailedCreate
OldReplicaSets:    <none>
NewReplicaSet:     consul-controller-dff49c9f4 (0/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  5m9s  deployment-controller  Scaled up replica set consul-controller-dff49c9f4 to 1

D:\consul_azure\learn-consul-kubernetes\service-mesh\deploy>kubectl get rs consul-controller-dff49c9f4
NAME                          DESIRED   CURRENT   READY   AGE
consul-controller-dff49c9f4   1         0         0       5m30s

D:\consul_azure\learn-consul-kubernetes\service-mesh\deploy>kubectl logs consul-controller-dff49c9f4
Error from server (NotFound): pods "consul-controller-dff49c9f4" not found

D:\consul_azure\learn-consul-kubernetes\service-mesh\deploy>kubectl logs rs/consul-controller-dff49c9f4
error: timed out waiting for the condition

D:\consul_azure\learn-consul-kubernetes\service-mesh\deploy>kubectl describe rs/consul-controller-dff49c9f4
Name:           consul-controller-dff49c9f4
Namespace:      default
Selector:       app=consul,chart=consul-helm,component=controller,heritage=Helm,pod-template-hash=dff49c9f4,release=consul
Labels:         app=consul
                chart=consul-helm
                component=controller
                heritage=Helm
                pod-template-hash=dff49c9f4
                release=consul
Annotations:    deployment.kubernetes.io/desired-replicas: 1
                deployment.kubernetes.io/max-replicas: 2
                deployment.kubernetes.io/revision: 1
                meta.helm.sh/release-name: consul
                meta.helm.sh/release-namespace: default
Controlled By:  Deployment/consul-controller
Replicas:       0 current / 1 desired
Pods Status:    0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           app=consul
                    chart=consul-helm
                    component=controller
                    heritage=Helm
                    pod-template-hash=dff49c9f4
                    release=consul
  Annotations:      consul.hashicorp.com/connect-inject: false
  Service Account:  consul-controller
  Containers:
   controller:
    Image:      hashicorp/consul-k8s:0.26.0
    Port:       9443/TCP
    Host Port:  0/TCP
    Command:
      /bin/sh
      -ec
      consul-k8s controller \
        -webhook-tls-cert-dir=/tmp/controller-webhook/certs \
        -datacenter=dc1 \
        -enable-leader-election \
        -log-level="info" \

    Limits:
      cpu:     100m
      memory:  50Mi
    Requests:
      cpu:     100m
      memory:  50Mi
    Environment:
      HOST_IP:            (v1:status.hostIP)
      CONSUL_HTTP_ADDR:  http://$(HOST_IP):8500
    Mounts:
      /tmp/controller-webhook/certs from cert (ro)
  Volumes:
   cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  consul-controller-webhook-cert
    Optional:    false
Conditions:
  Type             Status  Reason
  ----             ------  ------
  ReplicaFailure   True    FailedCreate
Events:
  Type     Reason        Age                   From                   Message
  ----     ------        ----                  ----                   -------
  Warning  FailedCreate  23s (x17 over 6m20s)  replicaset-controller  Error creating: admission webhook "validation.gatekeeper.sh" denied the request: [azurepolicy-psp-container-no-privilege-esc-30132221bc21e5b724da] Privilege escalation container is not allowed: controller
It sounds like you're running into the same issue described in https://github.com/hashicorp/consul-k8s/issues/635. Consul 1.10 requires NET_ADMIN capabilities in order to use transparent proxy. I recommend sharing additional details on that GH issue so that our team can better debug and resolve this issue. Thanks.
Thanks Blake for the link, but no solution is provided there.
Hi Sara, it’s an issue tracking the bug report. I suggested in my previous comment that you use that to provide more information to the Consul team so that they can investigate this.
Score:1

Until Consul drops the privilege escalation requirement, as a workaround you can exclude Kubernetes namespaces from policy evaluation: in step 9 of Assign a policy definition, specify the list of namespaces in the Namespace exclusions parameter. It's recommended to exclude kube-system, gatekeeper-system, and azure-arc.

In particular, for the policy Kubernetes clusters should not allow container privilege escalation you can set the Namespace exclusions and Containers exclusions parameters, adding the Consul namespace and/or containers.

According to the Policy definition:

"parameters": {
...
"excludedNamespaces": {
        "type": "Array",
        "metadata": {
          "displayName": "Namespace exclusions",
          "description": "List of Kubernetes namespaces to exclude from policy evaluation."
        },
        "defaultValue": [
          "kube-system",
          "gatekeeper-system",
          "azure-arc"
        ]
      },
...
"excludedContainers": {
        "type": "Array",
        "metadata": {
          "displayName": "Containers exclusions",
          "description": "The list of InitContainers and Containers to exclude from policy evaluation. The identify is the name of container. Use an empty list to apply this policy to all containers in all namespaces."
        },
        "defaultValue": []
      },
...
}
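The same exclusions can be applied from the command line when creating the policy assignment, instead of through the portal. This is a minimal sketch, not a definitive recipe: the assignment name, the namespace to exclude (here `default`, where Consul was installed above), and the placeholder definition ID and scope are all assumptions you must replace with your own values.

```shell
# Hypothetical sketch: parameter overrides that add the namespace where
# Consul is installed ("default" in this question) to the policy's
# exclusion list, alongside the recommended system namespaces.
cat > consul-policy-params.json <<'EOF'
{
  "excludedNamespaces": {
    "value": ["kube-system", "gatekeeper-system", "azure-arc", "default"]
  },
  "excludedContainers": {
    "value": []
  }
}
EOF

# The assignment name "no-priv-esc" and both placeholders below are
# assumptions; substitute the real definition ID and cluster scope, e.g.:
#   az policy assignment create --name no-priv-esc \
#     --policy "<privilege-escalation-policy-definition-id>" \
#     --scope "<aks-cluster-resource-id>" \
#     --params @consul-policy-params.json
```

Once the updated assignment syncs down to the cluster's Azure Policy add-on, the ReplicaSet's retries (or a fresh helm install) should no longer be denied by the Gatekeeper webhook.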