RKE OpenStack cloud provider without scheduled pods

I'm trying to use openstack-cloud-controller-manager with Rancher RKE. I authenticate to OpenStack with application credentials, and as far as I can tell I cannot use the OpenStack provider that RKE ships by default, because it has no parameters for them. However, I noticed that these parameters are available in the [Global] section of the official openstack-cloud-controller-manager configuration.

Now I tried to install the OpenStack provider in the following way:

Install RKE (v1.24) without its own OpenStack provider and set the kubelet argument cloud-provider to external:
  services {
    kubelet {
      extra_args = {
        cloud-provider = "external"
      }
    }
  }

The master nodes have both the controlplane and etcd roles.
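
For reference, if you drive RKE through a plain cluster.yml with rke up instead of the Terraform RKE provider, I believe the equivalent setting looks roughly like this (a sketch, not taken from my actual setup):

services:
  kubelet:
    extra_args:
      cloud-provider: external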

Create a configuration file:

[Global]
cloud=openstack
auth-url=https://openstack.com:5000
username=username
user-id=id
password=pass
tenant-id=tenant_id
tenant-name=tenant_name
region=RegionOne
ca-file=/my/local/adacloud.ca.chain
application-credential-id=cred_id
application-credential-secret=cred_secret

[LoadBalancer]
use-octavia=true
floating-network-id=ext_network_id
subnet-id=cluster_subnet_id

Create a secret from it:

kubectl create secret -n kube-system generic cloud-config --from-file=local/openstack.conf
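
For reference, the same secret can also be written declaratively; this is a minimal sketch with the config contents abbreviated. Note that the key inside the secret has to match what the DaemonSet's CLOUD_CONFIG path expects (cloud.conf under /etc/config in the upstream manifest, if I read it right):

apiVersion: v1
kind: Secret
metadata:
  name: cloud-config
  namespace: kube-system
stringData:
  cloud.conf: |
    [Global]
    cloud=openstack
    auth-url=https://openstack.com:5000
    application-credential-id=cred_id
    application-credential-secret=cred_secret
    ...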

Download manifests:

wget https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/controller-manager/cloud-controller-manager-roles.yaml
wget https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/controller-manager/cloud-controller-manager-role-bindings.yaml
wget https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/controller-manager/openstack-cloud-controller-manager-ds.yaml

I changed the spec.template.spec.nodeSelector in the file openstack-cloud-controller-manager-ds.yaml in the following way:

node-role.kubernetes.io/controlplane: "true"

This is because RKE labels the control plane nodes with node-role.kubernetes.io/controlplane=true by default.
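
In context, the relevant part of the DaemonSet spec then looks like this (only the fields I touched):

spec:
  template:
    spec:
      nodeSelector:
        node-role.kubernetes.io/controlplane: "true"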

The tolerations in the openstack-cloud-controller-manager-ds.yaml file are:

tolerations:
- key: node.cloudprovider.kubernetes.io/uninitialized
value: "true"
effect: NoSchedule
- key: node-role.kubernetes.io/master
effect: NoSchedule
- key: node-role.kubernetes.io/control-plane
effect: NoSchedule

Then I applied them:

kubectl apply -f cloud-controller-manager-roles.yaml
kubectl apply -f cloud-controller-manager-role-bindings.yaml
kubectl apply -f openstack-cloud-controller-manager-ds.yaml

Finally, when I check the DaemonSets, there are no pods:

$ kubectl get -n kube-system ds
NAME                                 DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                               AGE
canal                                4         4         4       4            4           kubernetes.io/os=linux                      17m
openstack-cloud-controller-manager   0         0         0       0            0           node-role.kubernetes.io/controlplane=true   11s

I have the following labels, annotations and taints on the master node:

Labels:             app=ingress
                    beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k8s-master-0
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/controlplane=true
                    node-role.kubernetes.io/etcd=true
Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"3e:96:d7:c9:94:f8"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.4.165
                    node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: 192.168.4.165/24
                    projectcalico.org/IPv4IPIPTunnelAddr: 10.42.0.1
                    rke.cattle.io/external-ip: 192.168.4.165
                    rke.cattle.io/internal-ip: 192.168.4.165
                    volumes.kubernetes.io/controller-managed-attach-detach: true
Taints:             node-role.kubernetes.io/etcd=true:NoExecute
                    node-role.kubernetes.io/controlplane=true:NoSchedule
                    node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule

I cannot understand why these pods are not scheduled. Has anyone already run into this issue?

PS: I also tried without the kubelet argument cloud-provider=external, but nothing changed.

Avi

Looking it over, it seems the openstack-cloud-controller-manager needs a toleration for the node-role.kubernetes.io/controlplane=true:NoSchedule taint.
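
In the DaemonSet spec that would be something along these lines:

tolerations:
- key: node-role.kubernetes.io/controlplane
  value: "true"
  effect: NoSchedule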

sctx
Thank you for your answer. Unfortunately, it does not work; the DaemonSet still shows 0 pods. I added `- key: node-role.kubernetes.io/controlplane`, `value: "true"`, `effect: NoSchedule` under `tolerations:`.

Ok, I figured out how to solve this issue.

The problem is that I created the control plane nodes with both the controlplane and etcd roles. As described on the SUSE support website, looking at the taints shows that besides node-role.kubernetes.io/controlplane=true:NoSchedule there is also node-role.kubernetes.io/etcd=true:NoExecute, which prevents any pod without a matching toleration from being scheduled:

kubectl get node k8s-master -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints
NAME                    TAINTS
k8s-master              [map[effect:NoSchedule key:node-role.kubernetes.io/controlplane value:true] map[effect:NoExecute key:node-role.kubernetes.io/etcd value:true]]

By adding the following tolerations to the openstack-cloud-controller-manager DaemonSet, the pods are scheduled correctly:

spec:
  ...
    spec:
      nodeSelector:
        node-role.kubernetes.io/controlplane: "true"
      tolerations:
      ...
      - key: node-role.kubernetes.io/controlplane
        value: "true"
        effect: NoSchedule
      - key: node-role.kubernetes.io/etcd
        value: "true"
        effect: NoExecute
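
Putting this together with the tolerations already present in the upstream manifest, the full block ends up like this:

tolerations:
- key: node.cloudprovider.kubernetes.io/uninitialized
  value: "true"
  effect: NoSchedule
- key: node-role.kubernetes.io/master
  effect: NoSchedule
- key: node-role.kubernetes.io/control-plane
  effect: NoSchedule
- key: node-role.kubernetes.io/controlplane
  value: "true"
  effect: NoSchedule
- key: node-role.kubernetes.io/etcd
  value: "true"
  effect: NoExecute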

I'm also linking another useful resource I found in the GitHub issues.
