I'm trying to use openstack-cloud-controller-manager with Rancher RKE. I'm using application credentials to access OpenStack, and as far as I can tell I cannot use the OpenStack provider that RKE ships with by default, because it has no parameters for application credentials. However, I noticed that these parameters are available in the official openstack-cloud-controller-manager install, under the [Global] section of the config file.
Now I tried to install the OpenStack provider in the following way:
- Install RKE (v1.24) without its own OpenStack provider and set the kubelet argument cloud-provider to external:
services {
  kubelet {
    extra_args = {
      cloud-provider = "external"
    }
  }
}
The master nodes carry both the controlplane and etcd roles (a plain cluster.yml sketch follows below).
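For clarity, I'm configuring RKE through its Terraform provider; as far as I understand, the equivalent plain cluster.yml would look roughly like this (a sketch only — the address is the master's IP shown further down, and the SSH user is a placeholder):

nodes:
  - address: 192.168.4.165
    user: rke-user                 # placeholder SSH user, not my real one
    role: [controlplane, etcd]
services:
  kubelet:
    extra_args:
      cloud-provider: external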
- Create a configuration file:
[Global]
cloud=openstack
auth-url=https://openstack.com:5000
username=username
user-id=id
password=pass
tenant-id=tenant_id
tenant-name=tenant_name
region=RegionOne
ca-file=/my/local/adacloud.ca.chain
application-credential-id=cred_id
application-credential-secret=cred_secret
[LoadBalancer]
use-octavia=true
floating-network-id=ext_network_id
subnet-id=cluster_subnet_id
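As a sanity check, the application credential itself can be tested outside the cluster with the OpenStack CLI (assuming python-openstackclient is available; the values are the same placeholders as in the config above):

openstack --os-auth-type v3applicationcredential \
  --os-auth-url https://openstack.com:5000 \
  --os-cacert /my/local/adacloud.ca.chain \
  --os-application-credential-id cred_id \
  --os-application-credential-secret cred_secret \
  token issue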
- Create a secret from it:
kubectl create secret -n kube-system generic cloud-config --from-file=local/openstack.conf
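Note that with --from-file the data key defaults to the file name, so this secret ends up with the key openstack.conf; the keys can be verified with:

kubectl describe secret -n kube-system cloud-config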
- Download the manifests:
wget https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/controller-manager/cloud-controller-manager-roles.yaml
wget https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/controller-manager/cloud-controller-manager-role-bindings.yaml
wget https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/controller-manager/openstack-cloud-controller-manager-ds.yaml
- I changed the spec.template.spec.nodeSelector in the file openstack-cloud-controller-manager-ds.yaml in the following way:

nodeSelector:
  node-role.kubernetes.io/controlplane: "true"
This is because RKE labels the master nodes with node-role.kubernetes.io/controlplane=true by default (see the node labels below).
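To confirm the selector matches, listing nodes by that label should return the master:

kubectl get nodes -l node-role.kubernetes.io/controlplane=true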
The tolerations in the openstack-cloud-controller-manager-ds.yaml file are:

tolerations:
  - key: node.cloudprovider.kubernetes.io/uninitialized
    value: "true"
    effect: NoSchedule
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
  - key: node-role.kubernetes.io/control-plane
    effect: NoSchedule
- Then I applied them:
kubectl apply -f cloud-controller-manager-roles.yaml
kubectl apply -f cloud-controller-manager-role-bindings.yaml
kubectl apply -f openstack-cloud-controller-manager-ds.yaml
Finally, when I check the DaemonSets, there are no pods at all:
$ kubectl get -n kube-system ds
NAME                                  DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                                AGE
canal                                 4         4         4       4            4           kubernetes.io/os=linux                       17m
openstack-cloud-controller-manager    0         0         0       0            0           node-role.kubernetes.io/controlplane=true   11s
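In case it helps with diagnosis, the DaemonSet description and recent events can be inspected with:

kubectl -n kube-system describe ds openstack-cloud-controller-manager
kubectl -n kube-system get events --sort-by=.lastTimestamp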
I have the following labels, annotations and taints on the master node:
Labels:       app=ingress
              beta.kubernetes.io/arch=amd64
              beta.kubernetes.io/os=linux
              kubernetes.io/arch=amd64
              kubernetes.io/hostname=k8s-master-0
              kubernetes.io/os=linux
              node-role.kubernetes.io/controlplane=true
              node-role.kubernetes.io/etcd=true
Annotations:  flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"3e:96:d7:c9:94:f8"}
              flannel.alpha.coreos.com/backend-type: vxlan
              flannel.alpha.coreos.com/kube-subnet-manager: true
              flannel.alpha.coreos.com/public-ip: 192.168.4.165
              node.alpha.kubernetes.io/ttl: 0
              projectcalico.org/IPv4Address: 192.168.4.165/24
              projectcalico.org/IPv4IPIPTunnelAddr: 10.42.0.1
              rke.cattle.io/external-ip: 192.168.4.165
              rke.cattle.io/internal-ip: 192.168.4.165
              volumes.kubernetes.io/controller-managed-attach-detach: true
Taints:       node-role.kubernetes.io/etcd=true:NoExecute
              node-role.kubernetes.io/controlplane=true:NoSchedule
              node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
The uninitialized taint is expected while the kubelet runs with cloud-provider=external, and the DaemonSet explicitly tolerates it, so I cannot understand why these pods are not scheduled. Has anyone already run into this issue?
PS: I also tried without the kubelet argument cloud-provider=external, but nothing changed.