
Error occurred while creating a cluster using Kubespray


I want to create a Kubernetes cluster using Kubespray, following the official documentation: https://kubernetes.io/docs/setup/production-environment/tools/kubespray/. I have created three remote virtual machines running openSUSE to serve as nodes.

After running the following command

ansible-playbook -i inventory/local/hosts.yaml -u root --become --become-user=root cluster.yml

I get the following error:

TASK [etcd : Configure | Ensure etcd is running] ***********************************************************************************************************************************************************************************************************************************************************************
ok: [node1]
ok: [node2]
ok: [node3]
Friday 10 March 2023  11:33:50 +0000 (0:00:00.589)       0:05:23.939 ********** 
Friday 10 March 2023  11:33:50 +0000 (0:00:00.064)       0:05:24.004 ********** 
FAILED - RETRYING: [node1]: Configure | Wait for etcd cluster to be healthy (4 retries left).
FAILED - RETRYING: [node1]: Configure | Wait for etcd cluster to be healthy (3 retries left).
FAILED - RETRYING: [node1]: Configure | Wait for etcd cluster to be healthy (2 retries left).
FAILED - RETRYING: [node1]: Configure | Wait for etcd cluster to be healthy (1 retries left).

TASK [etcd : Configure | Wait for etcd cluster to be healthy] **********************************************************************************************************************************************************************************************************************************************************
fatal: [node1]: FAILED! => {"attempts": 4, "changed": false, "cmd": "set -o pipefail && /usr/local/bin/etcdctl endpoint --cluster status && /usr/local/bin/etcdctl endpoint --cluster health 2>&1 | grep -v 'Error: unhealthy cluster' >/dev/null", "delta": "0:00:05.030601", "end": "2023-03-10 06:34:33.341401", "msg": "non-zero return code", "rc": 1, "start": "2023-03-10 06:34:28.310800", "stderr": "{\"level\":\"warn\",\"ts\":\"2023-03-10T06:34:33.340-0500\",\"logger\":\"etcd-client\",\"caller\":\"[email protected]/retry_interceptor.go:62\",\"msg\":\"retrying of unary invoker failed\",\"target\":\"etcd-endpoints://0xc00031a8c0/192.168.122.233:2379\",\"attempt\":0,\"error\":\"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"}\nFailed to get the status of endpoint https://192.168.122.120:2379 (context deadline exceeded)", "stderr_lines": ["{\"level\":\"warn\",\"ts\":\"2023-03-10T06:34:33.340-0500\",\"logger\":\"etcd-client\",\"caller\":\"[email protected]/retry_interceptor.go:62\",\"msg\":\"retrying of unary invoker failed\",\"target\":\"etcd-endpoints://0xc00031a8c0/192.168.122.233:2379\",\"attempt\":0,\"error\":\"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"}", "Failed to get the status of endpoint https://192.168.122.120:2379 (context deadline exceeded)"], "stdout": "https://192.168.122.233:2379, 4dc4060cd0d7d06, 3.5.6, 20 kB, false, false, 2, 7, 7, ", "stdout_lines": ["https://192.168.122.233:2379, 4dc4060cd0d7d06, 3.5.6, 20 kB, false, false, 2, 7, 7, "]}

NO MORE HOSTS LEFT *****************************************************************************************************************************************************************************************************************************************************************************************************

PLAY RECAP *************************************************************************************************************************************************************************************************************************************************************************************************************
localhost                  : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
node1                      : ok=517  changed=5    unreachable=0    failed=1    skipped=612  rescued=0    ignored=0   
node2                      : ok=483  changed=5    unreachable=0    failed=0    skipped=529  rescued=0    ignored=0   
node3                      : ok=436  changed=5    unreachable=0    failed=0    skipped=507  rescued=0    ignored=0   
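
For reference, the failing task just runs etcdctl on the node, so the same health check can be reproduced by hand. This is a minimal sketch; the certificate paths are assumptions based on where Kubespray usually places the etcd client certificates and may differ on your machines:

# run on node1 as root; reproduces the check from the failing task
export ETCDCTL_API=3
export ETCDCTL_ENDPOINTS=https://192.168.122.233:2379
export ETCDCTL_CACERT=/etc/ssl/etcd/ssl/ca.pem               # assumed path
export ETCDCTL_CERT=/etc/ssl/etcd/ssl/admin-node1.pem        # assumed path
export ETCDCTL_KEY=/etc/ssl/etcd/ssl/admin-node1-key.pem     # assumed path
/usr/local/bin/etcdctl endpoint --cluster status
/usr/local/bin/etcdctl endpoint --cluster health

A "context deadline exceeded" response means the client timed out talking to that endpoint rather than etcd rejecting the request.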

Here is my hosts.yaml:

all:
  hosts:
    node1:
      ansible_host: 134.122.85.85
      ip: 134.122.85.85
      access_ip: 134.122.85.85
    node2:
      ansible_host: 134.122.69.63
      ip: 134.122.69.63
      access_ip: 134.122.69.63
    node3:
      ansible_host: 161.35.28.90
      ip: 161.35.28.90
      access_ip: 161.35.28.90
  children:
    kube_control_plane:
      hosts:
        node1:
        node2:
    kube-node:
      hosts:
        node1:
        node2:
        node3:
    etcd:
      hosts:
        node1:
        node2:
        node3:
    k8s-cluster:
      children:
        kube_control_plane:
        kube-node:
    calico-rr:
      hosts: {}

The hosts can communicate with each other over the network.
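
For example, basic TCP reachability of the etcd client and peer ports (2379 and 2380) can be checked from each node; a small sketch, assuming netcat is installed, with the other nodes' inventory addresses filled in:

# from node1: check that node2 and node3 answer on the etcd ports
for host in 134.122.69.63 161.35.28.90; do
  for port in 2379 2380; do
    nc -vz -w 3 "$host" "$port"
  done
done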

SYN:
Your error message mentions private IP addresses (https://192.168.122.120:2379), while your inventory contains only public addresses. That suggests your inventory is wrong (see https://github.com/kubernetes-sigs/kubespray/issues/6054). Also: I would reconsider setting up my cluster on machines that are publicly reachable. Usually only your load balancer would be publicly available, and exposing your Kubernetes nodes publicly is a bad idea: you won't have much wiggle room setting up firewalling, ... unless you're setting up some honey pot...
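
If it helps: Kubespray ships an inventory builder that can regenerate hosts.yaml from a plain list of node IPs, which avoids this kind of mismatch. A minimal sketch, run from the kubespray checkout; the three IPs are placeholders for your nodes' internal addresses, and the inventory/mycluster path should be adjusted to match your layout (you use inventory/local):

cp -rfp inventory/sample inventory/mycluster
declare -a IPS=(192.168.122.233 192.168.122.120 192.168.122.242)
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}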
ead79:
I changed my hosts.yaml file and got the same error again:

all:
  hosts:
    node1:
      ansible_host: 192.168.122.233
      ip: 192.168.122.233
      access_ip: 192.168.122.233
    node2:
      ansible_host: 192.168.122.120
      ip: 192.168.122.120
      access_ip: 192.168.122.120
    node3:
      ansible_host: 192.168.122.242
      ip: 192.168.122.242
      access_ip: 192.168.122.242
    ...etc

In this example I use the internal IPs of my KVM guests. The next two attempts failed the same way.


