I have a multi-master k8s cluster. I had to delete some of the master nodes, so I executed a kubeadm delete m2, and did the same for the third node (m3), intending to keep only one master and rejoin the others later.
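From what I have read, removing a control-plane node is normally done with the sequence below (the node name k8s-m2 is from my setup; the commands and flags are the standard kubectl/kubeadm ones), so I suspect my delete skipped the reset step and left stale etcd members behind:

# From a node that can still reach the API: drain the control-plane node
kubectl drain k8s-m2 --ignore-daemonsets --delete-emptydir-data

# On k8s-m2 itself: kubeadm reset includes a remove-etcd-member phase,
# which deregisters this node from the etcd cluster before wiping state
sudo kubeadm reset

# Back on a working node: remove the Node object
kubectl delete node k8s-m2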
However, this somehow broke the remaining master (m1), which now logs these errors:
Jan 12 08:56:29 k8s-m1 kubelet[14734]: E0112 08:56:29.314499 14734
eviction_manager.go:256] "Eviction manager: failed to get summary
stats" err="failed to get node info: node \"k8s-m1\" not found"
Jan 12 08:53:15 k8s-m1 kubelet[14734]: E0112 08:53:15.552154 14734
kubelet.go:2448] "Error getting node" err="node \"k8s-m1\" not
found"
Jan 12 08:56:29 k8s-m1 kubelet[14734]: E0112 08:56:29.571175 14734
event.go:276] Unable to write event:
'&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""},
ObjectMeta:v1.ObjectMeta{Name:"k8s-m1.1739835c5d7370d9",
GenerateName:"", Namespace:"default", SelfLink:"", UID:"",
ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1,
time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>,
DeletionGracePeriodSeconds:(*int64)(nil),
Labels:map[string]string(nil), Annotations:map[string]string(nil),
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil),
ManagedFields:[]v1.ManagedFieldsEntry(nil)},
InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"",
Name:"k8s-m1", UID:"k8s-m1", APIVersion:"", ResourceVersion:"",
FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated
Node Allocatable limit across pods",
Source:v1.EventSource{Component:"kubelet", Host:"k8s-m1"},
FirstTimestamp:time.Date(2023, time.January, 12, 8, 46, 9,
272926425, time.Local), LastTimestamp:time.Date(2023, time.January,
12, 8, 46, 9, 272926425, time.Local), Count:1, Type:"Normal",
EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC),
Series:(*v1.EventSeries)(nil), Action:"",
Related:(*v1.ObjectReference)(nil), ReportingController:"",
ReportingInstance:""}': 'Post
"https://10.10.40.30:6443/api/v1/namespaces/default/events":
EOF'(may retry after sleeping)
The IP address (10.10.40.30) is that of the load balancer.
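In case it narrows things down, hitting the API server directly on m1 versus through the load balancer should show whether the EOF comes from the LB or from the API server itself (here -k only skips certificate checks for a quick reachability test, and <m1-ip> is a placeholder for m1's real address):

curl -k https://<m1-ip>:6443/healthz      # direct to kube-apiserver on m1
curl -k https://10.10.40.30:6443/healthz  # through the load balancer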
Is there a way to revive this master node, so that I don't have to recreate the whole cluster?
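My working theory is that etcd on m1 lost quorum once the other two members disappeared without being deregistered, which would take the API server down with it. This is roughly how I would check, assuming the default kubeadm static-pod setup and certificate paths on m1 (and etcdctl installed on the host; otherwise the same command can be run inside the etcd container):

# Is the apiserver/etcd container even running on m1?
sudo crictl ps -a | grep -E 'kube-apiserver|etcd'

# List the etcd members that m1 still believes exist
sudo ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  member list

# If m2/m3 are still listed, etcd has no quorum and "member remove" will
# itself fail; my understanding is that the escape hatch is restarting
# etcd with --force-new-cluster (or restoring from a snapshot) so that m1
# becomes a single-member cluster again.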