I am running a pod (image bitnami/kubectl) in a Kubernetes cluster (clusterA), but its kubeconfig points to a different cluster (clusterB).
This is on-premises Kubernetes 1.21.7 (on VMs), installed via kubeadm.
kubeconfig used:
apiVersion: v1
kind: Config
clusters:
- name: default-cluster
  cluster:
    certificate-authority-data: XXXXXXXXXXXXXXXXXXXXXX
    server: https://clusterB:6443
contexts:
- name: default-context
  context:
    cluster: default-cluster
    namespace: default
    user: default-user
current-context: default-context
users:
- name: default-user
  user:
    token: YYYYYYYYYYYYYYYYYYYYYY
But even though the kubeconfig looks correct, the request never reaches clusterB. It seems the control plane in clusterA treats the pod's kubectl request as an attempt to control clusterA itself, which is not what I want: I am trying to reach clusterB (the 'server' defined in the kubeconfig).
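To rule out a parsing problem on my side, here is a minimal sketch that reproduces the kubeconfig above (credentials stripped) and extracts the server URL from it; /tmp/example.kubeconfig is just a scratch path for the example:

```shell
# Reproduce the kubeconfig (only the fields relevant to routing) and print
# the API server URL it declares.
cat > /tmp/example.kubeconfig <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: default-cluster
  cluster:
    server: https://clusterB:6443
current-context: default-context
EOF

awk '/server:/ {print $2}' /tmp/example.kubeconfig
# → https://clusterB:6443

# Against the real file I would then run, with verbose request logging,
#   kubectl --kubeconfig=/etc/custom.kubeconfig --v=6 cluster-info
# to see the exact URL each HTTP request is actually sent to.
```

So the file itself declares clusterB as the server, yet the errors below suggest the request is being evaluated against clusterA's RBAC.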
This is the error when kubectl runs:
Error from server (Forbidden): error when retrieving current configuration of:
Resource: "/v1, Resource=secrets", GroupVersionKind: "/v1, Kind=Secret"
Name: "mysecret", Namespace: "istio-system"
from server for: "STDIN": secrets "mysecret" is forbidden: User "system:serviceaccount:mynamespace:default" cannot get resource "secrets" in API group "" in the namespace "istio-system"
EDIT:
I am invoking kubectl with a custom kubeconfig:
# kubectl --kubeconfig=/etc/custom.kubeconfig cluster-info
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Error from server (Forbidden): services is forbidden: User "system:serviceaccount:mynamespace:default" cannot list resource "services" in API group "" in the namespace "kube-system"
# grep server /etc/letsencrypt/custom.kubeconfig
server: https://clusterB:6443
EDIT2:
# kubectl --kubeconfig=/etc/custom.kubeconfig cluster-info dump
Error from server (Forbidden): nodes is forbidden: User "system:serviceaccount:cadeado--producao:default" cannot list resource "nodes" in API group "" at the cluster scope
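Both errors name a service-account user rather than the default-user from my kubeconfig, so I suspect kubectl may be falling back to the in-cluster credentials that clusterA mounts into the pod. A sketch of what I am considering to rule that out (the pod and secret names here are hypothetical, not from my setup): disable the token automount so there is no in-cluster identity to fall back to.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kubectl-debug                  # hypothetical name
spec:
  automountServiceAccountToken: false  # no clusterA token mounted into the pod
  containers:
  - name: kubectl
    image: bitnami/kubectl
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: kubeconfig
      mountPath: /etc/custom.kubeconfig
      subPath: custom.kubeconfig
  volumes:
  - name: kubeconfig
    secret:
      secretName: custom-kubeconfig    # hypothetical Secret holding the file
```

With the automount disabled, any remaining Forbidden error could no longer be explained by clusterA's service account being used silently.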