
invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable



Error: Kubernetes cluster unreachable: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable

Error: Get "http://localhost/api/v1/namespaces/devcluster-ns": dial tcp [::1]:80: connectex: No connection could be made because the target machine actively refused it.

Error: Get "http://localhost/api/v1/namespaces/devcluster-ns/secrets/devcluster-storage-secret": dial tcp [::1]:80: connectex: No connection could be made because the target machine actively refused it.

I provisioned an Azure AKS cluster using Terraform. After the initial deployment, I made some changes to the code. Changes to other parts of the configuration apply normally, but when I increase or decrease the number of nodes in "default_node_pool", I get the errors shown above.

In particular, the kubernetes provider problem seems to occur when changing "os_disk_size_gb" or "node_count" of default_node_pool.
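For example, a change to the variable values along these lines (the numbers here are only illustrative, not my real values) is enough to trigger the error on the next plan/apply:

# terraform.tfvars (illustrative values only)
agents_count    = 5    # was 3; scale the default node pool out
os_disk_size_gb = 128  # was 64

The cluster resource is defined as follows: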

resource "azurerm_kubernetes_cluster" "aks" {
  name                                = "${var.cluster_name}-aks"
  location                            = var.location
  resource_group_name                 = data.azurerm_resource_group.aks-rg.name 
  node_resource_group                 = "${var.rgname}-aksnode"  
  dns_prefix                          = "${var.cluster_name}-aks"
  kubernetes_version                  = var.aks_version
  private_cluster_enabled             = var.private_cluster_enabled
  private_cluster_public_fqdn_enabled = var.private_cluster_public_fqdn_enabled
  private_dns_zone_id                 = var.private_dns_zone_id

  default_node_pool {
    name                         = "syspool01"
    vm_size                      = "${var.agents_size}"
    os_disk_size_gb              = "${var.os_disk_size_gb}"
    node_count                   = "${var.agents_count}"
    vnet_subnet_id               = data.azurerm_subnet.subnet.id
    zones                        = [1, 2, 3]
    kubelet_disk_type            = "OS"
    os_sku                       = "Ubuntu"
    os_disk_type                 = "Managed"
    ultra_ssd_enabled            = "false"
    max_pods                     = "${var.max_pods}"
    only_critical_addons_enabled = "true"
    # enable_auto_scaling = true
    # max_count           = ${var.max_count}
    # min_count           = ${var.min_count}
  }

I specified the kubernetes and helm providers in provider.tf as below, and there was no problem with the first deployment.

data "azurerm_kubernetes_cluster" "credentials" {
  name                = azurerm_kubernetes_cluster.aks.name
  resource_group_name = data.azurerm_resource_group.aks-rg.name
}

provider "kubernetes" {
  host                   = data.azurerm_kubernetes_cluster.credentials.kube_config[0].host
  username               = data.azurerm_kubernetes_cluster.credentials.kube_config[0].username
  password               = data.azurerm_kubernetes_cluster.credentials.kube_config[0].password
  client_certificate     = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config[0].client_certificate)
  client_key             = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config[0].client_key)
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config[0].cluster_ca_certificate)
}

provider "helm" {
  kubernetes {
    host = data.azurerm_kubernetes_cluster.credentials.kube_config[0].host
    # username               = data.azurerm_kubernetes_cluster.credentials.kube_config[0].username
    # password               = data.azurerm_kubernetes_cluster.credentials.kube_config[0].password
    client_certificate     = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config[0].client_certificate)
    client_key             = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config[0].client_key)
    cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config[0].cluster_ca_certificate)
  }
}

I need terraform refresh, terraform plan, and terraform apply to run normally without this error.

An additional finding: if I add config_path = "~/.kube/config" to provider.tf and deploy, it seems to work fine. But what I want is for this to work without specifying "~/.kube/config".

provider "kubernetes" {
  config_path            = "~/.kube/config". # <=== add this
  host                   = data.azurerm_kubernetes_cluster.credentials.kube_config[0].host
  username               = data.azurerm_kubernetes_cluster.credentials.kube_config[0].username
  password               = data.azurerm_kubernetes_cluster.credentials.kube_config[0].password
  client_certificate     = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config[0].client_certificate, )
  client_key             = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config[0].client_key, )
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config[0].cluster_ca_certificate, )
}

Shouldn't terraform plan, terraform refresh, and terraform apply be able to read "data.azurerm_kubernetes_cluster.credentials" as expected? Why can't it be read here?
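For reference, one way to inspect what the data source returns at plan/apply time would be an output like this (the output name "aks_host" is just something made up for debugging, not part of my configuration):

output "aks_host" {
  # kube_config is sensitive in the azurerm provider, so the output must be marked sensitive too
  value     = data.azurerm_kubernetes_cluster.credentials.kube_config[0].host
  sensitive = true
}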


