
Can't set up GCP's external load balancer to work correctly with Terraform


Using Terraform, I want to build an infrastructure that consists of an external load balancer (LB) and a MIG with 3 VMs. Each VM within the MIG should run a server that listens on port 80. Furthermore, I would like to set up health checks for the MIG. Additionally, I want to have an extra VM within the subnet so that I can ssh onto it and check whether a connection to the VMs within the MIG can be established.

To achieve this, I'm using the following Terraform modules: "GoogleCloudPlatform/lb-http/google" and "terraform-google-modules/vm/google//modules/mig". Unfortunately, after running terraform apply, all health checks fail and the LB is not accessible via its external IP.

I will put my code in the later part of this post, but first I would like to understand the different attributes of the modules mentioned above:

  1. Does the MIG module's named_ports attribute refer to the port my servers listen on? In my case, 80.
  2. Does the MIG module's health_check attribute refer to the VMs within the MIG? If yes, I assume that its port attribute should point to the port the servers listen on, again 80.
  3. Does the LB module's backends attribute refer to the VMs within the MIG? Should the default backend's port attribute again point to 80?
  4. Finally, the LB module's health_check attribute plays the same role as the MIG's, right? Once again, the port specified there should be 80.
  5. What do the attributes target_tags and firewall_networks refer to? The doc says: "Names of the networks to create firewall rules in". I don't get it. How does the load balancer configuration determine which firewall rules are added to the network? Moreover, which firewall rules are added to the named network? If I add my-network there, what firewall rules will be added to it?
  6. Using the VM named ssh-vm, I want to curl the VMs within the MIG. For this, I created the firewall rules allow-ssh and allow-internal. Unfortunately, when I ssh onto the VM and curl one of the VMs within the MIG, I receive: connection refused.
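To make questions 1-4 concrete, this is how I currently assume the ports should line up (trimmed fragments of the config posted below, most attributes elided, so this is not valid stand-alone config):

```hcl
# (1)+(2) MIG side: servers listen on 80, health check probes 80
module "vm_mig" {
  named_ports = [{ name = "http", port = 80 }]  # port the servers listen on?
  health_check = {
    type = "http"
    port = 80  # probe goes to each VM in the MIG on this port?
  }
}

# (3)+(4) LB side: backend traffic and LB health check both target 80
module "gce-lb-http" {
  backends = {
    default = {
      port      = 80      # destination port on the backend VMs?
      port_name = "http"  # must match named_ports.name in the MIG?
      health_check = { port = 80 }
    }
  }
}
```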

EDIT: I was asked to provide the details of how I ssh to the VM and curl the machines within the MIG. All the MIG-VMs and the ssh-vm are within 10.0.101.0/24. ssh-vm has an external IP, say X. To establish the connection, I open the terminal and run ssh -i /my_key $USER@X. Then I pick an internal IP of one of the machines, for example: 10.0.101.3, and I execute curl 10.0.101.3:80. I receive: Failed to connect to 10.0.101.3 port 80: Connection refused
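From what I know, "Connection refused" means the TCP SYN was answered with a RST, i.e. nothing is listening on that port (a firewall silently dropping packets would show up as a timeout instead). So on one of the MIG VMs, the first things to check would be something like the following (assuming the Ubuntu image from my template, which runs startup scripts via the Google guest agent):

```shell
# Is the busybox container from the startup script actually running?
sudo docker ps

# Is anything listening on port 80 at all?
sudo ss -tlnp | grep ':80'

# Did the startup script run, and did snap/docker report errors?
sudo journalctl -u google-startup-scripts.service --no-pager | tail -n 50
```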


Here is the main.tf file:

data "external" "my_ip_addr" {
  program = ["/bin/bash", "${path.module}/getip.sh"]
}


resource "google_project_service" "project" {
  // ...
}

resource "google_service_account" "service-acc" {
  // ...
}

resource "google_compute_network" "vpc-network" {
  project = var.pro
  name = var.network_name
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "subnetwork" {
  name = "subnetwork"
  ip_cidr_range = "10.0.101.0/24"
  region = var.region
  project = var.pro
  stack_type = "IPV4_ONLY"
  network = google_compute_network.vpc-network.self_link
}

resource "google_compute_firewall" "allow-internal" {
  name    = "allow-internal"
  project = var.pro
  network = google_compute_network.vpc-network.self_link
  allow {
    protocol = "tcp"
    ports = ["80"]
  }
  source_ranges = ["10.0.101.0/24"]
}

resource "google_compute_firewall" "allow-ssh" {
  project = var.pro
  name          = "allow-ssh"
  direction     = "INGRESS"
  network       = google_compute_network.vpc-network.self_link
  allow {
    protocol = "tcp"
    ports = ["22"]
  }
  target_tags   = ["allow-ssh"]
  source_ranges = [format("%s/%s", data.external.my_ip_addr.result["internet_ip"], 32)]
}

resource "google_compute_firewall" "allow-hc" {
  name          = "allow-health-check"
  project = var.pro
  direction     = "INGRESS"
  network       = google_compute_network.vpc-network.self_link
  source_ranges = ["130.211.0.0/22", "35.191.0.0/16"]
  target_tags   = [var.network_name]
  allow {
    ports    = ["80"]
    protocol = "tcp"
  }
}

resource "google_compute_address" "static" {
  project = var.pro
  region = var.region
  name = "ipv4-address"
}

resource "google_compute_instance" "ssh-vm" {
  name = "ssh-vm"
  machine_type = "e2-standard-2"
  project = var.pro
  tags = ["allow-ssh"]
  zone = "europe-west1-b"

  boot_disk {
    initialize_params {
      image = "ubuntu-2004-focal-v20221213"
    }
  }

  network_interface {
    subnetwork = google_compute_subnetwork.subnetwork.self_link
    access_config {
      nat_ip = google_compute_address.static.address
    }
  }

  metadata = {
    startup-script = <<-EOF
        #!/bin/bash
        sudo snap install docker
        sudo docker version > file1.txt
        sleep 5
        sudo docker run -d --rm -p ${var.server_port}:${var.server_port} \
        busybox sh -c "while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; \
        echo 'yo'; } | nc -l -p ${var.server_port}; done"
        EOF
  }

}
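Aside: to convince myself that the nc one-liner in the startup script produces a response an HTTP client accepts, I reproduced its behavior with a small local Python script (my own test harness, not part of the Terraform config):

```python
import http.client
import socket
import threading

# Bind first in the main thread so the client cannot race the listener.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))  # port 0 = pick any free port
srv.listen(1)
port = srv.getsockname()[1]

def serve_once() -> None:
    """Answer a single connection like the busybox/nc loop does:
    a bare HTTP 200 status line, an empty header block, and the body 'yo'."""
    conn, _ = srv.accept()
    conn.recv(1024)  # read (and ignore) the request, as nc does
    conn.sendall(b"HTTP/1.1 200 OK\r\n\r\nyo\n")
    conn.close()

t = threading.Thread(target=serve_once)
t.start()

# Hit the fake server the same way curl or the health check would.
c = http.client.HTTPConnection("127.0.0.1", port, timeout=5)
c.request("GET", "/")
resp = c.getresponse()
body = resp.read().decode()
t.join()
srv.close()

print(resp.status, body.strip())  # prints: 200 yo
```

So the response itself is parseable; the problem must be that the server never comes up on the MIG VMs, or that traffic never reaches it.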

module "instance_template" {
  source = "terraform-google-modules/vm/google//modules/instance_template"
  version = "7.9.0"
  region = var.region
  project_id = var.pro
  network = google_compute_network.vpc-network.self_link
  subnetwork = google_compute_subnetwork.subnetwork.self_link
  service_account = {
    email = google_service_account.service-acc.email
    scopes = ["cloud-platform"]
  }

  name_prefix = "webserver"
  tags = ["template-vm"]
  machine_type = "e2-standard-2"
  startup_script = <<-EOF
  #!/bin/bash
  sudo snap install docker
  sudo docker version > docker_version.txt
  sleep 5
  sudo docker run -d --rm -p ${var.server_port}:${var.server_port} \
  busybox sh -c "while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; \
  echo 'yo'; } | nc -l -p ${var.server_port}; done"
  EOF
  source_image = "https://www.googleapis.com/compute/v1/projects/ubuntu-os-cloud/global/images/ubuntu-2004-focal-v20221213"
  disk_size_gb = 10
  disk_type = "pd-balanced"
  preemptible = true

}

module "vm_mig" {
  source  = "terraform-google-modules/vm/google//modules/mig"
  version = "7.9.0"
  project_id = var.pro
  region = var.region
  target_size = 3
  instance_template = module.instance_template.self_link
  named_ports = [{
    name = "http"
    port = 80
  }]
  health_check = {
    type = "http"
    initial_delay_sec = 30
    check_interval_sec = 30
    healthy_threshold = 1
    timeout_sec = 10
    unhealthy_threshold = 5
    response = ""
    proxy_header = "NONE"
    port = 80
    request = ""
    request_path = "/"
    host = ""
  }
  network = google_compute_network.vpc-network.self_link
  subnetwork = google_compute_subnetwork.subnetwork.self_link
}

module "gce-lb-http" {
  source            = "GoogleCloudPlatform/lb-http/google"
  version           = "~> 4.4"
  project           = var.pro
  name              = "group-http-lb"
  target_tags       = ["template-vm"]
  firewall_networks = [google_compute_network.vpc-network.name]
  backends = {
    default = {
      description                     = null
      port                            = 80
      protocol                        = "HTTP"
      port_name                       = "http"
      timeout_sec                     = 10
      enable_cdn                      = false
      custom_request_headers          = null
      custom_response_headers         = null
      security_policy                 = null
      connection_draining_timeout_sec = null
      session_affinity                = null
      affinity_cookie_ttl_sec         = null

      health_check = {
        check_interval_sec  = null
        timeout_sec         = null
        healthy_threshold   = null
        unhealthy_threshold = null
        request_path        = "/"
        port                = 80
        host                = null
        logging             = null
      }

      log_config = {
        enable = true
        sample_rate = 1.0
      }

      groups = [
        {
          # Each node pool instance group should be added to the backend.
          group                        = module.vm_mig.instance_group
          balancing_mode               = null
          capacity_scaler              = null
          description                  = null
          max_connections              = null
          max_connections_per_instance = null
          max_connections_per_endpoint = null
          max_rate                     = null
          max_rate_per_instance        = null
          max_rate_per_endpoint        = null
          max_utilization              = null
        },
      ]

      iap_config = {
        enable               = false
        oauth2_client_id     = null
        oauth2_client_secret = null
      }
    }
  }
}


  1. Does the MIG module's attribute named_ports refer to the port where my servers run? In my case, 80.

Yes. The named port defines the destination port used for the TCP connection between the proxy (GFE or Envoy) and the backend instance. In your case, port 80.

For questions 2-4, all of the assumptions are correct.

  5. What do the attributes target_tags and firewall_networks refer to? The doc says: "Names of the networks to create firewall rules in"

A network tag is a string that you can add to a Compute Engine VM. When creating a firewall rule, you specify which VMs the rule applies to through target tags: the target_tags of the rule must match a network tag applied to the VM. firewall_networks is simply the list of VPC networks in which the firewall rules will be enforced.
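As a tiny illustration (the resource names here are made up), the string in the VM's tags and in the firewall rule's target_tags must match exactly, and network selects the VPC the rule lives in:

```hcl
resource "google_compute_instance" "web" {
  # ...
  tags = ["web-server"]  # network tag carried by the VM
}

resource "google_compute_firewall" "allow-http" {
  name        = "allow-http"
  network     = google_compute_network.vpc-network.self_link  # VPC the rule is enforced in
  target_tags = ["web-server"]  # rule applies only to VMs carrying this tag
  allow {
    protocol = "tcp"
    ports    = ["80"]
  }
  source_ranges = ["0.0.0.0/0"]
}
```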

How does the load balancer configuration determine which firewall rules are added to the network?

The load balancer doesn't determine which firewall rules are added to the network. When creating a firewall rule, you are asked to select the VPC network where the rule will be enforced.

Moreover, which firewall rules are added to the named network? If I add my-network there, what firewall rules will be added to this network?

In one of the firewall rules from your main.tf file:

resource "google_compute_firewall" "allow-ssh" {
  project = var.pro
  name          = "allow-ssh"
  direction     = "INGRESS"
  network       = google_compute_network.vpc-network.self_link
  allow {
    protocol = "tcp"
    ports = ["22"]
  }
  target_tags   = ["allow-ssh"]
  source_ranges = [format("%s/%s", data.external.my_ip_addr.result["internet_ip"], 32)]
}

This firewall rule is applied to the network google_compute_network.vpc-network.self_link. If you add another VPC called my-network, simply create another google_compute_firewall resource and set my-network as its network.

For question 6, kindly update the question and show us (through screenshots) how you are curling the VMs in the Managed Instance Group.
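In the meantime, you can inspect the health state that the load balancer sees from the CLI. A sketch (BACKEND_SERVICE_NAME and HEALTH_CHECK_NAME are placeholders for whatever names the lb-http module created; list them first to find out):

```shell
# Find the backend service created by the lb-http module
gcloud compute backend-services list --global

# Show per-instance health for that backend service
gcloud compute backend-services get-health BACKEND_SERVICE_NAME --global

# Optionally, turn on health check logging to see individual probe results
gcloud compute health-checks update http HEALTH_CHECK_NAME --enable-logging
```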

mångata:
Hi, thanks for the answer. I edited my post to describe how I try to access the machines. Other than that, you didn't point out any error in the presented configuration, so the question of why the health checks fail and the LB can't be accessed is still open.
mångata:
Is it possible that the default health check firewall rule and the created `allow-internal` rule exclude each other? The first has the IP ranges 130.211.0.0/22 and 35.191.0.0/16, and `allow-internal` has 10.0.101.0/24. Also, is there any way in the GCP console to check what is wrong with the health check?
James S:
In `module "gce-lb-http"`, look for `protocol` and replace `HTTP` with `TCP`; then, under `health_check`, add `type = "TCP"`.