Using Terraform v1.0.11 on Ubuntu 18.04.
After a terraform apply with the main.tf below, and after waiting for the instance to pass its status checks (and then another minute for good measure), every SSH attempt times out:
$ ssh -v -i ~/.ssh/toydeploy.pem ubuntu@18.144.125.224
OpenSSH_7.6p1 Ubuntu-4ubuntu0.5, OpenSSL 1.0.2n 7 Dec 2017
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Connecting to 18.144.125.224 [18.144.125.224] port 22.
debug1: connect to address 18.144.125.224 port 22: Connection timed out
ssh: connect to host 18.144.125.224 port 22: Connection timed out
I've brought up an instance manually with the same AMI and key pair, and can SSH in. Comparing network and security settings in the console, the only differences I've noticed are that the manually-deployed instance is using the default VPC, and "Answer private resource DNS name" shows "IPv4 (A)" for the manually-deployed instance and "-" for the Terraformed one. Both seem benign, but I may be wrong.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }
}

provider "aws" {
  profile = "default"
  region  = "us-west-1"
}
variable "cidr_vpc" {
description = "CIDR block for VPC"
default = "10.1.0.0/16"
}
variable "cidr_subnet" {
description = "CIDR block for subnet"
default = "10.1.0.0/20"
}
resource "aws_vpc" "toydeploy-vpc" {
cidr_block = var.cidr_vpc
enable_dns_hostnames = true
enable_dns_support = true
}
resource "aws_subnet" "toydeploy-subnet" {
vpc_id = aws_vpc.toydeploy-vpc.id
cidr_block = var.cidr_subnet
}
resource "aws_security_group" "toydeploy-sg" {
name = "toydeploy-sg"
vpc_id = aws_vpc.toydeploy-vpc.id
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = [
"0.0.0.0/0"
]
}
# Terraform removes the default rule, so we re-add it.
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_instance" "toydeploy" {
ami = "ami-083f68207d3376798" # Ubuntu 18.04
instance_type = "t2.micro"
security_groups = ["${aws_security_group.toydeploy-sg.id}"]
subnet_id = aws_subnet.toydeploy-subnet.id
associate_public_ip_address = true
key_name = "toydeploy"
}
If nothing above jumps out as a problem and you can point me to a working example, that would be appreciated too.
Resolved
A closer examination showed that the route table only had the local route for the VPC CIDR and no 0.0.0.0/0 route, so the subnet had no path to the internet. Adding an internet gateway, a route table with a default route through it, and an association for the subnet resolved the issue:
resource "aws_internet_gateway" "toydeploy-ig" {
vpc_id = aws_vpc.toydeploy-vpc.id
}
resource "aws_route_table" "toydeploy-rt" {
vpc_id = aws_vpc.toydeploy-vpc.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.toydeploy-ig.id
}
}
resource "aws_route_table_association" "toydeploy-rta" {
subnet_id = aws_subnet.toydeploy-subnet.id
route_table_id = aws_route_table.toydeploy-rt.id
}