I have a setup in AWS with a few environments (Dev/Staging/Prod accounts/roles, each with its own subnet), all in the Canada (ca-central-1) region, connected to our office by an IPsec VPN. All the environments can talk to each other, and we can reach every environment from our office VPN connections via the Transit Gateway.
We are now bringing up EC2 instances with the same environment structure (same accounts/roles, different subnets) in the us-east-1 region. I'd like to connect things so that the VPN users at my office can reach the new us-east-1 instances, and so the Canadian environments can talk to the US ones. I thought this would be simple to do with peering (either Transit Gateway peering or plain VPC peering), but I can't get it to work. Staging US can talk to Staging Canada fine (the TGWs are set up in the Staging accounts), but the peering link won't carry the VPN traffic or the other environments' traffic. As far as I can tell the routing is configured correctly: I've checked the Transit Gateway attachments, the Transit Gateway route tables, and the route tables attached to the EC2 instances' subnets, and the routes are all there, but the traffic just won't flow.
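For reference, the kind of checks I ran can be reproduced with the AWS CLI roughly like this (the route-table ID below is a placeholder, not my real one):

```shell
# List the Transit Gateway route tables in each region
aws ec2 describe-transit-gateway-route-tables --region ca-central-1
aws ec2 describe-transit-gateway-route-tables --region us-east-1

# Inspect the routes in a given TGW route table; note that routes over a
# TGW peering attachment must be static -- they are never propagated.
aws ec2 search-transit-gateway-routes \
    --transit-gateway-route-table-id tgw-rtb-0123456789abcdef0 \
    --filters "Name=type,Values=static,propagated" \
    --region ca-central-1

# Confirm the subnet route tables on the EC2 side point at the TGW
aws ec2 describe-route-tables --region us-east-1 \
    --filters "Name=route.transit-gateway-id,Values=*"
```

In all of these the routes for the remote-region CIDRs and the office VPN CIDR show up where I'd expect them.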
I know in the old days you had to set up an EC2 instance in each region and build your own VPN router to do this, but I thought the newer peering services were supposed to fix that. Do I still need to build my own link with EC2 boxes? And if so, any idea how much CPU I'd need to get decent throughput?