Score:1

Are there any benefits in using HTTPS between a load balancer and EC2 targets?

ru flag

I've spent some time refactoring a load-balanced web application in AWS to make it end-to-end HTTPS: CloudFront->ALB->EC2. This was mostly just for fun, to see if I could do it. Having jumped through quite a lot of hoops to make it work, I'm now wondering if it's worth upgrading the production infrastructure to work this way. Currently in production it's HTTPS only at the front end and between CloudFront and the ALB, but between the ALB and the EC2 instances it's plain HTTP.

Are there any actual benefits to using HTTPS between the Application Load Balancer and the EC2 instances?

I had originally hoped to have it end-to-end HTTP/2, but I couldn't get this working to the EC2 instances, so that leg has to be HTTP/1.1.

Some details about the setup:

The EC2 instances are running IIS on Windows Server 2022. I have a Launch Template with UserData that fully configures them, including creating the self-signed SSL certificates for IIS.

The ALB and CloudFront obviously use ACM for the certificates.

The setup seems stable, but I'm not entirely happy about the amount of complexity I've introduced to support ALB->EC2 HTTPS. The sites now need an extra HTTP binding for the health check, instead of a single binding handling both the traffic and the health check.
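For context, the UserData approach described above looks roughly like this. This is a minimal, illustrative sketch only (site name and ports are placeholders, not the poster's actual configuration), assuming the standard PKI and WebAdministration modules available on Windows Server 2022:

```powershell
# Sketch of a UserData step that creates a self-signed certificate
# and binds it to IIS. Names and ports are illustrative.
Import-Module WebAdministration

# Create a self-signed certificate in the machine store
$cert = New-SelfSignedCertificate -DnsName $env:COMPUTERNAME `
    -CertStoreLocation 'Cert:\LocalMachine\My'

# HTTPS binding for the ALB target group traffic
New-WebBinding -Name 'Default Web Site' -Protocol https -Port 443
$binding = Get-WebBinding -Name 'Default Web Site' -Protocol https
$binding.AddSslCertificate($cert.Thumbprint, 'My')

# A separate plain-HTTP binding is kept for the target group
# health check, as described in the question.
```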

The deployment pipeline has also become more complex, with an extra step to retrieve an SSL certificate and apply it to the IIS site.

One potential benefit is that the site itself knows it's running over a secure connection, so self-referencing URLs will be generated properly by default. However, I've mostly found workarounds for this anyway, by manually updating the server variables in the application code.
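As an aside for readers in the same "plain HTTP behind the ALB" situation: one common IIS workaround for the self-referencing-URL problem is a URL Rewrite rule that restores the HTTPS server variable from the X-Forwarded-Proto header the ALB sets. This is a generic sketch, not necessarily the exact workaround used here; it assumes the URL Rewrite module is installed and that HTTPS has been added to the allowed server variables at server level (applicationHost.config):

```xml
<!-- web.config sketch: mark the request as secure when the ALB
     forwarded it over HTTPS, so the application generates https
     self-referencing URLs. Requires the URL Rewrite module; the
     HTTPS variable must be listed under allowedServerVariables
     in applicationHost.config. -->
<system.webServer>
  <rewrite>
    <rules>
      <rule name="RestoreHttpsFromXForwardedProto">
        <match url=".*" />
        <conditions>
          <add input="{HTTP_X_FORWARDED_PROTO}" pattern="^https$" />
        </conditions>
        <serverVariables>
          <set name="HTTPS" value="on" />
        </serverVariables>
        <action type="None" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```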

It seems to be considered good practice to use end-to-end HTTPS, but I'm not entirely sure why. I don't want to change the production system unless there are specific benefits to doing so.

Score:3
in flag

It seems to be considered a good practice to use end-to-end HTTPS, but I'm not entirely sure why.

Modern security thinking is that you don't consider your own network / datacenter any more trusted than your WAN or the regular internet.

Traditionally one would allow more relaxed security standards in the datacenter, within the "secure" perimeter of your own network. Both internal systems and users would be trusted: implicitly expected to be secure and never abusive or malicious. One only added TLS, for example, for connections crossing the perimeter and borders of your "secure" internal network.

Your current configuration of terminating TLS on the load balancer and using plain HTTP for the communication between load balancers and the back-end servers fits in that traditional world view and security concept.

Nowadays the increasingly prevalent security concept is one of "zero trust" and "encryption everywhere", which abandons the concept of secure and trusted internal networks/systems/users and applies the same rigorous level of security everywhere, regardless.

In a cloud environment you don't have your own physical network but share resources with potentially all other customers of the cloud provider.
You hope that the cloud provider isn't malicious and can be trusted to not eavesdrop on your communication. You hope and expect that the logical separation of your (internal) traffic is robust and your data can't be intercepted by those other customers either, but you have much fewer guarantees there.
You control those risks by not implicitly trusting the security of the (virtual) internal network and rigorously applying TLS encryption, for the traffic from the load balancer to your application server, for the applications accessing the database, etc. etc.


As a real-world analogy:

Your home is in a gated community, protected by a security fence and guards at the gate. Maybe you even have an owners' association that vets potential new residents and only allows "the right kind of people" to become your new neighbours.

Would you need to lock your front door at night and when you leave your home?

In traditional security thinking: no, you wouldn't need to lock your doors at night, because you're safe in your gated community and all the "bad people" will be kept out by the fence and the security guards.
And your neighbours won't be nosy gossips who simply let themselves into your home.

In modern security thinking: yes, of course you lock your front door, because when (not if) the fence is breached and/or the guards fail to keep "a bad person" out, that bad person can't simply enter every single home, but will still need to pick locks or force entry. That delays their unwanted access, increases the odds of being detected, and reduces the number of houses they can burglarise.

Nikita Kipriyanov avatar
za flag
While this is indeed true, I am very sad that nobody considers the additional CPU cycles spent on this encryption/decryption, which could even happen on the same machine, and the various implications: the additional load and delay (and arguments for when it could be considered negligible and when it couldn't), its environmental impact (additional CPU work wastes additional energy), and so on. Returning to the analogy, it costs money to put locks on each house, it delays valid entry for those who possess the key, it adds the risk of losing the key, and so on. Nothing comes without a price.
HBruijn avatar
in flag
Indeed, nothing comes without a price, and that also holds true for good/better security. In a proper risk analysis you would talk about the odds and probabilities of certain risks happening and the potential loss that will/could be realised when those risks occur, and then make the trade-off: EITHER accept the risks, do nothing, and accept the costs when they happen, OR accept that preventative measures to completely eliminate or (significantly) reduce those risks come with increased complexity, work, and investment, and accept those.
HBruijn avatar
in flag
TLS and encryption have become a commodity and much cheaper in many aspects, with built-in instruction sets in CPUs for example. Their expense has become low enough that most professionals would consider it negligent not to set them up. But I'm the first one to preach that encryption is not the secret sauce that will automatically make your applications and infrastructure 100% secure either. Set up encryption but omit to change the default administrator password (or don't set any access controls at all) and your security is still crap, for example.
Nikita Kipriyanov avatar
za flag
Yes, "security is a process". What I wanted to say is that any security measures set up without the evaluation of the threat model should be treated with no more respect than any premature code optimization.
user1751825 avatar
ru flag
@NikitaKipriyanov Just thought I'd mention that after implementing full end-to-end SSL, I haven't noticed any increase in average CPU usage. My understanding is that this isn't really an issue anymore with newer-generation CPUs.
user1751825 avatar
ru flag
@HBruijn That's a point well worth raising, and yes I'm considering all other aspects of security as well. I'm aware that SSL is just one small part of the overall security of an application. Also, as it turned out, the end-to-end SSL setup wasn't all that difficult, and doesn't cost anything extra, or reduce performance in any way, so I'm calling this one a win :-)