Best practice: migrating multiple VMs and VHosts to Docker

I currently have about 20 sites and applications hosted in AWS EC2. Some have their own EC2 instance, whilst others share an instance using multiple virtual hosts.

Each site is completely separate and unrelated to the others. The ones which share an EC2 are generally much smaller, with little traffic or resource requirement (hence the shared server).

I also have one EC2 server which is simply used to run batch and scheduled tasks alongside the live version of the site, to ensure the live site stays accessible even when the scheduled tasks are heavy.

I am looking to make use of Docker across my whole dev > prod pipeline for better use of server resources, easier migrations between environments, etc.

I'm keen to get your thoughts on the best practice for production server hardware.

Is it best to use one larger EC2 instance and run every site as its own Docker container on it? This sounds like less server admin and a tidier overall setup, and from what I understand each Docker container still keeps itself to itself from a security point of view. But any server issue or resource spike would impact all sites (mitigated by a load balancer).
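
To illustrate the resource-spike concern: on a single shared host I would expect to give each site's container a hard CPU/RAM cap, so one site can't starve the rest. A rough sketch with the Python Docker SDK; the image name, port and limits are placeholders:

```python
import docker

# Connect to the local Docker daemon on the shared EC2 host.
client = docker.from_env()

# Each site runs as its own container with hard resource caps, so a spike
# in one site cannot starve the others. Image name, port mapping and the
# limit values are placeholders for illustration only.
client.containers.run(
    "site-a:latest",
    name="site-a",
    detach=True,
    ports={"80/tcp": 8081},         # host port 8081 -> container port 80
    mem_limit="512m",               # hard memory cap
    nano_cpus=500_000_000,          # half a CPU
    restart_policy={"Name": "unless-stopped"},
)
```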

Or am I best to keep them split across multiple EC2 instances, i.e. one EC2 per Docker container? This seems completely against the point of Docker, but I'm not sure if I'm missing something.

Using a single EC2 for all sites also makes it easier (less admin) to set up load balancers and/or failover servers.

Note: if it makes any difference, I use RDS for MySQL; there is no MySQL running directly on any EC2 instance.

Thanks in advance

MLu

We typically use ECS (Elastic Container Service) with X tasks (= your websites) and Y hosts (= EC2 instances that run those tasks), where X >= Y. We then let ECS distribute the tasks across the hosts as it sees fit, according to their requirements. You can specify the amount of RAM and CPU each task/website needs.
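
To give a rough idea of what that looks like, here is a minimal boto3 sketch of registering one site as a task definition with an explicit CPU/RAM reservation. The family name, region and image URI are placeholders, not a drop-in config:

```python
import boto3

ecs = boto3.client("ecs", region_name="eu-west-1")  # region is a placeholder

# One task definition per site, with explicit CPU/RAM so ECS knows how to
# pack the tasks onto the available EC2 hosts.
ecs.register_task_definition(
    family="site-a",
    requiresCompatibilities=["EC2"],
    networkMode="bridge",   # dynamic host ports, plays nicely with ALB target groups
    containerDefinitions=[
        {
            "name": "web",
            "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/site-a:latest",
            "essential": True,
            "cpu": 256,      # 0.25 vCPU reservation
            "memory": 512,   # MiB hard limit
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        }
    ],
)
```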

We also have the EC2 instances in an Auto-Scaling Group - if one of them dies it's automatically replaced and ECS then automatically re-deploys the lost containers and registers them to the Load Balancers, all without any human intervention.
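
The self-healing part is essentially an ECS service with a desired count and an attached target group: ECS keeps the count topped up and (re)registers tasks with the load balancer when hosts come and go. A hedged sketch along those lines; the cluster name and target group ARN are placeholders:

```python
import boto3

ecs = boto3.client("ecs", region_name="eu-west-1")

# The service keeps `desiredCount` copies of the task running and registers
# them with the ALB target group, including after a host is replaced by the ASG.
ecs.create_service(
    cluster="prod-sites",
    serviceName="site-a",
    taskDefinition="site-a",   # latest revision of the family registered above
    desiredCount=2,
    launchType="EC2",
    loadBalancers=[
        {
            "targetGroupArn": "arn:aws:elasticloadbalancing:eu-west-1:123456789012:"
                              "targetgroup/site-a/0123456789abcdef",
            "containerName": "web",
            "containerPort": 80,
        }
    ],
)
```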

You may also want to run some tasks in AWS Fargate, which is a serverless container service - especially for the larger tasks/websites it may be a good option. For tiny tasks it's often more economical to consolidate them on a single EC2 instance.
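
If you go the Fargate route, the service looks much the same except for the launch type, the awsvpc network mode and a network configuration. Again a sketch, with placeholder subnet and security-group IDs:

```python
import boto3

ecs = boto3.client("ecs", region_name="eu-west-1")

# Same idea as the EC2-backed service, but with no hosts to manage. The task
# definition would be registered with requiresCompatibilities=["FARGATE"],
# networkMode="awsvpc" and task-level cpu/memory values.
ecs.create_service(
    cluster="prod-sites",
    serviceName="big-site",
    taskDefinition="big-site",
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],      # placeholder
            "securityGroups": ["sg-0123456789abcdef0"],   # placeholder
            "assignPublicIp": "ENABLED",
        }
    },
)
```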

The bottom line is to decouple the tasks (containers) from the hosts - have a pool of websites and don't care where they run, and have a pool of hosts and don't care what runs on them. It needs some level of automation though - you want the tasks automatically registered to ALB Target Groups, hosts automatically added when running out of capacity / replaced if they die, etc. But this basic automation should be a given anyway.
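
The "hosts automatically added when running out of capacity" piece is what an ECS capacity provider backed by that Auto-Scaling Group gives you. One way to wire it up, with placeholder names and ARNs:

```python
import boto3

ecs = boto3.client("ecs", region_name="eu-west-1")

# Tie the cluster to the Auto-Scaling Group so ECS can grow and shrink the
# pool of hosts based on the tasks it needs to place.
ecs.create_capacity_provider(
    name="prod-sites-cp",
    autoScalingGroupProvider={
        "autoScalingGroupArn": "arn:aws:autoscaling:eu-west-1:123456789012:"
                               "autoScalingGroup:uuid:autoScalingGroupName/prod-sites-asg",
        "managedScaling": {"status": "ENABLED", "targetCapacity": 100},
    },
)

ecs.put_cluster_capacity_providers(
    cluster="prod-sites",
    capacityProviders=["prod-sites-cp"],
    defaultCapacityProviderStrategy=[{"capacityProvider": "prod-sites-cp", "weight": 1}],
)
```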

Hope that helps :)
