Currently I have a traditional deployment process that works as follows:
- There are many different "functionalities". Each functionality operates in a different environment, but cross-functionality communication exists.
- There is a "Deployer" host machine which pulls the git repositories onto it and deploys the code/assets to the various ECS machines.
- The "Deployer" deploys the different functionalities onto different machines via an automated bash script.
- The "Deployer" bash script can specify how many machines, and which machines, each functionality is deployed to (a rough sketch of this script is included after this list).
- There is an environment config file for each functionality, which the functionality code on the actual machine refers to. The "Deployer" bash script deploys the same config file to all servers of the same functionality (same environment).
- For each functionality, a load balancer distributes requests across the machines of that environment according to its balancing rules.
- Every machine runs the same OS, CentOS7, and there is manual documentation for setting up the needed utilities (httpd, PHP, Node.js, MySQL, their config files, etc.) for each environment. The setup only needs to be done once and takes a few hours.
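For concreteness, here is a rough, purely hypothetical sketch of what the "Deployer" script does for one functionality (the host IPs, paths, and service names are placeholders, not my actual script):

```bash
#!/usr/bin/env bash
# Hypothetical sketch of the "Deployer" logic for one functionality.
# Host IPs, paths, and service names are placeholders.
FUNCTIONALITY="functionality-a"
HOSTS=("10.0.1.11" "10.0.1.12" "10.0.1.13")   # machines assigned to this functionality

# refresh the local checkout on the Deployer host
git -C "/srv/repos/${FUNCTIONALITY}" pull --ff-only

for host in "${HOSTS[@]}"; do
  # copy code/assets to each machine of this functionality
  rsync -az --delete "/srv/repos/${FUNCTIONALITY}/" "deploy@${host}:/var/www/${FUNCTIONALITY}/"
  # every machine of the same functionality gets the same environment config
  scp "configs/${FUNCTIONALITY}.env" "deploy@${host}:/etc/${FUNCTIONALITY}/env.conf"
  # restart the service so the new code takes effect
  ssh "deploy@${host}" "sudo systemctl restart httpd"
done
```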
Now that containerized deployment is increasingly popular, I am considering whether I should switch my process to Kubernetes (or an alternative).
Based on my understanding, the key benefits of containerized deployment, and my current thoughts on them, are:
Ability to deploy code across cloud vendors.
My current thought: since every cloud supports CentOS7, the setup documentation and config files should work on every cloud, right? (I am at the development stage and haven't actually tested on real cloud vendors, so this might be where my misunderstanding lies, but in theory I can't see why it wouldn't work.)
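To make my understanding concrete, here is roughly how I picture the container equivalent of my setup document (purely illustrative; the packages, paths, and registry URL are placeholders, not my actual setup):

```bash
# Purely illustrative: the manual setup steps become a build recipe...
cat > Dockerfile <<'EOF'
FROM centos:7
# utilities that the manual document currently tells us to install by hand
RUN yum -y install httpd php && yum clean all
# application code/assets copied into the image
COPY ./src/ /var/www/html/
CMD ["/usr/sbin/httpd", "-DFOREGROUND"]
EOF

# ...and the resulting image is built once, pushed to a registry,
# and pulled/run as-is on any vendor's machines with a container runtime.
docker build -t registry.example.com/functionality-a:1.0.0 .
docker push registry.example.com/functionality-a:1.0.0
```

As far as I can tell, the main difference from my current approach is that the several-hour manual setup becomes part of the build, but since my setup only happens once per machine, I am not sure how much that matters.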
Easy to scale the number of machines
My current thought: the "Deployer" bash script can already specify how many machines to deploy to and activate them automatically for each functionality. I can't see how scaling up/down becomes easier when containers are used.
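From what I have read (but have not tried), the container-side equivalent would be changing a declared replica count rather than provisioning machines, e.g. with Kubernetes:

```bash
# Based on my reading of the Kubernetes docs; untested by me.
# "functionality-a" is a placeholder Deployment name.
kubectl scale deployment/functionality-a --replicas=10

# Or let an autoscaler adjust the count based on CPU usage:
kubectl autoscale deployment/functionality-a --min=3 --max=10 --cpu-percent=80
```

My understanding is that the scheduler then places the extra containers on whatever nodes have capacity, with no per-machine setup step, but I may be misjudging how much that saves over my script.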
Faster deployment
My current thought: the main overhead in my "Deployer" script is copying large asset files over the network. I can't see how this would be dramatically reduced by using containers.
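One counter-argument I have come across (and am unsure about) is image layering: if the large assets live in their own layer that changes rarely, a registry push/pull supposedly only transfers the layers that actually changed. A hypothetical ordering (paths are placeholders):

```bash
# Hypothetical Dockerfile ordering; paths are placeholders.
cat > Dockerfile <<'EOF'
FROM centos:7
# Large, rarely-changing assets in an early layer: cached by the builder and
# the registry, so it is not re-transferred unless the assets change.
COPY ./assets/ /var/www/assets/
# Frequently-changing application code last, so a typical deploy only
# pushes/pulls this small layer.
COPY ./src/ /var/www/html/
EOF
```

I don't know whether this would actually help in my case.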
More integrated local development
My current thought: local development is currently done using Vagrant virtual machines (all running CentOS7) rather than containers, and I don't find it particularly inconvenient.
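For comparison, my understanding is that the container equivalent of my Vagrant box would be running the exact image that gets deployed, e.g. (the image name and paths are placeholders):

```bash
# Run the same image locally that production would run, exposing the app
# port on localhost and mounting the local source tree for live editing.
docker run --rm -p 8080:80 \
  -v "$(pwd)/src:/var/www/html" \
  registry.example.com/functionality-a:1.0.0
```

But since Vagrant already gives me CentOS7 parity with production, I'm not sure this is a meaningful improvement for me.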
On the other hand, I have read that bugs can be very hard to track down when containers are used and that log files can be hard to manage, which sound like serious issues.
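For what it's worth, the debugging commands I have seen mentioned look like the following (untested by me; the pod name is a placeholder), but I don't yet know how well this works in practice:

```bash
# Stream the logs of a running pod, or of its previous crashed instance
kubectl logs functionality-a-7d4f9c-abcde
kubectl logs functionality-a-7d4f9c-abcde --previous

# Open an interactive shell inside the running container
kubectl exec -it functionality-a-7d4f9c-abcde -- /bin/bash
```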
I am pretty new to containers, and I admit I am one of those people looking at the hype and considering switching away from a development/deployment process I am familiar with. However, my considerations above seem to suggest I should not follow the hype, perhaps due to my lack of knowledge. Is there anything I have missed that would make switching to containers genuinely beneficial?