We have an AWS EC2 instance with a ~70 GB gp3 (SSD) EBS volume attached. Occasionally we scp new files onto this volume, but the rest of the time the instance only performs read operations on it.
The instance receives requests from the internet, and for each request it has to read 2,000 files (1,000 of ~60 KB and 1,000 of ~414 B). Now we want to put this instance in an Auto Scaling group. What should we do with this EBS volume? As far as I have read, my options are:
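To make the access pattern concrete, this is roughly what each request does today (a simplified sketch; the mount path and key list are placeholders, not our real layout):

```python
# Simplified per-request read pattern (illustrative only).
import os

DATA_DIR = "/mnt/data"  # placeholder: where the gp3 volume is mounted

def handle_request(keys):
    """Read ~2,000 files per request: ~1,000 of ~60 KB and ~1,000 of ~414 B."""
    payloads = []
    for key in keys:
        with open(os.path.join(DATA_DIR, key), "rb") as f:
            payloads.append(f.read())
    return payloads
```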
1) Create each new instance with a new EBS volume copied from the original at launch -> every copy moves GBs and consumes IOPS, which means extra money plus extra launch time for each copy (see the rough sketch after this list for what I mean).
2) Use EBS Multi-Attach -> higher storage cost, since Multi-Attach only works with Provisioned IOPS (io1/io2) volumes, not General Purpose gp3.
3) Use EFS -> lower throughput and higher latency than EBS, and a higher per-GB price, but the storage is shared, so it can end up cheaper once several instances are running.
4) Run an NFS server on a micro instance in its own Auto Scaling group (min: 1, max: 1) so it gets replaced on failure, and reattach the EBS volume each time that instance is recreated.
5) Use GlusterFS -> I think it's quite expensive on AWS. Is it?
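For option 1, this is roughly what I imagine running at launch (a sketch only, assuming the copy is done from a snapshot of the original volume; all IDs, the region, and the device name are placeholders):

```python
# Option 1 sketch: create a fresh gp3 volume from a snapshot of the original
# volume and attach it to the newly launched instance. IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # placeholder region

SNAPSHOT_ID = "snap-0123456789abcdef0"  # snapshot of the original ~70 GB volume
INSTANCE_ID = "i-0123456789abcdef0"     # the instance just launched by the ASG
AZ = "eu-west-1a"                       # must match the instance's AZ

# This copy is where the per-launch GB/IOPS cost and delay come from.
volume = ec2.create_volume(
    SnapshotId=SNAPSHOT_ID,
    AvailabilityZone=AZ,
    VolumeType="gp3",
)
volume_id = volume["VolumeId"]

# Wait until the new volume is available, then attach it.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])
ec2.attach_volume(VolumeId=volume_id, InstanceId=INSTANCE_ID, Device="/dev/sdf")
# The volume still has to be mounted from inside the OS after attachment.
```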
I don't think we will ever have more than 100 GB of shared data in the long run. Which approach do you think is best for this scenario? I was leaning toward 5), but because of its cost I am now considering 4).