I am looking for a way to whitelist an S3 bucket on my client's server. The bucket hosts a public website, so my first idea was to find the IP address the bucket uses. However, I have read in the documentation that S3 uses dynamic addresses drawn from many servers and picks the one with the lowest latency. What is the best way to configure the server's firewall so it can access my public website hosted on the S3 bucket?
Important: Keep in mind that I cannot filter by URL on the server, only by IP address, because the server is quite old.
Some more details on the setup: the frontend app lives in the S3 bucket, and the REST API for the app runs on an EC2 instance. I have also configured CloudFront in front of the S3 bucket to connect it to my domain.
My goal: to keep my server secure and allow access only to my website. I have found that Amazon publishes a list of all the IP addresses it uses, but the ranges for the services I am using are huge:
{
"ip_prefix": "18.64.0.0/14",
"region": "GLOBAL",
"service": "CLOUDFRONT",
"network_border_group": "GLOBAL"
},
{
"ip_prefix": "18.196.0.0/15",
"region": "eu-central-1",
"service": "EC2",
"network_border_group": "eu-central-1"
}
Thus, allowing all of these ranges is not an option. I would be happy if someone could suggest a good solution for this.
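For context, the prefixes above come from AWS's published ip-ranges.json feed. Here is a minimal sketch, using only Python's standard library, of pulling just the CloudFront prefixes from that feed (the URL is the official one; the filtering logic is illustrative):

import json
import urllib.request

# Official AWS-published IP ranges feed
URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

# Keep only the IPv4 prefixes announced for CloudFront
cloudfront = [p["ip_prefix"] for p in data["prefixes"]
              if p["service"] == "CLOUDFRONT"]

print(len(cloudfront), "CloudFront IPv4 prefixes")
for prefix in cloudfront:
    print(prefix)

Even filtered to a single service, the list runs to a large number of prefixes and changes over time, which is exactly why pinning them in an old IP-only firewall is impractical.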
EDIT: Since there was no way to assign a static IP to the S3 bucket, I found a workaround. On my EC2 instance, where the API was already running, I deployed the front-end app and served it with apache2. Then I assigned a static Elastic IP address to the EC2 instance following this documentation. That let me allow only this one IP address in the server's firewall rules. Additionally, I added the server's IP address to the security group of my EC2 instance so that the server can access the webpage.
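For anyone reproducing this workaround, here is a minimal boto3 sketch of the two AWS-side steps it describes: allocating an Elastic IP and associating it with the instance, then opening the security group to the client's server. The instance ID, security group ID, server IP, and port are placeholders for my setup, not values from the post; adjust the port to 80 if the Apache site is not served over HTTPS.

import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

# Placeholders -- replace with your own resources
INSTANCE_ID = "i-0123456789abcdef0"
SECURITY_GROUP_ID = "sg-0123456789abcdef0"
SERVER_IP = "203.0.113.10"  # the client's server

# 1. Allocate a static Elastic IP and attach it to the EC2 instance
allocation = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    InstanceId=INSTANCE_ID,
    AllocationId=allocation["AllocationId"],
)
print("Elastic IP:", allocation["PublicIp"])

# 2. Allow the client's server to reach the webpage (HTTPS assumed)
ec2.authorize_security_group_ingress(
    GroupId=SECURITY_GROUP_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": SERVER_IP + "/32"}],
    }],
)

The firewall side stays a single /32 rule for the Elastic IP, which is what makes this viable on a device that can only filter by IP address.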