It's not clear whether you want to keep the uncompressed objects in S3, or whether the bucket contents are still changing.
One option you have is S3 Inventory. It's not instant (reports are delivered on a daily or weekly schedule), but it will automatically generate a list of the objects in the bucket and write that list to an S3 bucket (the same bucket or another one).
You could read this list into a small script (whatever language you are comfortable with) and have it work one object at a time: use the S3 CLI or an SDK to pull the object down, then compress it using the OS/script tools (there's a sketch after the next paragraph).
I strongly recommend building in a check for whether the compressed object already exists, so that if the process fails, or new objects are added later, you can restart it without reprocessing everything.
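Here's a minimal sketch of that loop in Python with boto3 rather than the CLI. The bucket name, the local inventory file name, the assumption that the inventory is a plain CSV with the key in the second column, and the `.gz` naming convention are all placeholders; adjust to match your inventory configuration.

```python
import csv
import gzip
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
BUCKET = "my-bucket"  # placeholder bucket name

def already_compressed(key):
    """Skip keys whose compressed copy already exists, so the job is restartable."""
    try:
        s3.head_object(Bucket=BUCKET, Key=key + ".gz")
        return True
    except ClientError:
        return False

# Assumes the inventory was downloaded locally as a CSV with the
# object key in the second column (bucket name in the first).
with open("inventory.csv", newline="") as f:
    for row in csv.reader(f):
        key = row[1]
        if key.endswith(".gz") or already_compressed(key):
            continue  # don't recompress, don't redo finished work
        s3.download_file(BUCKET, key, "/tmp/obj")
        with open("/tmp/obj", "rb") as src, gzip.open("/tmp/obj.gz", "wb") as dst:
            dst.writelines(src)
        s3.upload_file("/tmp/obj.gz", BUCKET, key + ".gz")
```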
If you are writing the compressed objects back to S3, consider running this from an EC2 instance or a Lambda function. With Lambda you may need to stream the object and compress it on the fly rather than pulling it down to disk, since a function's local storage is limited. You should be able to find examples of this for at least Python, if not the other supported languages.
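One way to do the streaming version is to wrap the S3 response body in a file-like object that compresses as it is read, then hand that to `upload_fileobj`. A rough sketch, assuming the source bucket and key arrive in the invocation event (the handler name and event shape are placeholders):

```python
import zlib
import boto3

s3 = boto3.client("s3")

class GzipCompressingReader:
    """File-like wrapper that gzip-compresses another stream as it is read."""
    def __init__(self, source):
        self.source = source
        # wbits=31 makes zlib emit a gzip-compatible header and trailer
        self.compressor = zlib.compressobj(wbits=31)
        self.buffer = b""
        self.eof = False

    def read(self, size=-1):
        # Pull from the source until we can satisfy the request or hit EOF.
        while not self.eof and (size < 0 or len(self.buffer) < size):
            chunk = self.source.read(64 * 1024)
            if chunk:
                self.buffer += self.compressor.compress(chunk)
            else:
                self.buffer += self.compressor.flush()
                self.eof = True
        if size < 0:
            data, self.buffer = self.buffer, b""
        else:
            data, self.buffer = self.buffer[:size], self.buffer[size:]
        return data

def handler(event, context):
    # Placeholder event shape: {"bucket": "...", "key": "..."}
    bucket, key = event["bucket"], event["key"]
    body = s3.get_object(Bucket=bucket, Key=key)["Body"]
    s3.upload_fileobj(GzipCompressingReader(body), bucket, key + ".gz")
```

This never holds more than a chunk of the object in memory at once, which is the point of doing it in Lambda.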
--
One word of caution: do a rough calculation of how much this is going to cost. GET requests are fairly cheap, but data transfer out of AWS can be expensive (transfer from S3 to an EC2 instance or Lambda function in the same region is free). Also, if you are using any storage class other than Standard, there is probably a per-GB retrieval cost on top of that.
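A back-of-envelope example of that calculation. The prices below are illustrative placeholders only; check the current S3 pricing page for your region before relying on them:

```python
# Illustrative estimate: request charges vs. egress for downloading outside AWS.
objects = 1_000_000
avg_size_gb = 0.01                # assume 10 MB average object size
get_per_1000 = 0.0004             # example GET request price
put_per_1000 = 0.005              # example PUT request price
transfer_out_per_gb = 0.09        # example internet egress (zero within-region)

requests = objects / 1000 * (get_per_1000 + put_per_1000)
egress = objects * avg_size_gb * transfer_out_per_gb
print(f"Requests: ${requests:,.2f}  Egress: ${egress:,.2f}")
# Requests: $5.40  Egress: $900.00
```

With numbers like these, egress dominates by two orders of magnitude, which is why doing the compression inside AWS usually wins.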