Redis on AWS task

I have an API with some heavy processing endpoints. To avoid blocking, I built a queue system using Redis.

I have now put this API into a Docker container, and I'm using ECS to deploy a service that creates a task running the container with the API. However, while containerizing it I read that I also have to create a separate container for Redis. My question is:

Who will do the heavy processing here? The Redis container, right? I'm asking because I have to assign CPU and RAM to each task created in ECS, and I'd like to know whether I can leave them unassigned so each uses what it needs, or, if I have to assign a minimum, which one should get more power.

Answer:
The instance with the API connects to the Redis instance through Celery, the service whose workers perform the background operations.

Redis is just a data store and does not execute anything; everything runs on the instance where Celery's workers live. However, Redis keeps everything in RAM, so every call and every result must be cleared after some time, especially if your API handles files; otherwise the Redis instance will run out of memory and fail. Use only one Redis instance and connect to it from as many API instances as you need.
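To make the division of labor concrete, here is a stdlib-only Python sketch (no real Redis or Celery; the queue and dict are stand-ins for the broker and result backend, and all names are illustrative). It shows that the API merely enqueues jobs while the worker process burns the CPU, which is why the worker task, not the Redis task, needs the larger CPU/RAM allocation in ECS:

```python
import queue
import threading

broker = queue.Queue()   # stands in for the Redis list a Celery broker uses
results = {}             # stands in for the result backend (also Redis)

def api_endpoint(job_id, payload):
    """The API only enqueues the job; this is cheap, so the API
    container needs little CPU."""
    broker.put((job_id, payload))

def worker():
    """A Celery-style worker: this is where the heavy processing runs,
    so this is the component to size generously."""
    while True:
        job_id, payload = broker.get()
        if job_id is None:          # sentinel: shut the worker down
            break
        results[job_id] = sum(x * x for x in payload)  # pretend-heavy work

t = threading.Thread(target=worker)
t.start()
api_endpoint("job-1", range(1000))  # the API call returns immediately
broker.put((None, None))            # tell the worker to stop
t.join()
print(results["job-1"])             # prints 332833500
```

In real deployments the same split applies: give the Celery worker task the CPU and memory, give Redis enough RAM for queued jobs and results, and keep the API task lightweight.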

