
What is the best metric for auto-scaling GPU instances for machine learning inference in the cloud?


We have an API in AWS backed by a GPU instance that does inference. We have an auto-scaler set up with a minimum and maximum number of instances, but we aren't sure which metric (GPU/CPU usage, RAM usage, average latency, etc.) or combination of metrics should be used to decide when a new instance needs to be launched to keep up with incoming requests.

Are there best practices regarding which metrics should be used in this scenario? Inference in our case is very GPU-intensive.
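
If it helps, one option we're considering is publishing GPU utilization as a custom CloudWatch metric and scaling on that. A rough sketch of what that might look like (the namespace and metric name are just placeholders, and it assumes nvidia-smi is available on the instance):

```python
import subprocess
import boto3

def gpu_utilization() -> float:
    # Read current GPU utilization (percent) via nvidia-smi.
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    # Average across GPUs if the instance has more than one.
    values = [float(line) for line in out.strip().splitlines()]
    return sum(values) / len(values)

# Publish the reading as a custom CloudWatch metric so a scaling
# policy (e.g. target tracking) could act on it.
cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_data(
    Namespace="InferenceAPI",            # placeholder namespace
    MetricData=[{
        "MetricName": "GPUUtilization",  # placeholder metric name
        "Value": gpu_utilization(),
        "Unit": "Percent",
    }],
)
```

This would run periodically on each instance (cron or a small daemon), but we're unsure whether GPU utilization alone is the right signal or whether it should be combined with latency or queue depth.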

