
Setting up an AI server for multiple users


At the university where I work, we want to buy a workstation with one or two powerful GPUs. We want students to be able to use the workstation remotely; this is the main requirement. It would be nice if multiple users could use the workstation simultaneously with isolated development environments but shared resources (GPU, RAM), though this is not as important. Everyone can wait their turn :) I am looking for a solution that automates this. I don't want to set up the development environment for every user separately (drivers, CUDA, PyTorch, etc.). Essentially I am looking for something like my Google Cloud server, where I can ideally select from different VM/container versions and then connect to it via browser SSH.

Are there publicly or commercially available architectures for such a use case? I have no clue what to search for, as this is not really my area. I would appreciate any input.

Thanks in advance!

