
How do I get the number of GPUs per node used by a job submitted through Slurm?


I am working with the Slurm workload manager, and we have a GPU partition with 2 nodes.

I have submitted a CUDA job that uses GPUs for execution, requesting 2 GPUs per node. To get the job details, I run the command below from a script:

sacct -u -p --format=jobid,jobname%100,State%50,nodelist,nnodes,ReqTRES%30,allocgres

where `-p` stands in for the username (`-u` takes the username as its argument). But I get the following output:

JobId:20198|Name:CUDA-PI|State:TIMEOUT|AllocatedNodes:ssl-gpu[0102]| ErrorPath:error_20198.err|OutPutPath:output_20198.out|NumberofNodes:2|NumCpus:8|NumGpus:0

Please help me with a command that shows the number of GPUs per node used by a job.

Thanks in advance.
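(In recent Slurm versions, allocated GRES are tracked as trackable resources, so the `AllocTRES` field of `sacct` usually contains an entry like `gres/gpu=N` alongside `node=M`. As a minimal sketch, assuming an `AllocTRES` string of that shape, GPUs per node can be derived by dividing the GPU count by the node count; the sample values below are hypothetical, not from the job above.)

```python
# Sketch: parse an AllocTRES string as printed by `sacct --format=...,AllocTRES`
# (e.g. "billing=8,cpu=8,gres/gpu=4,mem=16G,node=2") and derive GPUs per node.
# The field names follow Slurm's TRES syntax; the sample job values are made up.

def parse_tres(tres: str) -> dict:
    """Split a comma-separated TRES string into a name -> value dict."""
    fields = {}
    for item in tres.split(","):
        if not item:
            continue
        name, _, value = item.partition("=")
        fields[name] = value
    return fields

def gpus_per_node(tres: str) -> float:
    """Total allocated GPUs divided by the number of allocated nodes."""
    fields = parse_tres(tres)
    gpus = int(fields.get("gres/gpu", 0))   # total GPUs across the allocation
    nodes = int(fields.get("node", 1))      # number of allocated nodes
    return gpus / nodes

if __name__ == "__main__":
    sample = "billing=8,cpu=8,gres/gpu=4,mem=16G,node=2"
    print(gpus_per_node(sample))  # 4 GPUs over 2 nodes -> 2.0 per node
```

In practice the string would come from something like `sacct -j <jobid> -p --noheader --format=AllocTRES`, with the script splitting on the `|` delimiter that `-p` produces.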


