
Using GPU for Machine Learning without connecting it to monitor

I recently bought an NVIDIA GPU. I currently don't have the HDMI cable to connect it to my monitors; however, I intend to use it exclusively for Machine Learning, so I was wondering if it's possible to do that without connecting it to a monitor. It doesn't seem to work by default; when I tried to install drivers, I got an error message telling me that I don't have the right GPU plugged in. Also, torch.cuda.is_available() (run in python) currently returns False.

My questions are:

  1. Are my current observations (w.r.t. driver installation and the result of `is_available()`) to be expected, or do they indicate something else is wrong; and

  2. Is there any not-excessively-complicated way to solve the problem? If not, I could of course just buy a cable. I've found it surprisingly difficult to find any useful advice about this by googling. I guess I'm sort of surprised that this isn't a known problem with a known solution, given that some people use a bunch of GPUs at once for ML.

I'm running Ubuntu version 21.04.
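For reference, the check I'm running looks roughly like this (the wrapper function is just mine, written so it won't crash even if the driver, or torch itself, is missing):

```python
def cuda_diagnostic():
    """Summarize whether PyTorch can see a CUDA device.

    Returns a short status string and never raises, so it is safe
    to run on a machine where the driver (or torch) is absent.
    """
    try:
        import torch
    except ImportError:
        return "torch not installed"
    if not torch.cuda.is_available():
        # False here usually points at a missing/mismatched driver,
        # not at the GPU lacking a monitor.
        return "no CUDA device visible"
    return f"CUDA OK: {torch.cuda.get_device_name(0)}"

print(cuda_diagnostic())
```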

You should be able to fully utilize the card without video out, and you should be able to install the driver. Once the driver is installed, PyTorch will see the card; I'm speaking from personal experience. If the driver refuses to install, there may be something wrong with the card itself. Also, plug it into a slot with 16 PCIe lanes, or at a minimum 8.
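A quick way to confirm the driver installed correctly, independent of PyTorch, is to look for `nvidia-smi` on the PATH and run it. A minimal sketch (the helper name is my own):

```python
import shutil
import subprocess

def driver_status():
    """Return nvidia-smi's output if the NVIDIA driver is installed,
    or None if the tool is missing or fails (driver likely broken)."""
    if shutil.which("nvidia-smi") is None:
        return None
    result = subprocess.run(["nvidia-smi"], capture_output=True, text=True)
    return result.stdout if result.returncode == 0 else None

status = driver_status()
print("Driver detected" if status else "No working driver found; install it first")
```

If this reports no driver, sorting that out comes before anything on the PyTorch side; `torch.cuda.is_available()` will keep returning False until the driver loads.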
