Score:0

Tensorflow does not detect GPU - lambdalabs

JrV

I am trying to get TensorFlow with GPU support up and running in a virtual environment (venv):

I use Lambda Labs; the OS is Ubuntu 20.04.3 LTS.

I have the following Python script, checkGPU.py:

import tensorflow as tf

if tf.test.gpu_device_name():
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
else:
    print("Please install GPU version of TF")

Outside the venv it works fine: I obtain Default GPU Device: /device:GPU:0. If I train a small neural network (NN) and watch nvidia-smi, I see that the GPU memory usage increases during training, so the GPU is actually being used for NN training.
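For reference, here is a minimal sketch of the kind of tiny network I train for this test (the layer sizes and random data are made up for illustration); while model.fit runs, the GPU memory reported by nvidia-smi goes up:

import numpy as np
import tensorflow as tf

# Random toy data, just enough to keep the GPU busy for a few seconds.
x = np.random.rand(1024, 32).astype("float32")
y = np.random.randint(0, 2, size=(1024, 1))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Watch nvidia-smi in another terminal while this runs.
model.fit(x, y, epochs=5, batch_size=64)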

However, if I run it inside a venv (I installed TensorFlow 2.6.0 inside the venv):

(venv) x@y $ python checkGPU.py

I obtain: Please install GPU version of TF

I also obtain the following: Could not load dynamic library 'libcudnn.so.8'; dlerror: libcudnn.so.8: cannot open shared object file: No such file or directory

So I understand that the dynamic library libcudnn.so.8 cannot be accessed from inside the venv.

How can I resolve this?

Score:0
JrV

To solve this, I followed the instructions written in Tensorflow in venv using lambdalabs.
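In general, the dlerror means the dynamic loader cannot find libcudnn.so.8 from inside the venv. A rough sketch of the workaround (the paths below are illustrative assumptions, not values taken from the linked post) is to locate the library on the host and add its directory to LD_LIBRARY_PATH before running the script:

# Find where libcudnn.so.8 lives on the system; the location varies per install.
(venv) x@y $ find /usr -name 'libcudnn.so.8*' 2>/dev/null

# Point the loader at that directory (example path) and re-run the check.
(venv) x@y $ export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH
(venv) x@y $ python checkGPU.py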
