Score:1

Is this the default nvidia-driver for WSL2?

ph flag
noob@LAPTOP-DNCQ5AAC:/mnt/d/$ nvidia-smi
Thu Apr 20 00:04:03 2023       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.100      Driver Version: 528.76       CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  On   | 00000000:01:00.0 Off |                  N/A |
| N/A   41C    P8     2W /  50W |      0MiB /  4096MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A       227      G   /Xwayland                       N/A      |
+-----------------------------------------------------------------------------+

It says something like, "be careful not to install another driver that overwrites the default one," here: https://docs.nvidia.com/cuda/wsl-user-guide/index.html

Is this my default before I start trying to install CUDA again on my Windows 11 machine? Does anyone have a link to a good answer on how to install CUDA on WSL2?
I might have to search through my own questions, but I remember it never really worked as expected and I had to use the Windows one.

The task I want to perform with CUDA is below:

My ChatGPT prompt:
"implement helman-jaja list rank where the function signature is "void cudaListRank (long head, const long* next, long* rank, size_t n)" where the first parameter head is the index into array representation of a linked list and points to its head,  the second parameter next is a "next array" that gives the index of the next node in the array representation of a linked list, the third parameter rank is an array holding the rank of each node in the array representation of a linked list and the fourth parameter n is the length of the array representation of a linked list."
The code written by ChatGPT has since been deleted.

Right now, when I try to compile with a Makefile, I'm getting:

noob@LAPTOP-DNCQ5AAC:/mnt/d$ make correctness IMPL=hj
nvcc -Iutils -O0 -g -std=c++11 -o student/cuda_hj.o -c student/cuda_hj.cu
make: nvcc: Not a directory
make: *** [Makefile:8: student/cuda_hj.o] Error 127

I am also seeing red squiggly underlines under the CUDA-related #include directives in VS Code, but I'm not sure whether that indicates anything about how well my WSL-remote VS Code and the default CUDA are playing together.

I also downloaded and tried to install two different CUDA packages (WSL versions) from NVIDIA, but neither got the job done (I copied and pasted each terminal instruction from the download page):

cuda_12.1.0_530.30.02_linux.run
cuda-repo-wsl-ubuntu-12-1-local_12.1.0-1_amd64.deb

Edit: I finally got it working, thanks to NotTheDr01ds' answer and comments, by adding these lines to .bashrc:

#add cuda to path
export PATH="$PATH:/usr/local/cuda-12.1/bin"
#must be last line
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-12.1/lib64

Note: I read on Super User that the LD_LIBRARY_PATH line must be the last line in .bashrc. I have not tried it anywhere else, but this worked.

So now I get

@LAPTOP-DNCQ5AAC:/mnt/d/hpc$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Mon_Apr__3_17:16:06_PDT_2023
Cuda compilation tools, release 12.1, V12.1.105
Build cuda_12.1.r12.1/compiler.32688072_0

and my attempts to compile the above Helman-JaJa list-rank CUDA code are throwing a bunch of compilation errors (whereas before it was not even recognizing the nvcc command). The error trace:

@LAPTOP-DNCQ5AAC:/mnt/d/$ make cuda_code_checker
nvcc -Iutils -O0 -g -std=c++11 -o student/cuda_hj.o -c student/cuda_hj.cu
student/cuda_hj.cu(65): error: identifier "dSublistHeads" is undefined
          dSublistHeads[i+1] = randLong;
          ^

student/cuda_hj.cu(80): error: expected a ")"
      cudaMalloc((void**) &dIsHead, sizeof(int);

and the red squiggly underlines below

#include <cuda.h>
#include <curand.h>

are gone. Thanks!

NotTheDr01ds avatar
vn flag
What's your end task? You often don't need CUDA Toolkit for many GPU accelerated tasks under WSL. Regardless, if your task *does* require CUDA Toolkit, I'd like to confirm that my answer actually accomplishes what you need if I can ;-)
mLstudent33 avatar
ph flag
Hi, I was going to try the Helman-JaJa list-rank algorithm written in CUDA C or C++. I have source code from a ChatGPT prompt that I'll post in my question.
Score:2
vn flag

First, consider whether or not you really need the full CUDA Toolkit for your project. Many CUDA tasks can be accomplished with just the provided WSL CUDA libraries that are injected into each WSL instance.

For instance, without the CUDA Toolkit installed, take a look at:

ls /usr/lib/wsl/lib

You'll see, among others, libcuda.so there. This injected library is hooked into the Windows NVIDIA driver. It's for this reason that the CUDA Toolkit installation page warns you against installing a Linux driver in WSL -- Doing so can break the WSL/CUDA integration.

You'll find instructions on Microsoft's CUDA on WSL doc page on how to use the existing CUDA (and/or DirectML) integration with:

  • PyTorch
  • TensorFlow
  • Or using the NVIDIA Docker container

I've tested the PyTorch and TensorFlow integration personally, but not the Docker container at this point.

Again, that's "out of the box" functionality for WSL, as long as you have a supported Windows release (most any recent, supported release at this point) and a recent NVIDIA Windows driver. You don't need to install the CUDA Toolkit in order to work with those aspects.

Is this the default nvidia-driver for WSL2?

Well, pretty close. As far as I can tell, yes, it is the WSL driver that is attaching to the Windows Driver. Otherwise, I don't believe it would see the physical GPU.

It's slightly out of date, though -- A new Windows driver was released November 18th. After installing the CUDA Toolkit in my Ubuntu/WSL, I received a message that I needed a driver >= 530 to support the latest Toolkit.

So if you do want to install the full CUDA Toolkit, for example, for building native applications using the NVIDIA compiler (NVCC), you'll need to update your Windows Game Ready Driver (and reboot Windows, of course).

Then, you should follow the instructions on the site you linked. Make sure to download the WSL version of the CUDA toolkit, since all the "standard" ones for Ubuntu include the Linux driver.

Reports are that the NVIDIA compiler may not be found in the .deb version, but it's also possible that it just doesn't include the same instructions that the .run version does (to add it to the path). Ultimately, I downloaded the .run version.

You'll also need:

sudo apt install build-essential

Then you can:

wget https://developer.download.nvidia.com/compute/cuda/12.1.1/local_installers/cuda_12.1.1_530.30.02_linux.run
sudo sh cuda_12.1.1_530.30.02_linux.run

Please use the instructions on the actual download page, though, to make sure you get the latest version (rather than the latest version at the time this was written).

Afterwards, follow the directions provided by the installer to add the necessary PATH and LIB_PATH items. Here's what I received from the run file (before updating my Windows driver, at least):

Driver:   Not Selected
Toolkit:  Installed in /usr/local/cuda-12.1/

Please make sure that
 -   PATH includes /usr/local/cuda-12.1/bin
 -   LD_LIBRARY_PATH includes /usr/local/cuda-12.1/lib64, or, add /usr/local/cuda-12.1/lib64 to /etc/ld.so.conf and run ldconfig as root

To uninstall the CUDA Toolkit, run cuda-uninstaller in /usr/local/cuda-12.1/bin

***WARNING: Incomplete installation! This installation did not install the CUDA Driver. A driver of version at least 530.00 is required for CUDA 12.1 functionality to work.
To install the driver using this installer, run the following command, replacing <CudaInstaller> with the name of this run file:
    sudo <CudaInstaller>.run --silent --driver

Logfile is /var/log/cuda-installer.log

Note that while I have installed the Toolkit, and I believe successfully, I don't have a test project to try it with. As mentioned in the comments, there are many possible uses for the Toolkit, and without having detail on your intended use-case, I can't confirm it for you.

mLstudent33 avatar
ph flag
Game Ready Driver, I see, I think I installed the other one previously.
mLstudent33 avatar
ph flag
The installer did not prompt me "Afterwards, follow the directions provided by the installer to add the necessary PATH and LIB_PATH items."
NotTheDr01ds avatar
vn flag
@mLstudent33 Fortunately, I copied the text out of mine before I rebooted. I'll add it to the answer.
mLstudent33 avatar
ph flag
It's working! Thanks so much!