First, consider whether you really need the full CUDA Toolkit for your project. Many CUDA tasks can be accomplished with just the WSL CUDA libraries that are injected into each WSL instance.
For instance, without the CUDA Toolkit installed, take a look at:
ls /usr/lib/wsl/lib
You'll see, among others, libcuda.so there. This injected library is hooked into the Windows NVIDIA driver. It's for this reason that the CUDA Toolkit installation page warns you against installing a Linux driver in WSL -- doing so can break the WSL/CUDA integration.
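A quick way to confirm that this passthrough is working is to run the nvidia-smi binary that normally lives in that same directory (and is already on the PATH inside WSL):
nvidia-smi
If the integration is healthy, it will report your physical GPU and the Windows driver version, with no CUDA Toolkit installed at all.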
Microsoft's CUDA on WSL doc page has instructions for using the existing CUDA (and/or DirectML) integration with:
- PyTorch
- TensorFlow
- The NVIDIA Docker container
I've tested the PyTorch and TensorFlow integration personally, but not the Docker container at this point.
Again, that's "out of the box" functionality for WSL, as long as you have a supported Windows release (just about any recent one at this point) and a recent NVIDIA Windows driver. You don't need to install the CUDA Toolkit to work with those aspects.
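For example, with a CUDA-enabled PyTorch build installed per those instructions, a quick sanity check from the WSL shell could look like this (a sketch, not Microsoft's official verification steps):
python3 -c "import torch; print(torch.cuda.is_available()); print(torch.cuda.get_device_name(0))"
If that prints True followed by your GPU's name, the out-of-the-box integration is working; TensorFlow has an analogous device-listing check.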
Is this the default nvidia-driver for WSL2?
Well, pretty close. As far as I can tell, yes, it is the WSL driver attaching to the Windows driver. Otherwise, I don't believe it would see the physical GPU.
It's slightly out of date, though -- a new Windows driver was released November 18th. After installing the CUDA Toolkit in my Ubuntu/WSL instance, I received a message that I needed a driver >= 530 to support the latest Toolkit.
So if you do want to install the full CUDA Toolkit, for example to build native applications with the NVIDIA compiler (nvcc), you'll need to update your Windows Game Ready Driver (and reboot Windows, of course).
Then, you should follow the instructions on the site you linked. Make sure to download the WSL version of the CUDA toolkit, since all the "standard" ones for Ubuntu include the Linux driver.
Reports are that the NVIDIA compiler may not be found in the .deb version, but it's also possible that it just doesn't include the same instructions that the .run version does (to add it to the path). Ultimately, I downloaded the .run version.
You'll also need:
sudo apt install build-essential
Then you can:
wget https://developer.download.nvidia.com/compute/cuda/12.1.1/local_installers/cuda_12.1.1_530.30.02_linux.run
sudo sh cuda_12.1.1_530.30.02_linux.run
Please use the instructions on the actual download page, though, to make sure you get the latest version (rather than the latest version at the time this was written).
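As a side note, the runfile installer also takes flags for non-interactive installs. If you'd rather be explicit about skipping the driver (which, again, you should not install inside WSL), something like the following should work, though I'd verify the flags against the installer's --help output for your version:
sudo sh cuda_12.1.1_530.30.02_linux.run --silent --toolkit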
Afterwards, follow the directions provided by the installer to add the necessary PATH and LD_LIBRARY_PATH entries. Here's what I received from the .run file (before updating my Windows driver, at least):
Driver: Not Selected
Toolkit: Installed in /usr/local/cuda-12.1/
Please make sure that
- PATH includes /usr/local/cuda-12.1/bin
- LD_LIBRARY_PATH includes /usr/local/cuda-12.1/lib64, or, add /usr/local/cuda-12.1/lib64 to /etc/ld.so.conf and run ldconfig as root
To uninstall the CUDA Toolkit, run cuda-uninstaller in /usr/local/cuda-12.1/bin
***WARNING: Incomplete installation! This installation did not install the CUDA Driver. A driver of version at least 530.00 is required for CUDA 12.1 functionality to work.
To install the driver using this installer, run the following command, replacing <CudaInstaller> with the name of this run file:
sudo <CudaInstaller>.run --silent --driver
Logfile is /var/log/cuda-installer.log
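For example, assuming the default install location shown above, appending something like this to your ~/.bashrc covers both items (adjust the version to match your install):
export PATH=/usr/local/cuda-12.1/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-12.1/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
Then open a new shell (or source ~/.bashrc) and nvcc --version should find the compiler.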
Note that while I have installed the Toolkit, and I believe successfully, I don't have a test project to try it with. As mentioned in the comments, there are many possible uses for the Toolkit, and without knowing your intended use-case, I can't confirm that it will work for your particular purpose.
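That said, if you just want a basic smoke test that nvcc and the runtime are functional, a minimal kernel is enough. I haven't run this particular snippet myself, so treat it as a sketch rather than a verified example:
cat > hello.cu <<'EOF'
#include <cstdio>
// Trivial kernel: each GPU thread prints its index.
__global__ void hello()
{
    printf("Hello from GPU thread %d\n", threadIdx.x);
}
int main()
{
    hello<<<1, 4>>>();        // launch one block of four threads
    cudaDeviceSynchronize();  // wait for the kernel (and flush its printf output)
    return 0;
}
EOF
nvcc hello.cu -o hello
./hello
If that compiles and prints a line from each of the four GPU threads, the Toolkit, the compiler, and the WSL driver integration are all working together.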