# neural-forecast
c
Hello! 🙂 I'm running the https://github.com/Nixtla/neuralforecast example. I notice it's using the CPU... how can I configure it to use the GPU? (CUDA 11.6)
c
Hi @Chad Williamson! You do not need to change the code in the examples to use the GPU. Torch automatically detects when a GPU with CUDA is available and will use it. Where are you running the experiments? If you are in Google Colab, you need to change the instance to one with a GPU.
For other cases, make sure the installation was done properly so that `cuda` is available for your PyTorch installation. Installing directly with `pip` or `conda` should install everything correctly. You can check that everything works with `torch.cuda.is_available()`; it should return `True`.
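For reference, a minimal sketch of that check (assuming a standard PyTorch install; the printed values are just illustrative):

```python
import torch

# Verify that PyTorch can see a CUDA-capable GPU before training.
print(torch.__version__)                  # e.g. a build tagged with +cuXXX
print(torch.cuda.is_available())          # should print True if CUDA is set up
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # name of the detected GPU
```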
m
@Chad Williamson, if `torch.cuda.is_available()` returns `True`, then the GPU should be used automatically by NeuralForecast. You don't have to configure anything extra.
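To illustrate, here is a minimal sketch following the library's quick-start pattern (the specific model, `NBEATS`, and the `AirPassengersDF` sample data are assumptions taken from the public docs, not from this thread); nothing GPU-specific is configured:

```python
from neuralforecast import NeuralForecast
from neuralforecast.models import NBEATS
from neuralforecast.utils import AirPassengersDF

# No device settings needed: when torch.cuda.is_available() is True,
# the underlying PyTorch Lightning trainer uses the GPU on its own.
nf = NeuralForecast(
    models=[NBEATS(h=12, input_size=24, max_steps=100)],
    freq='M',
)
nf.fit(df=AirPassengersDF)
forecasts = nf.predict()
```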
c
Thank you both for the info! It's working now. I had to explicitly install the CUDA toolkit for Torch; I thought that was included by default.