# neural-forecast
p
Hey everyone, I trained a model on a work cluster with a GPU. When I try to load it on my local machine (CPU only), I get the following error message:
```
nf2 = NeuralForecast.load(path='../cluster_results/results/b79f6fd0c50099a7519da800805bf436d4f4f4a6/')

RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
```
I'm trying to debug this error on my own. I found this Stack Overflow post: https://stackoverflow.com/questions/56369030/runtimeerror-attempting-to-deserialize-object-on-a-cuda-device. But I think this needs to be handled as part of the load function itself: https://github.com/Nixtla/neuralforecast/blob/1d78f503ac4cd0fdfa7aae9156a876fe94eb1db4/neuralforecast/core.py#L665. I think this can be solved by adding an extra `map_location` argument to the load function:
```
MODEL_FILENAME_DICT[model_name].load_from_checkpoint(f"{path}/{model}", map_location=map_location)
```
If I specify `map_location=torch.device('cpu')`, it works.
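For anyone hitting the same error, here is a minimal sketch of what the call would look like once the argument exists, assuming the patched `load` forwards the extra keyword argument to PyTorch Lightning's `load_from_checkpoint` (the exact merged signature may differ):
```python
import torch
from neuralforecast import NeuralForecast

# On a CPU-only machine, remap tensors that were saved on a CUDA device
# onto the CPU. `map_location` is the argument proposed above; the patched
# load would pass it through to load_from_checkpoint.
nf2 = NeuralForecast.load(
    path='../cluster_results/results/b79f6fd0c50099a7519da800805bf436d4f4f4a6/',
    map_location=torch.device('cpu'),
)
```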
j
Hey. Indeed I think that's the best solution. Would you like to make a PR with that change?
p
I might have time today. Worst comes to worst, I'll try to submit one this weekend
j
Nice! Please let us know if you need any help
p
@José Morales I simply used the UI to create one. Hopefully that's ok https://github.com/Nixtla/neuralforecast/pull/734
j
Thanks! We use nbdev to develop the project, so this change needs to be in the notebook as well. You can install nbdev and then use nbdev_update for that, or if you prefer I can do it for you
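For anyone following along, the nbdev round-trip usually looks something like this (a sketch based on the nbdev CLI; the repo's contributing guide is the authoritative reference):
```sh
pip install nbdev
# Propagate edits made in the generated modules (e.g. neuralforecast/core.py)
# back into the source notebooks so the two stay in sync.
nbdev_update
```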
p
I am somewhat limited with my work computer and my computer at home. Is it ok if I pass the torch over to you?
j
Sure, just let me see if I have permission to push to your branch
Yeah all good. Thanks!
p
Thank you @José Morales!
j
The change has been merged. You should be able to use it now by installing from GitHub. Thanks for the contribution!
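For reference, installing straight from the repository typically looks like this (standard pip syntax for a Git dependency; a later PyPI release would include the change as well):
```sh
pip install git+https://github.com/Nixtla/neuralforecast.git
```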