# neural-forecast
f
I trained the model using gpu, but I want to use cpu in predicting. How could I do it?
j
Hey. If you saved it with
NeuralForecast.save
you can then load it to the CPU with
NeuralForecast.load(<path>, map_location=torch.device('cpu'))
f
@José Morales I loaded the saved model using NeuralForecast.load(path, map_location=torch.device('cpu')). But when I run the model, the verbose output still shows CUDA being used:
Using 16bit Automatic Mixed Precision (AMP)
`Trainer already configured with model summary callbacks: [<class 'pytorch_lightning.callbacks.model_summary.ModelSummary'>]. Skipping setting a default ModelSummary callback.`
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
I saved the model using:
nf.save(path='./checkpoints/test_run_1s/',
model_index=None,
overwrite=True,
save_dataset=True)
And the initial model for training is set like this:
model = [TFT(
input_size=hist_length,
h=horizon,
max_steps=12000,
hist_exog_list=ex_hist_columns,
futr_exog_list=ex_future_columns,
batch_size=32,
loss=HuberLoss(),
windows_batch_size=64,
inference_windows_batch_size=64,
num_workers_loader=12,
early_stop_patience_steps=20,
random_seed=1234,
accelerator='gpu',
# scaler_type='robust',
devices=1,
precision='16-mixed'
)]
j
Yeah, sorry. Seems like that only loads the model weights onto the CPU but lightning is kind enough to keep the accelerator configuration from training. I was able to use just the CPU by doing this:
NeuralForecast.load(<path>, map_location=torch.device('cpu'), accelerator=pl.accelerators.CPUAccelerator(), devices=1)
Please let us know if it works for you.
pl is pytorch_lightning
f
@José Morales Thanks. The proper setting should be NeuralForecast.load(path, accelerator='cpu', devices=1)
@José Morales Btw, when doing inference, it shows:
Using bfloat16 Automatic Mixed Precision (AMP)
`Trainer already configured with model summary callbacks: [<class 'pytorch_lightning.callbacks.model_summary.ModelSummary'>]. Skipping setting a default ModelSummary callback.`
GPU available: True (cuda), used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
How can I suppress this output?
j
You probably need the map_location argument as well, otherwise it may be loading the weights on the GPU and copying them to the CPU every time you run the model
The logs can probably be suppressed by setting the logging level. Do those come with the module name before? e.g.
pytorch_lightning.trainer
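If those messages go through Python's standard logging, a minimal sketch for silencing them would look like this (the logger names are assumptions: older releases log under the pytorch_lightning namespace, newer ones under lightning.pytorch):

```python
import logging

# Raise the logging threshold on the loggers PyTorch Lightning is assumed
# to use, so info-level lines like "GPU available: True (cuda), used: False"
# are no longer emitted. Adjust the names if your version logs elsewhere.
for name in ("pytorch_lightning", "lightning.pytorch"):
    logging.getLogger(name).setLevel(logging.ERROR)
```

Run this before constructing the model or calling predict, since the messages are printed when the Trainer is set up.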