Frank Lu
10/25/2023, 1:31 PM

José Morales
10/25/2023, 3:54 PM
NeuralForecast.save
you can then load it to the CPU with NeuralForecast.load(<path>, map_location=torch.device('cpu'))
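(A minimal sketch of that round trip, assuming a fitted NeuralForecast instance named nf; the path is illustrative:)

import torch
from neuralforecast import NeuralForecast

# Persist the fitted models (and, optionally, the training dataset).
nf.save(path='./checkpoints/run/', overwrite=True, save_dataset=True)

# On a CPU-only machine, map the stored tensors to the CPU when loading.
nf = NeuralForecast.load('./checkpoints/run/', map_location=torch.device('cpu'))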
Frank Lu
10/26/2023, 1:08 AM
Using 16bit Automatic Mixed Precision (AMP)
`Trainer already configured with model summary callbacks: [<class 'pytorch_lightning.callbacks.model_summary.ModelSummary'>]. Skipping setting a default ModelSummary
callback.`
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
I saved the model using:
nf.save(path='./checkpoints/test_run_1s/',
        model_index=None,
        overwrite=True,
        save_dataset=True)
And the model for training was configured like this:
model = [TFT(
    input_size=hist_length,
    h=horizon,
    max_steps=12000,
    hist_exog_list=ex_hist_columns,
    futr_exog_list=ex_future_columns,
    batch_size=32,
    loss=HuberLoss(),
    windows_batch_size=64,
    inference_windows_batch_size=64,
    num_workers_loader=12,
    early_stop_patience_steps=20,
    random_seed=1234,
    accelerator='gpu',
    # scaler_type='robust',
    devices=1,
    precision='16-mixed',
)]
José Morales
10/26/2023, 4:38 PM
NeuralForecast.load(<path>, map_location=torch.device('cpu'), accelerator=pl.accelerators.CPUAccelerator(), devices=1)
Please let us know if it works for you.
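(For completeness, the same call with the imports it needs; a sketch assuming the checkpoint path from above:)

import torch
import pytorch_lightning as pl
from neuralforecast import NeuralForecast

# map_location moves the stored tensors to the CPU; the CPUAccelerator and
# devices=1 keep the Lightning Trainer off the GPU at inference time.
nf = NeuralForecast.load(
    './checkpoints/test_run_1s/',
    map_location=torch.device('cpu'),
    accelerator=pl.accelerators.CPUAccelerator(),
    devices=1,
)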
Frank Lu
10/29/2023, 8:31 AM
Using bfloat16 Automatic Mixed Precision (AMP)
`Trainer already configured with model summary callbacks: [<class 'pytorch_lightning.callbacks.model_summary.ModelSummary'>]. Skipping setting a default ModelSummary
callback.`
GPU available: True (cuda), used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
How can I suppress these messages?

José Morales
11/01/2023, 4:53 PM
pytorch_lightning.trainer
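(The reply above survives only as a fragment; a minimal sketch, assuming the suggestion was to raise the log level of the pytorch_lightning loggers, which hides the "GPU available: ..." banner and similar INFO messages:)

import logging

# Raising the level on the parent logger silences all of PyTorch Lightning's
# INFO-level startup banners ("GPU available: ...", the AMP notice, etc.).
logging.getLogger("pytorch_lightning").setLevel(logging.ERROR)

# The trainer-specific logger named in the reply can also be targeted alone:
logging.getLogger("pytorch_lightning.trainer").setLevel(logging.ERROR)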