Ml Club
05/22/2024, 1:38 PM
"Early stopping conditioned on metric ptl/val_loss which is not available. Pass in or modify your EarlyStopping callback to use any of the following: train_loss, train_loss_step, train_loss_epoch"
fcst = NeuralForecast(
    models=[RNN(h=12,
                input_size=-1,
                inference_input_size=24,
                loss=MQLoss(level=[80, 90]),
                scaler_type='robust',
                encoder_n_layers=2,
                encoder_hidden_size=128,
                context_size=10,
                decoder_hidden_size=128,
                decoder_layers=2,
                max_steps=300,
                # futr_exog_list=['y_[lag12]'],
                # hist_exog_list=['y_[lag12]'],
                # stat_exog_list=['airline1'],
                early_stop_patience_steps=5
                )
            ],
    freq='M'
)
crossvalidation_df = fcst.cross_validation(df=Y_train_df, static_df=AirPassengersStatic, n_windows=1, step_size=1)
RuntimeError Traceback (most recent call last)
Cell In[14], line 1
----> 1 crossvalidation_df = fcst.cross_validation(df=Y_train_df, static_df=AirPassengersStatic, n_windows=1, step_size=1)
NeuralForecast.cross_validation(self, df, static_df, n_windows, step_size, val_size, test_size, sort_df, use_init_models, verbose, **data_kwargs)
515 fcsts = np.full(
516 (self.dataset.n_groups * h * n_windows, len(cols)), np.nan, dtype=np.float32
517 )
519 for model in self.models:
--> 520 model.fit(dataset=self.dataset, val_size=val_size, test_size=test_size)
521 model_fcsts = model.predict(
522 self.dataset, step_size=step_size, **data_kwargs
523 )
525 # Append predictions in memory placeholder
BaseRecurrent.fit(self, dataset, val_size, test_size, random_seed)
630 self.trainer_kwargs["check_val_every_n_epoch"] = None
632 trainer = pl.Trainer(**self.trainer_kwargs)
--> 633 trainer.fit(self, datamodule=datamodule)
in Trainer.fit(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
542 self.state.status = TrainerStatus.RUNNING
543 self.training = True
--> 544 call._call_and_handle_interrupt(
...
--> 153 raise RuntimeError(error_msg)
154 if self.verbose > 0:
155 rank_zero_warn(error_msg, category=RuntimeWarning)
RuntimeError: Early stopping conditioned on metric ptl/val_loss
which is not available. Pass in or modify your EarlyStopping
callback to use any of the following: train_loss, train_loss_step, train_loss_epoch

Marco
05/23/2024, 1:27 PM
You need to pass val_size and test_size and set n_windows to None, so that a validation set exists for early stopping to monitor. The following code runs:
fcst = NeuralForecast(
    models=[RNN(h=12,
                input_size=-1,
                inference_input_size=24,
                loss=MQLoss(level=[80, 90]),
                scaler_type='robust',
                encoder_n_layers=2,
                encoder_hidden_size=128,
                context_size=10,
                decoder_hidden_size=128,
                decoder_layers=2,
                max_steps=200,
                early_stop_patience_steps=5
                )
            ],
    freq='M'
)
cv_df = fcst.cross_validation(df=Y_train_df, static_df=AirPassengersStatic, val_size=12, test_size=12, n_windows=None)
Tina Sedaghat
05/23/2024, 6:15 PM
config = AutoTSMixerx.get_default_config(h=horizon, backend='gpu', n_series=5)
config['early_stop_patience_steps'] = 2
config['val_check_steps'] = 50
model = AutoTSMixerx(h=horizon, n_series=5, config=config, num_samples=50)
nf = NeuralForecast(
    models=[model],
    freq='15T'
)
nf.fit(df=train_df, val_size=val_size)
Marco
05/23/2024, 6:35 PM
model = AutoTSMixerx(h=horizon, n_series=5, config=config, num_samples=50, refit_with_val=True)
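Putting that together with the config above, a minimal end-to-end sketch (horizon, train_df, and val_size are placeholders carried over from the earlier snippet):

from neuralforecast import NeuralForecast
from neuralforecast.auto import AutoTSMixerx

config = AutoTSMixerx.get_default_config(h=horizon, backend='gpu', n_series=5)
config['early_stop_patience_steps'] = 2  # patience, counted in validation checks
config['val_check_steps'] = 50           # run validation every 50 training steps

# refit_with_val=True keeps the validation set when the best configuration is
# refit, so the ptl/val_loss metric that early stopping monitors stays available.
model = AutoTSMixerx(h=horizon, n_series=5, config=config, num_samples=50,
                     refit_with_val=True)

nf = NeuralForecast(models=[model], freq='15T')
nf.fit(df=train_df, val_size=val_size)  # val_size > 0 creates the validation set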
Ml Club
05/24/2024, 12:31 PM

Ml Club
05/24/2024, 12:33 PM

Marco
05/24/2024, 1:02 PM

Ml Club
05/24/2024, 2:38 PM

Marco
05/24/2024, 2:44 PM
trainer_kwargs={'logger': False}
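As a sketch of where that goes: NeuralForecast models forward extra keyword arguments to pl.Trainer (the traceback above shows trainer = pl.Trainer(**self.trainer_kwargs)), so passing logger=False at model construction disables the default Lightning logger. Reusing the RNN setup from earlier in the thread:

from neuralforecast import NeuralForecast
from neuralforecast.models import RNN
from neuralforecast.losses.pytorch import MQLoss

fcst = NeuralForecast(
    models=[RNN(h=12,
                input_size=-1,
                loss=MQLoss(level=[80, 90]),
                max_steps=200,
                early_stop_patience_steps=5,
                logger=False)],  # forwarded to pl.Trainer; no lightning_logs/ output
    freq='M'
)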
Ml Club
05/24/2024, 2:50 PM