# neural-forecast
m
I am having a strange issue with my code. I am defining an RNN model with an early stop patience of 2, and when I run the cross validation I get the following error: `RuntimeError: Early stopping conditioned on metric ptl/val_loss which is not available. Pass in or modify your EarlyStopping callback to use any of the following: train_loss, train_loss_step, train_loss_epoch`
```python
from neuralforecast import NeuralForecast
from neuralforecast.models import RNN
from neuralforecast.losses.pytorch import MQLoss

fcst = NeuralForecast(
    models=[RNN(h=12,
                input_size=-1,
                inference_input_size=24,
                loss=MQLoss(level=[80, 90]),
                scaler_type='robust',
                encoder_n_layers=2,
                encoder_hidden_size=128,
                context_size=10,
                decoder_hidden_size=128,
                decoder_layers=2,
                max_steps=300,
                # futr_exog_list=['y_[lag12]'],
                #hist_exog_list=['y_[lag12]'],
                # stat_exog_list=['airline1'],
                early_stop_patience_steps=5
                )
    ],
    freq='M'
)

crossvalidation_df = fcst.cross_validation(df=Y_train_df, static_df=AirPassengersStatic, n_windows=1, step_size=1)
```

```
RuntimeError                              Traceback (most recent call last)
Cell In[14], line 1
----> 1 crossvalidation_df = fcst.cross_validation(df=Y_train_df, static_df=AirPassengersStatic, n_windows=1, step_size=1)

NeuralForecast.cross_validation(self, df, static_df, n_windows, step_size, val_size, test_size, sort_df, use_init_models, verbose, **data_kwargs)
    515 fcsts = np.full(
    516     (self.dataset.n_groups * h * n_windows, len(cols)), np.nan, dtype=np.float32
    517 )
    519 for model in self.models:
--> 520     model.fit(dataset=self.dataset, val_size=val_size, test_size=test_size)
    521     model_fcsts = model.predict(
    522         self.dataset, step_size=step_size, **data_kwargs
    523     )
    525 # Append predictions in memory placeholder

BaseRecurrent.fit(self, dataset, val_size, test_size, random_seed)
    630 self.trainer_kwargs["check_val_every_n_epoch"] = None
    632 trainer = pl.Trainer(**self.trainer_kwargs)
--> 633 trainer.fit(self, datamodule=datamodule)

Trainer.fit(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
    542 self.state.status = TrainerStatus.RUNNING
    543 self.training = True
--> 544 call._call_and_handle_interrupt(
...
--> 153 raise RuntimeError(error_msg)
    154 if self.verbose > 0:
    155     rank_zero_warn(error_msg, category=RuntimeWarning)

RuntimeError: Early stopping conditioned on metric `ptl/val_loss` which is not available. Pass in or modify your `EarlyStopping` callback to use any of the following: `train_loss`, `train_loss_step`, `train_loss_epoch`
```
m
Hello! Here, you have to set `val_size` and `test_size`, and set `n_windows` to `None`. Early stopping monitors the validation loss (`ptl/val_loss`), which is only computed when a validation set exists, so `val_size` must be greater than 0. The following code runs:
```python
from neuralforecast import NeuralForecast
from neuralforecast.models import RNN
from neuralforecast.losses.pytorch import MQLoss

fcst = NeuralForecast(
    models=[RNN(h=12,
                input_size=-1,
                inference_input_size=24,
                loss=MQLoss(level=[80, 90]),
                scaler_type='robust',
                encoder_n_layers=2,
                encoder_hidden_size=128,
                context_size=10,
                decoder_hidden_size=128,
                decoder_layers=2,
                max_steps=200,
                early_stop_patience_steps=5
                )
    ],
    freq='M'
)

cv_df = fcst.cross_validation(df=Y_train_df, static_df=AirPassengersStatic, val_size=12, test_size=12, n_windows=None)
```
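The same requirement applies outside of cross validation: with `early_stop_patience_steps` set, `fit` also needs a validation split so that `ptl/val_loss` gets logged. A minimal sketch reusing the model above (the `val_size=12` value is just an illustrative choice):

```python
# Early stopping monitors ptl/val_loss, which is only computed when a
# validation set is carved out of the training data via val_size.
fcst.fit(df=Y_train_df, static_df=AirPassengersStatic, val_size=12)
```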
t
I’m also facing the same error when using the Auto module:
```python
from neuralforecast import NeuralForecast
from neuralforecast.auto import AutoTSMixerx

config = AutoTSMixerx.get_default_config(h=horizon, backend='gpu', n_series=5)
config['early_stop_patience_steps']=2
config['val_check_steps'] = 50

model = AutoTSMixerx(h=horizon, n_series=5, config=config, num_samples=50)

nf = NeuralForecast(
    models=[model],
    freq='15T'
)
nf.fit(df=train_df, val_size=val_size)
```
m
Can you try with `model = AutoTSMixerx(h=horizon, n_series=5, config=config, num_samples=50, refit_with_val=True)`?
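By default the Auto models refit the best configuration on the full training data with no validation split, so the early-stopping callback again has no `ptl/val_loss` to monitor; `refit_with_val=True` keeps the validation set during the refit. A hedged sketch of the full setup, where only `refit_with_val=True` is new relative to the snippet above:

```python
# refit_with_val=True preserves the validation split when the best
# configuration found by the search is refit, so EarlyStopping can still
# monitor ptl/val_loss during that final fit.
config = AutoTSMixerx.get_default_config(h=horizon, backend='gpu', n_series=5)
config['early_stop_patience_steps'] = 2
config['val_check_steps'] = 50

model = AutoTSMixerx(h=horizon, n_series=5, config=config,
                     num_samples=50, refit_with_val=True)

nf = NeuralForecast(models=[model], freq='15T')
nf.fit(df=train_df, val_size=val_size)
```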
m
@Marco Thank you so much for the help, it is very kind of you.
One question @Marco: can I turn off the logger of the Auto model in NeuralForecast?
m
Yes, but I forget how! Haha! Let me get back to you soon!
m
Sure, I will be waiting for your response. Thanks @Marco
m
This should disable logging:
`trainer_kwargs={'logger': False}`
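NeuralForecast models forward unrecognized keyword arguments to the PyTorch Lightning `Trainer`, so for a plain model the kwarg goes straight into the constructor. For the Auto models, one plausible route (an assumption, not documented behavior) is to set the same key in the search config, since config entries are passed through to the model constructor. A minimal sketch:

```python
# Assumption: trainer kwargs such as logger are forwarded from the model
# constructor to pytorch_lightning.Trainer, so logger=False silences it.
from neuralforecast import NeuralForecast
from neuralforecast.models import RNN

fcst = NeuralForecast(
    models=[RNN(h=12, input_size=24, max_steps=200,
                logger=False)],  # forwarded to pl.Trainer
    freq='M'
)

# For the Auto models, setting the same key in the config (from the
# snippet above) should reach the underlying model and, from there,
# the Trainer. This pass-through is an assumption.
config['logger'] = False
```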
m
But the Auto model docs do not show any trainer arguments.