Sarim Zafar
06/02/2025, 9:19 AM
import lightgbm as lgb
from mlforecast.auto import AutoMLForecast, AutoModel

my_lgb = AutoModel(
    model=lgb.LGBMRegressor(),
    config=my_lgb_config,
)
auto_mlf = AutoMLForecast(
    models={'lgb': my_lgb},
    freq='7D',
    season_length=4,
    init_config=my_init_config,
    fit_config=my_fit_config,
)
auto_mlf.fit(
    processed_df,
    n_windows=12,
    h=horizon,
    num_samples=1000,  # number of tuning trials to run
    loss=custom_loss,
)
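
The config callables referenced above weren't shown. A hypothetical sketch of what they might look like, following the AutoMLForecast convention that each receives an optuna Trial (the model config returns hyperparameters for the estimator, init_config returns kwargs for the MLForecast constructor, and fit_config returns kwargs for its fit method); the specific parameters and values are illustrative assumptions, not the poster's actual code:

def my_lgb_config(trial):
    # hypothetical search space for the LGBMRegressor
    return {
        'n_estimators': trial.suggest_int('n_estimators', 50, 500),
        'learning_rate': trial.suggest_float('learning_rate', 0.01, 0.3, log=True),
    }

def my_init_config(trial):
    # hypothetical kwargs for the MLForecast constructor
    return {'lags': [1, 2, 4], 'date_features': ['month']}

def my_fit_config(trial):
    # hypothetical kwargs forwarded to MLForecast.fit
    return {'static_features': ['country', 'department']}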
I then save it, reload it, and re-evaluate with additional metrics that weren't being tracked in the AutoML step. But the results yield different metric values:
import numpy as np
from mlforecast import MLForecast
from utilsforecast.losses import mae, mape, smape

auto_mlf.save('AutoLightGBM')
mlf = MLForecast.load('AutoLightGBM/lgb')
cv_res = mlf.cross_validation(
    processed_df,
    n_windows=12,
    h=4,
    step_size=4,
    refit=False,
    static_features=['country', 'department'],
)
mae_error = mae(cv_res, models=['lgb'])['lgb'].mean()
mape_error = mape(cv_res, models=['lgb'])['lgb'].mean()
smape_error = smape(cv_res, models=['lgb'])['lgb'].mean()
bias = np.mean(cv_res['lgb'] - cv_res['y'])
metrics = {'mae': mae_error, 'mape': mape_error, 'smape': smape_error, 'bias': bias}
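
As an aside, utilsforecast also ships an evaluate helper that computes several metrics in one pass. A minimal sketch, assuming the default unique_id/ds/y column names and that the cutoff column produced by cross-validation is dropped first:

from utilsforecast.evaluation import evaluate
from utilsforecast.losses import mae, mape, smape

# one row per (unique_id, metric); average over series to summarize
metrics_df = evaluate(
    cv_res.drop(columns='cutoff'),
    metrics=[mae, mape, smape],
    models=['lgb'],
)
summary = metrics_df.groupby('metric')['lgb'].mean()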
Olivier
06/02/2025, 11:45 AM
From the refit docs: "Retrain model for each cross validation window. If False, the models are trained at the beginning and then used to predict each window. If positive int, the models are retrained every refit windows."

So it seems you're not comparing the same fitted model. The first is optimized over 1000 samples. The second loads that optimized model and then refits it against processed_df.
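
A minimal sketch of the distinction being drawn here, assuming the saved MLForecast keeps its fitted models (predict reuses them as-is, while cross_validation trains fresh ones):

mlf = MLForecast.load('AutoLightGBM/lgb')

# Uses the models exactly as they were fitted during the AutoML run.
preds = mlf.predict(h=4)

# Even with refit=False the models are trained once, on the first
# cross-validation training window, so the loaded fit is replaced and
# the resulting metrics need not match the values seen during tuning.
cv_res = mlf.cross_validation(
    processed_df,
    n_windows=12,
    h=4,
    step_size=4,
    refit=False,
    static_features=['country', 'department'],
)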