# general
j
Hey. The library is built around the premise that you want to forecast, say, 10 periods ahead and estimate how well your models do at that, so you run that procedure a few times (maybe 4 windows of 10 periods each). I don't think there's a way to do what you're asking with the built-in cross_validation. However, that's just a convenience function; you could achieve the same thing by iterating over the series, deciding what you want your train size and number of windows to be, and just calling `StatsForecast.forecast` on each subset.
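A minimal sketch of that manual loop, assuming a long-format `df` with the usual `unique_id`/`ds`/`y` columns; the `AutoARIMA` model, the frequency, and the horizon/window counts are illustrative placeholders, and the exact `StatsForecast.forecast` signature may differ between library versions:

```python
import pandas as pd
from statsforecast import StatsForecast
from statsforecast.models import AutoARIMA

def manual_cv(df: pd.DataFrame, h: int = 10, n_windows: int = 4,
              freq: str = "D") -> pd.DataFrame:
    """Roll a cutoff backwards through each series and forecast h steps ahead."""
    results = []
    for _, serie in df.groupby("unique_id"):
        serie = serie.sort_values("ds")
        for w in range(n_windows, 0, -1):
            # Expanding training window: everything before the w-th last block of h.
            train = serie.iloc[: len(serie) - w * h]
            sf = StatsForecast(models=[AutoARIMA()], freq=freq)
            fcst = sf.forecast(df=train, h=h)
            fcst["cutoff"] = train["ds"].iloc[-1]  # last training timestamp
            results.append(fcst)
    return pd.concat(results, ignore_index=True)
```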
j
Makes sense, since the whole hyperparameter-tuning step (especially because it runs once per window) is so computationally expensive. I'll think about running the Auto* tuning only once on the entire time series to get the best model parameters, and then doing the cross-validation with that best model; maybe that way I'll manage the "extensive" cross-validation with reasonable runtime (sketched below). Thank you.
👍 1
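A sketch of that tune-once-then-fix idea, assuming AutoARIMA as the Auto* model and the same long-format `df` as above. Reading the selected orders back through `fitted_` and `model_["arma"]` relies on version-dependent internals (and on the R-style `(p, q, P, Q, m, d, D)` layout), so treat that part as an assumption:

```python
from statsforecast import StatsForecast
from statsforecast.models import ARIMA, AutoARIMA

def tune_once_then_cv(df, season_length=12, h=10, n_windows=4, freq="M"):
    # 1) Run the expensive Auto* search once, on the full history.
    sf_tune = StatsForecast(models=[AutoARIMA(season_length=season_length)],
                            freq=freq)
    sf_tune.fit(df)

    # Assumption: the fitted AutoARIMA stores its selected orders in
    # model_["arma"] as (p, q, P, Q, m, d, D). [0][0] = first series,
    # first model; with many series you would loop instead.
    p, q, P, Q, m, d, D = sf_tune.fitted_[0][0].model_["arma"]

    # 2) Cross-validate with the orders frozen, so nothing re-tunes per window.
    fixed = ARIMA(order=(p, d, q), season_length=m, seasonal_order=(P, D, Q))
    return StatsForecast(models=[fixed], freq=freq).cross_validation(
        df=df, h=h, n_windows=n_windows
    )
```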