# neural-forecast
m
are we using the expanding window or without expanding? i mean, will the training data be the same size for each fold?
c
The training data is not moving; it is fixed to end at the beginning of the validation set. What moves is the input, so the model uses the latest data to make predictions (without updating parameters).
m
are you talking about cross validation ?
c
yes, it's none of the images you showed
m
i am sorry but i can't understand how it works
you mean it uses fixed training data and only the beginning of the prediction period moves, right? with a fixed size for the prediction period as well, and this beginning depends on the step size
c
Training and using data as inputs are different processes. When training the model we update its parameters, and only the data before the validation period is used. The model's parameters are fixed during validation.
Then, for predicting each "split" (window of size horizon), the model takes the latest available data (ground truth, actual values) as input.
You can simply see cross-validation as a for loop of .predict(), where the date moves. The model is fixed; it was trained only once, on the data prior to these validation windows.
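That "for loop of .predict()" can be sketched in plain Python. This is a conceptual illustration only, not neuralforecast's actual implementation; the `model` object, `input_size`, and the function name are assumptions:

```python
# Conceptual sketch of cross-validation with a FIXED, pre-trained model.
# Training happened once, on data before the validation windows; here the
# parameters stay frozen and only the input window moves forward.

def cross_validation_sketch(model, series, h, step_size, n_windows, input_size):
    forecasts = []
    # Cutoff of the first validation window.
    first_cutoff = len(series) - h - (n_windows - 1) * step_size
    for i in range(n_windows):
        cutoff = first_cutoff + i * step_size
        # The input is the latest available GROUND TRUTH before the cutoff.
        window_input = series[cutoff - input_size : cutoff]
        # No parameter update here, just a forward pass.
        forecasts.append(model.predict(window_input))
    return forecasts
```

Each iteration predicts one window of size `h`; the model itself never changes across folds.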
b
does this behavior change when we put `refit_with_val=True`?
c
@Bradley de Leeuw we don't have the `refit_with_val` argument
b
AutoNHITS has (had?) that parameter and I used it in combination with `NeuralForecast.cross_validation` with the following settings:

```python
h=15
step_size=15
n_windows=4
freq="W"
refit_with_val=True
```
So if I understand correctly, the last 60 weeks of data are never used to update the parameters of the model in this case? It simply trains on `n - (4*15)` datapoints and only the input data changes over the 4 folds.
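The arithmetic for those settings can be checked directly. Since `step_size == h`, the validation windows don't overlap (the series length `n = 300` below is a hypothetical example, not from the thread):

```python
# Held-out span for h=15, step_size=15, n_windows=4 (weekly data).
h, step_size, n_windows = 15, 15, 4

# Last window ends at the series end; windows are h apart.
val_span = h + (n_windows - 1) * step_size  # weeks never used for training

n = 300  # hypothetical total number of weekly observations
train_points = n - val_span  # the model's parameters see only these

print(val_span, train_points)  # 60 240
```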
c
the `refit_with_val` argument is for training the best model afterwards. If True, it will include the validation set. But this is not part of `cross_validation`.
during `cross_validation` the model's parameters are not updated
Here you can see that `val_size=0` if `refit_with_val=True`. This is to fit the final model after selecting the best configuration.
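A tiny sketch of that toggle (the function name is illustrative, not the library's internals): when `refit_with_val=True`, the final fit's validation size is forced to zero, so the validation data is folded back into training.

```python
def final_val_size(val_size, refit_with_val):
    # With refit_with_val=True the best configuration is refit on
    # training + validation data, so nothing is held out.
    return 0 if refit_with_val else val_size

print(final_val_size(60, True))   # 0
print(final_val_size(60, False))  # 60
```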
b
Got it, thanks! Out of curiosity, what’s the reason for not (optionally) refitting the model during cross-validation? In real life, I’d argue that updating the parameters with some frequency, e.g. every month or whatever is best for your model, makes sense, right? This would also make it easier to do backtesting and mimic a real-world scenario where you e.g. retrain every 4 weeks as new data comes in. PS: is this functionality the same in StatsForecast's and MLForecast’s cross_validation?
Nvm, I found that `MLForecast` has the `refit=True` option in cross_validation. Will this likely also be added for `NeuralForecast` in the future?
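For contrast with the fixed-model loop described earlier, here is a conceptual sketch of what a `refit=True` option changes: each fold refits on all data up to its cutoff before predicting. The helpers `fit`/`predict` and the function name are toy assumptions, not MLForecast's implementation:

```python
# Conceptual backtest loop where the model CAN be refit each fold.
# fit(train) -> fitted parameters; predict(params, train) -> h-step forecast.

def cross_validation_with_refit(fit, predict, series, h, step_size,
                                n_windows, refit=True):
    results = []
    first_cutoff = len(series) - h - (n_windows - 1) * step_size
    params = fit(series[:first_cutoff])  # initial training
    for i in range(n_windows):
        cutoff = first_cutoff + i * step_size
        train = series[:cutoff]
        if refit and i > 0:
            params = fit(train)  # parameters updated with the newer data
        results.append(predict(params, train))
    return results
```

With `refit=False` this collapses to the earlier behavior (one training run, moving inputs only); with `refit=True` it mimics periodically retraining as new data comes in.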