# neural-forecast
Layon Hu:
From my current understanding of the cross_validation function, the final trained model uses the optimal hyperparameters. If you want to save the test results of each trial, you can save them as follows.
```python
from neuralforecast import NeuralForecast

nf = NeuralForecast(models=models, freq='s')
# n_windows=None lets val_size/test_size define the splits.
Y_hat_df = nf.cross_validation(df=data, val_size=val_size,
                               test_size=test_size, n_windows=None)
# One row per hyperparameter trial (Optuna backend).
results = nf.models[0].results.trials_dataframe()
```
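A hedged follow-up, assuming the Optuna backend (which `trials_dataframe()` implies) and its default column names:
```python
# 'value' holds each trial's objective (validation loss by default);
# columns prefixed 'params_' hold the sampled hyperparameters.
best_trials = results.sort_values('value').head(5)
print(best_trials.filter(like='params_'))
```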
If you want to view the training process of each trial, you can use TensorBoard. Use the following command to view the training results in the `lightning_logs` folder generated in the same directory as your code:
```
(nixtla_auto) C:\Users\Windows 10>tensorboard --logdir="F:\postguaduate\vibration\various model test\NIXTLA\new data\DeepAR\IMF7\lightning_logs"
```
Replace the file path in double quotes with your own path.
Cristian (Nixtla):
Hi @Florian Stracke! To complement @Layon Hu's answer, note that the purpose of `cross_validation` is precisely to get the out-of-sample predictions on the test set. The best model on the validation set is used to return the forecasts on the test set. All Auto models have a `refit_with_val` parameter that you can set to `True` to retrain the best model, also using the validation set, before making the predictions on the test set. So you will only make one call to `cross_validation`, not two.
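A minimal sketch of that single call; `AutoNHITS`, the 12-step horizon, monthly frequency, the split sizes, and the DataFrame `Y_df` are assumptions for illustration:
```python
from neuralforecast import NeuralForecast
from neuralforecast.auto import AutoNHITS

# refit_with_val=True retrains the best configuration on train + val
# before forecasting the test windows.
models = [AutoNHITS(h=12, num_samples=10, refit_with_val=True)]
nf = NeuralForecast(models=models, freq='M')

Y_hat_df = nf.cross_validation(df=Y_df, val_size=12,
                               test_size=12, n_windows=None)
```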
Florian Stracke:
Hey @Cristian (Nixtla), first of all, thank you very much for taking the time to help me. Unfortunately, I am still a bit confused. I tried to visualize what I want to achieve.
Cristian (Nixtla):
Sorry for the late reply @Florian Stracke. This is exactly what the `cross_validation` function does with the Auto models. In your plot, the validation set would correspond to the three red months, from March to May.
Asad Abbas:
Thanks @Cristian (Nixtla). Just a quick question around the above discussion on cross-validation and test size. So when using cross-validation with Auto models, we don't have to retrain the best model separately on the complete data (train + val) with the best parameters and call `.fit` and `.predict`. What I understood from the above discussion is that we can simply use `refit_with_val`, let the best model train on the complete data including the validation data, and let it make predictions on the test data. In my case, I want to make future predictions and I'm using future exogenous variables as well, so I will have the complete data, train + val + test (missing the target variable, but containing the future exogenous variables), and can make out-of-sample predictions on the test data using the best model trained on the complete (train + val) data.
Cristian (Nixtla):
Hi @Asad Abbas! Cross validation would not work in this case because its intended use is historical evaluation, that is, on timestamps where the target variable is available. You will need to use `fit` and `predict` for this. You can still do hyperparameter selection in the `fit` method with the Auto model; just define the `val_size`.
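A hedged sketch of that workflow; `AutoNHITS`, the horizon, and the DataFrames `train_val_df` (history with target and exogenous columns) and `futr_df` (future timestamps with the exogenous columns, no target) are assumptions for illustration:
```python
from neuralforecast import NeuralForecast
from neuralforecast.auto import AutoNHITS

models = [AutoNHITS(h=12, num_samples=10)]
nf = NeuralForecast(models=models, freq='M')

# Hyperparameter selection happens inside fit: the last val_size
# steps of each series are held out as the validation set.
nf.fit(df=train_val_df, val_size=12)

# futr_df supplies the future exogenous values for the horizon.
Y_hat_df = nf.predict(futr_df=futr_df)
```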
Asad Abbas:
Thanks @Cristian (Nixtla), that makes sense. Just wondering: what if I instead create a dummy target variable for the test set, train + val + test (dummy target variable, but containing the future exogenous variables), use cross-validation to select the best model with `refit_with_val`, and get the out-of-sample predictions on the test set? Would that work?