Slackbot [06/15/2023, 2:39 PM]

Kin Gtz. Olivares [06/15/2023, 2:57 PM]

Yang Guo [06/15/2023, 3:05 PM]
nf.predict and directly apply it to a subset of the dataset.
Cristian (Nixtla) [06/15/2023, 4:48 PM]
The cross_validation method uses a validation set (of size val_size) for model selection, and then automatically produces forecasts for the entire test set (of size test_size, or n_windows windows of size h).
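A minimal sketch of that call, assuming a DataFrame with the usual unique_id / ds / y columns; the model, horizon, and split sizes below are illustrative assumptions, not taken from the thread:

```
from neuralforecast import NeuralForecast
from neuralforecast.models import NHITS
from neuralforecast.utils import AirPassengersDF

Y_df = AirPassengersDF  # example data with unique_id, ds, y columns

# h=12, input_size=24, and the split sizes are assumptions for illustration.
nf = NeuralForecast(models=[NHITS(h=12, input_size=24, max_steps=100)], freq='M')

# val_size is used for model selection; the returned frame holds forecasts
# for the test set, given either as test_size or as n_windows windows of size h.
cv_df = nf.cross_validation(df=Y_df, val_size=12, test_size=12, n_windows=None)
print(cv_df.head())  # columns: unique_id, ds, cutoff, y, NHITS
```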
Cristian (Nixtla) [06/15/2023, 4:50 PM]
You can use the predict_insample method (run it after fit or cross_validation) to recover the forecasts for the entire train AND validation sets. You can then filter the forecasts however you want.

Cristian (Nixtla) [06/15/2023, 4:51 PM]
predict_insample already returns the true values in the y column as well. Here is the tutorial: https://nixtla.github.io/neuralforecast/examples/predictinsample.html
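Continuing the sketch above, recovering and filtering the in-sample forecasts might look like this (the filter date is arbitrary):

```
# Run after fit() or cross_validation(); returns fitted values for the
# train and validation periods, with the true values in the y column.
insample_df = nf.predict_insample(step_size=1)

# Filter to whatever subset you care about, e.g. a date range or a series.
subset = insample_df[insample_df['ds'] >= '1958-01-31']
```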
Cristian (Nixtla) [06/15/2023, 4:52 PM]

Cristian (Nixtla) [06/15/2023, 4:53 PM]
cross_validation is already doing model selection for you. It essentially covers the entire pipeline: train the model on the train set, select on the validation set, and predict on the test set.
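Because the frame returned by cross_validation pairs the test-set forecasts with the true y values, evaluating a model is straightforward (continuing the sketch above; MAE is just an example metric):

```
# Mean absolute error of the NHITS forecasts over the test windows.
mae = (cv_df['y'] - cv_df['NHITS']).abs().mean()
print(f'Test MAE: {mae:.2f}')
```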
Yang Guo [06/15/2023, 4:53 PM]

Yang Guo [06/15/2023, 4:54 PM]

Cristian (Nixtla) [06/15/2023, 4:55 PM]
cross_validation only trains the model once.

Yang Guo [06/15/2023, 4:57 PM]
Cristian (Nixtla) [06/15/2023, 5:02 PM]
cross_validation is the way to go. This is the function we have used in our published research, and it is the standard way of comparing the performance of models. The historic data is separated chronologically into train/val/test. Models are trained on the train set, and the validation set is then used for model selection and hyperparameter tuning (for example, if you use an auto model such as AutoPatchTST). Finally, it returns the forecast on the test set, which was never seen by the model during training.
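A sketch with an auto model, whose hyperparameters are tuned on the validation split; num_samples, the horizon, and the split sizes are assumptions, and the default search space is used:

```
from neuralforecast import NeuralForecast
from neuralforecast.auto import AutoPatchTST

nf = NeuralForecast(models=[AutoPatchTST(h=12, num_samples=10)], freq='M')

# Train on the train split, tune on the validation split, forecast the test split.
cv_df = nf.cross_validation(df=Y_df, val_size=12, test_size=12, n_windows=None)
```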
Cristian (Nixtla) [06/15/2023, 5:02 PM]

Cristian (Nixtla) [06/15/2023, 5:05 PM]
predict_insample is there to recover the forecasts for the train set and validation set. Was this useful? Let me know if you have additional doubts; we can chat using direct messages as well.
Yang Guo [06/15/2023, 5:05 PM]

Cristian (Nixtla) [06/15/2023, 5:07 PM]

Yang Guo [06/15/2023, 5:08 PM]

Cristian (Nixtla) [06/15/2023, 5:09 PM]
You can use the predict method for that, but as you said, it can only forecast one window. You just need to send predict(df=new_df). We actually have a tutorial on transfer learning with this use case here: https://nixtla.github.io/neuralforecast/examples/transfer_learning.html
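A sketch of that transfer-learning call; new_df is a hypothetical DataFrame with the same unique_id / ds / y schema as the training data:

```
# Reuse the already-trained model on unseen series: a single window of h steps.
forecasts_new = nf.predict(df=new_df)
```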
Cristian (Nixtla) [06/15/2023, 5:14 PM]
Set nf.models[0].max_steps=0. Then pass the new dataset to the fit method (set use_init_models=False), then call predict_insample.
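A sketch of that sequence, assuming the trained NeuralForecast object nf from above and the hypothetical new_df; whether setting max_steps on the model this way is honored may depend on the library version:

```
# Disable further training, re-fit only to register the new dataset,
# then recover in-sample forecasts for it with the existing weights.
nf.models[0].max_steps = 0
nf.fit(df=new_df, use_init_models=False)  # keep the trained parameters
insample_new = nf.predict_insample(step_size=1)
```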