# mlforecast
j
That's what is done internally in predict, so usually you just split your initial dataframe (without lags), use that to train, call predict, and compare afterwards
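For reference, a minimal sketch of that workflow, assuming a long-format dataframe `df` with `unique_id`, `ds` and `y` columns, daily data, and lightgbm installed (the horizon and lag choices below are illustrative, not prescriptive):

```python
from lightgbm import LGBMRegressor
from mlforecast import MLForecast

horizon = 14  # illustrative forecast horizon

# hold out the last `horizon` rows of each series as the test set;
# the remaining rows (without any precomputed lag columns) are the train set
test = df.groupby('unique_id').tail(horizon)
train = df.drop(test.index)

fcst = MLForecast(
    models=[LGBMRegressor()],
    freq='D',          # assumed daily frequency
    lags=[1, 7, 14],   # lag features are built internally from `y`
)
fcst.fit(train)                # lags are computed from the training history
preds = fcst.predict(horizon)  # lags beyond the history come from the model's own predictions

# compare the forecasts against the held-out values
evaluation = test.merge(preds, on=['unique_id', 'ds'], how='left')
```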
v
I don’t understand; my question is specifically about lags. The output from preprocess includes lower-order lags in the test set, and those lags take information from the test set. Is the same output used to fit the model, or is there additional processing to remove the lower-order lags? How can one get valid lags from preprocess?
j
The assumption of preprocess is that it will produce a training set only, so it uses all information available. Are you trying to create train and test sets for another model?
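To make that concrete, here is a small sketch of what preprocess returns, assuming the same kind of long-format frame as above (the exact lag column names may vary slightly across versions):

```python
from lightgbm import LGBMRegressor
from mlforecast import MLForecast

fcst = MLForecast(models=[LGBMRegressor()], freq='D', lags=[1, 7])

# `train` is a long-format frame with unique_id, ds, y.
# preprocess treats every row it is given as history and returns the same
# frame with the lag features appended -- it does not hold anything out.
prep = fcst.preprocess(train)
print(prep.columns.tolist())  # e.g. ['unique_id', 'ds', 'y', 'lag1', 'lag7']
```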
v
No, I am trying to create train and test sets for mlforecast. To do inference, mlforecast needs to create the same features as in the train set, right? And this includes lags?
j
That's handled in predict. If you only want to use lags, you should be able to just use fit, and when you call predict the lags will be generated automatically based on the previous predictions
preprocess is mostly used to verify the features and possibly modify them before training the model. If the regular lags are OK for you, then you don't need it
I believe you can use exactly this: https://github.com/Nixtla/mlforecast#quick-start
v
I see, so there is no way to use preprocess to generate valid lags on the test set, either in mlforecast or another Nixtla library? Is there a way to get the features generated at the prediction phase?
Without features it is not clear how one can explain the predictions coming from the model.
j
How would you get those valid lags? The moment you run out of history you have to start taking them from somewhere else (we take them from the model's predictions). You can get the input features with a callback
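A rough sketch of that, reusing the `fcst` and `horizon` from the earlier sketch and assuming predict accepts a before_predict_callback that receives the feature frame for each step and must return it (the callback name and capture list here are just illustrative):

```python
captured_features = []

def save_input_features(new_x):
    # called before each one-step-ahead prediction with the features
    # (including the recursively generated lags) the model is about to see
    captured_features.append(new_x.copy())
    return new_x  # must return the (possibly modified) features

preds = fcst.predict(horizon, before_predict_callback=save_input_features)
# captured_features[i] now holds the inputs used for step i + 1
```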
v
That’s great, thank you @José Morales, I will check it out