https://github.com/nixtla

Chidi Nweke

05/23/2023, 8:53 AM
Can the Naive method be used in an in-sample way, not just for `y_t = y_{t-1}` but for `y_t = y_{t-n}`? This is particularly important because we essentially have multiple tasks: forecasting, say, 1 week, 2 weeks, 3 weeks into the future, and we assume that after the following week we will observe the next value. From what I understood from reading the docs in their entirety, the `forecast` method in Naive would not achieve this, as it only looks at your training data.
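The lag-n variant described above can be sketched by hand; a minimal illustration (not part of statsforecast, just a pandas `shift`) of an in-sample prediction `y_t = y_{t-n}`:

```python
import pandas as pd

# Hypothetical helper, not a library function: an in-sample naive
# prediction y_t = y_{t-n}, computed directly with a pandas shift.
def naive_insample(y: pd.Series, n: int) -> pd.Series:
    """Predict each y_t as the value observed n steps earlier."""
    return y.shift(n)

y = pd.Series([10.0, 12.0, 11.0, 13.0, 14.0])
preds = naive_insample(y, n=2)
# the first n predictions are undefined (NaN); after that,
# preds is just y shifted down by n positions
```

This only covers the fixed-lag case; the multi-horizon "1 week, 2 weeks, 3 weeks" setup would call it once per horizon n.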

Rafael Correia Da Silva

05/23/2023, 12:02 PM
Maybe the `cross_validation` method from the core neuralforecast can help? It basically trains your model up to a point, then forecasts for a given number of horizons in the future, which could be week 1, 2, ..., n

Chidi Nweke

05/23/2023, 12:14 PM
That's certainly a possible workaround, thanks! For it to work properly, I assume I'd have to append the last n observations from y_train to y_test and run `cross_validation`.

Rafael Correia Da Silva

05/23/2023, 12:18 PM
That's a fair workaround. I tend to leave out some of the dataset/y_test for out-of-sample testing (especially the most recent past), just to check for overfitting

Chidi Nweke

05/23/2023, 12:19 PM
How would this work for SimpleExponentialSmoothingOptimized, though? It has no `forward` method. I'd expect to be able to find the optimized alpha value on the training set and then call `forward` on the test set. Being able to use y_true in a rolling way is a big requirement for me

Rafael Correia Da Silva

05/23/2023, 12:35 PM
I guess it does just that. I wasn't familiar with the model, but I was able to whip up a quick example to show it in action: github gist. From what I get:
• it cuts off the training data at 1958-11-30, fixing alpha
• then uses the following y_hat values as the rolling weights

Chidi Nweke

05/23/2023, 12:39 PM
Thank you so much for helping! The issue is that in our use case we might have access to y_true, so we want to split experiments into cases where we use y_hat in a rolling way (as in your gist) and cases where we replace the oldest y_hat with y_true. I think `forward` does this (although it isn't really documented what it does...), but this model does not have it 😕

Rafael Correia Da Silva

05/23/2023, 12:44 PM
I guess you're right. Maybe a loop over fit/predict would work, where you choose whether to paste y_hat or y_true?
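The fit/predict loop suggested here can be sketched with a plain-Python SES recursion; alpha is assumed to have been optimized on the training set already (which is what SimpleExponentialSmoothingOptimized does internally), and the `use_true` flag is a hypothetical name for the choice being discussed:

```python
# Sketch of the suggested loop: fix alpha on the training set, then roll
# forward over the test set, choosing at each step whether the recursion
# is fed the model's own prediction (y_hat) or the observed value (y_true).
def ses_rolling(train, test, alpha, use_true=True):
    # initialize the smoothed level from the training data
    level = train[0]
    for y in train[1:]:
        level = alpha * y + (1 - alpha) * level
    preds = []
    for y_true in test:
        preds.append(level)                       # one-step-ahead forecast
        feedback = y_true if use_true else level  # y_true vs rolling y_hat
        level = alpha * feedback + (1 - alpha) * level
    return preds

train = [10.0, 11.0, 12.0, 11.0]
test = [13.0, 14.0, 12.0]
with_truth = ses_rolling(train, test, alpha=0.5, use_true=True)
without = ses_rolling(train, test, alpha=0.5, use_true=False)
```

Note that with `use_true=False` the feedback equals the current level, so the forecast stays flat; that matches SES's flat multi-step forecast. Feeding y_true updates the level at every step, which is the rolling behavior being asked for.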

Chidi Nweke

05/23/2023, 12:50 PM
Yeah, I can make it work with a few ugly workarounds or looping. That's totally ok. Since the package already does so much, I was implicitly expecting it to do everything.

Rafael Correia Da Silva

05/23/2023, 12:50 PM
yeah it'll get ugly fast 😅

Chidi Nweke

05/23/2023, 12:52 PM
I can open an issue on GitHub to support this, because I think it's a common enough use case

Rafael Correia Da Silva

05/23/2023, 12:52 PM
perhaps there's a place for a `rolling` flag in `cross_validation` to do just that
yeah that could be helpful