# neural-forecast
k
Hey @James Wei, thanks for using NeuralForecast 🙂 This is currently one of our priority fixes in NeuralForecast inference. A partial fix is already implemented: the `inference_input_size` parameter effectively trims the time series so the model does not consume their entire history. I would recommend setting `inference_input_size=test_length*3`. Let me know if this helps. Would you be able to report this in a GitHub issue if your problem persists?
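For context, a minimal sketch of where `inference_input_size` plugs in (the synthetic dataframe, the monthly frequency, and the `test_length` value are illustrative assumptions, not from the thread):

```python
import numpy as np
import pandas as pd

from neuralforecast import NeuralForecast
from neuralforecast.models import RNN

# Synthetic long-format data: one series, 500 monthly observations
df = pd.DataFrame({
    "unique_id": "series_1",
    "ds": pd.date_range("1980-01-31", periods=500, freq="M"),
    "y": np.random.rand(500),
})

test_length = 24  # hypothetical length of the held-out test window

model = RNN(
    h=test_length,
    inference_input_size=test_length * 3,  # trim the history used at predict time
)
nf = NeuralForecast(models=[model], freq="M")
nf.fit(df=df)
preds = nf.predict()
```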
j
This worked, thank you very much!
k
Good to know @James Wei! Would you be able to post this as an issue on GitHub? That way, if somebody has similar issues, they'll find the solution :)
j
Of course, will do it now!
k
Thanks
j
It looks like when fitting an RNN, the memory allocated is still orders of magnitude larger than what the number of parameters implies. For example, a dataset of length 10,000 with 14 endogenous time series, using the default network and batch sizes, needs far more than my 12GB capacity. Have you tried running `nf.fit()` on dataframes of similar size, and if so, what was the peak memory usage?
k
Hey @James Wei, we might have a curse-of-dimensionality problem showing up. Our RNN-based methods use a forking-sequences optimization that creates windows of shape [series, time, channels, horizon]. Possible ways to alleviate the memory usage (sketched below):
• reduce the series batch size
• filter the length of the series in your data directly, or via `input_size` and `inference_input_size`
• switch to window-based models like MLP, NBEATS, or NHITS
From our side, we need to batch the creation of the forecasts at the time level.
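A minimal sketch of those mitigations side by side (the horizon and batch-size values are illustrative assumptions, not tuned recommendations; `df` is a long-format monthly dataframe as in the earlier snippet):

```python
from neuralforecast import NeuralForecast
from neuralforecast.models import RNN, NHITS

h = 24  # illustrative forecast horizon

models = [
    # Mitigations 1 and 2: smaller series batch plus trimmed windows
    RNN(h=h, batch_size=8, input_size=h * 3, inference_input_size=h * 3),
    # Mitigation 3: a window-based model avoids materializing the full
    # [series, time, channels, horizon] forking-sequences tensor
    NHITS(h=h, input_size=h * 3),
]
nf = NeuralForecast(models=models, freq="M")
nf.fit(df=df)  # `df`: your long-format dataframe
```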
j
Thanks for the info, I'll take a closer look at the source code to get a better idea of how this technique works in practice!