When training a global time series model, it is usually said that more time series is better: the more, the better. But what about the length of each individual series? For example, suppose I have 1,000 time series, each with only 100 observations spaced about an hour apart, OR 100 series, each with 10,000 observations spaced about 10 minutes apart. In which case would a global time series model fit better? Or do we just have to try it and find out?
06/23/2023, 5:50 PM
Hi @Syed Umair Hassan! I think it is still an open question, and there is probably no definitive answer; it likely depends on the characteristics of the particular datasets. In general, yes, with more timestamps (whether from longer series or more series) global models tend to perform better on average than local models. That said, with short time series, models would probably struggle to produce accurate forecasts, as the available history (and hence the input window) would also be short.
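To make the comparison in the original question concrete, here is a minimal sketch of what "global" means in practice: lagged windows from every series are pooled into one training set, and a single model is fit on all of them. The AR(1) data generator, the 3-lag feature setup, and the (smaller, faster) panel shapes are all illustrative assumptions, not anything specific to a particular library.

```python
# Hedged sketch: a "global" model pools all series into one training set.
# The synthetic AR(1) data and panel shapes are illustrative stand-ins for
# the two scenarios in the question (many short vs. few long series).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def make_panel(n_series, length):
    """Generate n_series synthetic AR(1)-like series of the given length."""
    series = []
    for _ in range(n_series):
        y = np.zeros(length)
        for t in range(1, length):
            y[t] = 0.8 * y[t - 1] + rng.normal()
        series.append(y)
    return series

def pooled_lag_matrix(series, n_lags=3):
    """Stack lagged windows from every series into one (X, y) pair."""
    X, y = [], []
    for s in series:
        for t in range(n_lags, len(s)):
            X.append(s[t - n_lags:t])
            y.append(s[t])
    return np.array(X), np.array(y)

# Scenario A: many short series; Scenario B: few long series
# (same total number of timestamps, scaled down for speed).
for label, (n, length) in {"A (1000 x 100)": (1000, 100),
                           "B (100 x 1000)": (100, 1000)}.items():
    X, y = pooled_lag_matrix(make_panel(n, length))
    model = LinearRegression().fit(X, y)  # one model fit on ALL series
    print(label, "pooled rows:", len(y),
          "in-sample R^2:", round(model.score(X, y), 3))
```

Note that scenario A loses `n_lags` usable rows per series, so with many short series a larger lag window eats proportionally more of the data, which is one concrete way series length can matter.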
We are currently doing research on the characteristics of datasets that favor transfer learning and global models; we will share our findings soon.
It would be awesome if you could share your findings as well 🙂
Syed Umair Hassan
06/23/2023, 6:06 PM
Well, I am just a newbie compared to you guys =D. I have just started doing time series. I am researching wind speed forecasting and wanted to explore using global models in this area. I also wanted to see how transfer learning works. I've just started, but I would be glad to share my results when I get some. Thanks a lot for your guidance.