# neural-forecast
m
I'm trying to train an NBEATSx model on about 10,000 time series with weekly frequency and yearly seasonality (I've also tried other algorithms such as NHITS, but in this specific case NBEATSx seems to give me better results). Not all time series have the same seasonal pattern: some have peaks in summer, some in winter, some in both summer and winter.

For example, one time series has peaks in both summer and winter, but the trained model could not predict the winter peak. So I tried oversampling that time series by duplicating it 100 times in the training dataset. Now the model can predict the winter peak, but the funny thing is that it also predicts winter peaks for other time series that do not have them. I've tried adding binary static covariates to inform the model of the different types of time series, but that does not seem to be enough to overcome this problem. It is as if the model cannot generalize to different seasonal patterns and can only model one specific type of seasonality (even though they are all yearly seasonalities).

The first thing I can think of to get around this problem is to train a different model for each type of seasonality. Do you have suggestions for other things to try? Intuitively, adding static covariates to indicate the different types of time series also seemed like it should work, but as mentioned above it did not. Thanks
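For context, a simplified sketch of my setup (the toy data, the `winter_peak`/`summer_peak` covariate names, and the parameter values are illustrative placeholders, not my real dataset):

```python
import numpy as np
import pandas as pd
from neuralforecast import NeuralForecast
from neuralforecast.models import NBEATSx

# Two toy weekly series with different yearly patterns (placeholder data):
# one peaks twice a year ("summer and winter"), one peaks once a year.
ds = pd.date_range('2018-01-07', periods=260, freq='W')  # five years of weeks
t = np.arange(260)
df = pd.concat([
    pd.DataFrame({'unique_id': 'winter_and_summer', 'ds': ds,
                  'y': 10 + 5 * np.abs(np.sin(np.pi * t / 26))}),
    pd.DataFrame({'unique_id': 'summer_only', 'ds': ds,
                  'y': 10 + 5 * np.clip(np.sin(np.pi * t / 26), 0, None)}),
])

# One row per series: binary flags describing each series's seasonal
# pattern ('winter_peak' / 'summer_peak' are hypothetical names).
static_df = pd.DataFrame({
    'unique_id': ['winter_and_summer', 'summer_only'],
    'winter_peak': [1, 0],
    'summer_peak': [1, 1],
})

model = NBEATSx(
    h=52,            # forecast one year ahead
    input_size=104,  # look back two years
    stat_exog_list=['winter_peak', 'summer_peak'],
    max_steps=500,   # illustrative training budget
)
nf = NeuralForecast(models=[model], freq='W')
nf.fit(df=df, static_df=static_df)
forecasts = nf.predict()
```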
k
Hey @Manuel, you have an interesting challenge. Here are some ideas:
1. One thing we can try is to use static variable encoders. In its current version, NBEATSx uses the static exogenous variables without encoding them first: https://github.com/Nixtla/neuralforecast/blob/main/neuralforecast/models/nbeatsx.py#L224
2. The [TFT architecture](https://nixtla.github.io/neuralforecast/examples/forecasting_tft.html) already has static encoders implemented: https://github.com/Nixtla/neuralforecast/blob/main/neuralforecast/models/tft.py#L141

The downside of the TFT architecture is its speed, so I recommend using a Google Colab GPU for your experiments. If using the encoded static exogenous variables improves your predictions, please let me know so that we can improve the NBEATSx encoders.
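Something along these lines (an untested sketch reusing the `df`/`static_df` from the setup above; `hidden_size` and `max_steps` are placeholders to tune):

```python
from neuralforecast import NeuralForecast
from neuralforecast.models import TFT

# TFT passes the static covariates through a dedicated static
# covariate encoder before they enter the network.
model = TFT(
    h=52,
    input_size=104,
    stat_exog_list=['winter_peak', 'summer_peak'],
    hidden_size=64,  # placeholder; tune for your data
    max_steps=1000,  # TFT is slower, so a GPU helps here
)
nf = NeuralForecast(models=[model], freq='W')
nf.fit(df=df, static_df=static_df)
forecasts = nf.predict()
```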
m
@Kin Gtz. Olivares Thanks! If I use the TFT model and set a high number of training steps (30-40 epochs), the model is somehow able to predict the winter peak that the global NBEATSx model could not, even if the predicted values are underestimated. Interestingly, if I train NBEATSx for the same large number of epochs, it is still unable to predict the peak. I don't know whether the difference is due to the different handling of the exogenous variables or to the model architecture itself.
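For reference, I'm giving both models the same training budget, roughly like this (in neuralforecast the knob is `max_steps`, i.e. gradient steps, rather than epochs; the value here is illustrative):

```python
from neuralforecast import NeuralForecast
from neuralforecast.models import NBEATSx, TFT

# Same configuration and training budget for both models,
# so the only difference left is the architecture itself.
common = dict(h=52, input_size=104,
              stat_exog_list=['winter_peak', 'summer_peak'],
              max_steps=2000)
nf = NeuralForecast(models=[NBEATSx(**common), TFT(**common)], freq='W')
nf.fit(df=df, static_df=static_df)
forecasts = nf.predict()  # one column per model, easy to compare peaks
```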
k
It is in the architecture: NBEATSx does not have a static encoder, while TFT does.
You can also help NBEATSx and TFT by feeding them a SeasonalNaive “anchor” prediction that catches the winter peaks, if you have enough history for your series (SeasonalNaive with yearly seasonality needs at least one full year of data).
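A rough sketch of that idea, using statsforecast's SeasonalNaive to build the anchor and feeding it to NBEATSx as a future exogenous (the `anchor` column name is made up, `df` is the same long-format panel from the earlier sketch, and this assumes a recent statsforecast where outputs carry `unique_id` as a column):

```python
from statsforecast import StatsForecast
from statsforecast.models import SeasonalNaive
from neuralforecast import NeuralForecast
from neuralforecast.models import NBEATSx

# 1) Build the anchor: SeasonalNaive with season_length=52 (yearly
#    seasonality on weekly data), keeping in-sample fitted values too.
sf = StatsForecast(models=[SeasonalNaive(season_length=52)], freq='W')
anchor_futr = sf.forecast(df=df, h=52, fitted=True)  # anchor over the horizon
anchor_hist = sf.forecast_fitted_values()            # anchor over the history

# 2) Attach the anchor to the training data as a column. The first year
#    of each series has no fitted value, so those rows are dropped.
train_df = df.merge(
    anchor_hist.rename(columns={'SeasonalNaive': 'anchor'})[['unique_id', 'ds', 'anchor']],
    on=['unique_id', 'ds'],
).dropna(subset=['anchor'])
futr_df = anchor_futr.rename(columns={'SeasonalNaive': 'anchor'})[['unique_id', 'ds', 'anchor']]

# 3) Declare the anchor as a future exogenous so the model also sees it
#    over the forecast horizon.
model = NBEATSx(h=52, input_size=104, futr_exog_list=['anchor'], max_steps=500)
nf = NeuralForecast(models=[model], freq='W')
nf.fit(df=train_df)
forecasts = nf.predict(futr_df=futr_df)
```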