# neural-forecast
r
Hi team, I'm new to neuralforecast and am running a POC on how effective global models are for our use case. I have around 400 time series at monthly grain with roughly 10 years of history for most. A small subset of series is large in scale (50K to 100K), while the rest range from the hundreds to a few thousand. I'm exploring NBEATS and NBEATSx (with static exogenous features only), and below is what I have tried to improve performance against a baseline, which is a robust ensemble of multiple statistical models. The results are promising and on par with the baseline; in particular, NBEATS/x capture the trend well. The issue I'm having is with seasonality. My series are quite volatile, and the seasonality produced by NBEATS is very muted even where seasonality is consistent and evident to the naked eye, which leads to poor performance, especially on the larger-scale series.

Things I have tried so far (a rough config sketch follows this list):

- Hyperparameter tuning: Optuna with the TPE sampler.
- Optimizer: AdamW seems to work well.
- Loss: HuberMQLoss with the median quantile and 5-fold CV; for delta I've tried a range of values between 0 and 1 plus a few others like 5, 10, etc.
- Normalization: revin and minmax scaling were helpful in improving accuracy.
- Stacks: seeing improved performance with trend and seasonality stacks, so I'm sticking with those.
- MLP units: range of 32 to 256 units per layer, 1 to 5 layers per block; 3 to 5 layers with 32 units are picked by Optuna most of the time.
- Number of blocks: range of 2 to 8; 4 to 7 mostly picked in tuning.
- Number of stacks: range of 1 to 6 (identity stack used as the last stack for odd-numbered stack counts); 4 and above mostly picked by Optuna.
- Harmonics: range of 2 to 18; Optuna picks 10 and above most of the time.
- Polynomials: 1 to 3.
- Backcast length: 2x the 12-month forecast horizon is mostly picked by Optuna.
- batch_size: tried 32, 64, 128.
- max_steps: 500.
- shared_weights: True is mostly picked by Optuna.
- dropout_prob_theta: errors out; is this parameter implemented for NBEATS/x?

Let me know what else I can try to improve seasonality or generalization. Should I try any other models? Thanks in advance!!
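For reference, this is roughly how one of the tuned configurations is wired up. It's a minimal sketch, not my exact code: `Y_df`, `static_df`, and the `product_group` static column are placeholders, and the `optimizer` argument assumes a recent neuralforecast version.

```python
import torch
from neuralforecast import NeuralForecast
from neuralforecast.models import NBEATSx
from neuralforecast.losses.pytorch import HuberMQLoss

horizon = 12  # 12-month forecast horizon

# One trial configuration within the ranges listed above; in practice these
# values come out of the Optuna study rather than being hard-coded.
model = NBEATSx(
    h=horizon,
    input_size=2 * horizon,                      # 2x backcast is usually picked
    stack_types=['trend', 'seasonality'],        # trend + seasonality stacks only
    n_blocks=[4, 4],
    mlp_units=[[32, 32, 32], [32, 32, 32]],      # 3 layers x 32 units per block
    n_harmonics=10,
    n_polynomials=2,
    shared_weights=True,
    stat_exog_list=['product_group'],            # placeholder static exog column
    scaler_type='revin',                         # 'minmax' also helped
    loss=HuberMQLoss(quantiles=[0.5], delta=1.0),  # median quantile, tuned delta
    optimizer=torch.optim.AdamW,                 # assumes recent neuralforecast
    max_steps=500,
    batch_size=64,
)

nf = NeuralForecast(models=[model], freq='M')
nf.fit(df=Y_df, static_df=static_df)             # long-format frames: unique_id, ds, y
forecasts = nf.predict()
```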
m
Hello! This could be a drawback of training a global model. It may help to separate your series into two groups, 1. very clear seasonality and 2. volatile with no clear seasonality, and train a separate model for each; that could perform better (see the sketch below for one way to do the split). Otherwise, your approach is reasonable. You could also try NHITS and see if it generalizes better. It supports static exogs too.
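One simple way to do the split, sketched here with statsmodels (not necessarily the only option): compute the STL-based seasonal strength from Hyndman & Athanasopoulos' Forecasting: Principles and Practice, F_s = max(0, 1 - Var(remainder) / Var(seasonal + remainder)), per series and threshold it. `Y_df` and the 0.64 cutoff below are just illustrative placeholders.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

def seasonal_strength(y: pd.Series, period: int = 12) -> float:
    """STL-based seasonal strength:
    F_s = max(0, 1 - Var(remainder) / Var(seasonal + remainder))."""
    res = STL(y, period=period, robust=True).fit()
    denom = np.var(res.seasonal + res.resid)
    if denom == 0:
        return 0.0
    return max(0.0, 1.0 - np.var(res.resid) / denom)

# Y_df: long-format frame with columns unique_id, ds, y (monthly data).
strengths = (
    Y_df.set_index('ds')
        .groupby('unique_id')['y']
        .apply(seasonal_strength)
)

# 0.64 is only an illustrative cutoff; inspect the distribution and pick your own.
seasonal_ids = strengths[strengths >= 0.64].index
volatile_ids = strengths[strengths < 0.64].index
```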
r
Thank you for the prompt response, Marco. I will try out NHITS too. Can you point me to any research or techniques y'all found useful for segregating time series into seasonal and non-seasonal groups?