# neural-forecast
r
Greetings. I'm testing most nf automodels on my dataset and there's no clear winner; sometimes one model outperforms the others. Can anyone please suggest an approach for choosing which model to use at a given point? Something like merging all the forecasts into one dataframe and, for example, using a random forest to tell which ones are closer to the real series (maybe weighted).
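A rough sketch of that stacking idea, assuming the out-of-sample forecasts have already been merged into one DataFrame with a column per model plus the observed `y`. The column names in `MODEL_COLS` and the helper names are placeholders, not part of neuralforecast's API:

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

MODEL_COLS = ["MLP", "GRU", "LSTM"]  # placeholder model names

def fit_blender(cv_df: pd.DataFrame) -> RandomForestRegressor:
    """Fit a random-forest meta-model on backtest forecasts.

    Expects one row per (unique_id, ds) with one forecast column per
    model in MODEL_COLS plus the observed value in `y`.
    """
    rf = RandomForestRegressor(n_estimators=200, random_state=0)
    rf.fit(cv_df[MODEL_COLS], cv_df["y"])
    return rf

def blend(rf: RandomForestRegressor, fcst_df: pd.DataFrame) -> pd.Series:
    """Combine new per-model forecasts into one blended forecast."""
    return pd.Series(rf.predict(fcst_df[MODEL_COLS]), index=fcst_df.index)
```

One caveat with this approach: the meta-model should be fit on backtest (out-of-sample) forecasts, not in-sample fits, otherwise the blender mostly learns which base model overfits hardest.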
o
Surely one model performs best overall on the metric of your choice? Obviously that model will not Pareto-dominate all the other models; that's almost never the case. If you want to go the ensembling route, simple averaging usually works quite well. But I'd just pick the best overall model; it keeps the pipeline much simpler.
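A minimal sketch of that simple-averaging ensemble, assuming the per-model forecasts are already merged into one DataFrame; `model_cols` and the optional weights are placeholders:

```python
import pandas as pd

def average_ensemble(fcst_df: pd.DataFrame, model_cols, weights=None):
    """Row-wise mean of per-model forecast columns, optionally weighted.

    `weights` maps model column name -> weight; normalized internally.
    """
    if weights is None:
        return fcst_df[model_cols].mean(axis=1)
    total = sum(weights.values())
    return sum(w * fcst_df[m] for m, w in weights.items()) / total
```

Weights could come, for example, from each model's inverse validation error.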
r
Thanks Olivier, but I think I didn't explain it correctly. The performance variation is much higher than I expected. For example, on one set GRU is the best, while on another it's one of the worst and MLP wins. I can't find any clear pattern. I wish I could find a one-size-fits-all model, it would indeed make my pipeline much easier, but nothing comes close to a best overall.
o
What does a set mean in this context?
r
The dataset I train on.
Sorry, I think I didn't understand your question.
o
Just pick the best model per dataset? It's not uncommon that different datasets give different best models.
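A sketch of that per-dataset selection via backtesting with neuralforecast's `cross_validation`; the file path, horizon, and hyperparameters below are illustrative assumptions:

```python
import pandas as pd
from neuralforecast import NeuralForecast
from neuralforecast.models import GRU, MLP

h = 12  # forecast horizon; illustrative
nf = NeuralForecast(
    models=[MLP(h=h, input_size=2 * h), GRU(h=h, input_size=2 * h)],
    freq="M",
)

# Hypothetical path; expects neuralforecast's long format (unique_id, ds, y).
df = pd.read_parquet("my_dataset.parquet")

# Backtest over 3 rolling windows; output has `y` plus one column per model.
cv = nf.cross_validation(df=df, n_windows=3)

# Pick the model with the lowest MAE on this dataset.
maes = {m: (cv[m] - cv["y"]).abs().mean() for m in ["MLP", "GRU"]}
best = min(maes, key=maes.get)
print(f"Best model for this dataset: {best} (MAE={maes[best]:.3f})")
```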
r
Ok Olivier, thanks. This information, _"It's not uncommon that different datasets give different best models"_, was precious!