# timegpt
s
FYI: Quick single use case using 2 years' worth of data (1-year input and 1-year test). Compared a highly tweaked (and trained) NeuralProphet model against out-of-the-box, no-tweaks, one-shot TimeGPT; the MAPE was only 1.4% higher with TimeGPT. Great job Nixtla.
🙌 1
👍 2
m
That's amazing! Probably worth trying fine-tuning TimeGPT a bit and see if it improves further! Let us know if you try that.
s
Thanks Marco. What is odd is that when holidays are included, the MAPE decreases only from 8.47 to 8.33. I had higher expectations. However, after thinking about it, given that TimeGPT is trained on many other time series, holidays clearly do not affect all series equally.
And fine-tuning (10, 20 ... steps) raised the MAPE as well. My guess is that I am not applying fine-tuning correctly. Still quite amazed at TimeGPT's out-of-the-box forecasting accuracy.
Here are the results using out-of-the-box TimeGPT settings. Not bad. Some holidays are significant anomalies in this business vertical.
m
Hmm, I see! Maybe fine-tuning makes the model overfit. You could also try specifying `finetune_loss='mape'` along with `finetune_steps=10` and see if you can get more improvements.
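For reference, a minimal sketch of how those two parameters might be passed, assuming the `nixtla` client library and Nixtla's long-format (`ds`/`y`) column convention; the toy DataFrame and the `mape` helper are illustrative stand-ins, not the poster's actual data or code, and the forecast call itself is commented out because it needs an API key:

```python
import pandas as pd

# Toy daily series standing in for the real 2-year dataset (values are arbitrary).
df = pd.DataFrame({
    "ds": pd.date_range("2022-01-01", periods=730, freq="D"),
    "y": [float(i % 7 + 1) for i in range(730)],
})

# 1-year input, 1-year test, as in the use case above.
train, test = df.iloc[:365], df.iloc[365:]

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

# The fine-tuned forecast call (not run here; requires an API key):
# from nixtla import NixtlaClient
# client = NixtlaClient(api_key="...")
# fcst = client.forecast(
#     df=train,
#     h=len(test),
#     finetune_steps=10,      # a few gradient steps on this one series
#     finetune_loss="mape",   # align the fine-tuning objective with the metric
# )
# print(mape(test["y"].tolist(), fcst["TimeGPT"].tolist()))
```

Aligning `finetune_loss` with the reported metric (MAPE here) is the point of Marco's suggestion: the default fine-tuning objective may be optimizing a different error than the one being measured.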
👀 1