# general
Max (Nixtla):
Hi @Christopher Lo, thanks for posting here. We really enjoyed your talk at the sktime dev days and felt humbled that you mentioned us :) Just yesterday we were discussing the recursion method you mentioned, and we're intrigued by it. As we understand it, the massive speed gains apply only to certain ARIMAs, so at the moment we are not sure the gains justify the implementation effort. That said, we are very happy to discuss the issue with you; your opinion will surely be very helpful. (Have you played around with it, or do you have some intuitions?) Regarding longer season lengths, we have had good results using MSTL (with ARIMA).
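For concreteness, here is a minimal sketch of that MSTL-with-ARIMA setup, assuming a recent statsforecast release where `MSTL` accepts a `trend_forecaster`; the toy panel from `generate_series` and the hourly frequency with daily/weekly seasonality are illustrative assumptions, not part of the discussion above:

```python
from statsforecast import StatsForecast
from statsforecast.models import MSTL, AutoARIMA
from statsforecast.utils import generate_series

# Toy hourly panel with columns unique_id, ds, y (4 weeks per series).
df = generate_series(n_series=2, freq="H", min_length=24 * 28, max_length=24 * 28)

models = [
    MSTL(
        season_length=[24, 24 * 7],    # daily and weekly seasonal periods
        trend_forecaster=AutoARIMA(),  # ARIMA fit on the deseasonalized series
    )
]
sf = StatsForecast(models=models, freq="H", n_jobs=-1)
forecasts = sf.forecast(df=df, h=24)  # day-ahead forecast per series
```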
Christopher Lo:
Thanks for the response, Max! I've run a small benchmark (~50 time series) comparing the auto_arima in Gretl (the C econometrics library), which uses AS 197, against statsforecast, and found that the variance in fit times is much lower (1-10 seconds vs. 3-60). Once again, it seems that statsforecast's auto_arima is "fast enough" if we know beforehand that the time series is relatively well-behaved (e.g. AR(1)MA(1)), or if (as you suggested) we apply an MSTL decomposition beforehand. That said, no other implementation in Python compares! To clarify, I found that the longer fit times come from more "complex" seasonal time series, i.e. when the step-wise search has to examine higher orders before early stopping. So in production, I've halved the max p, q, P, and Q.
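In code, that production tweak looks roughly like this: a sketch against statsforecast's `AutoARIMA`, whose `max_p`/`max_q`/`max_P`/`max_Q` caps default to 5/5/2/2 (mirroring R's forecast::auto.arima); the season length here is an assumed example:

```python
from statsforecast.models import AutoARIMA

# Halve the default order caps so the step-wise search
# stops earlier on "complex" seasonal series.
fast_arima = AutoARIMA(
    max_p=2, max_q=2,  # non-seasonal AR/MA caps (defaults: 5)
    max_P=1, max_Q=1,  # seasonal AR/MA caps (defaults: 2)
    season_length=24,  # e.g. hourly data with daily seasonality
)
```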

https://www.youtube.com/watch?v=uyIlAO390v4

Also @Max (Nixtla), thanks for watching the talk and looking into the suggestion! Hopefully I've made a convincing case for moving towards global models in a vertically scalable setting. Looking forward to seeing how `neuralforecast` develops!
P.S. Your MSTL-ARIMA suggestion (à la https://www.census.gov/data/software/x13as.html) should probably become the standard "AR" benchmarking technique instead of plain auto_arima; it is only 2-3x slower than all the other benchmarking models. Do you think a thin wrapper around such a model, like x13as, is a good idea?
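For context on what such a thin wrapper can look like, statsmodels already ships one around the Census X-13ARIMA-SEATS binary. A minimal sketch, assuming the x13as executable is installed and on your PATH (the toy monthly series is made up for illustration):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.x13 import x13_arima_analysis

# Toy monthly series with a trend and annual seasonality.
idx = pd.date_range("2015-01-01", periods=96, freq="MS")
y = pd.Series(
    50 + 0.5 * np.arange(96) + 10 * np.sin(2 * np.pi * np.arange(96) / 12),
    index=idx,
)

# x13_arima_analysis shells out to the x13as binary under the hood.
result = x13_arima_analysis(y, forecast_periods=12)
print(result.seasadj.head())  # seasonally adjusted series
print(result.trend.head())    # extracted trend component
```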
m
I really enjoyed reading this thread! About Anodot, I'm curious what they are doing.
But maybe I'm also somewhat skeptical.
> Do you think a thin wrapper around such a model, like x13as, is a good idea?
Great idea 🙂
Also, how do you patent an ARIMA?