Christopher Lo (11/18/2022, 4:23 AM):
I timed `auto_arima` (less numba compile time) and found a massive variance in training times (between 3 and 60 seconds) across time series, as opposed to <5-10 seconds with `auto_ets`. After a meta-analysis of the fit times against the optimal orders, I found that the current implementation is still quite slow at higher orders and with seasonal orders.
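A minimal sketch of the kind of per-series timing described here, assuming statsforecast's `AutoARIMA` model; the random-walk `series` list below is a stand-in for whatever benchmark data was actually used:

```python
# Hedged sketch: per-series fit-time measurement, excluding numba compile
# time via a warm-up fit. The toy series are illustrative placeholders.
import time

import numpy as np
from statsforecast.models import AutoARIMA

rng = np.random.default_rng(0)
series = [rng.standard_normal(n).cumsum() for n in (100, 500, 1_000)]

# Warm-up fit so the one-time numba compilation cost is paid up front.
AutoARIMA(season_length=12).fit(series[0])

for i, y in enumerate(series):
    start = time.perf_counter()
    AutoARIMA(season_length=12).fit(y)
    print(f"series {i}: fit in {time.perf_counter() - start:.1f}s")
```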
Nevertheless, while looking through statsmodels, I found an issue suggesting the use of Chandrasekhar recursions (https://github.com/statsmodels/statsmodels/issues/6812) to dramatically speed up ARMA training times (a 2-4x or greater speed-up at higher orders with conditional SS). I've attached a screenshot from the paper that demonstrates this.
Note: AS154 is the current Kalman filter implementation used in `statsforecast` and `fable`.
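For reference, statsmodels (>= 0.12) already exposes this option on its state space models via the `filter_chandrasekhar` flag. A minimal sketch comparing the default Kalman filter against the Chandrasekhar recursions; the simulated series and the (3, 0, 3) order are illustrative assumptions:

```python
# Sketch: timing statsmodels' conventional Kalman filter vs. Chandrasekhar
# recursions (the option from statsmodels issue #6812).
import time

import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
y = rng.standard_normal(2_000)  # stationary toy series (placeholder)

def fit_seconds(use_chandrasekhar: bool) -> float:
    mod = SARIMAX(y, order=(3, 0, 3))  # higher orders benefit the most
    # Chandrasekhar recursions replace the Riccati-type update of the state
    # covariance; they apply to time-invariant models such as ARMA.
    mod.ssm.filter_chandrasekhar = use_chandrasekhar
    start = time.perf_counter()
    mod.fit(disp=False)
    return time.perf_counter() - start

print(f"Kalman (conventional): {fit_seconds(False):.2f}s")
print(f"Chandrasekhar:         {fit_seconds(True):.2f}s")
```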
Max (Nixtla) (11/18/2022, 3:37 PM):

Christopher Lo (11/18/2022, 3:53 PM):
`neuralforecast` develops!

Max (Nixtla) (11/18/2022, 5:47 PM):
> Do you think a thin wrapper around such an x13as model is a good idea?
Great idea 🙂