J T
10/23/2022, 2:35 AM
Hector G. LOPEZ-RUIZ
10/23/2022, 1:36 PM
Hector G. LOPEZ-RUIZ
10/24/2022, 10:47 AM
Max (Nixtla)
10/24/2022, 2:14 PM
Ginger Holt
10/26/2022, 11:03 PM
Y_fitted_df, S_fitted, tags_fitted = aggregate(Y_fitted_df, hiers)
When I run this, I get an error:
Y_fitted_df, S_fitted, tags_fitted = HierarchicalData.load(Y_fitted2_df)
TypeError: load() missing 1 required positional argument: 'group'
Eric Braun
10/31/2022, 4:51 PM
Tia Guo
10/31/2022, 7:26 PM
Tia Guo
11/03/2022, 7:49 PM
Chris Gervais
11/08/2022, 11:06 AM
Kevin Kho
11/09/2022, 5:58 PM
Kevin Kho
11/10/2022, 3:46 PM
Can the forecast function in the utils take exogenous regressors?
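Context for the TypeError above: if this is `datasetsforecast`'s `HierarchicalData`, its loader takes a directory and a dataset group name (e.g. `HierarchicalData.load('./data', 'TourismSmall')`), so the one-argument call fails. A dependency-free sketch of the failure mode, with a stand-in class whose `load` mimics that two-argument signature (all names here illustrative):

```python
# Stand-in mimicking a loader with signature load(directory, group).
# Calling it with only one positional argument reproduces the TypeError
# seen above; the fix is to pass the dataset group name as well.
class HierarchicalData:
    @classmethod
    def load(cls, directory, group):
        # A real loader would download/read the dataset here.
        return directory, group

try:
    HierarchicalData.load('Y_fitted2_df')  # second positional argument omitted
except TypeError as err:
    print(err)  # ...missing 1 required positional argument: 'group'

# Passing both arguments works:
HierarchicalData.load('./data', 'TourismSmall')
```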
Code snippet in thread
Jose Bordon
11/14/2022, 4:03 PM
Jose Bordon
11/14/2022, 4:03 PM
Jose Bordon
11/14/2022, 4:03 PM
Jose Bordon
11/14/2022, 4:04 PM
Jose Bordon
11/14/2022, 4:04 PM
Matias Calderini
11/14/2022, 9:47 PM
fede (nixtla) (they/them)
11/15/2022, 8:53 PM
HierarchicalForecast, show some love community ❤️
https://twitter.com/fede_gr/status/1592621393975586817
Chris Gervais
11/17/2022, 12:45 PM
Chris Gervais
11/17/2022, 12:48 PM
mike
11/17/2022, 10:35 PM
We're moving from fable to statsforecast in our production environment. However, I'm running into an issue with replication. Is it possible to Box-Cox transform prior to the forecasting step and back-transform to produce the forecast mean rather than the median? I've tried using the scipy Python package, but it's producing lower predictions than the R implementation. Any ideas / thoughts?
Examples:
ETS -> fable::ETS(fabletools::box_cox(qty + 1, lambda = 0.3), opt_crit = "mae")
ARIMA -> fable::ARIMA((fabletools::box_cox(qty + 1, lambda = 0.4)) ~ PDQ(period = 13), stepwise = TRUE)
THETA -> THETA(fabletools::box_cox(qty + 1, lambda = 0.1) ~ season(method = "additive"))
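On the mean-vs-median point above: naively back-transforming a Box-Cox-scale point forecast yields (approximately) the forecast median, which sits below the mean for right-skewed data, consistent with the lower Python predictions. A pure-NumPy sketch of the standard bias adjustment that targets the mean instead (the transforms match `scipy.special.boxcox`/`inv_boxcox` for lambda != 0; the forecast and its variance on the transformed scale are stand-ins here, in practice they come from the fitted model):

```python
import numpy as np

def boxcox(x, lam):
    # Same as scipy.special.boxcox for lam != 0
    return (x ** lam - 1.0) / lam

def inv_boxcox(z, lam):
    return (lam * z + 1.0) ** (1.0 / lam)

lam = 0.3
w = boxcox(np.array([10.0, 12.0, 15.0, 11.0, 14.0]) + 1.0, lam)  # +1 mirrors the fable calls

w_hat = w.mean()        # stand-in for a point forecast on the transformed scale
sigma2 = w.var(ddof=1)  # stand-in for the forecast variance on that scale

# Naive back-transform: approximately the forecast *median* on the original scale
median_fc = inv_boxcox(w_hat, lam) - 1.0

# Bias-adjusted back-transform targeting the *mean*
# (second-order Taylor correction, as in Hyndman & Athanasopoulos, FPP)
mean_fc = inv_boxcox(w_hat, lam) * (
    1.0 + sigma2 * (1.0 - lam) / (2.0 * (lam * w_hat + 1.0) ** 2)
) - 1.0

# mean_fc > median_fc whenever sigma2 > 0 and lam < 1
```

If fable's pipeline applies this kind of mean adjustment while the Python pipeline back-transforms naively, that alone would explain systematically lower Python predictions.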
Jose Bordon
11/17/2022, 10:45 PM
Jose Bordon
11/17/2022, 10:45 PM
Jose Bordon
11/17/2022, 10:45 PM
Christopher Lo
11/18/2022, 4:23 AM
I benchmarked auto_arima (less numba compile time) and found a massive variance in training times (between 3-60 seconds) across time series, as opposed to <5-10 seconds with auto_ets. After doing a meta-analysis of the fit times against the optimal orders, I found that the current implementation is still quite slow at higher orders and with seasonal orders.
Nevertheless, while looking through statsmodels, I found an issue that suggests using Chandrasekhar recursions (https://github.com/statsmodels/statsmodels/issues/6812) to dramatically speed up ARMA training times (up to a >2-4x speed-up for higher orders with conditional SS). I've attached a screenshot from the paper that demonstrates this.
Note: AS154 is the current Kalman filter implementation used in statsforecast and fable.
Andrei Tulbure
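For context on why fit times blow up with order: in the Kalman-filter likelihood for ARMA(p, q), each step updates an m x m state covariance with m = max(p, q + 1), so per-step cost grows roughly as O(m^3); the Chandrasekhar recursions replace that full covariance update with low-rank recursions. A rough pure-NumPy sketch of the standard (non-Chandrasekhar) filter likelihood, not statsforecast's AS154 code, with illustrative model matrices:

```python
import numpy as np

def kalman_loglik(y, T, Z, Q, H, m):
    """Gaussian log-likelihood via the standard Kalman filter for
    a_{t+1} = T a_t + eta_t (eta ~ N(0, Q)),  y_t = Z a_t + eps_t (eps ~ N(0, H)).
    The P-update below is the O(m^3)-per-step part Chandrasekhar recursions avoid."""
    a = np.zeros(m)
    P = np.eye(m)  # rough prior; a production filter would initialize properly
    ll = 0.0
    for yt in y:
        f = Z @ P @ Z + H                   # innovation variance (scalar obs here)
        v = yt - Z @ a                      # innovation
        ll += -0.5 * (np.log(2 * np.pi * f) + v * v / f)
        k = T @ P @ Z / f                   # Kalman gain
        a = T @ a + k * v
        P = T @ P @ T.T - np.outer(k, Z @ P @ T.T) + Q  # O(m^3) covariance update
    return ll

m = 5                                  # state dim, e.g. max(p, q + 1) for ARMA(5, 4)
rng = np.random.default_rng(0)
T = np.diag(np.full(m - 1, 1.0), k=1)  # companion-style transition matrix
T[-1, :] = 0.1 / m                     # small coefficients keep the dynamics stable
Z = np.zeros(m)
Z[0] = 1.0
ll = kalman_loglik(rng.standard_normal(200), T, Z, 0.1 * np.eye(m), 1.0, m)
```

Since the per-observation cost is dominated by the T P T' product, doubling the order roughly octuples that term, which matches the observed sensitivity of `auto_arima` fit times to higher and seasonal orders.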
11/22/2022, 1:26 PM
MALISETTY SUMANTH
11/25/2022, 11:59 AM
MALISETTY SUMANTH
11/25/2022, 12:00 PM
MALISETTY SUMANTH
11/25/2022, 12:02 PM
MALISETTY SUMANTH
11/25/2022, 2:20 PM