Tracy Teal
06/03/2024, 11:55 PM
Naren Castellon
06/05/2024, 9:43 PM
Farzad E
06/06/2024, 1:49 AM
from statsforecast import StatsForecast
from statsforecast.models import AutoCES, SeasonalNaive

models = [
    AutoCES(model='S', season_length=52)
]
sf = StatsForecast(
    df=df_train,
    models=models,
    freq='W',
    fallback_model=SeasonalNaive(season_length=52)
)
frcst_df = sf.forecast(h=52, level=[95], X_df=df_test)
Natasha Watkins
06/07/2024, 12:51 AM
Naren Castellon
06/08/2024, 3:08 PM
from statsforecast import StatsForecast
from statsforecast.models import AutoARIMA, SeasonalNaive, SklearnModel
from statsforecast.utils import ConformalIntervals
from sklearn.linear_model import Lasso, Ridge
from sklearn.ensemble import RandomForestRegressor

models = [
    AutoARIMA(season_length=season_length),
    SeasonalNaive(season_length=season_length),
    SklearnModel(Lasso()),
    SklearnModel(Ridge()),
    SklearnModel(RandomForestRegressor()),
]
# Forecast (sf is a StatsForecast instance built from `models`)
preds = sf.forecast(
    df=train,
    h=120,
    X_df=test,  # exogenous variables
    prediction_intervals=ConformalIntervals(n_windows=5, h=120),
    level=[95],
)
# Cross-validation
sf.cross_validation(df=train, h=120, n_windows=5)
Natasha Watkins
06/10/2024, 6:23 PM
Is there a way to map unique_id to the fitted models? I'm trying to list out the arima_string for every model.
Makarand Batchu
06/12/2024, 4:45 PM
Is there a way for model.predict() to return the forecasts of only a specific unique_id that it was trained on? For example, if model was trained on data for unique_ids 1, 2, 3, is there a way for model.predict() to return the forecasts for only a subset of the unique_ids?
Jeff Tackes
06/13/2024, 1:56 PM
thomas delaunait
06/14/2024, 8:33 PM
Didier Merk
06/18/2024, 12:57 PM
from statsforecast import StatsForecast
from statsforecast.models import MSTL, AutoARIMA

mstl_model = [MSTL(
    season_length=[7, 30, 365],   # seasonalities of the time series
    trend_forecaster=AutoARIMA()  # model used to forecast the trend
)]
sf = StatsForecast(models=mstl_model, freq='D')
sf.fit(df=Y_train_df)
# Make prediction
preds = sf.predict(h=30)
As expected, when I test this on a smaller subset of 100 time series it already takes quite some time (around 3 minutes). Is there an efficient way to do a statistical forecast on this many time series (each possibly with their own trends, seasonality, residuals)?
In the documentation it states, for example: "Use the fit method to fit each model to each time series. In this case, we are just fitting one model to one series. Check this guide to learn how to fit many models to many series." This links to the TimeGPT model. Is this an indication that it will simply take too long to do forecasting for this many time series? Or do you have any other recommendations for statistical benchmarks on this amount of time series? Thanks in advance!
sergio lopez
06/25/2024, 12:48 AM
Didier Merk
06/26/2024, 4:15 PM
For AutoARIMA:
from statsforecast import StatsForecast
from statsforecast.models import MSTL, AutoARIMA, AutoETS

# Initialize the model
mstl_model = [MSTL(
    season_length=[7, 30, 365],   # seasonalities of the time series
    trend_forecaster=AutoARIMA()  # model used to forecast the trend
)]
auto_arima_model = StatsForecast(models=mstl_model, freq='D')
# Fit the model
auto_arima_model.fit(df=Y_train_df)
# Make prediction
arima_model_pred = auto_arima_model.predict(h=30)
For AutoETS:
# Initialize the model
mstl_model_ets = [MSTL(
    season_length=[7, 30, 365],
    trend_forecaster=AutoETS(model=["Z", "Z", "N"])
)]
auto_ets_model = StatsForecast(models=mstl_model_ets, freq='D')
# Fit the model
auto_ets_model.fit(df=Y_train_df)
# Make prediction
ets_model_pred = auto_ets_model.predict(h=30)
My goal is simple: use ARIMA and Exponential Smoothing on a very large set of time series. From the documentation I am trying to determine whether this is the correct approach; however, the models very often give almost identical predictions (see the image below for two examples). Maybe I am not understanding the theory behind both models correctly, but is this behaviour expected? Thanks for the help and the amazing library!
David Rice
06/28/2024, 12:44 PM
virgilio espina
06/28/2024, 4:25 PM
Xubin Lou
07/01/2024, 4:18 PM
Guillaume GALIE
07/01/2024, 7:31 PM
sergio lopez
07/01/2024, 9:12 PM
Jeff Tackes
07/01/2024, 9:17 PM
Makarand Batchu
07/02/2024, 9:44 AM
I'm facing an issue with sf.cross_validation. I have the code below to find the crossvalidation_df; sf is a StatsForecast instance holding the models that I want to cross-validate. Please note that this was working a few days ago, and the only thing that has changed is the amount of data being passed as train.
crossvalidation_df = sf.cross_validation(
    df=train,
    h=int(args.horizon),
    step_size=int(args.horizon / 4),
    n_windows=3
)
But I get the error:
Exception: no model able to be fitted
Can you please help me understand the reason for this exception?
Filipa Encarnação Louzeiro
07/02/2024, 3:11 PM
from statsforecast import StatsForecast
from statsforecast.models import AutoARIMA

models = [AutoARIMA()]
sf = StatsForecast(models=models, freq='M', n_jobs=1)
fcst = sf.forecast(df=train, h=1, X_df=X_test, level=[90])
fcst.head()
The train and X_test dataframes are like the ones in the example https://nixtlaverse.nixtla.io/statsforecast/docs/how-to-guides/exogenous.html
Do you have any idea of what is going on? Thanks in advance!
Omri Kramer
07/05/2024, 4:12 PM
Can someone explain what the freq argument in StatsForecast() is used for? More specifically, if I forecast on an hourly time series that has some gaps in the data, will there be any kind of gap filling/interpolation behind the scenes?
Didier Merk
07/09/2024, 6:47 PM
from statsforecast import StatsForecast
from statsforecast.models import MSTL, AutoETS

# Initialize the model
mstl_model_ets = [MSTL(
    season_length=[7, 30, 365],
    trend_forecaster=AutoETS(model=["Z", "Z", "N"])
)]
auto_ets_model_insample = StatsForecast(models=mstl_model_ets, freq='D')
# Fit the model
auto_ets_model_insample.fit(df=Y_train_df)
Now I would like to see the parameters that the model has found for each individual time series, but I can't find in the documentation exactly how to do this.
I have a similar model that uses AutoARIMA as the trend forecaster, and I saw in the documentation that this line of code would do what I wanted:
from statsforecast.arima import arima_string
arima_string(auto_arima_model_insample.fitted_[0,0].model_)
However, I get a KeyError when running that line of code (see image). It possibly has to do with the fact that it is an MSTL model. Any help or pointers by any chance?
Kyle Schmaus
07/09/2024, 11:09 PM
Neal Knight
07/10/2024, 6:09 PM
I have a question about the WindowAverage model: I noticed that when running the .forecast method, WindowAverage didn't actually calculate the average over the specified window but rather just used the naive forecast from the last period. However, when I do the normal .fit & .predict, it behaves as normal. Is this intended?
Matthew Lesko
07/17/2024, 9:29 PM
TheOraware
07/24/2024, 4:10 AM
Ml Club
07/24/2024, 12:48 PM
I am using the HoltWinters model from statsforecast. When I fit the model and try to use the predict_in_sample function, I get an error. Then when I use forecast_fitted_values, it asks me to use the model.forecast() method, not model.predict. Can you please help me figure out what I am missing here?
Ml Club
07/24/2024, 12:48 PM
from statsforecast import StatsForecast
from statsforecast.models import HoltWinters

def train_multiplicative_hwes_model(data, horizon):
    cNames = data.columns
    models = [HoltWinters(season_length=12, error_type='M', alias='Multiplicative HWES')]  # AutoARIMA(season_length=12)
    fcst = StatsForecast(
        models=models,
        freq='MS',
    )
    fcst.fit(df=data, id_col=cNames[0], time_col=cNames[1], target_col=cNames[2])
    df_predict = fcst.predict(h=horizon)
    # df_predict = fcst.forecast(df=data, h=horizon, fitted=True, id_col=cNames[0], time_col=cNames[1], target_col=cNames[2])
    df_backward = fcst.forecast_fitted_values()
    return df_predict.reset_index(), df_backward.reset_index()
Ml Club
07/24/2024, 12:48 PM
I:\Development\Agnostic Forecasting with AADS\Agnostic_Forecasting_Model\.venv\Lib\site-packages\statsforecast\core.py:492: FutureWarning: In a future version the predictions will have the id as a column. You can set the `NIXTLA_ID_AS_COL` environment variable to adopt the new behavior and to suppress this warning.
  warnings.warn(
Error training model Multiplicative HWES: Please run `forecast` method using `fitted=True`
Ml Club
07/24/2024, 12:49 PM