Indar Karhana
03/04/2025, 5:19 PM

Steffen Runge
03/07/2025, 11:32 AM

Arvind Puthucode
03/17/2025, 9:02 AM

Will Atwood
03/17/2025, 5:36 PM

Rodrigo Sodré
03/19/2025, 1:35 AM

Luis Enrique Patiño
03/19/2025, 9:20 PM

hadar sharvit
03/23/2025, 1:53 PM
def fit(df, ...):
    df = feature_encoding(df)
    df = normalize(df)
    for X, y in dataloader(...):
        pred = model(X) ...
in the training loop:
def fit(df, ...):
    for X, y in dataloader(...):
        X = feature_encoding(X)
        X = normalize(X)
        pred = model(X) ...
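A minimal self-contained sketch of the difference (not from the thread; feature_encoding/normalize are stood in by a sklearn StandardScaler): fitting the transform once on the full frame applies the same statistics to every batch, while fitting inside the loop re-estimates them per batch, so identical rows can end up scaled differently depending on which batch they land in.

import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, scale=2.0, size=(1000, 3))
batches = np.array_split(X, 10)

# Variant 1: fit once on the full data, then transform each batch
# with the same global mean/std.
global_scaler = StandardScaler().fit(X)
globally_scaled = [global_scaler.transform(b) for b in batches]

# Variant 2: fit inside the loop; each batch gets its own mean/std,
# so the same value maps to different scaled values across batches.
per_batch_scaled = [StandardScaler().fit_transform(b) for b in batches]

print(globally_scaled[0][0], per_batch_scaled[0][0])  # generally differ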
Alex
03/27/2025, 1:35 PM

Luis Enrique Patiño
03/31/2025, 11:11 PM

Samuel
04/13/2025, 4:38 PM

Heitor Carvalho Pinheiro
04/14/2025, 2:45 AM
About the plot_series function in utils: can anyone tell me why there's a gap between the training data and the predictions? It doesn't bother me much, but some people might find it weird when I'm presenting. Is there any way to get rid of that gap between the series?
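One hedged workaround, assuming the usual unique_id/ds/y frames and utilsforecast.plotting.plot_series (df and fcst_df are placeholder names): the gap appears because the first forecast timestamp sits one frequency step after the last observed one, so the two lines never share a point. Copying each series' last actual value into the forecast frame bridges them.

import pandas as pd
from utilsforecast.plotting import plot_series

# df: training frame (unique_id, ds, y); fcst_df: point forecasts
# (unique_id, ds, one column per model). Both names are hypothetical.
last_obs = df.sort_values('ds').groupby('unique_id').tail(1)
bridge = last_obs[['unique_id', 'ds']].copy()
model_cols = [c for c in fcst_df.columns if c not in ('unique_id', 'ds')]
for col in model_cols:
    bridge[col] = last_obs['y'].to_numpy()  # each model line starts at the last actual
fcst_plot = pd.concat([bridge, fcst_df]).sort_values(['unique_id', 'ds'])
fig = plot_series(df, forecasts_df=fcst_plot)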
Samuel
04/15/2025, 6:44 AM

Rodrigo Sodré
04/18/2025, 7:37 PM

Samuel
04/22/2025, 7:46 AM

jan rathfelder
04/24/2025, 9:31 PM

Valeriy
04/25/2025, 7:46 AM

Valeriy
04/28/2025, 8:11 AM

Renan Avila
04/29/2025, 7:44 PM

Jason Phillips
04/30/2025, 5:21 AM

Jason Phillips
04/30/2025, 5:22 AM

Steven Smith
04/30/2025, 12:37 PM

Steven Smith
04/30/2025, 12:47 PM

Steven Smith
04/30/2025, 3:02 PM

Luis Enrique Patiño
05/08/2025, 3:26 PM
from statsforecast import StatsForecast
from statsforecast.models import AutoETS, SeasonalExponentialSmoothing
from statsforecast.utils import ConformalIntervals

intervals = ConformalIntervals(h=12, n_windows=3)
sf = StatsForecast(
    models=[
        AutoETS(season_length=52, model='AAA', prediction_intervals=intervals),
    ],
    freq='W-MON',
    n_jobs=-1,
    verbose=True,
    fallback_model=SeasonalExponentialSmoothing(
        season_length=52, alpha=0.95, prediction_intervals=intervals
    ),
)
levels = [95, 90]
y_pred = sf.forecast(h=12, df=df, level=levels)
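For reference, with level=[95, 90] the returned frame should carry one point-forecast column per model plus lower/upper bounds named after the model and level, following StatsForecast's usual naming convention:

# Expect columns like: unique_id, ds, AutoETS,
# AutoETS-lo-95, AutoETS-lo-90, AutoETS-hi-90, AutoETS-hi-95
print(y_pred.columns.tolist())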
Valeriy
05/13/2025, 11:16 AM

Luis Enrique Patiño
05/21/2025, 4:52 PM
from functools import partial

from utilsforecast.losses import rmse, mae, mase, rmsse
from utilsforecast.evaluation import evaluate

# model columns are everything except the id/time/cutoff/target columns
models = cv_df.drop('unique_id', 'ds', 'cutoff', 'y').columns
metrics = [
    mae,
    rmse,
    partial(mase, seasonality=52),
    partial(rmsse, seasonality=52),
]
cv_df_eval = cv_df.drop('cutoff')
evaluation = evaluate(
    cv_df_eval,
    metrics=metrics,
    models=models,
    train_df=cv_df_eval,
)
evaluation.display()
I'm using Spark with statsforecast. My question is: is this the correct way to evaluate the CV results? I'm planning to pick the "best model" from the evaluation for my prod forecast.
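A hedged pandas sketch of the "best model" step, assuming evaluation can be collected to a pandas frame with evaluate's usual output shape (one row per unique_id and metric, one column per model):

# Hypothetical continuation: collect to pandas (if evaluation is a Spark
# frame), average each model's score across series per metric, and take
# the model with the lowest error.
eval_pd = evaluation.toPandas()
summary = eval_pd.drop(columns='unique_id').groupby('metric').mean()
best_per_metric = summary.idxmin(axis=1)  # lowest-error model per metric
print(best_per_metric)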
Jing Qiang Goh
05/25/2025, 10:49 AM
You have reached the maximum number of finetuned models. Please delete existing models before creating a new one.
However, I followed this guide https://nixtlaverse.nixtla.io/nixtla/docs/tutorials/reusing_finetuned_models.html and used nixtla_client.delete_finetuned_model() to delete the models returned by nixtla_client.finetuned_models(), but it does not help to address the issue. Is there anything I could have missed here?
cc: @Marco
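In case it helps others hitting the same limit, a hedged sketch of the delete loop; the attribute and method names follow the linked tutorial, so adjust if the client returns plain dicts instead of objects:

from nixtla import NixtlaClient

nixtla_client = NixtlaClient(api_key='...')  # your API key
# Delete every finetuned model by id, then verify the list is empty.
for model in nixtla_client.finetuned_models():
    nixtla_client.delete_finetuned_model(model.id)
print(nixtla_client.finetuned_models())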
joel iYush
05/27/2025, 8:43 PM
I am experimenting with statsforecast to assess whether it is better than our current approach using statsmodels. To start, I just chose the models AutoARIMA, AutoETS and CrostonOptimized. Forecasting all the items (after constructing the panel DataFrame) and calling sf.forecast works, but when I start evaluating using cross_validation I get this error coming from AutoETS:
File "/home/joiyushkay/dev/biocartis_demand_planning/forecast_assessment/apps/core/data_process/forecasting.py", line 47, in get_forecasts
evaluation_df = evaluation.evaluate()
^^^^^^^^^^^^^^^^^^^^^
File "/home/joiyushkay/dev/biocartis_demand_planning/forecast_assessment/apps/core/data_process/forecasting.py", line 97, in evaluate
cross_validation_df = self.sf.cross_validation(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/joiyushkay/dev/biocartis_demand_planning/.venv/lib/python3.12/site-packages/statsforecast/core.py", line 1588, in cross_validation
return super().cross_validation(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/joiyushkay/dev/biocartis_demand_planning/.venv/lib/python3.12/site-packages/statsforecast/core.py", line 1007, in cross_validation
res_fcsts = self.ga.cross_validation(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/joiyushkay/dev/biocartis_demand_planning/.venv/lib/python3.12/site-packages/statsforecast/core.py", line 339, in cross_validation
raise error
File "/home/joiyushkay/dev/biocartis_demand_planning/.venv/lib/python3.12/site-packages/statsforecast/core.py", line 336, in cross_validation
res_i = model.forecast(**forecast_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/joiyushkay/dev/biocartis_demand_planning/.venv/lib/python3.12/site-packages/statsforecast/models.py", line 790, in forecast
fcst = forecast_ets(mod, h=h, level=level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/joiyushkay/dev/biocartis_demand_planning/.venv/lib/python3.12/site-packages/statsforecast/ets.py", line 1241, in forecast_ets
fcst = pegelsfcast_C(h, obj)
^^^^^^^^^^^^^^^^^^^^^
File "/home/joiyushkay/dev/biocartis_demand_planning/.venv/lib/python3.12/site-packages/statsforecast/ets.py", line 929, in pegelsfcast_C
states = obj["states"][-1, :]
~~~~~~~~~~~~~^^^^^^^
IndexError: too many indices for array: array is 1-dimensional, but 2 were indexed
Any suggestions?
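One hedged workaround for per-series failures like this: pass a fallback_model to StatsForecast, which it substitutes for a series when the primary model raises, including during cross_validation. The freq and season_length below are placeholders for your data:

from statsforecast import StatsForecast
from statsforecast.models import AutoARIMA, AutoETS, CrostonOptimized, SeasonalNaive

sf = StatsForecast(
    models=[AutoARIMA(), AutoETS(), CrostonOptimized()],
    freq='W',  # placeholder: match your data's frequency
    fallback_model=SeasonalNaive(season_length=52),  # used when a model errors
)
cv_df = sf.cross_validation(df=df, h=12, n_windows=3)  # df: your panel frame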
Liu Chen
05/29/2025, 7:10 AM

Chris Naus
05/29/2025, 2:56 PM