# general
  • Indar Karhana (03/04/2025, 5:19 PM)
    Hi folks, is there an implementation of pinball loss in Nixtla, or a similar loss function for probabilistic forecasts?
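For anyone looking this up later: the pinball (quantile) loss is simple enough to compute by hand while checking what the library provides. A minimal NumPy sketch (not Nixtla's implementation):

```python
import numpy as np

def pinball_loss(y, y_hat, q):
    """Pinball (quantile) loss at quantile q: under-predictions are
    penalized by q, over-predictions by (1 - q)."""
    diff = np.asarray(y, dtype=float) - np.asarray(y_hat, dtype=float)
    return float(np.mean(np.maximum(q * diff, (q - 1) * diff)))

loss = pinball_loss([10.0, 12.0, 14.0], [11.0, 11.0, 15.0], q=0.9)
```

Averaging this over several quantiles gives a standard score for probabilistic forecasts; utilsforecast.losses (used elsewhere in this channel) is a good place to check for a built-in version.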
  • Steffen Runge (03/07/2025, 11:32 AM)
    Not sure whether it's just me or it applies to others, but recently when I run the nf models in Colab (CPU) they run very slowly. Is anybody else experiencing this?
  • Arvind Puthucode (03/17/2025, 9:02 AM)
    How do I do a hierarchical forecast for a grouped Spark df? I notice that you provide the reconcile method for one hierarchical series, but how do I apply this to grouped series? E.g., one set is ['country', 'brand', 'device_no'] and another is ['country', 'device_no', 'state']. Can the value hierarchy be maintained across these trees as well, i.e., the value for device_no S23 must be the same after rollup in tree 1 as well as tree 2?
  • Will Atwood (03/17/2025, 5:36 PM)
    My team and I have been working with Nixtla for several months now, and we feel like there must be a better solution for how we are backtesting with the cross-validation method. We currently use a retail calendar, which creates non-uniform window sizes for our backtest. We hacked together a way to still use the cross-validation method, but we are now running into issues with our exogenous variables. We thought being able to pass something like a list of dates for the cutoffs would work, but I wanted to reach out to the pros for advice!
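One pattern that sidesteps uniform windows entirely is to run the backtest loop yourself with an explicit list of cutoff dates. A sketch only (the helper and its arguments are hypothetical, not a statsforecast API):

```python
import pandas as pd

# Hypothetical helper: a manual backtest driven by an explicit list of
# cutoff dates, so window sizes can follow a retail calendar instead of
# being uniform.
def backtest_with_cutoffs(df, cutoffs, horizon_ends, fit_predict):
    """df: long frame with unique_id/ds/y. cutoffs[i] ends training
    window i; horizon_ends[i] ends its test window. fit_predict is a
    callable(train_df, test_dates) -> frame with unique_id/ds/yhat."""
    results = []
    for cutoff, end in zip(cutoffs, horizon_ends):
        train = df[df["ds"] <= cutoff]
        test = df[(df["ds"] > cutoff) & (df["ds"] <= end)]
        fcst = fit_predict(train, test["ds"].unique())
        fcst["cutoff"] = cutoff
        # join actuals back on for evaluation
        results.append(fcst.merge(test, on=["unique_id", "ds"]))
    return pd.concat(results, ignore_index=True)
```

Exogenous variables stay aligned automatically because each test slice is cut from the same frame; `fit_predict` can wrap a StatsForecast fit/forecast call per window.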
  • Rodrigo Sodré (03/19/2025, 1:35 AM)
    Hi everybody. Are there helper methods to transform a general time-series dataframe into Nixtla's unique_id/ds/y dataframe? My dataframe is quite large (20k x 100), and for every predicted step I'm updating it with the real value by calling .concat and predicting the next step, inside a loop of 5000 iterations. Even the Pandas documentation warns about how inefficient the .append and .concat methods are. The dataset manipulation is taking 99% of the time; my GPU is mostly idle. I'm stuck trying to figure out a structure to efficiently store my data and update it, converting it each iteration to the unique_id/ds/y dataframe. Any idea will be greatly appreciated.
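For the wide-to-long conversion part of this, a single vectorized melt avoids any per-step concat. A sketch, assuming a datetime index and one column per series:

```python
import pandas as pd

# Wide frame: datetime index, one column per series (made-up data).
wide = pd.DataFrame(
    {"series_a": [1.0, 2.0, 3.0], "series_b": [4.0, 5.0, 6.0]},
    index=pd.date_range("2024-01-01", periods=3, freq="D"),
)

# One call produces Nixtla's long unique_id/ds/y layout.
long_df = (
    wide.rename_axis("ds")
        .reset_index()
        .melt(id_vars="ds", var_name="unique_id", value_name="y")
)
```

For the per-step updates, writing new observations into a preallocated NumPy buffer and rebuilding the frame once per prediction, rather than calling pd.concat inside the loop, usually removes most of the overhead.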
  • Luis Enrique Patiño (03/19/2025, 9:20 PM)
    Hello team, does the Statsforecast package have the option of direct or recursive forecasting? Is this possible?
  • hadar sharvit (03/23/2025, 1:53 PM)
    Hey guys, a question on normalization and feature encodings: are those calculated on the raw df at the beginning, before the training loop, or on each batch of data? Something like, at the beginning:
    ```
    def fit(df, ...):
        df = feature_encoding(df)
        df = normalize(df)
        for X, y in dataloader(...):
            pred = model(X)
            ...
    ```
    or in the training loop:
    ```
    def fit(df, ...):
        for X, y in dataloader(...):
            X = feature_encoding(X)
            X = normalize(X)
            pred = model(X)
            ...
    ```
  • Alex (03/27/2025, 1:35 PM)
    Hi all! I am happy to contribute to the Nixtla community's open-source projects, but I do not know how the issues are prioritized. At the moment the issues all look equally important. If I want to help, how do I know what is most pressing?
  • Luis Enrique Patiño (03/31/2025, 11:11 PM)
    Hello, I'm working with a weekly-frequency time series in Year-week_number format. Is there a way of using this data inside Nixtla? I tried to use an int freq but I get a lot of errors, and I can't convert directly to dates because I get wrong dates.
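One workable route is to parse the year-week labels into real Monday dates with the ISO week-date strptime directives, after which a regular 'W-MON' freq applies. A sketch, assuming the labels are "ISO-year-week" strings:

```python
import pandas as pd

labels = ["2025-01", "2025-02", "2025-03"]  # assumed "YYYY-WW" ISO labels

# %G = ISO year, %V = ISO week, %u = ISO weekday (1 = Monday):
# appending "-1" pins each label to the Monday of that ISO week.
ds = pd.to_datetime([f"{s}-1" for s in labels], format="%G-%V-%u")
```

Note the ISO calendar is why a naive year/week conversion gives "wrong dates": ISO week 1 of 2025 starts on 2024-12-30.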
  • Samuel (04/13/2025, 4:38 PM)
    Hi guys, I'm interested in transfer learning for time series. The last time I watched Max's presentation it was still in progress. Does it work? Is there any paper or article that shows how it performs against other datasets?
  • Heitor Carvalho Pinheiro (04/14/2025, 2:45 AM)
    Hi guys! Regarding the `plot_series` function in utils, can anyone tell me why there's a gap between the training data and the predictions? It does not bother me much, but some people might find it weird when I'm presenting. Is there any way to get rid of that gap between the series?
  • Samuel (04/15/2025, 6:44 AM)
    Hi guys, I'm trying to make a sales forecast for a grocery store per SKU, and I'm thinking about adding a new column called past_30_days_sales as an exogenous variable. What do you guys think about this? Will it work in general?
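A trailing-window feature like that is easy to build leak-free with a shift before the rolling sum, so the value at day t only uses sales strictly before t. A pandas sketch (column names follow the question; the data is made up):

```python
import pandas as pd

df = pd.DataFrame({
    "unique_id": ["sku1"] * 60,
    "ds": pd.date_range("2024-01-01", periods=60, freq="D"),
    "y": range(60),
})

# shift(1) drops today's value from the window, avoiding target leakage;
# min_periods=1 lets early rows use whatever history exists.
df["past_30_days_sales"] = (
    df.groupby("unique_id")["y"]
      .transform(lambda s: s.shift(1).rolling(30, min_periods=1).sum())
)
```

Whether it helps depends on the model: tree-based or ML forecasters often benefit from such lag aggregates, while classical models already encode recent history.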
  • Rodrigo Sodré (04/18/2025, 7:37 PM)
    Hi everyone. A very basic question I'm having just now. I'm providing datasets for neural forecast model training and prediction. My datasets all have many value columns, i.e., after I convert to Nixtla's expected long format, these datasets always have many unique_ids. My question is: are the models treating them as univariate or multivariate? My concern is: are the models assuming I expect them to analyze correlations between the series? Or, whether I train each column individually or all together, will the prediction of each one's next step be the same? A final observation: according to this, every model I'm using is univariate.
  • Samuel (04/22/2025, 7:46 AM)
    Hi guys, I wonder if 36 monthly sales data points for thousands of SKUs is sufficient for training the NBEATSx model. Is it enough? I've tried it, but it doesn't work; the predictions are bad.
  • jan rathfelder (04/24/2025, 9:31 PM)
    Hi, a general question about the implementation of the Fourier transformation. I can set the freq and season length, but does this have to match the format of my timestamp, or can I have daily data, set 'W' and season length 53, and get valid Fourier terms for weekly patterns? When I plot the data, it looks to me like I first have to aggregate my data to the respective time granularity and only then run the Fourier transformation for that freq. Or am I wrong here, and I can keep my daily data and get Fourier terms for week and month directly?
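For what it's worth, Fourier terms don't require resampling: on daily data a weekly pattern is just period = 7 expressed in the data's own time steps. A NumPy sketch of the generic construction (not the library's implementation):

```python
import numpy as np

def fourier_terms(t, period, k):
    """t: integer time index in the data's own frequency (e.g. days).
    Returns 2k columns: sin/cos pairs for harmonics 1..k of `period`."""
    t = np.asarray(t, dtype=float)
    feats = []
    for i in range(1, k + 1):
        feats.append(np.sin(2 * np.pi * i * t / period))
        feats.append(np.cos(2 * np.pi * i * t / period))
    return np.column_stack(feats)

# weekly terms for 14 daily steps: period is 7 days, no 'W' resample needed
X = fourier_terms(np.arange(14), period=7, k=2)
```

The same idea covers monthly patterns on daily data with period ≈ 30.44; the period only has to be stated in the units of the timestamps you pass in.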
  • Valeriy (04/25/2025, 7:46 AM)
    Hyndman’s new book now includes a section on conformal prediction https://otexts.com/fpppy/nbs/05-toolbox.html#conformal-prediction-for-distribution-free-prediction-intervals 🔥🔥🔥
  • Valeriy (04/28/2025, 8:11 AM)
    People are welcome to feature cool use cases for the Nixtla ecosystem, beyond the documentation, in my new book https://valeman.gumroad.com/l/MasteringModernTimeSeriesForecasting. DM me here or on LinkedIn, and if your use case (incl. dataset) is interesting I will feature it in the book and give full credit.
  • Renan Avila (04/29/2025, 7:44 PM)
    Hello, thanks for supporting open source! Not sure if this is the right channel to post a question regarding datasetsforecast, but I just wanted to know why there is an "OT" time series at the end of all of the datasets coming from longhorizon2. Is it the target prediction time series, and the rest would be the exogenous ones? Thanks in advance.
  • Jason Phillips (04/30/2025, 5:21 AM)
    👋 Hi everyone!
  • Jason Phillips (04/30/2025, 5:22 AM)
    Joined the community this evening.
  • Steven Smith (04/30/2025, 12:37 PM)
    I have a legacy seasonal decomposition model that includes data engineering, modelling, and data post-processing. I want to wrap it as a "nixtla model" so we can use it as a model to compare against ARIMA, ETS, etc. using Nixtla's forecasting and cross-validation functions. Is there a base class you would recommend I use, or would you suggest a different approach? We are using statsforecast for the other models. Thanks in advance for the help.
  • Steven Smith (04/30/2025, 12:47 PM)
    I will try using _TS as a base class. I will just need to keep an eye on any changes you make to it.
  • Steven Smith (04/30/2025, 3:02 PM)
    The challenge I have is that the seasonal coefficients change each week, so I need to be able to dynamically change them based on the cross-validation window that is currently being processed.
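The shape such a wrapper usually takes is a class exposing a `forecast(y, h, ...)` entry point that StatsForecast calls per series and per cross-validation window, which also addresses the changing seasonal coefficients: they get recomputed from whatever training slice is passed in. This is a hypothetical sketch only; the exact method signature should be verified against the statsforecast version in use rather than relying on `_TS` internals, which can change between releases:

```python
import numpy as np

# Hypothetical wrapper for a legacy seasonal pipeline. Method and
# attribute names follow the custom-model pattern in statsforecast's
# docs; verify them against your installed version.
class LegacySeasonalModel:
    uses_exog = False

    def __init__(self, season_length=52, alias="LegacySeasonal"):
        self.season_length = season_length
        self.alias = alias

    def __repr__(self):
        return self.alias

    def forecast(self, y, h, X=None, X_future=None, level=None, fitted=False):
        y = np.asarray(y, dtype=float)
        m = self.season_length
        # Seasonal coefficients are recomputed from the training window
        # passed in, so each CV window gets its own coefficients.
        seas = np.array([y[i::m].mean() for i in range(m)]) - y.mean()
        idx = (len(y) + np.arange(h)) % m
        res = {"mean": y.mean() + seas[idx]}
        if fitted:
            res["fitted"] = y.mean() + seas[np.arange(len(y)) % m]
        return res
```

The toy mean-plus-seasonal logic stands in for the legacy data engineering, modelling, and post-processing steps.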
  • Luis Enrique Patiño (05/08/2025, 3:26 PM)
    Hello again team, I've been using conformal intervals for a project, but every time I check the results my intervals only contain nulls. What could be the issue?
    ```
    intervals = ConformalIntervals(h=12, n_windows=3)

    sf = StatsForecast(
        models=[
            AutoETS(season_length=52, model='AAA', prediction_intervals=intervals),
        ],
        freq='W-MON',
        n_jobs=-1,
        verbose=True,
        fallback_model=SeasonalExponentialSmoothing(season_length=52, alpha=0.95, prediction_intervals=intervals)
    )

    levels = [95, 90]

    y_pred = sf.forecast(h=12, df=df, level=levels)
    ```
  • Valeriy (05/13/2025, 11:16 AM)
    Hi all, for my new book Mastering Modern Time Series Forecasting https://valeman.gumroad.com/l/MasteringModernTimeSeriesForecasting I am looking for interesting examples and use cases across the full spectrum of forecasting, from metrics to models and beyond. If you have an interesting use case with open data and code, please feel free to reach out (full credit will be given in the book). Nixtlaverse use cases are more than welcome!
  • Luis Enrique Patiño (05/21/2025, 4:52 PM)
    Hello team, I'm working on evaluating a cross-validation using the evaluate method of utilsforecast. My code looks like this:
    ```
    from functools import partial

    from utilsforecast.losses import rmse, mae, mase, rmsse  # rmsse was used below but missing from the import
    from utilsforecast.evaluation import evaluate

    models = cv_df.drop('unique_id', 'ds', 'cutoff', 'y').columns
    metrics = [
        mae,
        rmse,
        partial(mase, seasonality=52),
        partial(rmsse, seasonality=52),
    ]

    cv_df_eval = cv_df.drop('cutoff')
    evaluation = evaluate(
        cv_df_eval,
        metrics=metrics,
        models=models,
        train_df=cv_df_eval
    )

    evaluation.display()
    ```
    I'm using Spark with statsforecast. My question is: is this the correct way of evaluating the CV results? I'm planning to pick the "best model" from the evaluation for my prod forecast.
  • Jing Qiang Goh (05/25/2025, 10:49 AM)
    Hi, I encounter:
    "You have reached the maximum number of finetuned models. Please delete existing models before creating a new one."
    However, I followed this guide https://nixtlaverse.nixtla.io/nixtla/docs/tutorials/reusing_finetuned_models.html and used nixtla_client.delete_finetuned_model() to delete the models returned by nixtla_client.finetuned_models(), but it does not help to address the issue. Anything I could have missed here? cc: @Marco
  • joel iYush (05/27/2025, 8:43 PM)
    Hello, great job with the tools. I have just started using `statsforecast`; I am experimenting with it in order to assess whether it is better than our current approach using `statsmodels`. To start, I just chose the models `AutoARIMA`, `AutoETS`, and `CrostonOptimized`. Forecasting all the items (after constructing the panel `DataFrame`) and calling `sf.forecast` works, but when I start evaluating using `cross_validation` I get this error coming from `AutoETS`:
    ```
    File "/home/joiyushkay/dev/biocartis_demand_planning/forecast_assessment/apps/core/data_process/forecasting.py", line 47, in get_forecasts
        evaluation_df = evaluation.evaluate()
                        ^^^^^^^^^^^^^^^^^^^^^
      File "/home/joiyushkay/dev/biocartis_demand_planning/forecast_assessment/apps/core/data_process/forecasting.py", line 97, in evaluate
        cross_validation_df = self.sf.cross_validation(
                              ^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/joiyushkay/dev/biocartis_demand_planning/.venv/lib/python3.12/site-packages/statsforecast/core.py", line 1588, in cross_validation
        return super().cross_validation(
               ^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/joiyushkay/dev/biocartis_demand_planning/.venv/lib/python3.12/site-packages/statsforecast/core.py", line 1007, in cross_validation
        res_fcsts = self.ga.cross_validation(
                    ^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/joiyushkay/dev/biocartis_demand_planning/.venv/lib/python3.12/site-packages/statsforecast/core.py", line 339, in cross_validation
        raise error
      File "/home/joiyushkay/dev/biocartis_demand_planning/.venv/lib/python3.12/site-packages/statsforecast/core.py", line 336, in cross_validation
        res_i = model.forecast(**forecast_kwargs)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/joiyushkay/dev/biocartis_demand_planning/.venv/lib/python3.12/site-packages/statsforecast/models.py", line 790, in forecast
        fcst = forecast_ets(mod, h=h, level=level)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/joiyushkay/dev/biocartis_demand_planning/.venv/lib/python3.12/site-packages/statsforecast/ets.py", line 1241, in forecast_ets
        fcst = pegelsfcast_C(h, obj)
               ^^^^^^^^^^^^^^^^^^^^^
      File "/home/joiyushkay/dev/biocartis_demand_planning/.venv/lib/python3.12/site-packages/statsforecast/ets.py", line 929, in pegelsfcast_C
        states = obj["states"][-1, :]
                 ~~~~~~~~~~~~~^^^^^^^
    IndexError: too many indices for array: array is 1-dimensional, but 2 were indexed
    ```
    Any suggestion?
  • Liu Chen (05/29/2025, 7:10 AM)
    Hi, I am curious: can the statsforecast package be used on non-stationary time series data directly, without doing a transformation like log or differencing? Or is there an in-built transformation? Are we recommended to test the stationarity of the data and transform it before applying the statsforecast package?
  • Chris Naus (05/29/2025, 2:56 PM)
    Is there a function in either utilsforecast or coreforecast that checks whether a time series is complete? I know there is fill_gaps(), but I would prefer to check before doing that.
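A check like that is a few lines of pandas if a built-in doesn't turn up: compare each series' observed timestamps against a full date_range at the expected frequency. A sketch, assuming the usual long unique_id/ds layout (the helper name is made up):

```python
import pandas as pd

def find_gaps(df, freq="D"):
    """Return the unique_ids whose ds column has missing periods,
    judged against a full date_range at the expected frequency."""
    gapped = []
    for uid, g in df.groupby("unique_id"):
        full = pd.date_range(g["ds"].min(), g["ds"].max(), freq=freq)
        if len(full) != g["ds"].nunique():
            gapped.append(uid)
    return gapped
```

Series the helper flags are exactly the ones fill_gaps() would alter, so it doubles as a dry-run check.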