# neural-forecast

Francisco Trejo
05/08/2023, 11:42 PM
Hey everyone! Thank you for all the amazing work and for continuing to keep it updated. I saw that a PR for predict_insample was recently approved. Does anyone have examples of how to use it? I'm attempting to use it but keep getting an error. I'd love to see some examples if anyone has used it successfully!

Pascal Schindler
05/09/2023, 9:29 AM
Hey everyone! I often receive the following error while training AutoNHITS: `IntegerGreaterThan(lower_bound=0)`, with the config below. What are the reasons?

```python
horizon = 60
config_nhits = {
    "input_size": tune.choice([14, 28, 28*2, 28*3, 28*5, 2 * horizon]),  # Length of input window
    "n_blocks": 5 * [1],                                                 # Blocks per stack
    "mlp_units": 5 * [[512, 512]],                                       # Hidden units per MLP layer
    "interpolation_mode": tune.choice(['linear']),
    "n_pool_kernel_size": tune.choice([5*[1], 5*[2], 5*[4], [8, 4, 2, 1, 1], [16, 8, 1]]),  # MaxPooling kernel size
    "n_freq_downsample": tune.choice([[8, 4, 2, 1, 1], [1, 1, 1, 1, 1], [168, 24, 1], [24, 12, 1], [1, 1, 1]]),  # Interpolation expressivity ratios
    "learning_rate": tune.loguniform(1e-4, 1e-2),   # Initial learning rate
    "scaler_type": tune.choice([None]),             # Scaler type (overridden by the duplicate key below)
    "max_steps": tune.choice([1000]),               # Max number of training iterations
    "batch_size": tune.choice([16, 32, 64, 128, 256, 512]),                  # Number of series in batch
    "windows_batch_size": tune.choice([32, 64, 128, 256, 512, 1024, 2048]),  # Number of windows in batch
    "random_seed": tune.randint(1, 20),             # Random seed
    "scaler_type": tune.choice(["robust", None]),   # Duplicate key: silently replaces the earlier "scaler_type"
    "hist_exog_list": ["week_day", "month", "trends"],
    "futr_exog_list": ["week_day", "month"],
}
```
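One thing worth checking in a config like the one above: Python dict literals are not an error when a key repeats, so the first "scaler_type" entry never reaches the tuner. A quick stdlib demonstration:

```python
# Duplicate keys in a dict literal are silently resolved:
# the last occurrence wins, the earlier one is discarded.
config = {
    "scaler_type": None,      # this entry is dropped...
    "max_steps": 1000,
    "scaler_type": "robust",  # ...because this one overrides it
}
print(config["scaler_type"])  # robust
print(len(config))            # 2
```

Removing the duplicate (or merging the two choices into one tune.choice) makes the searched space unambiguous.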

Kevin
05/09/2023, 3:58 PM
I am trying to use the Temporal Fusion Transformer (TFT) (https://nixtla.github.io/neuralforecast/models.tft.html#tft). Can I extract variable importances from NeuralForecast's implementation of TFT? I am referring to the feature in PyTorch Forecasting's implementation (https://pytorch-forecasting.readthedocs.io/en/stable/tutorials/stallion.html). Thanks.

Chris Gervais
05/11/2023, 2:41 PM
On the topic of TFT, any suggestions on speeding up training (besides GPUs, obviously)? There seem to be some bottlenecks, maybe on the dataset-loader side?

Tyler Nisonoff
05/13/2023, 12:58 PM
Hello! I'm just getting started with the library and having a lot of fun with it! My goal is to set up a model that predicts 24 hours' worth of prices every day. I was originally going to do this by training up to some date with horizon=24, and then for each of the next N days call `nf.predict(futr_df=<day-to-pred>)`.

However, it seems this always returns a dataframe whose ds column covers just the 24 hours after where I stopped training. Is there some way to apply the model to the next N days without retraining it every time? Or would I have to retrain / fine-tune on the data since then? Perhaps the latter is the only way to support historical features?
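One pattern that avoids retraining (assuming the fitted model generalizes across the period) is to keep a growing history frame and, each day, hand the model its freshest input window. The loop structure is sketched here with a hypothetical `predict_day` stand-in for the actual model call, since the exact predict signature depends on your neuralforecast version:

```python
import numpy as np
import pandas as pd

def predict_day(history: pd.DataFrame, h: int = 24) -> pd.DataFrame:
    """Hypothetical stand-in for the model call (e.g. predict with updated
    history). Here: a naive forecast repeating the last observed value."""
    last_ds = history["ds"].iloc[-1]
    return pd.DataFrame({
        "ds": pd.date_range(last_ds, periods=h + 1, freq="h")[1:],
        "yhat": history["y"].iloc[-1],
    })

history = pd.DataFrame({
    "ds": pd.date_range("2023-01-01", periods=48, freq="h"),
    "y": np.arange(48.0),
})

forecasts = []
for day in range(3):  # roll forward N=3 days without refitting
    fcst = predict_day(history)
    forecasts.append(fcst)
    # once actuals arrive, append them so the next call sees fresh history
    actuals = fcst.rename(columns={"yhat": "y"})
    history = pd.concat([history, actuals], ignore_index=True)

all_fcst = pd.concat(forecasts, ignore_index=True)
print(len(all_fcst))  # 72
```

The key point is that each iteration feeds the model new observations rather than new weights; only when the data distribution drifts does fine-tuning become necessary.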

marah othman
05/15/2023, 2:36 PM
Is there any constraint on the period I should predict in the future?

Shreya Mathur
05/15/2023, 3:35 PM
Can you explain more about the max_steps parameter in NHITS? As I increase max_steps (from 1 to 100), the model gets more accurate; however, I am unable to replicate my predictions at higher values of max_steps.
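Run-to-run differences usually come from unseeded randomness (weight initialization, dataloader shuffling, dropout) rather than from max_steps itself; NHITS exposes a random_seed argument for exactly this. The effect is the same as in this plain numpy sketch, where fixing the seed makes the stochastic part of a "training run" repeatable:

```python
import numpy as np

def noisy_training_run(seed: int) -> float:
    """Toy stand-in for a training run whose result depends on random init."""
    rng = np.random.default_rng(seed)
    weights = rng.normal(size=100)  # random initialization
    return float(weights.sum())     # "final" result of the run

a = noisy_training_run(seed=1)
b = noisy_training_run(seed=1)  # same seed -> identical result
c = noisy_training_run(seed=2)  # different seed -> different result
print(a == b, a == c)  # True False
```

With more steps the model simply has more stochastic updates to diverge over, which is why reproducibility issues become visible at max_steps=100 but not at 1.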

Gerrit Rindermann
05/15/2023, 7:15 PM
Hi! I'm currently working my way into NeuralForecast and have a somewhat basic question I couldn't find documentation on: when I'm creating my time series, say daily sales of a certain product, should I add y=0 for dates on which the product wasn't sold at all, or can I just skip those dates? I know it works with gaps in the time series, but I'm wondering whether the result stays accurate or whether the gaps distort the model.
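If you decide to fill the gaps (for intermittent sales, y=0 on missing days is usually the truthful value, since nothing was sold), pandas can do it in one reindex; the tiny frame below is illustrative:

```python
import pandas as pd

sales = pd.DataFrame({
    "ds": pd.to_datetime(["2023-05-01", "2023-05-02", "2023-05-05"]),
    "y": [3.0, 1.0, 2.0],
})

# Reindex onto the full daily range; days with no sales become y=0.
full_range = pd.date_range(sales["ds"].min(), sales["ds"].max(), freq="D")
filled = (sales.set_index("ds")
               .reindex(full_range, fill_value=0.0)
               .rename_axis("ds")
               .reset_index())
print(filled["y"].tolist())  # [3.0, 1.0, 0.0, 0.0, 2.0]
```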

Tyler Nisonoff
05/16/2023, 9:40 PM
Having trouble thinking through how to fit a current forecasting problem into the neuralforecast model: suppose every day at 10am I want to predict tomorrow's 24 hours of prices for some series (midnight to midnight). It seems the current assumption is that if I'm predicting time t -> t+horizon, I have data up to t-1. But in this case I only have data up to `t-14`, as I do not yet have prices from 11am to midnight. I could extend the horizon to `24+14` to predict 10am -> 12am the next day, but then in training / fit we'd step by the horizon, instead of just stepping forward 10 days. Is there some way to fit this into the neuralforecast approach?
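The extended-horizon workaround can be expressed as a small slicing exercise: forecast h = 24 + 14 = 38 hourly steps from the 10am origin, then keep only the steps that land in tomorrow's midnight-to-midnight window. A toy sketch with illustrative timestamps:

```python
import pandas as pd

gap = 14      # hours between the last observation (10am) and midnight
h = 24 + gap  # extended horizon: 38 hourly steps

last_obs = pd.Timestamp("2023-05-16 10:00")  # forecast origin
steps = pd.date_range(last_obs, periods=h + 1, freq="h")[1:]  # 11am onward

# Keep only tomorrow's midnight-to-midnight block (00:00 through 23:00).
tomorrow = steps[gap - 1 : gap - 1 + 24]
print(tomorrow[0], tomorrow[-1])  # 2023-05-17 00:00:00 2023-05-17 23:00:00
```

The first `gap - 1` predictions (11am to 11pm today) are discarded; only the final clean day is reported.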

Pascal Schindler
05/17/2023, 4:50 PM
Hey, maybe you can help me here: I use Auto-NHITS with the standard parameters to forecast the following time series. Unfortunately, I get a MAPE of 35%. Which parameters should I play with to increase the accuracy?

Nakul Upadhya
05/17/2023, 8:54 PM
Hi! I have a quick question about NBEATSx. I noticed that both exogenous bases detailed in the paper aren't currently available in neuralforecast, so I'm trying to add them in myself based on the research-paper repo. For these two bases, what is the "proper" way to use historical exogenous variables? It looks like the research code handles all variables as future exogenous variables (unless I'm mistaken). Thanks again for the help!

Chris Gervais
05/18/2023, 10:43 AM
Looking at adding the HINT architecture tomorrow; will circle back with metrics when available. Any thoughts on a reasonable hparam search space? Didn't see anything added to the auto module.

Dawie van Lill
05/30/2023, 9:52 AM
Hi, I just want to make sure I fully understand everything. I have the following config dictionary:

```python
mlp_config = {
    "input_size": tune.choice([2, 4, 12, 20]),
    "hidden_size": tune.choice([256, 512, 1024]),
    "num_layers": tune.randint(2, 6),
    "learning_rate": tune.loguniform(1e-4, 1e-1),
    "batch_size": tune.choice([32, 64, 128, 256]),
    "windows_batch_size": tune.choice([128, 256, 512, 1024]),
    "random_seed": tune.randint(1, 20),
    "hist_exog_list": tune.choice([pcc_list]),
    "futr_exog_list": tune.choice([fcc_list]),
    "max_steps": tune.choice([500, 1000]),
    "scaler_type": tune.choice(["robust"]),
}
```

The `hist_exog_list` and `futr_exog_list` are lists of historical and future exogenous variables. When I fit the model I use `nf.fit(df=df, val_size=20)`, where `df` is my dataframe containing the target variable as well as the exogenous variables. Then when I predict I use `nf.predict(futr_df=futr_df)`, where `futr_df` contains only the observations of the future exogenous variables that extend beyond the point of prediction. Does this seem correct, or am I doing something wrong in the specification of the future exogenous variables? In my case there is only one period beyond the cut-off for the target where future exogenous variables are available, so the `futr_df` dataframe only has one row and many columns (for the different features).
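For reference, the shape described above (one future period, many exogenous columns) can be built like this; the column names are illustrative, not from the message:

```python
import pandas as pd

# Hypothetical future-exogenous frame for a single step beyond the cutoff:
# one row per series, ds set to the first timestamp after the training data.
futr_df = pd.DataFrame({
    "unique_id": ["series_1"],
    "ds": [pd.Timestamp("2023-06-01")],
    "price_index": [101.3],  # example future exogenous feature
    "promo_flag": [1],       # example future exogenous feature
})
print(futr_df.shape)  # (1, 4)
```

Every series being forecast needs its own row here, and the exogenous column names must match the ones listed in futr_exog_list.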

marah othman
05/30/2023, 2:24 PM
Hello, I get this error when trying to use cross-validation with auto models: `RuntimeError: maximum size for tensor at dimension 2 is 2751 but size is 3360`. Does anyone have an idea?
Nasreddine D
05/30/2023, 3:08 PM
Hi, I was wondering if I have enough data to try any model in neural-forecast. I have 176 months of history and I need to forecast 24 months into the future. Is that enough, or not suitable? Which model would you recommend testing? Thank you very much.

Francisco Trejo
05/31/2023, 6:23 PM
Hello, I have a question about how the unique_id impacts NeuralForecast. I have been using the NHITS model with hierarchical data, like in the first image, and when I use the aggregate function from HierarchicalForecast to create the data frame, the data looks like the second image. I noticed that when using the unique_ids from the HF aggregate function I get different forecast results despite everything else being 100% the same, which leads to substantially different RMSE values. My current workaround is to rename the unique_ids back to the aggregated ones from HF once I have the forecasts, but I'm wondering whether anyone has insight into why this happens and whether it's supposed to behave this way. Thank you!

Syed Umair Hassan
06/01/2023, 8:20 AM
Is there a way to do transfer learning using the neuralforecast library?

Yang Guo
06/07/2023, 12:11 AM
Hi all, I am relatively new to time series forecasting and wondering what the proper way is to treat multiple independent time series. Consider k series x_{t_0}^{t_i}[k] from time t_0 to t_i (e.g. prices for different stocks, or series for different agents), where we want to predict the next T steps x_{t_i+1}^{t_i+T+1}[k]. We cannot simply append these series as with text data, since we want to preserve the time information across series. A naive way would be to use unique_id as the identifier for the different series; however, unique_id is a dataset feature, and I am not sure how it is treated by the different models. Is this always the suggested way to handle multiple (independent or dependent) univariate series?
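The long format the library expects stacks all series into one frame keyed by unique_id; timestamps stay aligned within each series, so no time information is lost by stacking. An illustrative two-stock frame:

```python
import numpy as np
import pandas as pd

dates = pd.date_range("2023-01-01", periods=5, freq="D")
rng = np.random.default_rng(0)

frames = []
for uid in ["stock_A", "stock_B"]:  # k independent series
    frames.append(pd.DataFrame({
        "unique_id": uid,           # identifies the series
        "ds": dates,                # per-series timestamps stay intact
        "y": rng.normal(size=5),
    }))
df = pd.concat(frames, ignore_index=True)

print(sorted(df["unique_id"].unique()))  # ['stock_A', 'stock_B']
print(len(df))                           # 10
```

Global models then share weights across all unique_ids while still producing one forecast per series.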

Syed Umair Hassan
06/10/2023, 7:09 PM
Is there a way to add an embedding vector or GloVe vector to the neural or ML models in Nixtla?
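One pattern worth trying (an assumption on my part, not a documented recipe): expand each embedding dimension into its own per-series static column, which models that accept a stat_exog_list could then consume. An illustrative frame:

```python
import numpy as np
import pandas as pd

# Hypothetical 4-dimensional embedding per series (e.g. from GloVe).
emb = {"series_1": np.array([0.1, -0.2, 0.3, 0.0]),
       "series_2": np.array([0.5, 0.1, -0.4, 0.2])}

# One row per series, one column per embedding dimension.
static_df = pd.DataFrame(
    [{"unique_id": uid, **{f"emb_{i}": v for i, v in enumerate(vec)}}
     for uid, vec in emb.items()]
)
print(static_df.columns.tolist())
# ['unique_id', 'emb_0', 'emb_1', 'emb_2', 'emb_3']
```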

Manuel Chabier Escolá Pérez
06/11/2023, 3:23 PM
Hi all, is there any notebook example showing how to implement AutoNBEATSx? So far I have seen this (for NBEATSx without hyperparameter tuning) and this (for hyperparameter tuning without exogenous variables). For example, when I run this:

models = [AutoNBEATSx(h=n_day_forecast, loss=MAE(), config=nbeats_config, input_size=5*24, futr_exog_list=fut_exog_list, hist_exog_list=hist_exog_list, stat_exog_list=static_list, scaler_type='robust', search_alg=HyperOptSearch(), num_samples=20)]

I get the following errors:
TypeError: AutoNBEATSx.__init__() got an unexpected keyword argument 'input_size'
TypeError: AutoNBEATSx.__init__() got an unexpected keyword argument 'futr_exog_list'
Thank you very much!
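Those TypeErrors suggest the model-specific keywords belong inside the config dict rather than on the Auto constructor, which (as far as I know) only accepts h, loss, config, search_alg, num_samples, and a few similar arguments. A sketch, assuming the variables from the message above are already defined:

```python
# Hypothetical fix: model-specific arguments live in config, not on AutoNBEATSx.
nbeats_config = {
    "input_size": 5 * 24,
    "futr_exog_list": fut_exog_list,
    "hist_exog_list": hist_exog_list,
    "stat_exog_list": static_list,
    "scaler_type": "robust",
    # ...plus any tune.* search spaces (learning_rate, max_steps, ...)
}

models = [AutoNBEATSx(h=n_day_forecast, loss=MAE(), config=nbeats_config,
                      search_alg=HyperOptSearch(), num_samples=20)]
```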

marah othman
06/12/2023, 1:27 PM
Is there a function built into neuralforecast to check seasonality?
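As far as I know neuralforecast itself doesn't ship a seasonality test, but a quick autocorrelation check at candidate seasonal lags is easy to roll by hand (statsmodels' acf does the same more rigorously):

```python
import numpy as np

def acf_at_lag(y: np.ndarray, lag: int) -> float:
    """Sample autocorrelation of y at the given lag."""
    y = y - y.mean()
    return float(np.dot(y[:-lag], y[lag:]) / np.dot(y, y))

# Toy daily series with a weekly (lag-7) cycle plus noise.
t = np.arange(365)
y = np.sin(2 * np.pi * t / 7) + 0.1 * np.random.default_rng(0).normal(size=365)

print(round(acf_at_lag(y, 7), 2))  # strong positive -> weekly seasonality
print(round(acf_at_lag(y, 5), 2))  # off-cycle lag -> much weaker
```

A large positive autocorrelation at a candidate lag (7, 24, 168, ...) is good evidence of seasonality at that period.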

Kaustav Chaudhury
06/13/2023, 10:27 AM
When loading from a checkpoint, why am I getting this error: NotImplementedError("{} cannot be pickled", MultiProcessingDataLoader)?

Patrick
06/13/2023, 12:25 PM
I don't understand why my chart's Y-axis is like that, any hints? 🙌

Patrick
06/13/2023, 12:25 PM
3, 1, 2 makes no sense in my eyes 😄
Yang Guo
06/13/2023, 5:04 PM
When running nf.predict() for the vanilla Transformer, I get "can only concatenate str (not "pandas._libs.tslibs.offsets.Hour") to str" — any hints? I am following the xxxx-xx-xx xxxxxx format. The fit function works fine, though; I think there is some issue with the format.
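That TypeError is the classic symptom of a string-typed ds column: predict needs to add a frequency offset (here Hour) to each timestamp, and string + offset fails. Converting with pd.to_datetime before fitting usually resolves it; the frame below is illustrative:

```python
import pandas as pd

# ds stored as strings: a common cause of "can only concatenate str ... to str".
df = pd.DataFrame({
    "unique_id": "series_1",
    "ds": ["2023-06-01 00:00:00", "2023-06-01 01:00:00"],
    "y": [1.0, 2.0],
})
print(df["ds"].dtype)  # object (strings)

# Convert before fit/predict so frequency offsets can be applied.
df["ds"] = pd.to_datetime(df["ds"])
print(df["ds"].dtype)  # datetime64[ns]
```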

Dawie van Lill
06/13/2023, 8:40 PM
Good day, I seem to be getting the error `RuntimeError: einsum(): subscript p has size 2 for operand 1 which does not broadcast with previously seen size 0`. This only happens with the AutoNBEATS and AutoNBEATSx models; other models run fine. The code I have for AutoNBEATS worked perfectly fine until about a week ago.

Syed Umair Hassan
06/14/2023, 11:55 AM
Can a global model trained on data with a frequency of 1 hour be used to predict similar data with a frequency of 10 minutes?
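If the model has only seen hourly dynamics, feeding it 10-minute data directly changes what one step means to the network. One common workaround is to resample the 10-minute series to hourly before prediction (the alternative is retraining at the target frequency). A pandas sketch:

```python
import numpy as np
import pandas as pd

# 10-minute data (6 samples per hour)
ten_min = pd.DataFrame({
    "ds": pd.date_range("2023-06-01", periods=12, freq="10min"),
    "y": np.arange(12.0),
})

# Aggregate to the hourly frequency the model was trained on.
hourly = (ten_min.set_index("ds")["y"]
                 .resample("h").mean()
                 .reset_index())
print(len(hourly))  # 2
```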

Viet Yen Nguyen
06/15/2023, 11:36 AM
Hey everyone, I'm considering training a pre-trained model for our use case (5 years of media data). I was wondering: is there more information about the training set for the pretrained models at https://github.com/Nixtla/transfer-learning-time-series?

Yang Guo
06/15/2023, 2:39 PM
Hi, I am wondering what the proper way is to perform in-sample evaluation. For example, a transformer-based model is trained to learn within-window behavior: we are given `input_size` data points but asked to make predictions over a window of `h`. The goal is to evaluate over randomly chosen windows of the validation dataset. Currently I am thinking of doing the resampling myself, but I wonder whether there is a built-in feature for in-sample evaluation. I think `predict_insample` might do this, but I am confused about its output. Does it only contain the predicted values? Is it a fair metric to simply compute the accuracy of `predict_insample` against the original df as a measure of the method?
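Assuming predict_insample returns a long frame with unique_id, ds, a cutoff per window, the actual y, and one column per model (the shape I would expect; worth verifying on your version), a per-window in-sample MAE is a short pandas exercise. The frame below is a mock of that assumed output:

```python
import pandas as pd

# Mock of the long-format frame predict_insample is assumed to return.
insample = pd.DataFrame({
    "unique_id": "series_1",
    "ds": pd.date_range("2023-01-01", periods=6, freq="D"),
    "cutoff": [pd.Timestamp("2022-12-31")] * 3 + [pd.Timestamp("2023-01-03")] * 3,
    "y": [10.0, 11.0, 12.0, 13.0, 14.0, 15.0],      # actuals
    "NHITS": [10.5, 10.5, 12.5, 12.0, 14.0, 16.0],  # model predictions
})

# Mean absolute error per forecast window (grouped by cutoff).
mae = (insample.assign(err=lambda d: (d["y"] - d["NHITS"]).abs())
               .groupby("cutoff")["err"].mean())
print(mae.tolist())
```

One caveat on fairness: in-sample errors are computed on data the model was trained on, so they understate out-of-sample error; a held-out validation split or cross_validation gives a more honest measure.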

Aditya Limaye
06/15/2023, 4:30 PM
Hello! First, thanks for putting together a great library and running this community; I really appreciate it! I had a question about one of the NHITS hyperparameters, `n_freq_downsample`. I noticed that in the AutoNHITS default_config definition there is a tune.choice over the following values:

```python
"n_freq_downsample": tune.choice([
    [168, 24, 1],
    [24, 12, 1],
    [180, 60, 1],
    [60, 8, 1],
    [40, 20, 1],
    [1, 1, 1],
]),
```

Do you have any intuition about whether lining up these frequencies with known natural frequencies of the data helps performance? For example, [168, 24, 1] seems to correspond to weekly (24 x 7), daily (24 x 1), and hourly frequencies. The reason I ask is as follows: let's say I have an NHITS model that predicts hourly-sampled data, and I find through hyperparameter optimization that `n_freq_downsample=[168, 24, 1]` is most performant. If I then train a model that predicts the same series, but sampled at 10-minute frequency (6 samples per hour), should I change my hyperparameter search space to include `n_freq_downsample = [168*6, 24*6, 6]`? Any insight you might have would be appreciated; thanks in advance!
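The rescaling logic in the question can be sanity-checked with one line of arithmetic: each downsampling factor counts samples per seasonal cycle at the data's sampling rate, so changing the sampling rate multiplies every factor by the new samples-per-hour:

```python
# Factors for hourly data: week = 168 samples, day = 24, native resolution = 1.
hourly = [168, 24, 1]

# At 10-minute sampling there are 6 samples per hour, so every cycle
# contains 6x as many samples (and "hourly resolution" itself becomes 6).
samples_per_hour = 6
ten_minute = [f * samples_per_hour for f in hourly]
print(ten_minute)  # [1008, 144, 6]
```

This reproduces the proposed [168*6, 24*6, 6], keeping each stack aligned with the same physical period (week, day, hour) after the frequency change.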