Slackbot  05/30/2023, 9:52 AM

Dawie van Lill  05/30/2023, 9:59 AM

Dawie van Lill  05/30/2023, 12:47 PM
Comparing h=1 vs h=2, I get very different results for the one-period-ahead forecast. Why would this be the case? I am setting the size of the validation set in the fit method. Are there other places where the horizon enters the tuning of hyperparameters? I have also fixed the input_size, so that the input_size is not dependent on the horizon.
Kin Gtz. Olivares  05/30/2023, 1:10 PM
◦ dropout_prob_theta regularization.
◦ a robust loss like MAE/HuberLoss (https://nixtla.github.io/neuralforecast/losses.pytorch.html#huber-loss).
◦ increase the valid_size to improve the validation signal; h=1 or h=2 is a very small window and the hyperparameter optimization will be noisy (https://nixtla.github.io/neuralforecast/common.base_auto.html).
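A minimal sketch of how these three suggestions could be combined for an Auto model; the data file and all config values other than the loss, the dropout choice, and the validation size are illustrative assumptions:

```python
# Sketch: HuberLoss as the training loss, dropout_prob_theta in the search
# space, and a larger explicit validation window for the tuner to score on.
import pandas as pd
from ray import tune
from neuralforecast import NeuralForecast
from neuralforecast.auto import AutoNHITS
from neuralforecast.losses.pytorch import HuberLoss

df = pd.read_csv("quarterly.csv")  # placeholder data

config = {
    "input_size": 8,                                 # fixed, not tied to h
    "dropout_prob_theta": tune.choice([0.1, 0.3]),   # regularization
    "learning_rate": tune.loguniform(1e-4, 1e-2),
    "max_steps": 500,
}

model = AutoNHITS(h=4, loss=HuberLoss(), config=config, num_samples=10)
nf = NeuralForecast(models=[model], freq="Q")
nf.fit(df=df, val_size=25)   # validation window larger than the horizon itself
```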
Dawie van Lill  05/30/2023, 1:22 PM
h=1. I realise after playing around with the code that the valid_size is dependent on the horizon, so I fixed it at about 10% of the entire sample size (around 25).
I am also interested in the longer-term forecast h=4, which gives me all the quarterly forecasts up to one year ahead. However, this is where I notice a significant difference between my specification with h=1 and h=4. Nothing else differs between the models, and I use the MAE loss function, as you specified. I might be missing something here.
Finally, I tried the regularisation with the NBEATSx architecture and NHITS, but I am getting some errors with regard to dropout. The code runs fine without it, but once I include the dropout component it breaks for some reason. I have "dropout_prob_theta": tune.choice([0.1, 0.3]) specified in the config.
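A minimal sketch of the setup described here, with the quoted "dropout_prob_theta": tune.choice([0.1, 0.3]) entry in the search space; the data file and the remaining config values are assumptions, not the original code:

```python
# Sketch: h=4 quarterly horizon, MAE training loss, and the quoted
# dropout_prob_theta choice applied to both NBEATSx and NHITS searches.
import pandas as pd
from ray import tune
from neuralforecast import NeuralForecast
from neuralforecast.auto import AutoNBEATSx, AutoNHITS
from neuralforecast.losses.pytorch import MAE

df = pd.read_csv("quarterly.csv")  # placeholder data

config = {
    "input_size": 8,                                # fixed lags
    "dropout_prob_theta": tune.choice([0.1, 0.3]),  # as quoted above
    "max_steps": 500,
}

models = [
    AutoNBEATSx(h=4, loss=MAE(), config=config, num_samples=10),
    AutoNHITS(h=4, loss=MAE(), config=config, num_samples=10),
]
nf = NeuralForecast(models=models, freq="Q")
nf.fit(df=df, val_size=25)          # ~10% of the sample, as above
quarterly_forecasts = nf.predict()  # one year ahead at quarterly frequency
```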
Kin Gtz. Olivares  05/30/2023, 2:25 PM

Kin Gtz. Olivares  05/30/2023, 2:28 PM