Slackbot
12/11/2023, 7:48 PM
Cristian (Nixtla)
12/11/2023, 8:25 PM
fit or cross_validation method. Simply specify val_size and test_size (or n_windows). See Section 5 of the tutorial. The purpose of the first part is simply to show how to train models.
2. Yes, it is the hidden_size of the LSTM cell.
3. The loss defines the training loss to optimize and update the weights with gradient descent. You can also set the valid_loss parameter of AutoLSTM if you want a validation loss different from the training loss.
4. In cross validation you produce multiple forecasts starting at different timestamps. See the image below. The cutoff specifies the last timestamp before each window of forecasts starts, so in the plot below the cutoffs for each window are t_0, t_2, and t_4.
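The cutoff arithmetic in point 4 can be sketched in plain Python. cv_cutoffs below is a hypothetical illustration, not a neuralforecast function, and it assumes the last forecast window ends exactly at the final timestamp of the series:

```python
# Toy sketch of the cross-validation cutoffs described above. Hypothetical
# helper, not library code: it assumes the last of n_windows forecast windows
# covers the final h timestamps of a series with n_times observations.
def cv_cutoffs(n_times: int, h: int, step_size: int, n_windows: int) -> list:
    """Index of the last timestamp before each window of forecasts starts."""
    last_cutoff = n_times - h - 1  # last window forecasts the final h points
    first_cutoff = last_cutoff - step_size * (n_windows - 1)
    return list(range(first_cutoff, last_cutoff + 1, step_size))

# A series t_0 .. t_9 with h=5, step_size=2, and 3 windows reproduces the
# cutoffs from the plot: t_0, t_2, and t_4.
print(cv_cutoffs(n_times=10, h=5, step_size=2, n_windows=3))  # → [0, 2, 4]
```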
Signe Byrith
12/11/2023, 8:40 PM
Cristian (Nixtla)
12/11/2023, 8:50 PM
cross_validation is the easiest method 🙂
Signe Byrith
12/11/2023, 8:51 PM
Marco Zucchini
01/07/2024, 3:09 PM
`test_size - h` should be module `step_size`: in which method do I have to set the test_size? Thanks! P.S. Is it perhaps because I create the model like this?

models = [NHITS(h=30, input_size=90, max_steps=100)]
nf = NeuralForecast(models=models, freq='B')
nf.fit(df=df_train, test_size=30)
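A guess at the arithmetic behind that error message (windows_ok is a hypothetical illustration, not library code): forecast windows spaced step_size timestamps apart can only tile the test set cleanly when test_size - h is divisible by step_size.

```python
# Hypothetical illustration of the constraint in the error message above; not
# actual neuralforecast code. Windows start every step_size timestamps, so the
# gap between test_size and the horizon h must divide evenly by step_size.
def windows_ok(test_size: int, h: int, step_size: int) -> bool:
    return (test_size - h) % step_size == 0

print(windows_ok(test_size=30, h=30, step_size=1))   # → True
print(windows_ok(test_size=100, h=30, step_size=4))  # → False (70 % 4 != 0)
```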
Cristian (Nixtla)
01/08/2024, 4:15 PM
In the cross_validation method.
Cristian (Nixtla)
01/08/2024, 4:17 PM
The predict_insample method internally specifies the entire length of the series as the "test_size". We currently have the limitation that test_size - h (length - h in this case) must be a multiple of step_size, to avoid having forecasts past the last date. You will need to trim the time series slightly to account for this.
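The suggested trim can be computed directly. trim_for_insample is a hypothetical helper (not a neuralforecast API) under the stated constraint that length - h must be a multiple of step_size:

```python
# Hypothetical helper, not part of neuralforecast: number of rows to drop from
# the start of the series so that (length - h) becomes a multiple of step_size,
# letting the in-sample forecast windows line up with the end of the series.
def trim_for_insample(n_times: int, h: int, step_size: int) -> int:
    return (n_times - h) % step_size

# 103 timestamps with h=30 and step_size=5: drop 3 rows, then (100 - 30) % 5 == 0.
print(trim_for_insample(n_times=103, h=30, step_size=5))  # → 3
```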
Marco Zucchini
01/13/2024, 4:01 PM