# neural-forecast
Hello. Sometimes `cross_validation` produces NaN forecasts and the best model reports loss = 0.0 when using the recurrent Auto* models. My TimeSeriesDataset has 54 unique time series, each exactly 800 observations long, with no NaNs in the data. I use h=1, step_size=1, AutoRNN + AutoLSTM with the default configs, and num_samples=50. It seems like certain hyperparameter combinations lead to loss = 0.0, as can be seen in the screenshots. The last two screenshots show the two best hyperparameter combinations (loss = 0.0, the next is loss = 43.91...) extracted from the Ray ResultGrid. I also checked all the other samples where
`input_size=-1 && inference_input_size=-1`
but they all have loss > 0.0, so it must be something else. Any help is appreciated!
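For context, here is a minimal sketch of how I call it. The data below is a synthetic stand-in for my real dataset (which has 54 series of exactly 800 observations and no NaNs), and the `n_windows` value is just a placeholder; models and configs are otherwise the defaults described above.

```python
import numpy as np
import pandas as pd
from neuralforecast import NeuralForecast
from neuralforecast.auto import AutoRNN, AutoLSTM

# Synthetic stand-in: 54 series, each exactly 800 observations, no NaNs.
n_series, series_len = 54, 800
frames = []
for i in range(n_series):
    frames.append(pd.DataFrame({
        "unique_id": f"series_{i}",
        "ds": pd.date_range("2020-01-01", periods=series_len, freq="D"),
        "y": np.random.rand(series_len),
    }))
Y_df = pd.concat(frames, ignore_index=True)

# Default configs, num_samples=50, horizon h=1.
models = [
    AutoRNN(h=1, num_samples=50),
    AutoLSTM(h=1, num_samples=50),
]
nf = NeuralForecast(models=models, freq="D")

# Rolling cross-validation with step_size=1; this is where some forecasts
# come back as NaN and the "best" trial reports loss = 0.0.
# n_windows=10 is a placeholder, not my exact setting.
cv_df = nf.cross_validation(df=Y_df, step_size=1, n_windows=10)
print(cv_df.isna().sum())
```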