# neural-forecast
s
m
Hello! Yes, it will choose the hyperparameters that minimize the validation loss, so using a validation set combined with early stopping should prevent overfitting. With the validation set, we are evaluating the model's ability to make predictions from training on previous data. I hope this helps!
n
thanks Marco, I'll make sure to use early stopping to prevent overfitting
how is this possible @Mariana Menchero?
'early_stop_patience_steps': 2,
there are clearly more than 2 consecutive validation loss increases
@José Morales can you please investigate this issue as well? seems it might be related
r
I experienced a similar issue. Early stopping functioned correctly when I utilized the TFT model. However, when I switched to using AutoTFT, early stopping didn't seem to work as expected. The validation loss continued to increase beyond the designated early_stop_patience_steps.
👀 1
n
yeah, I don't think it works with AutoTFT
s
It doesn't work for me with the AutoModels of nf either. The val loss increases check after check, and yet early stopping is not triggered.
n
@José Morales FYI
c
Thanks for reporting this issue. We will take a look at it and fix it!
@José Morales this one as well
j
@nickeleres @Steffen @Rafiq Darwis Mohammad are you using ray or optuna?
n
Ray @José Morales
👍 1
j
Thanks! I'm able to reproduce the issue. I'll keep investigating.
This seems to be specific to ray, the optuna backend stops at the expected iteration
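Given the finding above that the optuna backend stops at the expected iteration, switching backends is a plausible interim workaround while the ray fix lands. A hedged sketch (the parameter names follow neuralforecast's `TFT` arguments, the values are placeholders, and the `AutoTFT` call is commented out so the snippet runs without neuralforecast installed):

```python
# Search-space entries relevant to early stopping (names assumed from
# neuralforecast's TFT signature; exact values are placeholders):
config = {
    "input_size": 24,
    "max_steps": 500,
    "val_check_steps": 50,           # run a validation check every 50 steps
    "early_stop_patience_steps": 2,  # stop after 2 checks w/o improvement
}

# Illustrative call, commented out to keep the snippet dependency-free:
# from neuralforecast.auto import AutoTFT
# model = AutoTFT(h=12, config=config, backend="optuna", num_samples=10)
```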
n
ok lmk when ray is fixed 🙂
j
Found the issue. The sad part is that I had already seen it a couple of months ago but forgot about it. I'll work on a fix, and since we're planning to make another release soon, it'll probably be available next week.
n
brilliant thanks Jose
hey @José Morales, any update on the new release with the early_stop_patience_steps fix?
j
It's on PyPI now
n
brilliant thank you Jose