# neural-forecast
j
when training a NHITS model via `nf.core.NeuralForecast.fit()`, is there a reason we cannot set the size of the test set?
m
Hello! The `fit` method expects data for training only, not the entire dataset. So, it's a bit like in sklearn: we do a train/test split first, then pass the training data to the `fit` method. You can specify a validation size if you want to early stop, using the `val_size` parameter.
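A minimal sketch of the split-then-fit workflow described above. The long-format columns `unique_id`/`ds`/`y` follow NeuralForecast's data convention; the horizon and series values are illustrative, and the `NeuralForecast`/`NHITS` calls are shown as comments since training is out of scope here:

```python
import pandas as pd

h = 12  # forecast horizon; also the size of the held-out test set (illustrative)

# NeuralForecast expects long-format data: one row per (series, timestamp).
df = pd.DataFrame({
    "unique_id": "series_1",
    "ds": pd.date_range("2020-01-01", periods=120, freq="MS"),
    "y": range(120),
})

# Hold out the last h observations of each series as the test set,
# then pass only the training slice to fit().
test_df = df.groupby("unique_id").tail(h)
train_df = df.drop(test_df.index)

# nf = NeuralForecast(models=[NHITS(h=h, input_size=2 * h)], freq="MS")
# nf.fit(df=train_df, val_size=h)  # val_size carves a validation window for early stopping
# forecasts = nf.predict()
```

The test rows never reach `fit`, so they stay available for an honest out-of-sample evaluation against `predict()`.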
j
@Marco follow-up question: how am I supposed to get the training loss so I can plot it and see whether overfitting or undertraining is happening?
nvm, I found it. This should be an example in the docs:
```python
from pytorch_lightning.loggers import TensorBoardLogger

tb_logger = TensorBoardLogger(save_dir="tb_logs", name="tb_logs")
nf = NeuralForecast(models=[NHITS(..., logger=tb_logger)])
nf.fit(...)
```
Then in another shell:
```shell
tensorboard --logdir ./tb_logs
```