While trying TFT, I'm seeing only about 20% GPU utilization. Any suggestions on how to increase it?
02/17/2023, 9:22 PM
We've also seen this; it seems related to the `num_workers` attribute on the dataloaders. Since the `pl.Trainer` class and data modules are abstracted away in the NeuralForecast core module, the only workaround we've found is to fit the model for a single epoch so that the trainer class and data modules get initialized. Then you can copy the datamodule and apply any of the customizations you want to try from: https://pytorch-lightning.readthedocs.io/en/stable/guides/speed.html
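For intuition on why `num_workers` matters here: low GPU utilization usually means the CPU-side data pipeline can't prepare batches fast enough, so the GPU sits idle between steps. Below is a toy stdlib sketch (not NeuralForecast or Lightning code) of that effect, using threads to stand in for DataLoader workers; the real `DataLoader` uses worker processes, and `load_batch` is a hypothetical stand-in for windowing/collation work:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def load_batch(i):
    # stand-in for CPU-side batch preparation (I/O, windowing, collation)
    time.sleep(0.05)
    return i

n_batches = 8

# one worker: the "GPU" waits for each batch in turn
t0 = time.perf_counter()
serial = [load_batch(i) for i in range(n_batches)]
t_serial = time.perf_counter() - t0

# four workers preparing batches concurrently (analogous to num_workers=4)
t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(load_batch, range(n_batches)))
t_parallel = time.perf_counter() - t0

print(f"serial: {t_serial:.2f}s, 4 workers: {t_parallel:.2f}s")
```

With 4 workers the 8 batches finish in roughly a quarter of the serial time, which is the same reason raising `num_workers` on the copied datamodule can lift GPU utilization.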
That said, 1.4.0 just dropped a few days ago and might have some goodies to check out :)