# neural-forecast
**m:**
Hi 🙂, I have been testing different models in the package, but I tend to run out of CUDA memory.
```
OutOfMemoryError: CUDA out of memory.
```
Is there a parameter to limit that or any recommendations on how to avoid that?
**c:**
Hi @Mads Jensen. Can you provide more details? Does the error usually occur during training or inference? Which models cause it, and which GPU are you using? Most models in the library are very memory efficient; in general, we only observe this error with Transformer-based methods (TFT, Informer, Autoformer, etc.). One suggestion is to reduce `batch_size`, `valid_batch_size`, and `windows_batch_size`. Additionally, you can reduce the size of the model: every model has its own hyperparameters controlling different aspects such as the number of layers, `hidden_size`, etc.
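A minimal sketch of the suggestion above. The keyword names `batch_size`, `valid_batch_size`, `windows_batch_size`, and `hidden_size` come from the reply itself; the specific values and the commented-out model instantiation are illustrative assumptions, so check them against your installed neuralforecast version:

```python
# Reduced-memory hyperparameters for a Transformer-based model such as TFT.
# These values are illustrative starting points, not tuned recommendations.
low_memory_kwargs = dict(
    batch_size=8,            # fewer series per training batch
    valid_batch_size=8,      # fewer series per validation batch
    windows_batch_size=128,  # fewer sampled windows per batch
    hidden_size=64,          # smaller model -> fewer activations held on the GPU
)

# With neuralforecast installed, these could be passed to the model, e.g.:
# from neuralforecast import NeuralForecast
# from neuralforecast.models import TFT
# model = TFT(h=168, input_size=336, **low_memory_kwargs)
# nf = NeuralForecast(models=[model], freq="H")

print(low_memory_kwargs)
```

If a configuration still runs out of memory, halving `windows_batch_size` and `hidden_size` first tends to give the biggest reduction, since both multiply directly into the size of the activation tensors.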
**m:**
Sorry, yes, that was incredibly vaguely explained. I am trying to predict 168 hours (7 days) ahead on an hourly basis. I have tried an NBEATSx model (but I also get the error with AutoTFT), tried batch sizes down to 2, and also reduced the input size a lot. So I guess reducing the model size is the way to explore. The card is an NVIDIA Tesla K80 with 12 GB of VRAM. It should be mentioned that I can run an AutoNHITS hyperparameter search on the same card and data without problems.
**k:**
Hey @Mads Jensen, we fixed TFT's memory usage this week: https://github.com/Nixtla/neuralforecast/pull/597 We will merge that PR into main soon. If you want to try the changes, you can install them with this line:
```
!pip install git+https://github.com/Nixtla/neuralforecast.git
```
**m:**
Ah cool. Thanks for letting me know!