James Wei
05/01/2023, 10:58 PM
from neuralforecast import NeuralForecast
from neuralforecast.models import RNN

model = [RNN(h=test_length,
             input_size=test_length * 3,
             scaler_type='robust')]
nf = NeuralForecast(models=model, freq='H')
nf.fit(df=df)
where test_length=1000. The training data consists of a single time series of length 9000.
Using the default hidden-layer count and sizes, a very rough estimate of the memory usage gives me about 1,280,000 parameters, or roughly 5.12 MB. Of course, the total would be a few times larger once forward- and backward-pass activations and optimizer state are included. What I don’t understand is that when I run:
nf.predict()
I receive the error message “OutOfMemoryError: CUDA out of memory. Tried to allocate 6.71 GiB (GPU 0; 11.75 GiB total capacity; 7.10 GiB already allocated; 3.88 GiB free; 7.13 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.” I have checked to make sure there is negligible GPU memory usage before executing nf.predict(). What do you think might be causing this issue? I am happy to provide the dataframe df I used if needed.
I should also mention that this error does not occur when I restrict the RNN to a single encoder/decoder hidden layer of size 100, rather than the default two layers of size 200.
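(For context, a minimal sketch of the arithmetic behind the 5.12 MB figure and of one standard way to confirm the GPU is idle before predicting; torch.cuda.memory_allocated and torch.cuda.memory_reserved are ordinary PyTorch calls, and the 1,280,000 figure is the rough estimate quoted above, assumed stored as float32.)

import torch

# Weight memory for the rough parameter estimate above, assuming float32 (4 bytes each).
n_params = 1_280_000
print(f"{n_params * 4 / 1e6:.2f} MB")   # -> 5.12 MB for the weights alone

# How much GPU memory PyTorch currently holds, checked before calling nf.predict().
print(torch.cuda.memory_allocated(0) / 1e6, "MB allocated")
print(torch.cuda.memory_reserved(0) / 1e6, "MB reserved")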
Kin Gtz. Olivares
05/02/2023, 1:44 PM
There is an inference_input_size parameter that effectively trims the length of the time series, so the full history is not used at inference. I would recommend setting it to inference_input_size=test_length*3.
Let me know if this helps.
Would you be able to report this in a GitHub issue if your problem persists?
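(A minimal sketch of the suggestion above, assuming the same constructor call quoted at the top of the thread; inference_input_size is the parameter named in the recommendation, and df and test_length are the objects from the original message.)

from neuralforecast import NeuralForecast
from neuralforecast.models import RNN

test_length = 1000   # as in the original message

model = [RNN(h=test_length,
             input_size=test_length * 3,
             inference_input_size=test_length * 3,  # trim the history used at predict time
             scaler_type='robust')]
nf = NeuralForecast(models=model, freq='H')
# nf.fit(df=df)            # df: the single length-9000 series described above
# forecasts = nf.predict()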
James Wei
05/02/2023, 2:08 PM
Kin Gtz. Olivares
05/02/2023, 3:24 PM
James Wei
05/02/2023, 3:53 PM
Kin Gtz. Olivares
05/02/2023, 6:21 PM
James Wei
05/04/2023, 5:25 PM
Kin Gtz. Olivares
05/05/2023, 3:05 PM
James Wei
05/05/2023, 8:29 PM