Felipe Rincon
04/03/2023, 10:04 PM
StepLR doesn't follow PyTorch's LRScheduler API. You should override the LightningModule.lr_scheduler_step hook with your own logic if you are using a custom LR scheduler. Thanks
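
Editor's note: a minimal sketch of what overriding that hook can look like, assuming PyTorch Lightning >= 2.0, where the hook signature is lr_scheduler_step(self, scheduler, metric) (1.x versions also pass an optimizer_idx). The scheduler class here is a stand-in for a custom scheduler that does not subclass torch.optim.lr_scheduler.LRScheduler, so the snippet runs without torch:

```python
class MyCustomScheduler:
    """Stand-in for a custom scheduler outside PyTorch's LRScheduler API."""
    def __init__(self):
        self.steps = 0

    def step(self, metric=None):
        # Custom decay logic would go here.
        self.steps += 1


class MyModule:  # in real code: class MyModule(pl.LightningModule)
    def __init__(self):
        self.scheduler = MyCustomScheduler()

    def lr_scheduler_step(self, scheduler, metric):
        # Lightning calls this hook instead of calling scheduler.step()
        # itself; invoke the custom scheduler however it expects.
        if metric is None:
            scheduler.step()
        else:
            scheduler.step(metric)
```

In a real LightningModule you would also return the scheduler from configure_optimizers as usual.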

Zhamal Toktamysova
04/04/2023, 2:54 PM
Is there an application of neuralforecast to sales forecasting? If there is a comparison of neuralforecast results with other approaches, that would be great.

Asterios Tsiourvas
04/04/2023, 3:21 PM

Kaustav Chaudhury
04/04/2023, 3:49 PM

Mads Jensen
04/04/2023, 5:10 PM

Felipe Rincon
04/04/2023, 10:21 PM
StepLR does not follow PyTorch's LRScheduler API. You should override the LightningModule.lr_scheduler_step hook with your own logic if you are using a custom LR scheduler. Thanks

Manuel
04/06/2023, 12:06 PM

Kaustav Chaudhury
04/06/2023, 12:55 PM

Max (Nixtla)
04/06/2023, 1:25 PM

Tuhin Mallick
04/10/2023, 3:11 PM
OutOfMemoryError: CUDA out of memory.
Is there a parameter to limit that, or any recommendations on how to avoid it?
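
Editor's note: the usual levers in neuralforecast are the models' batch-size-style arguments (e.g. batch_size, and windows_batch_size on window-based models), which trade training speed for peak memory. As a generic sketch of the retry-with-smaller-batches idea, using a hypothetical train_step callable; with PyTorch you would catch torch.cuda.OutOfMemoryError instead of MemoryError:

```python
def fit_with_backoff(train_step, batch_size, min_batch_size=1):
    """Retry train_step with a halved batch size until it fits in memory.

    train_step is a hypothetical stand-in for whatever call runs out of
    GPU memory; it is expected to raise on OOM and return on success.
    """
    while batch_size >= min_batch_size:
        try:
            return train_step(batch_size)
        except MemoryError:
            batch_size //= 2  # halve and retry with less memory pressure
    raise MemoryError("even the minimum batch size does not fit")
```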

Yachna Hasija
04/11/2023, 12:04 PM

Asterios Tsiourvas
04/12/2023, 3:32 PM

Simon Weppe
04/14/2023, 5:29 AM

Manuel
04/17/2023, 11:38 AM

Asterios Tsiourvas
04/17/2023, 6:28 PM

Manuel
04/18/2023, 7:05 AM
I'm using neuralforecast with hierarchicalforecast for hierarchical reconciliation (I've also seen HINT, but the provided reconciliation methods, MinTrace and BottomUp, are not suitable for my case). Now the problem is this: since the time series in the dataset have a hierarchical relationship and series at the higher levels tend to have larger values (because they are the sums of the hierarchically lower series), loss functions such as plain RMSE and MAE seem unsuitable: the errors at the lower levels of the hierarchy, being smaller in absolute value, are penalized too little. A workaround might be to use scale-invariant loss functions such as SMAPE, which, being a percentage error, might mitigate the problem. Another solution might be to give more weight in the loss function (e.g., RMSE) to hierarchically lower time series. Do you know whether neuralforecast (and in particular NHITS) offers a way to give a higher weight to some unique_ids during model training? Do you have any other ideas about this? I also tried losses such as PMM, GMM and NBMM, but they did not give good results. Thanks
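
Editor's note: one framework-agnostic way to realize the weighting idea above is to scale each series' squared error by the inverse of that series' typical magnitude, so bottom-level series contribute comparably to the total loss. A sketch in plain Python; the unique_ids and weighting scheme are illustrative assumptions, and wiring this into neuralforecast would mean subclassing one of its point losses:

```python
def weighted_mse(y_true, y_pred, weights):
    """Mean squared error where each series carries its own weight.

    y_true, y_pred: dict mapping unique_id -> list of values
    weights: dict mapping unique_id -> float
    """
    num, den = 0.0, 0.0
    for uid, truth in y_true.items():
        w = weights[uid]
        for t, p in zip(truth, y_pred[uid]):
            num += w * (t - p) ** 2
            den += w
    return num / den


def inverse_scale_weights(y_true):
    """Illustrative weights: inverse of each series' mean absolute value,
    so a bottom-level series with small values is not drowned out."""
    return {
        uid: 1.0 / ((sum(abs(v) for v in vals) / len(vals)) or 1.0)
        for uid, vals in y_true.items()
    }
```

With these weights, an error of the same relative size costs roughly the same whether it occurs on the "total" series or on a small leaf series.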

Cyril de Catheu
04/21/2023, 6:43 PM

thomas delaunait
04/23/2023, 6:18 PM

Rachel Yee
04/27/2023, 1:27 PM

Rachel Yee
04/27/2023, 2:22 PM

Milim Kim
04/28/2023, 4:16 AM

Angel Berihuete Macías
04/28/2023, 7:28 AM

Rachel Yee
04/28/2023, 12:07 PM

James Wei
05/01/2023, 10:58 PM
model = [RNN(h=test_length,
             input_size=test_length*3,
             scaler_type='robust')]
nf = NeuralForecast(models=model, freq='H')
nf.fit(df=df)
where test_length = 1000. The training data consist of a single time series of length 9000.
Using the default hidden-layer numbers and sizes, a very rough estimate of the memory usage gives me 1,280,000 parameters, or 5.12 MB. Of course this would be multiplied a few times for forward- and backward-pass memory and for storing optimizer state to obtain total memory usage. What I don't understand is that when I run:
nf.predict()
I receive the error message “OutOfMemoryError: CUDA out of memory. Tried to allocate 6.71 GiB (GPU 0; 11.75 GiB total capacity; 7.10 GiB already allocated; 3.88 GiB free; 7.13 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.” I have checked to make sure there is negligible GPU memory usage before executing nf.predict(). What do you think might be causing this issue? I am happy to provide the dataframe df I used if needed.
I should also mention that this error does not occur when I restrict the RNN to single encoder/decoder hidden layers of size 100, rather than the default double layers of size 200.
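
Editor's note: the error text itself suggests one mitigation: capping the size of allocator blocks PyTorch may split, which helps when reserved memory far exceeds allocated memory. It must be set before CUDA is initialized (ideally before `import torch`); 128 is just an illustrative value. The gap between the 5.12 MB of weights and the 6.71 GiB allocation also suggests the peak comes from inference activations rather than parameters, so lowering the model's batch-size-style arguments is worth trying as well.

```python
import os

# Cap the allocator's splittable block size to reduce fragmentation,
# as suggested by the OOM message. Set this before torch touches CUDA.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"  # illustrative value
```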

marah othman
05/03/2023, 12:29 PM

Raghuvansh
05/03/2023, 12:30 PM

Manuel
05/04/2023, 10:12 AM

marah othman
05/04/2023, 11:27 AM

Dawie van Lill
05/04/2023, 2:24 PM

Manuel
05/05/2023, 11:18 AM