Hello, I am running an MQ-RNN model. It works until `predict`, where I get the following error message:

    ValueError: Expected parameter loc (Tensor of shape (8627, 61)) of distribution
    Normal(loc: torch.Size([8627, 61]), scale: torch.Size([8627, 61])) to satisfy the
    constraint Real(), but found invalid values:
    tensor([[854705.3125, 771030.3750, 771763.9375,  ..., 776580.5625, 716499.4375, -13461.0000],
            [680814.5625, 649476.7500, 612983.8125,  ..., 629873.4375,  29146.4375, -37391.3125],
            [558954.5000, 501322.7500,  62070.6250,  ...,  40826.6875,   2804.1250, 635278.6250],
            ...,
            [        nan,         nan,         nan,  ...,         nan,         nan,         nan],
            [        nan,         nan,         nan,  ...,         nan,         nan,         nan],
            [        nan,         nan,         nan,  ...,         nan,         nan,         nan]],
           device='cuda:0')
Hey @Muhammed Islam, the Normal distribution can be numerically unstable to optimize when its variance gets very close to zero. Two methods interact in the Normal output layer, `domain_map` and `scale_decouple`:

- https://github.com/Nixtla/neuralforecast/blob/main/neuralforecast/losses/pytorch.py#L695-L708
- https://github.com/Nixtla/neuralforecast/blob/main/neuralforecast/losses/pytorch.py#L711-L723

To keep the variance denominator stable, one solution is to reduce the learning rate, so that the variance does not become too small too fast under the negative log-likelihood.
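To see why a collapsing variance destabilizes training, here is a minimal stdlib-Python sketch (the `gaussian_nll` helper is hypothetical, written out for illustration, not part of neuralforecast) of the Gaussian negative log-likelihood term exploding as the scale shrinks:

```python
import math

def gaussian_nll(y: float, loc: float, scale: float) -> float:
    """Negative log-likelihood of y under Normal(loc, scale)."""
    return (
        math.log(scale)
        + 0.5 * math.log(2 * math.pi)
        + (y - loc) ** 2 / (2 * scale ** 2)
    )

# With a healthy scale the loss is moderate.
moderate = gaussian_nll(1.0, 0.0, 1.0)

# As the scale collapses toward zero, the (y - loc)^2 / (2 * scale^2) term
# explodes, producing enormous gradients; an aggressive learning-rate step
# on such gradients can overshoot into NaN parameters, which is consistent
# with the NaN rows in the error above.
exploded = gaussian_nll(1.0, 0.0, 1e-6)

print(moderate)  # ≈ 1.42
print(exploded)  # ≈ 5e11
```

A smaller learning rate slows how fast the optimizer can drive the scale toward zero, keeping this term bounded while the location parameter converges.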