# neural-forecast

Stefan Wiegand

08/25/2023, 12:12 PM
Dear Nixtla Team, could you please help me with some questions regarding temporal normalization and loss functions?
1. Temporal normalization determines e.g. the min and max on past data points and ignores future data points, so we don't have look-ahead bias, correct?
2. Are predicted values scaled back to the original domain before the loss is calculated and backpropagation happens?
3. If I train on two time series, one with a magnitude of ~1000 and another with a magnitude of ~0.1, both with the same relative errors, MAE will cause backpropagation to update the weights more in favor of the first series, no matter what temporal normalization I use, correct?

Cristian (Nixtla)

08/25/2023, 1:37 PM
Hi @Stefan Wiegand! Absolutely:
1. Yes, it is based only on the input window, to prevent leakage from the future.
2. No, the training loss is computed on the scaled values to improve stability (this is the most common approach). The validation loss is scaled back, as are the forecasts of the `predict` method.
3. If you don't scale the data, then yes. With temporal normalization both series will have roughly the same scale, so they will carry the same weight in the loss. (Note that MAE in particular is more robust to scale, because its gradients are always +1/-1 regardless of the magnitude of the error.)
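The two points above can be sketched in a few lines of NumPy. This is a hypothetical illustration, not NeuralForecast's actual implementation; `minmax_scale` is a made-up helper used only to show that the scaler sees the input window alone:

```python
import numpy as np

def minmax_scale(window):
    # Statistics come only from the past (input) window, never from
    # future values, so there is no look-ahead bias.
    lo, hi = window.min(), window.max()
    return (window - lo) / (hi - lo), (lo, hi)

# Two series with the same 10% relative error but very different scales.
big_true, big_pred = np.array([1000.0]), np.array([1100.0])
small_true, small_pred = np.array([0.1]), np.array([0.11])

# Without scaling, the MAE *value* is dominated by the large series:
mae_big = np.abs(big_pred - big_true).mean()      # 100.0
mae_small = np.abs(small_pred - small_true).mean()  # ~0.01

# ...but the MAE *gradient* w.r.t. each prediction is just sign(error),
# i.e. +1 or -1 regardless of the error's magnitude, which is why MAE
# is comparatively robust to scale even before normalization.
grad_big = np.sign(big_pred - big_true)
grad_small = np.sign(small_pred - small_true)
```

With temporal normalization applied, both series land on roughly the same scale, so the per-series loss contributions become comparable even for losses whose gradients do depend on error magnitude (e.g. MSE).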

Stefan Wiegand

08/28/2023, 2:16 PM
Great! Thank you very much for making this clear to me!