# neural-forecast
Hello everyone, I'm continuing my experiments with DeepAR, and I have a few questions for you. Putting aside my suspected bug with the Negative Binomial, the performance I'm getting is still below the GluonTS implementation. One possibility is the absence of a `lags_seq` option: since DeepAR is autoregressive, it seems wasteful to do without lag 1, for example. Or am I missing something? Second question: GPU predictions with mixture losses (like PMM) require far too much GPU RAM (90+ GB in my case). I suppose that by tuning `inference_windows_batch_size` I should be able to handle it, but does that mean that at inference time DeepAR doesn't keep only the last `input_size` points, as it does during training? Thanks in advance!
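For anyone hitting the same memory wall: the idea behind a parameter like `inference_windows_batch_size` is to chunk the prediction windows so only a batch is materialized on the GPU at once. This is a minimal, library-agnostic sketch of that pattern (the `predict_fn` here is a stand-in, not the actual neuralforecast internals):

```python
import numpy as np

def predict_in_batches(windows, predict_fn, batch_size):
    """Run predict_fn over inference windows in chunks, so that only
    `batch_size` windows need to live in (GPU) memory at any time."""
    outputs = []
    for start in range(0, len(windows), batch_size):
        batch = windows[start:start + batch_size]
        outputs.append(predict_fn(batch))
    return np.concatenate(outputs, axis=0)

# Toy example: 10 windows of length 4; the "model" just averages each window.
windows = np.arange(40, dtype=float).reshape(10, 4)
preds = predict_in_batches(windows, lambda b: b.mean(axis=1), batch_size=3)
# preds has one value per window, same as a single full-batch call would give.
```

Peak memory then scales with `batch_size` instead of the total number of windows, at the cost of a few extra forward passes.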
Maybe the DeepAR implementation could be changed to allow for a `lags` parameter (as in `mlforecast`), or to accept a list of lags as `input_size` (instead of a single integer), so that we could include only some lags, as with GluonTS's `lags_seq` parameter, instead of the full `input_size` range? @Kin Gtz. Olivares
Hi! Thanks for the suggestion. We will add the `lags_seq` parameter soon!