# neural-forecast
k
Hey @Phil, thanks for using the HINT model. Regarding quantile and HINT coherence:
- HINT model samples are coherent by construction.
- The 95th quantiles of the marginal distributions (each individual series by itself) are not guaranteed to satisfy coherence.
- Quantiles are usually univariate by definition; I suspect one could try to build elliptic sets based on covariances to achieve 95th coverage, but that is speculative. Your question/requirement is really interesting.
- An interesting idea would be to condition on the total aggregated series hitting its 95th quantile and see which samples achieved that. You would, of course, need to somehow reduce the variance of the samples.

Regarding your limited data:
- I suggest starting simple; predicting 18 months ahead with such limited data might be challenging. Without enough data, NHITS might be capturing a Naive1 prediction and, if you are lucky, a simple trend in your data.
- You might want to check whether there are larger datasets similar in nature to yours and see if you can pre-train the models on them.

On the hyperparameters of HINT/NHITS:
- My recommendation is to always start with random seeds and learning rates. (Still, you may have a very limited validation signal.)
- After that, you can explore the complexity of the model through layer depth and the number of hidden units.
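The conditioning idea above can be sketched with plain NumPy: draw coherent bottom-level sample paths, locate the 95th quantile of the aggregate, and keep the paths whose total lands near it. The Gaussian toy samples and the tolerance band are assumptions for illustration, not part of HINT itself.

```python
import numpy as np

# Stand-in for coherent bottom-level sample paths from a probabilistic
# model (HINT samples would play this role); shape: (n_samples, n_series).
rng = np.random.default_rng(0)
samples = rng.normal(loc=[10.0, 5.0, 2.0], scale=[3.0, 2.0, 1.0], size=(5000, 3))

# Aggregate each sample path to the total series.
totals = samples.sum(axis=1)

# Empirical 95th quantile of the aggregate.
q95 = np.quantile(totals, 0.95)

# Keep only the sample paths whose aggregate lands near that quantile
# (the width of the band is an arbitrary choice for this sketch).
band = 0.05 * totals.std()
conditioned = samples[np.abs(totals - q95) <= band]

# These paths show how the bottom series combine when the total hits its
# 95th quantile; averaging them reduces the variance of the samples.
profile = conditioned.mean(axis=0)
```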
p
Thank you for your response, Kin! I'm definitely observing the "Naive1" prediction. If forecasting the raw signals and reconciling them does not perform well, I might preprocess all the time series in the hierarchy by taking a rolling p95 quantile and forecasting those metrics directly. I won't be able to use HINT in that case, but the signals should be smoother and the model would not have to learn seasonality patterns as much.
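The rolling-p95 preprocessing could look like the following pandas sketch; the 30-day window, the gamma-distributed toy data, and the monthly downsampling are assumptions for illustration.

```python
import numpy as np
import pandas as pd

# Toy spiky daily series standing in for one series of the hierarchy.
idx = pd.date_range("2023-01-01", periods=180, freq="D")
rng = np.random.default_rng(1)
y = pd.Series(rng.gamma(shape=2.0, scale=10.0, size=len(idx)), index=idx)

# Rolling 95th percentile over a 30-day window: a smoother target that
# tracks the upper envelope of the signal instead of the raw spikes.
p95 = y.rolling(window=30, min_periods=30).quantile(0.95)

# Optionally keep one value per month as the series to forecast.
monthly_p95 = p95.resample("MS").last().dropna()
```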
k
There is an interesting trend; you might want to try NBEATSx with a linear trend stack.
Predictions will be mostly the trend and the Naive1 level.
But for this amount of data, I think that is a reasonable expectation.
Here is an example of NBEATSx; you can set n_polynomials=1.
Another recommendation I can make is to use the HuberMQLoss, so that the spikes in your data won't affect the predictions as much: https://nixtla.github.io/neuralforecast/losses.pytorch.html#huberized-mqloss
n
Hi, for this business problem, could it be a good idea to do temporal hierarchical forecasting? Group the data by day, week, and month, forecast each "group", and reconcile. I'm not sure if it would help.
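The temporal-hierarchy idea rests on the fact that daily, weekly, and monthly totals of the same series must agree. A pandas sketch of building the levels (the toy Poisson data is an assumption; the reconciliation step itself is out of scope here):

```python
import numpy as np
import pandas as pd

# One daily series; the weekly and monthly levels are sums of the same data.
idx = pd.date_range("2023-01-02", periods=84, freq="D")  # 12 full Mon-Sun weeks
daily = pd.Series(
    np.random.default_rng(2).poisson(20, size=len(idx)).astype(float), index=idx
)

weekly = daily.resample("W-SUN").sum()   # weekly totals, weeks ending Sunday
monthly = daily.resample("MS").sum()     # monthly totals

# Temporal coherence: every level aggregates to the same grand total.
# This is the constraint a reconciliation step would enforce across the
# separate daily, weekly, and monthly forecasts.
assert daily.sum() == weekly.sum() == monthly.sum()
```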
k
I always recommend starting with simple baselines and building on top of them. Otherwise it is difficult to tell whether more complicated ideas are helping the forecasting task.
👍 1
p
Thank you for the tips. I'll try NBEATSx next