# neural-forecast
José Morales:
I think something like this would work:
```python
from neuralforecast import NeuralForecast
from neuralforecast.models import NBEATSx

nf = NeuralForecast(models=[NBEATSx(...)], ...)
nf.fit(Y_train_df)
y_hat = nf.models[0].decompose(dataset=nf.dataset)
```
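If it's useful, you can check what `decompose` returns before relying on a specific layout; this is just a quick inspection, since the exact shape of the output can vary across neuralforecast versions:

```python
# Inspect the decomposition output first; the per-stack/per-basis
# interpretation of its slices is an assumption worth verifying.
components = nf.models[0].decompose(dataset=nf.dataset)
print(type(components), getattr(components, "shape", None))
```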
Afiq Johari:
@José Morales thanks! This works. My actual objective is to extract some explainability information from the model, especially N-HITS. Do you know of any `lime`- or `shap`-equivalent libraries that can help explain the N-HITS model? We're getting reasonably good forecasts after applying some transformations to the exogenous variables, but it doesn't seem possible to extract the feature importances or weights.
Cristian (Nixtla):
Hi @Afiq Johari. This has been a recurring question/request from many users. We are building methods to retrieve measures of feature importance. In the meantime, I can suggest running some what-if scenarios, where you change the values of your variables of interest and measure how the forecasts change. Inference is extremely fast and can be batched, so this should be fairly efficient.
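A minimal sketch of how such a what-if scenario might look end to end (the synthetic data, the `temperature` column, and the model parameters are all illustrative assumptions, not a recommended setup):

```python
import numpy as np
import pandas as pd
from neuralforecast import NeuralForecast
from neuralforecast.models import NHITS

# Build a small synthetic panel with one future exogenous variable.
dates = pd.date_range("2023-01-01", periods=200, freq="D")
df = pd.DataFrame({
    "unique_id": "series_1",
    "ds": dates,
    "y": np.sin(np.arange(200) / 7) + np.random.normal(0, 0.1, 200),
    "temperature": np.cos(np.arange(200) / 30),
})

h = 14
nf = NeuralForecast(
    models=[NHITS(h=h, input_size=4 * h, futr_exog_list=["temperature"], max_steps=100)],
    freq="D",
)
nf.fit(df)

# Baseline future exogenous values vs. a perturbed scenario (+10%).
futr = pd.DataFrame({
    "unique_id": "series_1",
    "ds": pd.date_range(dates[-1] + pd.Timedelta(days=1), periods=h, freq="D"),
    "temperature": np.cos(np.arange(200, 200 + h) / 30),
})
futr_perturbed = futr.assign(temperature=futr["temperature"] * 1.1)

base = nf.predict(futr_df=futr)
scenario = nf.predict(futr_df=futr_perturbed)

# The average forecast shift is a crude sensitivity measure for "temperature".
print((scenario["NHITS"].values - base["NHITS"].values).mean())
```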
Afiq Johari:
@Cristian (Nixtla) thanks for your thoughts. Yes, we're currently thinking of doing something similar, a sensitivity analysis. While looking for solutions, I found some academic articles related to this problem, although I couldn't find any implemented code to play around with:
"TS-MULE: Local Interpretable Model-Agnostic Explanations for Time Series Forecast Models" https://arxiv.org/abs/2109.08438
"WindowSHAP: An efficient framework for explaining time-series classifiers based on Shapley values" https://www.sciencedirect.com/science/article/abs/pii/S1532046423001594
There's also an attempt at using DeepLIFT here, but I couldn't really test it against my actual data: https://github.com/danielhkt/deep-forecasting
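For a rough stand-in while those libraries remain untested, a permutation-style check can give a first feel for feature importance. This sketch reuses `nf` and `futr` from the what-if example above; it is a crude sensitivity heuristic, not an implementation of TS-MULE, WindowSHAP, or DeepLIFT:

```python
import numpy as np

# Shuffle one future exogenous column and measure how far the forecast moves.
# A larger shift suggests the model leans more heavily on that variable.
rng = np.random.default_rng(0)
baseline = nf.predict(futr_df=futr)["NHITS"].values

importances = {}
for col in ["temperature"]:  # extend with your other exogenous columns
    shuffled = futr.assign(**{col: rng.permutation(futr[col].values)})
    perturbed = nf.predict(futr_df=shuffled)["NHITS"].values
    importances[col] = float(np.mean(np.abs(perturbed - baseline)))

print(importances)
```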