Hello @Marco, does Nixtla support feature importance for TiDE?
Olivier
03/21/2025, 3:13 PM
No, but you can use something like KernelExplainer from the SHAP package.
Olivier
03/21/2025, 3:16 PM
It is slow, though, but it gives the (imho) most intuitive results. Most feature-importance methods for neural networks that rely on investigating activations (e.g. TFT) are less intuitive, since hidden states or activations don't necessarily correlate with "importance".
Marco
03/21/2025, 3:20 PM
I plan to include SHAP values from the KernelExplainer for all models, but I'm still unsure about the timeframe. In the meantime, what Olivier suggested is the way to go 🙂
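For reference, here is a minimal sketch of what the KernelExplainer approach could look like for a TiDE model in neuralforecast. It is an illustration under assumptions, not a supported recipe: the synthetic data, the exogenous column names ("price", "temperature"), the single-series setup, and the hyperparameters are all made up for the example. The idea is to wrap the fitted model's predict call in a plain function over the exogenous features so that SHAP can perturb them.

```python
# Sketch: model-agnostic SHAP values for a fitted NeuralForecast/TiDE model
# by perturbing its future exogenous features with KernelExplainer.
import numpy as np
import pandas as pd
import shap

from neuralforecast import NeuralForecast
from neuralforecast.models import TiDE

# --- assumed setup: one series with two hypothetical exogenous features ---
rng = np.random.default_rng(0)
train_df = pd.DataFrame({
    "unique_id": "series_1",
    "ds": pd.date_range("2024-01-01", periods=200, freq="D"),
    "price": rng.random(200),
    "temperature": rng.random(200),
})
train_df["y"] = 2 * train_df["price"] - train_df["temperature"] + rng.normal(0, 0.1, 200)

exog_cols = ["price", "temperature"]
nf = NeuralForecast(
    models=[TiDE(h=1, input_size=28, futr_exog_list=exog_cols, max_steps=100)],
    freq="D",
)
nf.fit(df=train_df)

# Template for the single future timestamp whose forecast we want to explain.
futr_template = pd.DataFrame({
    "unique_id": "series_1",
    "ds": [train_df["ds"].max() + pd.Timedelta(days=1)],
})

def predict_fn(X: np.ndarray) -> np.ndarray:
    """Map a (n_samples, n_features) array of exogenous values to forecasts.

    KernelExplainer perturbs the exogenous features; each row becomes one
    futr_df passed to nf.predict, which is why this is slow but model-agnostic.
    """
    preds = []
    for row in X:
        futr_df = futr_template.copy()
        futr_df[exog_cols] = row.reshape(1, -1)
        forecast = nf.predict(futr_df=futr_df)
        preds.append(forecast["TiDE"].values[0])
    return np.array(preds)

# Small background sample of historical exogenous values keeps the number of
# model evaluations manageable.
background = train_df[exog_cols].sample(20, random_state=0).values
explainer = shap.KernelExplainer(predict_fn, background)

# Explain the forecast at the (assumed known) future exogenous values.
x_future = np.array([[0.5, 0.3]])
shap_values = explainer.shap_values(x_future, nsamples=100)
print(dict(zip(exog_cols, np.ravel(shap_values))))
```

The slowness Olivier mentions comes from the loop above: every perturbed sample KernelExplainer generates triggers a full nf.predict call, so keeping the background set and nsamples small matters in practice.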
Ankit Hemant Lade
03/21/2025, 4:08 PM
Thank you @Olivier and @Marco for getting back to me!