# general
p
Ahh. Sorry for the noise. Source code never lies. `mlf_evaluation.models_` it is. Never mind. Thx anyway for that great toolkit.
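(For reference, a minimal sketch of what that attribute exposes, assuming `mlf_evaluation` is a fitted `mlforecast.MLForecast` object; the toy series and the LinearRegression model below are illustrative, not from the thread.)

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from mlforecast import MLForecast

# Toy series in the long format MLForecast expects (unique_id, ds, y).
df = pd.DataFrame({
    "unique_id": "serie_1",
    "ds": pd.date_range("2024-01-01", periods=30, freq="D"),
    "y": np.arange(30, dtype="float64"),
})

mlf_evaluation = MLForecast(models=[LinearRegression()], freq="D", lags=[1, 7])
mlf_evaluation.fit(df)

# models_ maps each model's name to the fitted estimator,
# so the trained parameters are directly accessible.
lr = mlf_evaluation.models_["LinearRegression"]
print(lr.coef_, lr.intercept_)
```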
j
Sorry for the confusion, we're working on a tutorial on accessing the trained models' parameters that will mention this.
👍 1
This is the guide. Please let us know if you feel something's missing.
👀 1
p
Hi, thx for that. I think that's helpful.
It's really powerful that NIXTLA just wraps the original models. If you think that's desirable, I could add another paragraph after the SHAP section where I use a CatBoost model and plot the feature importance (see the sketch below)? Just to bring home the point: you can access all the models and do the same model diagnostics you would do with PyTorch/scikit-learn/CatBoost/LightGBM/XGBoost.
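(As a rough illustration of that idea, here's a hedged sketch: pull the fitted CatBoost model out of `models_` and use CatBoost's own feature-importance API. The variable names and toy data are illustrative, not from the guide.)

```python
import numpy as np
import pandas as pd
from catboost import CatBoostRegressor
from mlforecast import MLForecast

# Toy series; in practice this would be the data used in the guide.
df = pd.DataFrame({
    "unique_id": "serie_1",
    "ds": pd.date_range("2023-01-01", periods=200, freq="D"),
    "y": np.random.rand(200),
})

mlf = MLForecast(models=[CatBoostRegressor(verbose=0)], freq="D", lags=[1, 7, 14])
mlf.fit(df)

# Because MLForecast only wraps the estimator, models_ hands back a plain
# CatBoost model, so the usual CatBoost diagnostics work unchanged.
cb = mlf.models_["CatBoostRegressor"]
print(cb.get_feature_importance(prettified=True))  # importance per lag feature
```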
j
Sure, go ahead. We use nbdev, so you'll have to modify that notebook (nbs/docs/how-to-guides/analyzing_models.ipynb). We have these instructions on how to contribute, but feel free to reach out if you need any help.