[Chat transcript, 03/07/2025, between Bersu T and jan rathfelder — message bodies from 10:51 AM to 4:54 PM were not recovered in this export. The final message, from jan rathfelder, follows.]
03/07/2025, 5:59 PM
from sklearn.inspection import permutation_importance
import matplotlib.pyplot as plt
# Suppose you extract a fitted model from your AutoMLForecast object.
# (`best_model_` is illustrative here — check your mlforecast version's API
# for how fitted models are actually exposed.)
best_model = auto_mlf.best_model_
# Prepare your validation data in the format the fitted model expects:
# X_val must contain the same engineered features (lags, date features, etc.)
# the model was trained on, and y_val the corresponding targets.
X_val = validation_df.drop(columns=['y'])
y_val = validation_df['y']
# Compute permutation importance with an appropriate scoring metric
# (e.g., neg_mean_absolute_error for a regression model).
result = permutation_importance(
    best_model,
    X_val,
    y_val,
    scoring='neg_mean_absolute_error',
    n_repeats=10,
    random_state=42,
)
# Plot the mean importance values with their standard deviations.
# With neg_mean_absolute_error, importances_mean is the *increase* in MAE
# when a feature is shuffled, so larger bars mean more important features.
plt.figure(figsize=(10, 6))
plt.bar(range(len(result.importances_mean)), result.importances_mean, yerr=result.importances_std)
plt.xticks(range(len(result.importances_mean)), X_val.columns, rotation=45, ha='right')
plt.ylabel("Increase in MAE when feature is permuted")
plt.title("Permutation Feature Importance")
plt.tight_layout()
plt.show()
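To see what `permutation_importance` is actually computing, here is a minimal from-scratch sketch of the idea using only NumPy. The toy data and the stand-in `predict` function are assumptions for illustration (they are not from the thread): shuffling a feature column breaks its relationship to the target, and the resulting increase in error is that feature's importance.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: y depends strongly on feature 0 and only weakly on feature 1.
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0] + 0.1 * X[:, 1]

# Stand-in for a fitted model: here simply the true function.
def predict(X):
    return 3.0 * X[:, 0] + 0.1 * X[:, 1]

def mae(y_true, y_pred):
    return np.abs(y_true - y_pred).mean()

baseline = mae(y, predict(X))

importances = []
for j in range(X.shape[1]):
    scores = []
    for _ in range(10):  # analogous to n_repeats=10
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])   # shuffle one feature column
        scores.append(mae(y, predict(Xp)) - baseline)  # increase in MAE
    importances.append(float(np.mean(scores)))

print(importances)  # feature 0 should dominate feature 1
```

This mirrors what scikit-learn does under the hood (modulo its scoring-sign convention), and makes clear why the bar heights in the plot above are read as "how much worse the model gets without this feature's information."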