# mlforecast
d
Hi, I'm using Optuna to grid search on an LGBM instance. Since that approach was taking too much time, I decided to try the mlforecast_objective option. Now I'm intrigued... clearly, inside the mlforecast_objective it fits and refits (if we enable that). But how can this study optimization run of 200 trials be faster than it takes to fit the model afterwards using the recommended params? On another note, I'm using TPESampler... independent of the params I apply to it, and even if I change certain MLForecast params (such as lags or lag_transforms), the output 'best' params stay the same all the way. I mean, each float param goes down to 1e-17. "Something wrong is not right!" 🤣🤣🤣
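(For anyone landing here, a minimal sketch of the kind of search being described, with placeholder names like `train`, `H`, and `LAGS`; the built-in `mlforecast_objective` helper wraps roughly this same loop, but its exact signature can differ between versions, so this hand-rolled objective just uses `MLForecast.cross_validation` directly:)

```python
import lightgbm as lgb
import numpy as np
import optuna
import pandas as pd
from mlforecast import MLForecast

# Tiny synthetic series as a stand-in for the real training data (placeholder)
dates = pd.date_range('2022-01-03', periods=120, freq='W-MON')
train = pd.DataFrame({
    'unique_id': 'serie_1',
    'ds': dates,
    'y': np.sin(np.arange(120) / 6) + np.random.default_rng(0).normal(0, 0.1, 120),
})

H = 8              # forecast horizon (placeholder)
LAGS = [1, 2, 4]   # lags used during the search (placeholder)

def objective(trial):
    model_params = {
        'learning_rate': trial.suggest_float('learning_rate', 1e-3, 0.3, log=True),
        'num_leaves': trial.suggest_int('num_leaves', 8, 256, log=True),
        'min_child_samples': trial.suggest_int('min_child_samples', 5, 100),
    }
    mlf = MLForecast(
        models=[lgb.LGBMRegressor(**model_params)],
        freq='W-MON',   # same freq as the final model, not 1
        lags=LAGS,
    )
    # each trial only runs a few short cross-validation fits
    cv = mlf.cross_validation(train, n_windows=3, h=H)
    # mean absolute error over the CV windows; any point-forecast loss works here
    return np.abs(cv['y'] - cv['LGBMRegressor']).mean()

study = optuna.create_study(
    direction='minimize',
    sampler=optuna.samplers.TPESampler(seed=0),
)
study.optimize(objective, n_trials=200)
print(study.best_params)
```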
j
Hey. Can you provide some sample code? I have no idea what you're doing
j
Does it even run? The final model gets the lags argument twice (lags=LAGS and **best_cfg['mlf_init_params']). Also, the search doesn't use date features and the final does. Same for the freq: you use 1 in the search and 'W-MON' in the final.
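(To make that first point concrete with a hypothetical `best_cfg`, not the original code: if 'lags' already lives inside `mlf_init_params`, passing `lags=LAGS` on top of the unpacked dict makes Python raise a duplicate-keyword TypeError, so it should only be supplied once:)

```python
import lightgbm as lgb
from mlforecast import MLForecast

LAGS = [1, 2, 4]
# Hypothetical tuned config returned by the study
best_cfg = {'mlf_init_params': {'lags': [1, 2, 4, 8], 'date_features': ['month']}}

# Broken: 'lags' is passed both explicitly and inside the unpacked dict, so this raises
# roughly: TypeError: ... got multiple values for keyword argument 'lags'
# mlf = MLForecast(models=[lgb.LGBMRegressor()], freq='W-MON',
#                  lags=LAGS, **best_cfg['mlf_init_params'])

# Fixed: let the tuned config be the only place lags comes from
mlf = MLForecast(models=[lgb.LGBMRegressor()], freq='W-MON', **best_cfg['mlf_init_params'])
```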
d
yes yes... it's updated now...
j
The final uses max_horizon, so it trains h models instead of one
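(A small sketch of that difference, reusing the placeholder names from the snippet above: leaving max_horizon unset trains one recursive model, while max_horizon=H trains one model per forecast step, so the final fit does roughly H times the work of a single trial's fit:)

```python
import lightgbm as lgb
from mlforecast import MLForecast

# Recursive strategy (what the search above trains): one model reused for every step
mlf_recursive = MLForecast(models=[lgb.LGBMRegressor()], freq='W-MON', lags=LAGS)
mlf_recursive.fit(train)                 # trains 1 model
preds_recursive = mlf_recursive.predict(H)

# Direct strategy (what the final fit uses): max_horizon trains one model per step ahead
mlf_direct = MLForecast(models=[lgb.LGBMRegressor()], freq='W-MON', lags=LAGS)
mlf_direct.fit(train, max_horizon=H)     # trains H models
preds_direct = mlf_direct.predict(H)
```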