# mlforecast
j
Besides distributed training, which methods do you know to speed up tuning? I'm not using the auto models because I'm also tuning my lag_transform features, but it gets slow since I'm running > 500 Optuna iterations on larger datasets. I've already added median pruning in Optuna. If you know any other tricks or parameters to pick (not directly in the search space) to speed up the search, please let me know. I'm happy about any additional ideas.
Update: I just saw that the auto models also allow tuning additional params, very cool.
t
A big slowdown will always be having to recompute the lags, target scaler, transformations, and all that on every trial. It's a hassle, but you could compute everything upfront and Optuna would just grab columns.
👍 2
j
Ok, I think I understand what you mean: don't use the built-in lag_transform directly, but create all possible columns upfront and then, based on Optuna's picks, select the lag_transform columns manually and add them as columns. I like that idea, thanks!
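A minimal sketch of the idea discussed above, using only pandas. The function name `precompute_features`, the column-naming scheme, and the specific lags/windows are illustrative assumptions, not mlforecast API; the point is that all candidate features are built once, and each tuning trial then only selects a subset of the already-computed columns instead of recomputing transformations:

```python
import pandas as pd

def precompute_features(df, lags=(1, 7), windows=(7, 28)):
    """Build every candidate lag / rolling feature once, up front.

    Hypothetical helper (not part of mlforecast): expects a long-format
    frame with 'unique_id' and 'y' columns, sorted by time within each id.
    """
    out = df.copy()
    grouped = out.groupby("unique_id")["y"]
    for lag in lags:
        # plain lag feature
        out[f"lag{lag}"] = grouped.shift(lag)
        for w in windows:
            # rolling mean over the lagged series, computed per series
            # so windows never cross group boundaries
            out[f"rolling_mean_lag{lag}_w{w}"] = (
                out.groupby("unique_id")[f"lag{lag}"]
                .transform(lambda s: s.rolling(w).mean())
            )
    return out

def select_columns(picks, feature_cols):
    """Per-trial column selection.

    `picks` stands in for what an Optuna trial would produce, e.g.
    picks = {c: trial.suggest_int(f"use_{c}", 0, 1) for c in feature_cols};
    here it is just a dict of 0/1 flags so the sketch runs standalone.
    """
    return [c for c in feature_cols if picks.get(c)]
```

Inside the Optuna objective you would then slice the precomputed frame with the selected columns and pass those as regular exogenous features, so each trial skips the expensive feature computation entirely.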