# mlforecast
j
can you explain a bit more? you can use the ml models in combination with any tuning library you want (e.g. optuna). instead of having a sklearn model (for example) you just use the ml models with the specific params.
k
Just wanted to know if there's an easy way to pass an Optuna tuning function to optimize each horizon model defined in MLForecast(models=[model]). As far as I understand, if I do
```python
def tune(trial):
    mlf = MLForecast(models=[model])
    mlf.fit(df, max_horizon=horizon)
```
I'll get hyperparameters based on the output over the whole horizon. What I'd like to do is somehow apply that tune function to each horizon separately (so run tune 30 times in that example).
j
i guess then you would need to build a loop over horizon = [0, 1, 2, ..., n], calculate the model error for each horizon and aggregate at the end. probably a faster way would be to run this in steps of 5 or so, but at least not with a step size of 1.

but my question is: do you really want that? do you not know your forecast horizon upfront? and if you have different fc horizons, maybe it makes sense to tune different models?

in terms of implementation it is simple: in addition to your original dictionary for optuna, you define a list (or so) of all possible values for horizon inside your objective function. then you start your loop: for n in n_horizon: fit() + predict(horizon=n) + store mse (or whatever). this will take a while depending on how many steps you want to test. was that helpful?
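A minimal sketch of that loop idea, with hand-built lag features and a small grid standing in for the Optuna study — `make_xy`, the toy series, and the grid values are all illustrative, not mlforecast or Optuna API:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
y = np.sin(np.arange(200) / 10) + rng.normal(scale=0.1, size=200)

def make_xy(series, n_lags, horizon):
    # lag features X and the target shifted `horizon` steps ahead
    X, t = [], []
    for i in range(n_lags, len(series) - horizon):
        X.append(series[i - n_lags:i])
        t.append(series[i + horizon])
    return np.array(X), np.array(t)

def objective(alpha, horizons, n_lags=5):
    # aggregate the holdout error over every requested horizon
    errors = []
    for h in horizons:
        X, t = make_xy(y, n_lags, h)
        split = int(len(X) * 0.8)
        model = Ridge(alpha=alpha).fit(X[:split], t[:split])
        errors.append(mean_squared_error(t[split:], model.predict(X[split:])))
    return float(np.mean(errors))

# steps of 5 instead of 1, as suggested above; a tiny grid stands in
# for optuna's study.optimize
horizons = range(0, 30, 5)
best_alpha = min([0.01, 0.1, 1.0, 10.0], key=lambda a: objective(a, horizons))
```

With Optuna you would sample `alpha` from the trial inside `objective` instead of looping over a fixed grid; the horizon loop stays the same.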
k
The max_horizon argument already trains a model for each step in the horizon, so my only "issue" at the moment is how to pass an Optuna tuning function into a process that happens behind the scenes.
j
We don't have a built-in way to do that. My suggestion is to use preprocess to get the features and targets and then run the optimization for each column in the target. Once you're done you can assign the models as a list, like:

```python
mlf.models_[model_name] = [model_h1, model_h2, ...]
```
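A rough sketch of that idea, with the multi-horizon target matrix built by hand to stand in for what preprocess with max_horizon would give you, and a plain grid in place of the Optuna study — only the `models_` assignment at the end is the actual mlforecast attribute, everything else is illustrative:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
series = np.cumsum(rng.normal(size=300))
n_lags, max_horizon = 5, 3

# features: lag matrix; targets: one column per horizon
# (built by hand here; preprocess would produce these for you)
rows = range(n_lags, len(series) - max_horizon)
X = np.array([series[i - n_lags:i] for i in rows])
Y = np.array([[series[i + h] for h in range(max_horizon)] for i in rows])

split = int(len(X) * 0.8)
tuned_models = []
for h in range(max_horizon):
    # tune each horizon's model independently on its own target column
    t = Y[:, h]
    best = min(
        (Ridge(alpha=a).fit(X[:split], t[:split]) for a in (0.1, 1.0, 10.0)),
        key=lambda m: mean_squared_error(t[split:], m.predict(X[split:])),
    )
    tuned_models.append(best)

# afterwards the per-horizon models can be assigned back, e.g.
# mlf.models_[model_name] = tuned_models
```

With Optuna you would run one study per column of `Y`, each sampling its own params, so every horizon ends up with its own best hyperparameters.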
j
i still dont understand what your goal is: do you want to tune each model for each step (from horizon 1 to horizon n)? or do you want to get the best params for one model that does a good job at forecasting different fc horizons? José's answer would apply if you need m models, each with a different fc horizon, right? so for each model you would get a different set of params, or do i understand your answer wrong @José Morales?
k
> you want to tune each model for each step (from horizon 1 to horizon n)?

Precisely. José's answer is correct for my case.