# neural-forecast
r
I have a question on hyperparameter tuning for an LSTM model. I'm building a multivariate forecasting model using LSTM, for which I only have `hist_exog_list`. How can I make this a learnable param? Below is the parameter-tuning code I'm using:
```python
import optuna

from neuralforecast import NeuralForecast
from neuralforecast.models import LSTM
from neuralforecast.losses.numpy import mape

# h, regressor_cols, gTrends, train_df, df and test_df are defined elsewhere.

def objective(trial):
    # Architecture and optimization hyperparameters to search over
    encoder_n_layers = trial.suggest_int('encoder_n_layers', 1, 10)
    # suggest_float(..., log=True) replaces the deprecated suggest_loguniform
    learning_rate = trial.suggest_float('learning_rate', 1e-5, 1e-1, log=True)
    input_size = trial.suggest_int('input_size', 1, 120)
    inference_input_size = trial.suggest_int('inference_input_size', 1, 120)
    batch_size = trial.suggest_categorical('batch_size', [16, 32, 64])
    # random_seed = trial.suggest_int('random_seed', 1, 10)
    max_steps = 100
    val_check_steps = 50
    scaler_type = trial.suggest_categorical(
        'scaler_type',
        ['standard', 'revin', 'invariant', 'minmax1', 'robust', 'identity'],
    )
    encoder_hidden_size = trial.suggest_int('encoder_hidden_size', 50, 800)
    encoder_dropout = trial.suggest_float('encoder_dropout', 0.0, 0.7)
    decoder_layers = trial.suggest_int('decoder_layers', 1, 5)

    models_tmp = [
        LSTM(
            h=h,
            input_size=input_size,
            inference_input_size=inference_input_size,
            encoder_n_layers=encoder_n_layers,
            learning_rate=learning_rate,
            max_steps=max_steps,
            batch_size=batch_size,
            hist_exog_list=regressor_cols,
            futr_exog_list=gTrends,
            val_check_steps=val_check_steps,
            scaler_type=scaler_type,
            encoder_hidden_size=encoder_hidden_size,
            encoder_dropout=encoder_dropout,
            decoder_layers=decoder_layers,
        )
    ]

    # Fit on the training data and forecast over the test horizon
    model_xy = NeuralForecast(models=models_tmp, freq='W-SAT')
    model_xy.fit(train_df)

    p = model_xy.predict(futr_df=df).reset_index()
    p = p.merge(test_df[['ds', 'unique_id', 'y']], on=['ds', 'unique_id'], how='left')

    # Minimize MAPE on the held-out actuals
    loss = mape(p['y'], p['LSTM'])
    return loss

def run_hyper(trials=2):
    study = optuna.create_study(direction='minimize')
    study.optimize(objective, n_trials=trials)
    return study
```
m
What do you mean by learnable param? Do you mean whether to use `hist_exog_list` or not?
r
No, not like that. More along these lines: we pass `batch_size` as a learnable param, right, and afterwards the function will say 32 is the best value. Similarly, for `hist_exog_list`, can we find which features give the better MAPE?
m
Aren't you already doing that with the line `batch_size = trial.suggest_categorical("batch_size", [16, 32, 64])`?
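You can extend the same idea to the features themselves: suggest a boolean per candidate column and build `hist_exog_list` from whichever ones the trial turns on. A minimal sketch, assuming `regressor_cols` holds your candidate historical exogenous columns (the `use_<col>` parameter names are just illustrative):

```python
def objective(trial):
    # Feature selection as a hyperparameter: one boolean suggestion
    # per candidate exogenous column.
    selected_exog = [
        col for col in regressor_cols
        if trial.suggest_categorical(f'use_{col}', [True, False])
    ]

    model = LSTM(
        h=h,
        max_steps=100,
        # Fall back to None when a trial selects no features at all
        hist_exog_list=selected_exog or None,
        futr_exog_list=gTrends,
    )
    nf = NeuralForecast(models=[model], freq='W-SAT')
    nf.fit(train_df)

    p = nf.predict(futr_df=df).reset_index()
    p = p.merge(test_df[['ds', 'unique_id', 'y']], on=['ds', 'unique_id'], how='left')
    return mape(p['y'], p['LSTM'])
```

The best trial's `study.best_params` then contains the `use_<col>` flags, i.e. the feature subset that minimized MAPE. Note the search space doubles with each candidate column, so budget trials accordingly (or tune the feature subset and the other hyperparameters in separate studies).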