# neural-forecast

Naveen Chandra

09/14/2023, 4:13 PM
Hi Nixtla Team. Thanks for developing a wonderful library for forecasting. I am getting an error when I try to fit AutoLSTM and AutoNHITS. The error is as follows:

```
--> 168             raise RuntimeError(error_msg)
    169
    170         return self._trial_to_result(best_trial)
RuntimeError: No best trial found for the given metric: loss. This means that no trial has reported this metric, or all values reported for this metric are NaN. To not ignore NaN values, you can set the filter_nan_and_inf arg to False.
```

My config looks like this:

```python
%%capture
from ray import tune

config_nhits = {
    "input_size": tune.choice([4, 8]),                   # Length of input window
    "start_padding_enabled": True,
    "n_blocks": 3 * [1],                                 # Number of blocks per stack
    "activation": "relu",
    "mlp_units": 3 * [[3, 3]],                           # Units per MLP layer
    "n_pool_kernel_size": tune.choice([3 * [1], 3 * [2], 3 * [4],
                                       [8, 4, 2, 1, 1]]),   # MaxPooling kernel size
    "n_freq_downsample": tune.choice([[8, 4, 2, 1, 1],
                                      [1, 1, 1, 1, 1]]),    # Interpolation expressivity ratios
    "learning_rate": tune.loguniform(1e-4, 1e-2),        # Initial learning rate
    "scaler_type": tune.choice(['robust', 'standard']),  # Scaler type
    "max_steps": tune.choice([500, 1000]),               # Max number of training iterations
    "batch_size": tune.choice([1, 2, 4, 8, 16, 32, 64, 128]),          # Number of series in batch
    "windows_batch_size": tune.choice([1, 2, 4, 8, 16, 32, 64, 128]),  # Number of windows in batch
    "stack_types": ['trend', 'seasonality'],
    "random_seed": tune.randint(1, 20),                  # Random seed
    "hist_exog_list": ['VOICE_TRAFFIC', 'SUCCESSFULL_CALLS'],
}

config_lstm = {
    "input_size": tune.choice([4, 8]),                            # Length of input window
    "encoder_hidden_size": tune.choice([4, 8, 16, 32, 64, 128]),  # Hidden size of LSTM cells
    "encoder_activation": tune.choice(['relu']),
    "encoder_n_layers": tune.choice([1, 2, 3]),                   # Number of layers in LSTM
    "learning_rate": tune.loguniform(1e-4, 1e-2),                 # Initial learning rate
    "scaler_type": tune.choice(['robust', 'standard']),           # Scaler type
    "max_steps": tune.choice([100, 200]),                         # Max number of training iterations
    "batch_size": tune.choice([2, 4, 8, 16, 32, 64, 128]),        # Number of series in batch
    "random_seed": tune.randint(1, 20),                           # Random seed
    "hist_exog_list": ['VOICE_TRAFFIC', 'SUCCESSFULL_CALLS'],
}

config_nbeatsx = {
    "input_size": tune.choice([4, 8]),                   # Length of input window
    "learning_rate": tune.loguniform(1e-4, 1e-2),
    "scaler_type": tune.choice(['robust', 'standard']),
    "max_steps": tune.choice([100, 200]),
    "n_polynomials": 0,
    "n_blocks": [1, 1],
    "stack_types": ['trend', 'seasonality'],
    "dropout_prob_theta": 0.5,
    "mlp_units": [[4, 4], [8, 8], [16, 16], [32, 32]],
    "random_seed": tune.randint(1, 20),
    "hist_exog_list": ['VOICE_TRAFFIC', 'SUCCESSFULL_CALLS'],
}

# config_tft = {
#     "input_size": tune.choice([4, 8]),
#     "hidden_size": tune.choice([8, 16]),
#     "learning_rate": tune.loguniform(1e-4, 1e-2),
#     "scaler_type": tune.choice(['robust']),
#     "max_steps": tune.choice([250, 500]),
#     "batch_size": tune.choice([8, 16]),
#     "random_seed": tune.randint(1, 20),
#     "hist_exog_list": ['VOICE_TRAFFIC', 'SUCCESSFULL_CALLS'],
# }
```

My prediction code looks like this:

```python
import numpy as np
import pandas as pd
import pytorch_lightning as pl
import matplotlib.pyplot as plt

from neuralforecast import NeuralForecast
from neuralforecast.auto import AutoLSTM, AutoNHITS, AutoTFT, AutoNBEATSx
from neuralforecast.losses.pytorch import MQLoss, MAE, HuberMQLoss
from ray.tune.search.hyperopt import HyperOptSearch

levels = [80, 90]
h = Y_test_df['ds'].nunique()

# search_alg=HyperOptSearch(metric="loss", mode="min"),
modelNHITS = AutoNHITS(h=h, loss=MAE(), config=config_nhits, num_samples=50, cpus=1)
modelLSTM = AutoLSTM(h=h, loss=MAE(), config=config_lstm, num_samples=50, cpus=1)
# modelNBEATSx = AutoNBEATSx(h=h, config=config_nbeatsx, loss=MAE(), valid_loss=MAE(), num_samples=50, cpus=1)
# modelTFT = AutoTFT(h=h, config=config_tft, loss=MAE(), valid_loss=MAE(), num_samples=50, cpus=1)

nf = NeuralForecast(models=[modelNHITS, modelLSTM], freq='W')
nf.fit(df=Y_train_df)

# Y_hat_df = nf.predict()
Y_hat_df = nf.predict_insample(step_size=h)
# Y_hat_df = Y_hat_df.reset_index(drop=False).drop(columns=['unique_id', 'ds'])

# plot_df = pd.concat([Y_test_df, Y_hat_df], axis=1)
# plot_df = pd.concat([Y_train_df, plot_df])
# plot_df = plot_df[plot_df.unique_id == 'WEEKLY'].drop('unique_id', axis=1)
# plt.plot(plot_df['ds'], plot_df['y'], c='black', label='Actual')
# plt.plot(plot_df['ds'], plot_df['AutoNHITS'], c='blue', label='NHITS-Prediction')
# plt.plot(plot_df['ds'], plot_df['AutoLSTM'], c='purple', label='LSTM-Prediction')
## plt.plot(plot_df['ds'], plot_df['AutoTFT-median'], c='green', label='AutoTFT-Prediction')
## plt.plot(plot_df['ds'], plot_df['AutoNBEATSx'], c='Green', label='NBeatsx-Prediction')
## plt.fill_between(x=plot_df['ds'][-40:],
##                  y1=plot_df['AutoLSTM-lo-90'][-40:].values,
##                  y2=plot_df['AutoLSTM-hi-90'][-40:].values,
##                  alpha=0.4, label='level 90')
# plt.legend()
# plt.grid()
# plt.plot()

plt.figure(figsize=(10, 5))
plt.plot(Y_hat_df['ds'], Y_hat_df['y'], label='True')
plt.plot(Y_hat_df['ds'], Y_hat_df['AutoNHITS'], c='blue', label='NHITS-Forecast')
plt.plot(Y_hat_df['ds'], Y_hat_df['AutoLSTM'], c='purple', label='LSTM-Forecast')
# plt.plot(Y_hat_df['ds'], Y_hat_df['AutoNBEATSx'], c='green', label='NBeatsx-Prediction')
plt.axvline(Y_hat_df['ds'].iloc[-20], color='black', linestyle='--', label='Train-Test Split')
plt.xlabel('Timestamp [t]')
plt.ylabel('Demand for Voice Services (Call Attempts)')
plt.grid()
plt.legend()
```

Can someone please guide me on what I am doing wrong here? Any help/guidance would be appreciated.

Cristian (Nixtla)

09/14/2023, 4:23 PM
Hi @Naveen Chandra! Tune won't stop if an individual run hits a bug; it simply moves on to the next one. The error you see occurs when all `num_samples` runs failed. I suggest starting simpler, with much smaller configs and fewer `num_samples` (3 or 5). There are some issues with the config: for instance, NHITS only has `identity` blocks, no trend or seasonality. Also, the number of elements in `n_pool_kernel_size` needs to match the number of stacks (the length of `stack_types`).

Here is a shorter config that should work:

```python
config_nhits = {
    "input_size": tune.choice([4, 8]),              # Length of input window
    "start_padding_enabled": True,
    "stack_types": 3*['identity'],
    "n_blocks": 3*[1],                                           # Length of input window
    "mlp_units": 3 * [[128, 128]],                                  # Length of input window
    "n_pool_kernel_size": tune.choice([3*[1], 3*[2], 3*[4]]),            # MaxPooling Kernel size
    "n_freq_downsample": tune.choice([[4, 2, 1],
                                      [1, 1, 1]]),            # Interpolation expressivity ratios
    "learning_rate": tune.loguniform(1e-4, 1e-2),                   # Initial Learning rate
    "scaler_type": tune.choice(['robust', 'standard']),                             # Scaler type
    "max_steps": tune.choice([500, 1000]),                               # Max number of training iterations
    "batch_size": tune.choice([32,64,128]),                          # Number of series in batch
    "windows_batch_size": tune.choice([64, 128, 256]),             # Number of windows in batch
    "random_seed": tune.randint(1, 20),                             # Random seed
}
```
Some other tips:
- `mlp_units` of 3 is extremely small; I changed it to 128.
- `batch_size` and `windows_batch_size` of 1 are also too small.

Let me know if the config above works. Try with `num_samples=2` while debugging.
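For instance, a minimal sketch reusing the snippet above (assuming `h`, `Y_train_df`, and `config_nhits` are defined as in your code):

```python
from neuralforecast import NeuralForecast
from neuralforecast.auto import AutoNHITS
from neuralforecast.losses.pytorch import MAE

# Keep num_samples tiny while debugging so a broken search space fails fast
modelNHITS = AutoNHITS(h=h, loss=MAE(), config=config_nhits, num_samples=2, cpus=1)
nf = NeuralForecast(models=[modelNHITS], freq='W')
nf.fit(df=Y_train_df)
```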

Naveen Chandra

09/14/2023, 4:31 PM
Hi Cristian, thanks for such a quick response! I am getting this error, which I have no idea how to fix with the config I mentioned. Any pointers on why I am getting it: RuntimeError: No best trial found for the given metric: loss. This means that no trial has reported this metric, or all values reported for this metric are NaN. To not ignore NaN values, you can set the `filter_nan_and_inf` arg to False.

Cristian (Nixtla)

09/14/2023, 4:32 PM
The error you see occurs when all `num_samples` runs failed or reported a NaN loss.
Can you try with the config I sent you?
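One way to surface the underlying failure is to train a single fixed-hyperparameter model outside of Tune, so the real exception is raised instead of being swallowed by the tuner. A sketch, assuming `h` and `Y_train_df` from the snippets above; the fixed hyperparameter values are illustrative:

```python
from neuralforecast import NeuralForecast
from neuralforecast.models import NHITS
from neuralforecast.losses.pytorch import MAE

# Pick one concrete set of values from the search space and train it directly;
# whatever made every Tune trial fail should surface here as a full traceback.
model = NHITS(h=h, input_size=8, loss=MAE(),
              start_padding_enabled=True,
              scaler_type='robust', max_steps=100)
nf = NeuralForecast(models=[model], freq='W')
nf.fit(df=Y_train_df)
```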

Naveen Chandra

09/14/2023, 4:55 PM
@Cristian (Nixtla): Thanks for the config, which solved my problem! Really appreciate your quick responses. Any ideas/suggestions as to which specific config option might have caused this error, so that I can avoid making the same mistake in the future?
Also, I am getting the same issue for the LSTM config. A quick note: I am training on a very small dataset of 101 weekly time samples, hence I was keeping `mlp_units` and the other config values small. My LSTM config:

```python
config_lstm = {
    "input_size": tune.choice([4, 8]),                            # Length of input window
    "encoder_hidden_size": tune.choice([4, 8, 16, 32, 64, 128]),  # Hidden size of LSTM cells
    "encoder_activation": tune.choice(['relu']),
    "encoder_n_layers": tune.choice([1, 2, 3]),                   # Number of layers in LSTM
    "learning_rate": tune.loguniform(1e-4, 1e-2),                 # Initial learning rate
    "scaler_type": tune.choice(['robust', 'standard']),           # Scaler type
    "max_steps": tune.choice([100, 200]),                         # Max number of training iterations
    "batch_size": tune.choice([2, 4, 8, 16, 32, 64, 128]),        # Number of series in batch
    "random_seed": tune.randint(1, 20),                           # Random seed
    "hist_exog_list": ['VOICE_TRAFFIC', 'SUCCESSFULL_CALLS'],
}
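```

By analogy with the NHITS fix above, a smaller LSTM search space to try while debugging; this is a sketch, not a confirmed fix, and the fixed values are illustrative:

```python
# Narrower search space mirroring the debugging advice above (an assumption,
# not a confirmed fix for this dataset).
config_lstm = {
    "input_size": tune.choice([4, 8]),
    "encoder_hidden_size": tune.choice([16, 32]),  # narrower than the original search space
    "encoder_n_layers": tune.choice([1, 2]),
    # encoder_activation omitted here; check whether your installed LSTM version accepts it
    "learning_rate": tune.loguniform(1e-4, 1e-2),
    "scaler_type": "robust",
    "max_steps": 100,
    "batch_size": 32,
    "random_seed": 1,
    "hist_exog_list": ['VOICE_TRAFFIC', 'SUCCESSFULL_CALLS'],
}
```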