# neural-forecast
Hi @Layon Hu, thanks for using our library! I see that you are working on long-horizon forecasting, but you set `horizon=1`. The model is only learning to forecast the next timestamp, which could explain the behavior on the plot. With `cross_validation` the forecasts are rolled, using the latest true value available (t-1 in this case, because the horizon is 1). It is still strange that the forecasts are so consistently off. Can you try with `AutoNHITS` to see how it compares?
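For reference, a minimal `AutoNHITS` sketch, assuming the default search space (`h=1` matches your current setting; `num_samples` is the number of tuning trials and is an illustrative value):

from neuralforecast.core import NeuralForecast
from neuralforecast.auto import AutoNHITS

# AutoNHITS tunes the NHITS hyperparameters over its default search space
model = AutoNHITS(h=1, num_samples=10)
nf = NeuralForecast(models=[model], freq='s')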
Thank you for your reply. The following is the prediction result from the AutoNHITS model (I tried to keep the hyperparameters in the config consistent with those in DeepAR). I set h to 1 before mainly because that is the only value that works: when I set it to 2 or larger, I get a CUDA out-of-memory error (QAQ).

In addition, I have a question. When I checked the loss information for each trial during training, I found that the `lightning_logs` folder (under the same path as the code) records `train_loss_step` and `train_loss_epoch` for the training set and `valid_loss` for the validation set. The number of `train_loss_step` entries matches the number of steps, but `valid_loss` is only recorded once every hundred steps. Can this be adjusted so that `valid_loss` is recorded at every step? If so, which source file should I modify?
import plotly.io as pio
pio.renderers.default = 'browser'
import optuna
import pandas as pd
import datetime
import warnings
warnings.filterwarnings("ignore")
import matplotlib.pyplot as plt
from neuralforecast.core import NeuralForecast
from neuralforecast.models import NHITS
from neuralforecast.losses.pytorch import DistributionLoss, MQLoss
data = pd.read_csv("IMF6.csv")
data['ds'] = pd.to_datetime(data['ds'], unit='s')
val_size = 64*320
test_size = 64*320
horizon = 1  # h of 2 or larger hits CUDA out-of-memory (see message above)
models = [NHITS(
    h = horizon,
    input_size = 40,
    futr_exog_list = [
        'railway',
        'SPEED',
    ],
    hist_exog_list = None,
    stat_exog_list = None,
    exclude_insample_y = False,
    stack_types = ["identity", "identity", "identity"],
    n_blocks = [1, 1, 1],
    mlp_units = 2 * [[128, 128]],
    n_pool_kernel_size = [16, 8, 1],
    n_freq_downsample = [24, 12, 1],
    pooling_mode = "MaxPool1d",
    interpolation_mode = "linear",
    dropout_prob_theta = 0.0,
    activation = "ReLU",
    loss = DistributionLoss(distribution="Normal", level=[60, 90], return_params=False),
    valid_loss = MQLoss(level=[60, 90]),
    max_steps = 1600,
    learning_rate = 0.00011156,
    num_lr_decays = 3,
    early_stop_patience_steps = -1,
    val_check_steps = 100,  # valid_loss is computed and logged every 100 steps
    batch_size = 20,
    valid_batch_size = None,
    windows_batch_size = 64,
    inference_windows_batch_size = -1,
    start_padding_enabled = False,
    step_size = 1,
    scaler_type = "robust",
    random_seed = 1,
    num_workers_loader = 0,
    drop_last_loader = False,
)]
nf = NeuralForecast(models=models, freq='s')
Y_hat_df = nf.cross_validation(df=data, val_size=val_size,
test_size=test_size, n_windows=None)
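On the `valid_loss` cadence: it appears to come from `val_check_steps = 100` in the config above rather than from anything inside `lightning_logs`, so no source-file edit should be needed, assuming NeuralForecast forwards this value to the underlying PyTorch Lightning trainer as its validation interval. A minimal sketch (`val_check_steps=1` and the smaller `windows_batch_size` are illustrative values; shrinking `windows_batch_size` is also one way to try `h=2` within GPU memory):

from neuralforecast.core import NeuralForecast
from neuralforecast.models import NHITS

models = [NHITS(
    h = 1,                    # h = 2 may fit after lowering the batch settings below
    input_size = 40,
    max_steps = 1600,
    val_check_steps = 1,      # compute and log valid_loss every training step
    windows_batch_size = 32,  # illustrative: fewer windows per batch lowers GPU memory
)]
nf = NeuralForecast(models=models, freq='s')

Note that validating at every step will slow training considerably.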