# neural-forecast
d
quick question for folks here - I'm building an NHITS model as follows:
```python
model = NeuralForecast(
    models=[
        NHITS(
            loss=MQLoss(level=REPORTED_CONFIDENCE_INTERVALS),
            batch_size=100,
            input_size=XXX,
            h=365,
            max_steps=50,
            num_workers_loader=8,
        )
    ],
    freq="D",
)
```
m
Hello! A forecasting model is trained to take a certain input sequence of numbers and output another sequence of numbers (at a very high level). For example, I can train a model that takes 28 days of data and outputs 7 days of predictions. Now, the best input size depends on the model. Some require longer input sequences, others perform better with shorter inputs. In the case of N-HiTS, you can try 3x to 5x your horizon and see what works best. For Transformer-based models, usually 1x to 2x the horizon is a good input size. I hope this helps!
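Marco's rule of thumb above can be sketched as plain arithmetic (the 3x-5x and 1x-2x multipliers are his suggested starting ranges, not library defaults):

```python
h = 365  # forecast horizon in days, as in the config above

# Rule-of-thumb input_size ranges to try, per the advice above
nhits_range = (3 * h, 5 * h)        # N-HiTS: 3x to 5x the horizon
transformer_range = (1 * h, 2 * h)  # Transformer-based: 1x to 2x the horizon

print(nhits_range)        # -> (1095, 1825)
print(transformer_range)  # -> (365, 730)
```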
d
Super helpful @Marco thank you
while I have you here - what are the main tunable params for speeding this up? I assume
batch_size
max_steps
num_workers_loader
- anything else worth reviewing here?
m
batch_size is mostly for memory management. You can set early_stop_patience_steps to avoid overfitting and end training earlier. Otherwise, speed mostly depends on the model: N-HiTS, N-BEATS, and MLP are super fast; PatchTST and the vanilla Transformer are slower
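A sketch of how early stopping could be wired into the config from earlier in the thread. This assumes neuralforecast's `early_stop_patience_steps` and `val_check_steps` model parameters, and the illustrative values (patience of 5, checks every 10 steps, `val_size=365`) are arbitrary choices, not recommendations:

```python
# Config sketch, assuming neuralforecast's early-stopping parameters;
# not verified against a specific neuralforecast version.
from neuralforecast import NeuralForecast
from neuralforecast.models import NHITS
from neuralforecast.losses.pytorch import MQLoss

model = NeuralForecast(
    models=[
        NHITS(
            h=365,
            input_size=3 * 365,              # 3x the horizon, per the rule of thumb
            loss=MQLoss(level=[75, 95, 99]),
            batch_size=100,
            max_steps=50,
            early_stop_patience_steps=5,     # stop if val loss stalls for 5 checks
            val_check_steps=10,              # how often validation loss is checked
        )
    ],
    freq="D",
)

# Early stopping needs a validation split to monitor, e.g.:
# model.fit(df, val_size=365)
```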
d
I assume "super fast" is via GPU, yeah? It's... 30 seconds or so via CPU with the settings I have above (conf intervals are [75, 95, 99])
annoyingly, my cloud provider doesn't have any GPU machines in their inventory... working on workarounds this week