# general
m
Hi all, is there a way in neuralforecast to use multiple CPUs/GPUs for cross-validation if I only have a single time series?
f
hey @Merlin! Yes, it is possible to use multiple CPUs/GPUs for cross-validation even if you only have a single time series. To do this, pass `cpus=1` and `gpus=1` to the Auto classes during hyperparameter optimization. These arguments set the number of resources allocated to each trial, which lets the tuner run trials in parallel. This is particularly useful for models such as NBEATS and NHITS, which are normally trained on windows drawn from many different series; in your case, though, they will be trained on different windows of the same series.
But in that case, using just one GPU should work well.
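Something like this, for example (a minimal sketch, assuming a single daily series in a DataFrame `df` with the usual `unique_id`, `ds`, `y` columns):
```python
from neuralforecast import NeuralForecast
from neuralforecast.auto import AutoNHITS

model = AutoNHITS(
    h=7,             # forecast horizon
    num_samples=10,  # hyperparameter configurations to evaluate
    cpus=1,          # CPUs allocated to each trial
    gpus=1,          # GPUs allocated to each trial
)

nf = NeuralForecast(models=[model], freq="D")
cv_df = nf.cross_validation(df=df, n_windows=3, step_size=7)
```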
m
Ok, and with the non-auto models it is not possible? Or would I just pass a default config and set `num_samples=1`? Also, is it possible to log each run to wandb after each epoch?
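Something like this is what I have in mind (just a sketch; I'm not sure whether the non-auto models actually forward a Lightning `WandbLogger` through their keyword arguments):
```python
from neuralforecast import NeuralForecast
from neuralforecast.models import NHITS
from pytorch_lightning.loggers import WandbLogger

model = NHITS(
    h=7,
    input_size=14,
    max_steps=100,
    # hypothetical: extra kwargs passed through to the PyTorch Lightning Trainer
    logger=WandbLogger(project="my-project"),
)

nf = NeuralForecast(models=[model], freq="D")
cv_df = nf.cross_validation(df=df, n_windows=3)
```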
Oh & if I pass a custom config to any of the auto models, I get the following error:
```
ValueError: Trial returned a result which did not include the specified metric(s) `loss` that `tune.TuneConfig()` expects. Make sure your calls to `tune.report()` include the metric, or set the TUNE_DISABLE_STRICT_METRIC_CHECKING environment variable to 1. Result: {'trial_id': '6f0eddce', 'experiment_id': 'c928f65feeb747c7abbda5181ebdfcd6', 'date': '2023-04-05_13-47-44', 'timestamp': 1680702464, 'pid': 79944, 'hostname': 'shipgpu001', 'node_ip': '10.99.151.1', 'done': True, 'config/h': 7, 'config/encoder_hidden_size': 100, 'config/encoder_n_layers': 1, 'config/context_size': 50, 'config/decoder_hidden_size': 256, 'config/learning_rate': 0.04436648573316852, 'config/max_steps': 10, 'config/batch_size': 32, 'config/loss': MQLoss(), 'config/random_seed': 13, 'config/valid_loss': MQLoss()}
```
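For reference, a config matching the `config/...` keys in that result would look roughly like this (a sketch; `AutoLSTM` is a guess inferred from the encoder/decoder keys, and the values are taken from the result dict above):
```python
from neuralforecast.auto import AutoLSTM
from neuralforecast.losses.pytorch import MQLoss

config = {
    "encoder_hidden_size": 100,
    "encoder_n_layers": 1,
    "context_size": 50,
    "decoder_hidden_size": 256,
    "learning_rate": 0.04436648573316852,
    "max_steps": 10,
    "batch_size": 32,
    "random_seed": 13,
    # h, loss, and valid_loss also appear under config/ in the result,
    # presumably merged in by the Auto class itself
}

model = AutoLSTM(h=7, loss=MQLoss(), valid_loss=MQLoss(), config=config, num_samples=1)
```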