Slackbot
04/04/2023, 10:14 AM

fede (nixtla) (they/them)
04/04/2023, 10:33 PMcpus=1
and gpus=1
to the Auto classes during hyperparameter optimization. This will allow you to perform parallel processing by indicating the number of resources allocated to each trial. Note that this approach is particularly useful when training models such as NBEATS and NHITS, which are trained using different windows of different series. However, in your case, you will be training them using different windows of the same series.fede (nixtla) (they/them)
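A minimal sketch of that suggestion, assuming neuralforecast's Ray-backed Auto classes (AutoNHITS here; the horizon, loss, num_samples, and frequency are illustrative):

```python
from neuralforecast import NeuralForecast
from neuralforecast.auto import AutoNHITS
from neuralforecast.losses.pytorch import MAE

# Reserve 1 CPU and 1 GPU for each hyperparameter-search trial so
# Ray Tune can run several trials in parallel on the available hardware.
model = AutoNHITS(
    h=7,             # forecast horizon (illustrative)
    loss=MAE(),      # training loss (illustrative)
    num_samples=20,  # number of configurations to evaluate (illustrative)
    cpus=1,          # CPUs allocated per trial
    gpus=1,          # GPUs allocated per trial
)

nf = NeuralForecast(models=[model], freq="D")
# nf.fit(df)  # df: long-format DataFrame with unique_id, ds, y columns
```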
fede (nixtla) (they/them)
04/04/2023, 10:33 PM

Merlin
04/05/2023, 10:00 AM

Merlin
04/05/2023, 1:50 PM
Trial returned a result which did not include the specified metric(s) `loss` that tune.TuneConfig() expects. Make sure your calls to tune.report()
include the metric, or set the TUNE_DISABLE_STRICT_METRIC_CHECKING environment variable to 1. Result: {'trial_id': '6f0eddce', 'experiment_id': 'c928f65feeb747c7abbda5181ebdfcd6', 'date': '2023-04-05_13-47-44', 'timestamp': 1680702464, 'pid': 79944, 'hostname': 'shipgpu001', 'node_ip': '10.99.151.1', 'done': True, 'config/h': 7, 'config/encoder_hidden_size': 100, 'config/encoder_n_layers': 1, 'config/context_size': 50, 'config/decoder_hidden_size': 256, 'config/learning_rate': 0.04436648573316852, 'config/max_steps': 10, 'config/batch_size': 32, 'config/loss': MQLoss(), 'config/random_seed': 13, 'config/valid_loss': MQLoss()}
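Two possible fixes, following the error message's own suggestions (a sketch; train_one_config is a hypothetical stand-in, and since the Auto classes manage the Ray trainable internally, the environment variable is the more direct workaround there):

```python
import os

from ray import tune

# Option 1: relax Ray Tune's strict metric check so trial results that
# lack the `loss` key no longer raise. Set this before Ray starts.
os.environ["TUNE_DISABLE_STRICT_METRIC_CHECKING"] = "1"

# Option 2: in a custom trainable, report the metric that
# tune.TuneConfig(metric="loss", ...) expects, under the same key.
def trainable(config):
    val_loss = train_one_config(config)  # hypothetical training helper
    tune.report(loss=val_loss)           # key must match TuneConfig's metric
```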