# neural-forecast
d
Dear Nixtla, after upgrading to NF 1.7.6 and training models with cross_validation, I'm getting:
File "/opt/conda/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/opt/conda/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/opt/conda/lib/python3.10/site-packages/kfp/dsl/executor_main.py", line 109, in <module>
executor_main()
File "/opt/conda/lib/python3.10/site-packages/kfp/dsl/executor_main.py", line 101, in executor_main
output_file = executor.execute()
File "/opt/conda/lib/python3.10/site-packages/kfp/dsl/executor.py", line 361, in execute
result = self.func(**func_kwargs)
File "/tmp/tmp.mkcsWMcuA9/ephemeral_component.py", line 251, in train_neural_network
nf.save(path=model_artifact.path, model_index=None, overwrite=True, save_dataset=True)
File "/opt/conda/lib/python3.10/site-packages/neuralforecast/core.py", line 1537, in save
"prediction_intervals": self.prediction_intervals,
AttributeError: 'NeuralForecast' object has no attribute 'prediction_intervals'
While looking into prediction_intervals, I found that the problem might somehow be related to the loss (I'm using HuberMQLoss). The same code works fine with NF 1.7.4. Are there any breaking changes from 1.7.4 to 1.7.6 that I'm missing, or do you have any other suggestions? Thank you in advance.
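For context, a minimal sketch of the kind of code that hits this path: the data and hyperparameters below are illustrative stand-ins (using the AirPassengersDF toy dataset), not the actual pipeline from the report.

```python
from neuralforecast import NeuralForecast
from neuralforecast.models import NHITS
from neuralforecast.losses.pytorch import HuberMQLoss
from neuralforecast.utils import AirPassengersDF as Y_df

# A probabilistic loss, as in the report.
nf = NeuralForecast(
    models=[NHITS(h=12, input_size=24,
                  loss=HuberMQLoss(quantiles=[0.1, 0.5, 0.9]),
                  max_steps=10)],
    freq='M',
)

# Train via cross_validation, then save -- on NF 1.7.6 the save call
# is what raised the AttributeError in the traceback above.
cv_df = nf.cross_validation(df=Y_df, n_windows=2)
nf.save(path='./checkpoints', model_index=None, overwrite=True, save_dataset=True)
```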
j
@D N There is a big change in NF 1.7.6: conformal prediction was introduced. May I first clarify with you:
1. Did you train this model by loading a model previously trained with an earlier NF version (< 1.7.6)?
2. Could you provide a minimal example that reproduces this issue?

This is likely a bug, so I suggest you file an issue; when I have time, I will help fix it.
The attribute `prediction_intervals` is related to conformal prediction.
On second thought, I think there is a way to solve this issue. Let me work on it.
To the best of my knowledge, the suggested reproduction steps can trigger this issue. Please leave comments on how to reproduce it here: https://github.com/Nixtla/neuralforecast/issues/1223
This issue happens when you load a model saved with NF < 1.7.6 and then attempt to save it using NF 1.7.6. I have introduced a fix for this issue and am waiting for the Nixtla team to review it.
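For illustration, one common pattern for this kind of backward-compatibility fix is to fall back to a default when the attribute is missing; the sketch below shows the idea in isolation and is not necessarily what the actual PR does.

```python
class LegacyForecastObject:
    """Stands in for a NeuralForecast object deserialized from a
    checkpoint written before 1.7.6, which never set the
    prediction_intervals attribute."""

def build_save_config(obj):
    # Direct access (obj.prediction_intervals) raises AttributeError on
    # legacy objects; getattr with a default degrades gracefully.
    return {"prediction_intervals": getattr(obj, "prediction_intervals", None)}

print(build_save_config(LegacyForecastObject()))  # {'prediction_intervals': None}
```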
d
Hi @Jing Qiang Goh, thank you very much for your fast and meaningful responses. To summarize: I'm training a fresh (new) set of auto models (without loading models created with previous versions of NF) via KFP on Vertex AI. As I wrote before, the same code works flawlessly with NF 1.7.4. What I found is that the bug can be consistently reproduced by using cross_validation with the parameters n_windows=1, step_size=1, refit=False. (I understand these parameters are kind of weird, but they do reproduce the bug.) To be honest, I'm a bit confused about how to continue; waiting for a hotfix seems the right way, because so far I have not found any legitimate way to train a fresh set of models with cross validation without hitting this bug.
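Continuing the sketch above with the parameters from this report (again illustrative, reusing the `nf` and `Y_df` defined earlier):

```python
# The combination @D N reports as consistently reproducing the bug.
cv_df = nf.cross_validation(df=Y_df, n_windows=1, step_size=1, refit=False)
nf.save(path='./checkpoints', model_index=None, overwrite=True, save_dataset=True)
```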
j
I believe the fix I included in the PR should solve the issue. But it is interesting to learn that the bug can be reproduced by calling cross_validation without loading a previously saved model. I will give it a try and see whether this is a separate issue.
❤️ 1
d
I really hope Nixtla will release an emergency update, because 1.7.6 brought many long-awaited fixes to the table, but this issue is totally ruining the experience.
Got another confirmation: cross_validation is pretty much totally broken, since it does not work with any settings. Even the example from the End-to-End walkthrough fails with
AttributeError: 'NeuralForecast' object has no attribute 'prediction_intervals'
Hopefully the Nixtla team will release an update soon @Marco @José Morales @Christian Ngnie (pleeeeaase :))
@Jing Qiang Goh Thanks for the PR, I built a custom wheel and it seems to be working for now ;)
👍 1
j
Can you provide a small, reproducible example?
d
Hi José! The example for cross validation from the NF documentation.
j
@José Morales I could not reproduce the bug by running the cross-validation tutorial alone (I used the CPU on my local laptop).
👍 1
d
The PR from @Jing Qiang Goh seems to fix it
j
That PR modifies the saving part; cross validation should have nothing to do with it.
j
But I could reproduce this by loading an old model and trying to save it, as described in the filed issue.
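A sketch of that load-then-save reproduction, assuming a hypothetical checkpoint directory written by an earlier NF version:

```python
from neuralforecast import NeuralForecast

# './old_checkpoints' stands in for a directory saved with NF < 1.7.6.
nf = NeuralForecast.load(path='./old_checkpoints')

# On 1.7.6 (before the fix), this raised:
# AttributeError: 'NeuralForecast' object has no attribute 'prediction_intervals'
nf.save(path='./new_checkpoints', overwrite=True)
```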
d
And I can confirm that the PR fixes the problem with saving fresh models after cross-validation training.
j
So it's not cross validation; you are saving and loading the models, which you said you weren't:
> To summarize: I'm training a fresh (new) set of auto models (without loading models created with previous versions of NF)
j
The cross-validation behavior sounds strange to me, but hopefully @D N can share more info on that.
I am not sure whether we are facing the same issue, but it can be triggered in different ways. This looks like a backward-compatibility issue that I missed previously.
d
It's related to saving models: when I train them with CV, the bug appears every time with probabilistic losses (HuberMQLoss etc.). When training models with nf.fit, they are saved fine.
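For contrast, a sketch of the path reported as working, reusing the setup from the first sketch (illustrative, not a confirmed minimal case):

```python
# Training with fit instead of cross_validation; per the report above,
# the subsequent save succeeds on 1.7.6.
nf.fit(df=Y_df)
nf.save(path='./checkpoints_fit', model_index=None, overwrite=True, save_dataset=True)
```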
j
So it has nothing to do with cross validation, and it's just the issue that JQ already reported.
👍 1
d
OK, so I was just saying that I was constantly having the issue when using CV, so I probably used the wrong interpretation, my bad.
@Jing Qiang Goh @José Morales Thank you very much for the quick 1.7.7 bugfix release 🙂 Makes life so much easier 😉
❤️ 1