# general
k
Hey Javier, have you tried the colab examples?
j
Yes, I'm working with that, thanks! But I was trying to get hold of the order of the fitted model and its AIC value
k
We are currently working on the new sklearn-like classes with fit/predict methods that store the parameters
j
That would be great, thanks!
k
For the moment there is a tradeoff between speed and the information available from the models. We added an option to run without returning the fitted parameters, which is really fast, versus running with the parameters as output too, which is a bit slower
We will release it next week hopefully
j
I'll be ready to test it! My current models are taking 10 days to run...
k
🙂
We have a pending blog post on how to run 10^6 series with ETS in under 2 minutes
Quick question @Javier Vasquez: are you trying to use AIC as a validation tool?
Would it not be better to use a cross-validation measure of error?
AIC and leave-one-out cross validation have known connections since the 80s/90s
j
I'm using both the Akaike criterion and MAE as measurements, since the models I'm running have over 300 exogenous variables, and I'm trying to see whether the outcome is related to AIC or MAE...
In this case, at least, it seems that AIC is not a good predictor of model performance
k
Might be a good idea to try LASSO regularization on those exogenous variables, or Ridge regression. So many exogenous variables could give you multicollinearity problems.
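Something like this sketch, with random placeholder data standing in for the ~300 exogenous variables (X, y and all the sizes here are made up):
```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LassoCV

# Placeholder data standing in for the exogenous block and the target.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(500, 300)),
                 columns=[f"x{i}" for i in range(300)])
y = 2 * X["x0"] - X["x1"] + rng.normal(size=500)

# LassoCV picks the penalty strength by cross-validation and drives the
# coefficients of uninformative regressors to exactly zero.
lasso = LassoCV(cv=5).fit(X, y)
kept = X.columns[lasso.coef_ != 0]
print(f"{len(kept)} of {X.shape[1]} exogenous variables kept")
```
The nonzero coefficients give you a data-driven shortlist of exogenous variables worth keeping; Ridge shrinks instead of selecting, so it's the gentler option if you want to keep them all.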
j
Yes, they are creating problems, but that is part of the exploration: I'm trying to cluster the time series first and check whether the clusters convey additional information that may be useful for the models
k
StatsForecast's cross_validation method can help you create predictions over rolling windows, which should give you better measurements of prediction performance
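Roughly like this; the toy df below stands in for your long-format data (unique_id, ds, y plus any exogenous columns), and the horizon/window settings are just placeholders:
```python
import numpy as np
import pandas as pd
from statsforecast import StatsForecast
from statsforecast.models import AutoARIMA

# Toy monthly series in the long format StatsForecast expects.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "unique_id": "series_1",
    "ds": pd.date_range("2015-01-01", periods=96, freq="M"),
    "y": np.sin(np.arange(96) * 2 * np.pi / 12)
         + rng.normal(scale=0.2, size=96),
})

sf = StatsForecast(models=[AutoARIMA(season_length=12)], freq="M")

# Three rolling origins, each forecasting 12 months ahead.
cv_df = sf.cross_validation(df=df, h=12, step_size=12, n_windows=3)

# Out-of-sample error pooled across all windows, to compare against AIC.
mae = (cv_df["y"] - cv_df["AutoARIMA"]).abs().mean()
print(f"rolling-window MAE: {mae:.3f}")
```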
j
I'll try that, thanks for the suggestion!
k
Have you tried PCA before on top of the exogenous?
That could help a lot too
PCA or LASSO
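A quick sketch with scikit-learn; again the matrix is a random stand-in for your exogenous block:
```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Random placeholder matrix standing in for the exogenous variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 300))

# Standardize first, then keep enough components to explain 95% of the
# variance; the components are orthogonal, so collinearity goes away.
X_scaled = StandardScaler().fit_transform(X)
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X_scaled)
print(f"{X.shape[1]} variables -> {X_reduced.shape[1]} components")
```
You'd then feed X_reduced in as the exogenous matrix instead of the raw columns.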
j
Not yet... it's on the list, to measure differences between the methods.