Nasreddine D 05/07/2023, 3:48 PM
Max (Nixtla) 05/08/2023, 8:01 PM
i. Should I try it with all models and compare the results with cross-validation? Is it appropriate to tune these models? (I have not seen how to do this in the documentation.)
1. Start with the auto models. These models find the best parameters for you: AutoTheta, AutoMSTL, etc.
Nasreddine D 05/09/2023, 1:26 PM
fede (nixtla) (they/them) 05/09/2023, 6:44 PM
Nasreddine D 05/10/2023, 2:31 PM
fede (nixtla) (they/them) 05/10/2023, 8:18 PM
method automatically handles the exogenous variables. So if you have more variables after the target column, they will be considered exogenous variables and used by the models that support them. Since, for each window, the future values of the exogenous variables are available in cross-validation, their handling is done automatically.
• Yes, one approach for unknown exogenous variables is to forecast them separately and use the forecasted values to produce forecasts of the target variable.
Nasreddine D 05/12/2023, 9:45 AM
• Rolling window vs. expanding window: is there a way to choose one or the other? Or what is the best approach?
• This is the configuration I've used to test different models from statsforecast. I would like to do the same, but I want it to be comparable (same number of windows). I am not sure how it will work, because there must be a validation set to adjust the model and a test set? If I use N-HITS, what will happen to val/test? I hope my question is clear.
I am going to start with NeuralForecast or MLForecast:
• Which one should I start with? (Remember my TS has around 180 months of history, so not very long.)
• For these two libraries, should I create features? Or is that just for MLForecast?
• Feature engineering: what is the best strategy? I read that I could create a bunch of features and then select the best ones (lasso, ...). Do you know any resource that explains that with code?
• Do I need to normalize the data for N-HITS? And for the other models in NeuralForecast? Thanks again for your valuable time.
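On the "create features and select with lasso" question, here is a hedged sketch using plain pandas and scikit-learn (the lag set, rolling windows, and synthetic series are assumptions for illustration; with mlforecast you would instead declare lags and transforms and let the library build them):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 180  # roughly 180 months, as in the question
y = pd.Series(
    10 + 0.05 * np.arange(n)
    + 3 * np.sin(2 * np.pi * np.arange(n) / 12)
    + rng.normal(0, 0.5, n)
)

# candidate features: lags plus simple rolling statistics
feats = pd.DataFrame({f"lag_{l}": y.shift(l) for l in (1, 2, 3, 6, 12, 24)})
feats["rolling_mean_12"] = y.shift(1).rolling(12).mean()
feats["rolling_std_12"] = y.shift(1).rolling(12).std()
data = feats.assign(y=y).dropna()

# Lasso shrinks the coefficients of uninformative features to exactly zero
X = StandardScaler().fit_transform(data.drop(columns="y"))
model = LassoCV(cv=5).fit(X, data["y"])
selected = [c for c, w in zip(feats.columns, model.coef_) if abs(w) > 1e-6]
```

Note this uses plain (non-time-ordered) cross-validation folds inside LassoCV for simplicity; for a stricter setup you would pass a time-series splitter.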
crossvalidation_df = sf.cross_validation(df=Y_ts, h=24, step_size=1, n_windows=100)
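For the rolling- vs. expanding-window question, the difference is only in how the training slice is chosen at each cutoff. A small index-level sketch (the helper name `cv_windows` and its parameters are made up for illustration; in statsforecast, `cross_validation` expands the window by default, and passing `input_size` caps the training length, which gives rolling behavior — check the current docs):

```python
def cv_windows(n, h, n_windows, step_size, input_size=None):
    """Yield (train_indices, test_indices) per cutoff.

    input_size=None  -> expanding window (training set grows each split);
    fixed input_size -> rolling window (training set slides, constant length).
    """
    first_cutoff = n - h - (n_windows - 1) * step_size
    for i in range(n_windows):
        cutoff = first_cutoff + i * step_size
        start = 0 if input_size is None else max(0, cutoff - input_size)
        yield list(range(start, cutoff)), list(range(cutoff, cutoff + h))

expanding = list(cv_windows(n=48, h=12, n_windows=3, step_size=12))
rolling = list(cv_windows(n=48, h=12, n_windows=3, step_size=12, input_size=12))
```

Either way the test windows are identical, so results from different libraries stay comparable as long as `h`, `step_size`, and `n_windows` match.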
fede (nixtla) (they/them) 05/17/2023, 8:40 PM
(currently this option is only available for the Auto models in statsforecast). If you set it, those windows will be treated as a test set (the model will not use those values during training). Usually, a good workflow starts with statsforecast, then mlforecast, and then neuralforecast (from less complex to more complex models). About features: neuralforecast does not need them, but you'll need to specify them with mlforecast. The best strategy for feature engineering is to start with lags and simple transformations, and then add more to see if the cross-validation signal improves. It is always best practice to scale (or normalize) the data when using global models (neuralforecast or mlforecast). The models included in the neuralforecast library can receive an argument to perform different scaling strategies; here's an example: https://nixtla.github.io/neuralforecast/examples/longhorizon_probabilistic.html
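A config-style sketch of that scaling argument, assuming the NHITS model from neuralforecast (`h`, `input_size`, and `max_steps` are placeholder values; `scaler_type` is the scaling argument):

```python
from neuralforecast import NeuralForecast
from neuralforecast.models import NHITS

# scaler_type normalizes each input window before it reaches the network;
# "robust" and "standard" are common choices
nf = NeuralForecast(
    models=[NHITS(h=24, input_size=48, scaler_type="robust", max_steps=500)],
    freq="MS",
)
# then: nf.fit(df=Y_ts); forecasts = nf.predict()
```

Scaling this way is done per window inside the model, so no separate preprocessing step is needed on the input frame.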
Nasreddine D 05/21/2023, 2:06 PM