# general
r
Greetings. I'm writing an MBA monograph on AI where I compare some traditional statistical and ML models with newer Transformer-based ones, like FEDformer, applied to financial time series. I'm using the M6 dataset and can share the results at the end. I found your set of libraries and it's helping a lot in my notebooks, thank you! I'm using only auto models: AutoARIMA, AutoLSTM, AutoMLP, AutoDLinear, AutoNLinear, AutoInformer, AutoAutoformer and AutoFEDformer. My supervisor asked me whether the models' hyperparameters are optimized during the fit. As I understood from the documentation, they are:
"Auto Forecast: Automatic forecasting tools search for the best parameters and select the best possible model for a series of time series. These tools are useful for large collections of univariate time series. Includes automatic versions of: Arima, ETS, Theta, CES."
I'd be glad if anyone could confirm that this holds for all the models I mentioned and, if so, whether I can read the tuned hyperparameters after fitting the model, and how. This is my DL version of the benchmark notebook: https://colab.research.google.com/drive/1D40TeGI5rTToj-63ggGPjTTJsEz_lKE0. A rough sketch of my model line-up is below.
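This is a minimal sketch of the setup, assuming the statsforecast/neuralforecast `Auto*` class names; the horizon, frequency and tuning budget are placeholders, not the values from my actual notebook:

```python
# Sketch of the model line-up (assumed class names from statsforecast /
# neuralforecast; h, freq and num_samples are placeholders).
from statsforecast import StatsForecast
from statsforecast.models import AutoARIMA
from neuralforecast import NeuralForecast
from neuralforecast.auto import (
    AutoLSTM, AutoMLP, AutoDLinear, AutoNLinear,
    AutoInformer, AutoAutoformer, AutoFEDformer,
)

h = 12  # placeholder forecast horizon

sf = StatsForecast(models=[AutoARIMA()], freq='D')
nf = NeuralForecast(
    models=[cls(h=h, num_samples=10) for cls in (
        AutoLSTM, AutoMLP, AutoDLinear, AutoNLinear,
        AutoInformer, AutoAutoformer, AutoFEDformer,
    )],
    freq='D',
)
```

Best regards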
m
Yes, the hyperparameters are optimized during fitting, and you can read the best configuration afterwards with
`auto_model_object.results_['name_of_model'].best_trial.user_attrs['config']`
Make sure to replace `auto_model_object` and `name_of_model` with the appropriate names in your code.
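To make that concrete, here's a minimal sketch of fitting one auto model and reading its tuned configuration back. This is an assumed usage, not verbatim library docs: it uses the Optuna backend so that `results_` exposes `best_trial` as in the snippet above, the toy dataset, horizon and tuning budget are illustrative, and depending on the library version `results_` may instead be a dict keyed by model name, as the snippet suggests.

```python
# Hedged sketch: fit one neuralforecast auto model and read back the tuned
# hyperparameters. Assumes the Optuna backend so `results_` exposes
# `best_trial`; the dataset, horizon and tuning budget are illustrative.
from neuralforecast import NeuralForecast
from neuralforecast.auto import AutoMLP
from neuralforecast.utils import AirPassengersDF  # small toy series

model = AutoMLP(h=12, num_samples=2, backend='optuna')  # tiny budget, demo only
nf = NeuralForecast(models=[model], freq='M')
nf.fit(df=AirPassengersDF)

# Read the best trial's stored configuration; the 'config' key follows the
# reply above and may differ across versions.
best_config = nf.models[0].results_.best_trial.user_attrs['config']
print(best_config)
```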
r
Thanks a lot, Marco, I'll try that!
k
FEDformer, Informer and Autoformer are particularly bad models, @Rodrigo Sodré. Why would you choose those?
r
Hi @Kin Gtz. Olivares! How are you?
It was my monograph subject.

"Abstract: Since its proposal in 2017, the Transformer model has been revolutionizing applications in a variety of tasks, from Large Language Models to Generative Artificial Intelligence. The context of time series was no different: from the beginning, variations of the original model have been proposed to assess whether this technology could also benefit forecasting tasks. On the other hand, recent studies question the real applicability of these models, due to their high cost yet inferior prediction quality compared to simpler and more efficient models. This study aims to evaluate the main known Transformer models adapted for time series forecasting, comparing them with classical models widely used for the same task, specifically in the domain of financial observations. The results demonstrate that, even for a dataset with few observations, Transformers can obtain results similar to those of models traditionally used for this task, but at a higher computational cost. The use of benchmark datasets and the understanding of the behavior of well-established models will allow the construction of a relevant reference point for comparison with other models and studies."

The monograph itself is unfortunately written in Portuguese, but in short: those 3 Transformer models got MAE, RMSE and POCID scores similar to ARIMA, MLP, DLinear and NLinear, and even better results than LSTM, on the M6 dataset. The "only" drawback was their resource consumption, 20 to 30 times that of ARIMA and MLP. But since they matched the results of the best-known algorithms, my conclusion is that they are indeed effective (though not efficient) for time series forecasting.

Anyway, that was just an academic monograph, and it's finally concluded. I'm now working on a more practical approach to financial time series forecasting: I'm running all the neuralforecast auto models from the Nixtla libraries to compare their performance and resource consumption. I'm really interested in better models, and your comment made me curious: which ones do you suggest for that task? Best regards!
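Since POCID is less widely known than MAE and RMSE, here's a small illustrative sketch of the three metrics as commonly defined (POCID is the percentage of steps where the forecast moves in the same direction as the actual series); this is just a sketch of the standard definitions, not code from the monograph:

```python
import numpy as np

def mae(y, y_hat):
    # Mean Absolute Error
    return np.mean(np.abs(y - y_hat))

def rmse(y, y_hat):
    # Root Mean Squared Error
    return np.sqrt(np.mean((y - y_hat) ** 2))

def pocid(y, y_hat):
    # Prediction Of Change In Direction: % of steps where the forecast
    # moves in the same direction as the actual series.
    return 100.0 * np.mean(np.diff(y) * np.diff(y_hat) > 0)

# Tiny example with made-up numbers
y = np.array([1.0, 1.2, 1.1, 1.3, 1.4])
y_hat = np.array([1.0, 1.1, 1.2, 1.35, 1.3])
print(mae(y, y_hat), rmse(y, y_hat), pocid(y, y_hat))
```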