"Abstract
Since its proposal in 2017, the Transformer architecture has revolutionized applications across a wide range of tasks, from Large Language Models to Generative Artificial Intelligence. Time series forecasting was no exception: from early on, variations of the original model have been proposed to assess whether the architecture could also benefit prediction tasks. On the other hand, recent studies question the real applicability of these models, pointing to their high computational cost and inferior prediction quality compared to simpler and more efficient models. This study evaluates the main Transformer models adapted for time series forecasting, comparing them with classical models widely used for the same task, specifically in the domain of financial observations. The results show that, even on a dataset with few observations, Transformers can achieve results similar to those of models traditionally used for this task, although at a higher computational cost. The use of benchmark datasets and the understanding of the behavior of well-established models provide a relevant reference point for comparison with other models and studies.