
Farzad E

01/19/2023, 6:06 PM
I am trying to understand the performance of the example from https://www.anyscale.com/blog/how-nixtla-uses-ray-to-accurately-predict-more-than-a-million-time-series. It uses an m5.2xlarge instance with 8 cores, and min_workers in the yaml file is set to 249, yet it says the deployed cluster has 2000 CPUs. I don't understand this. Does it mean it spins up 249 EC2 instances of m5.2xlarge, each with 8 cores? That seems unlikely to me, but otherwise I don't see how it gets to 2000 CPUs. I need to understand this because I am currently predicting only 10 time series on a c6a.8xlarge with 32 cores and it's taking 5 minutes! If a million series take 30 minutes, then 10 series should take about a second. I am confused about how that performance was achieved. Any insight please?
This is my code:
sf = StatsForecast(
    df=df,
    models=models,
    freq='W',
    n_jobs=-1,
    ray_address='10.10.10.110:6379'
)

forecasts_df = sf.forecast(h=52, level=[90])
I don't have a yaml file though. I start my ray cluster on my EC2 instance and then pass the address to StatsForecast.
fede (nixtla) (they/them)

01/19/2023, 6:38 PM
Hi @Farzad E! Thank you for using statsforecast. A cluster typically speeds things up when you have many time series, usually more than the available CPUs. Since you are handling only 10 time series, a ray cluster may not help much; running your code on the c6a.8xlarge instance alone with n_jobs=-1 is probably best. StatsForecast uses a map-reduce approach: with 10 time series and 32 cores available, statsforecast will use 10 cores to train (one per series). The training speed then depends on the models used and the length of the series. For example, AutoARIMA is usually very slow on long series (more than 100 observations), while models like MSTL tend to be faster.
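The map-reduce split described above (one worker per series, then gather the results) can be sketched without statsforecast at all. This is a toy illustration using only the Python standard library, with a placeholder "fit" function in place of a real model; StatsForecast itself distributes real model fits across CPU cores, whereas threads are used here just to keep the sketch self-contained:

```python
from concurrent.futures import ThreadPoolExecutor

def fit_one_series(item):
    """Placeholder for fitting one model to one series: the 'forecast'
    here is just the series mean. In StatsForecast this step would be
    an actual model fit (e.g. AutoARIMA)."""
    uid, values = item
    return uid, sum(values) / len(values)

def forecast_all(series_by_id, max_workers=32):
    """Map: one task per series. With 10 series and 32 workers, only
    10 workers ever get a task, which is why extra cores (or a whole
    ray cluster) stop helping at this scale.
    Reduce: gather the per-series results into one dict."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(pool.map(fit_one_series, series_by_id.items()))

if __name__ == "__main__":
    # 10 series of length 364, mirroring the thread's setup.
    series = {f"series_{i}": [float(v) for v in range(364)] for i in range(10)}
    print(forecast_all(series))
```

With this shape, the wall-clock time is governed by the slowest single-series fit, not by the total core count, once cores outnumber series.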
Farzad E

01/19/2023, 7:06 PM
@fede (nixtla) (they/them) I am using AutoARIMA with 7 years of weekly data, so the length of each series is 364 and my horizon is 52 weeks. Thanks a lot for your explanation, but one question: how did your example on the m5.2xlarge with 8 cores perform so well? That also used AutoARIMA and had millions of series. Is it because the forecast horizon was short in your case (7 days)?
fede (nixtla) (they/them)

01/19/2023, 7:28 PM
In that case, using the MSTL model (or even Theta or ETS) is probably better. Long seasonal periods (52 in the weekly case) tend to make AutoARIMA much slower. Your first intuition about the blog post is correct: 250 EC2 instances of 8 cores each were deployed to obtain the cluster with 2000 CPUs (min_workers of 249 plus the head node). The experiment with the millions of time series used all 2000 CPUs, hence the fast runtime. The horizon is not a problem once the model is fitted.
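For intuition on why decomposition-style models are cheaper on seasonal data, here is a toy seasonal-means forecaster. This is not the real MSTL algorithm (which fits iterated STL decompositions), but it shows that exploiting the period-52 structure can take a single pass over each series, while AutoARIMA searches over many candidate models:

```python
def seasonal_means(y, period):
    """Average each position within the seasonal cycle in one pass.
    A toy stand-in for extracting a seasonal component."""
    sums = [0.0] * period
    counts = [0] * period
    for i, v in enumerate(y):
        sums[i % period] += v
        counts[i % period] += 1
    return [s / c for s, c in zip(sums, counts)]

def seasonal_naive_forecast(y, period, h):
    """Forecast h steps ahead by repeating the seasonal means,
    starting from the cycle position after the last observation."""
    means = seasonal_means(y, period)
    start = len(y)
    return [means[(start + i) % period] for i in range(h)]

if __name__ == "__main__":
    # 7 years of weekly data (364 points) with a period-52 pattern,
    # matching the series length and horizon discussed in the thread.
    y = [float(i % 52) for i in range(364)]
    print(seasonal_naive_forecast(y, period=52, h=52)[:5])
```

The whole fit is O(n) per series, which is one reason seasonal-decomposition models stay fast even when the seasonal period is large.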
👍 1
Farzad E

01/19/2023, 7:30 PM
Thanks a lot. That clarified the details.
👍 1