Kaustav Chaudhury

03/31/2023, 3:58 AM
Hello everyone, I came to know about Nixtla from the Databricks support team while our company was browsing for distributed forecasting solutions. Before we move forward with implementing Nixtla, we want to verify that it has the following features: 1. Distributed computing across the cluster and workers in a Spark environment 2. Compatibility with GPUs and multi-node GPU setups 3. Bring your own models 4. Control over dataset creation. Could this community provide blogs, links, or answers to these questions? Cc: @here @Nixtla Team

Max (Nixtla)

03/31/2023, 6:16 PM
Hi @Kaustav Chaudhury! Thanks for the interest. We are happy to jump on a brief call with you if that works. Either way, @fede (nixtla) (they/them) will provide some examples.

Kaustav Chaudhury

03/31/2023, 7:52 PM
Sure Max, we can set up a time 🙂

fede (nixtla) (they/them)

04/04/2023, 12:37 AM
hey @Kaustav Chaudhury! 1. That’s perfectly possible with our mlforecast and statsforecast libraries. For statsforecast, here’s an example using Databricks. The API changed a little bit: it is no longer necessary to declare a backend, it is as simple as passing a spark dataframe to the `forecast` method. MLForecast works similarly: you import the distributed forecast class and pass a spark dataframe to its `fit` method. 2. Our neuralforecast library is compatible with GPUs and multi-node GPUs, but we are still working on making it compatible with spark. We haven’t tested it yet, but mlforecast could also work in such environments (through lightgbm). Unfortunately, StatsForecast currently does not support GPU. 3. Bringing your own models is perfectly possible with statsforecast (univariate models). We can help you with that if you are interested. MLForecast can also support custom models, but in a distributed environment such as spark it may not be easy, since the model itself needs to be distributed as well (for example, we use SynapseML to train lightgbm). 4. You can use your own (spark, dask, or ray) dataframe without a problem. The only requirement is to have at least three columns: `unique_id`, identifying the time series, `ds`, identifying the temporal column, and `y`, the target column. Let us know how we can help you. :)

Kaustav Chaudhury

04/04/2023, 5:06 AM
We want to bring DeepVAR as a model into neuralforecast