# general
m
Hi, I have a dilemma I’d like thoughts on: I’m doing big-data coding/training (years of intraday data with Nixtla) and looking to get a new computer. For a desktop I know that GPU acceleration with CUDA is great (I’m borrowing an RTX 3070 Ti), but I’m wondering if a laptop such as the upcoming M4 Max would be powerful enough too. I’d prefer a laptop but I’m afraid of speed limitations. Any suggestions or things to avoid, with Nixtla and in general? Thanks
o
Hi - I had a similar dilemma a couple of months ago. Our packages work on Linux/Mac/Windows, so in general there shouldn't be any compatibility issues. As for the dilemma:
• If you only want a single computer, I'd go for something beefy, and personally I'd want a top-end discrete NVIDIA GPU. Apple's M-series chips are great but nowhere near the CUDA performance of top-end GPUs. That said, a mobile NVIDIA GPU like the 3070 Ti is not very powerful, so in your specific example an M4 Max could well outperform the (relatively old) 3070 Ti. Here I'm assuming you're not doing very intensive DS tasks, so a Mac with an M4 Max would be plenty powerful. In addition, you'll get much better battery life and overall CPU performance compared to the laptop that has the 3070 Ti.
• If the laptop is only for light mobile work: in my case I have a separate desktop with Windows + an RTX 3090, and I opted for a laptop that can do some coding but is mostly used to RDP into my main machine. So no beefy laptop required.
Hope this helps.
m
Got it, very helpful advice and thank you!
👍 1
Hi, I had a follow-up question: as this is my first time with an Apple silicon M chip, how does one set up GPU acceleration similar to CUDA (PyTorch, TensorFlow)? What about for the Nixtla library? Also, is it better to use Xcode instead of VS Code now (for Python and Jupyter notebooks)? I'm not finding clear, in-depth tutorials online
o
Just follow the quickstart on the PyTorch site to install with the supported backends (M chips don't support accelerators other than MPS, which PyTorch should enable by default if you follow the Get Started guide): https://pytorch.org/get-started/locally/
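[Editor's note] A quick way to confirm the MPS backend actually works after installing — a minimal sketch, assuming a PyTorch build of 1.12 or later (which introduced `torch.backends.mps`), and guarded so it also runs where PyTorch isn't installed yet:

```python
# Sanity-check the Apple-silicon (MPS) backend after installing PyTorch.
# Assumes PyTorch >= 1.12; the try/except lets it run even without torch.
try:
    import torch

    print("MPS built into this wheel: ", torch.backends.mps.is_built())
    print("MPS usable on this machine:", torch.backends.mps.is_available())
    device = "mps" if torch.backends.mps.is_available() else "cpu"
except ImportError:
    device = None  # PyTorch not installed yet

print("selected device:", device)
```

On an M-series Mac with a correct install, both checks should report `True` and the selected device is `mps`; anything else means training will silently fall back to the CPU.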
For Nixtla I don't think you need to do anything special, just follow the installation guides.
I don't know Xcode, but try both and choose what you like best.
🙌 1
m
Sounds good, thank you Olivier!
Hi again Olivier, apologies but I have a related issue: in VS Code on Mac I made sure to install PyTorch with Metal (verified, it works) and, per the Nixtla documentation, I set `os.environ['PYTORCH_ENABLE_MPS_FALLBACK'] = '1'`. However, when I run an RNN I obtain this:

```
---------------------------------------------------------------------------
NotImplementedError                       Traceback (most recent call last)
Cell In[4], line 1
----> 1 rnn_cv = nf.cross_validation(df, n_windows=5, step_size=1, val_size=0, test_size=None)  # 5-fold cross-validation, recurrent models only allow step_size=1

File ~/miniconda3/envs/gcm/lib/python3.10/site-packages/neuralforecast/core.py:1158, in NeuralForecast.cross_validation(self, df, static_df, n_windows, step_size, val_size, test_size, sort_df, use_init_models, verbose, refit, id_col, time_col, target_col, **data_kwargs)
   1156     df = df.reset_index(id_col)
   1157 if not refit:
-> 1158     return self._no_refit_cross_validation(
   1159         df=df,
   1160         static_df=static_df,
   1161         n_windows=n_windows,
   1162         step_size=step_size,
   1163         val_size=val_size,
   1164         test_size=test_size,
   1165         sort_df=sort_df,
   1166         verbose=verbose,
   1167         id_col=id_col,
   1168         time_col=time_col,
   1169         target_col=target_col,
   1170         **data_kwargs,
   1171     )
   1172 if df is None:
   1173     raise ValueError("Must specify `df` with `refit!=False`.")
...
---> 30 x_median, _ = x_nan.nanmedian(dim=dim, keepdim=keepdim)
   31 x_median = torch.nan_to_num(x_median, nan=0.0)
   32 return x_median

NotImplementedError: The operator 'aten::nanmedian.dim_values' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can set the environment variable PYTORCH_ENABLE_MPS_FALLBACK=1 to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.
```
Would you have any suggestions? I really appreciate it 🙏
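[Editor's note] One detail worth double-checking when this error appears despite the env var being set — a sketch, not Nixtla's official guidance: PyTorch reads `PYTORCH_ENABLE_MPS_FALLBACK` when it initializes, so setting it after `import torch` has already run may have no effect. The safe pattern is to set it in the very first cell, before any torch or Nixtla imports:

```python
import os

# PYTORCH_ENABLE_MPS_FALLBACK is read when torch initializes, so set it
# before `import torch` (e.g. in the first notebook cell); setting it
# after torch has been imported may have no effect.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

# Only after this point, import torch and the Nixtla libraries:
# import torch
# from neuralforecast import NeuralForecast

print(os.environ["PYTORCH_ENABLE_MPS_FALLBACK"])  # → 1
```

Alternatively, exporting the variable in the shell (or in VS Code's launch environment) before starting the Jupyter kernel guarantees the ordering.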
o
Hey, sorry for the late response, this thread escaped my attention. Could you try GRU instead of RNN?
m
Hi, no worries. It actually seems to work now, maybe related to the neuralforecast update. Although it is indeed much slower than with CUDA...
Also, the neuralforecast update mentioned a new cross-validation doc; do you know where I can find it? Thanks