Hi, I'm trying to use TFT or NHITS to train on a large dataset with shape (1089214, 32) and only 1 unique_id.
```python
from neuralforecast import NeuralForecast
from neuralforecast.models import TFT

horizon = 30
hist_length = 120
model = [TFT(
    input_size=hist_length,
    h=horizon,
    max_steps=30,
    hist_exog_list=ex_columns,   # 12 historical exogenous columns
    batch_size=32,
    accelerator='gpu',
    devices=2,
)]
nf = NeuralForecast(models=model, freq='S')
nf.fit(df=data, verbose=True)  # data shape (1089214, 32)
```
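For reference, this is roughly the long-format layout I am feeding in; the column names below are made up for the example, only `unique_id`, `ds`, `y`, and the 12 exogenous columns matter:

```python
import numpy as np
import pandas as pd

# Illustrative reconstruction of the input frame: 1 series, ~1.09M rows at
# second frequency, with 12 historical exogenous columns. All names here are
# invented for the sketch; my real frame has 32 columns in total.
n_rows = 1_089_214
ex_columns = [f"exog_{i}" for i in range(12)]

data = pd.DataFrame({
    "unique_id": "series_1",
    "ds": pd.date_range("2023-01-01", periods=n_rows, freq="S"),
    "y": np.random.rand(n_rows),
    **{col: np.random.rand(n_rows) for col in ex_columns},
})
print(data.shape)  # (1089214, 15) in this sketch
```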
ex_columns has 12 entries. Training raises an OutOfMemoryError on the GPU. I tried setting a smaller batch_size to save memory, but it doesn't help. Stepping through with a debugger, I found that in training_step in _base_windows.py (line 444) the size of batch["temporal"] is (1, 14, 1089214), which consumes too much GPU memory.
```python
def training_step(self, batch, batch_idx):
    # Create and normalize windows [Ws, L+H, C]
    windows = self._create_windows(batch, step="train")
    y_idx = batch["temporal_cols"].get_loc("y")
    original_outsample_y = torch.clone(windows["temporal"][:, -self.h :, y_idx])
    windows = self._normalization(windows=windows)
```
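My back-of-envelope reading of this (which may be wrong, hence the question): batch["temporal"] itself is only about 61 MB, but if _create_windows materializes every window of length input_size + h before subsampling, the intermediate tensor is orders of magnitude larger. A rough estimate, assuming float32 and step_size=1:

```python
# Rough memory estimate for the windowed tensor, assuming every window is
# materialized before any subsampling (my reading of _create_windows; I may
# be misreading it, which is part of my question).
n_timestamps = 1_089_214
channels = 14               # y + 12 hist_exog + available_mask, as seen in batch["temporal"]
window_size = 120 + 30      # input_size + h
step_size = 1
bytes_per_value = 4         # float32

n_windows = (n_timestamps - window_size) // step_size + 1
window_tensor_bytes = n_windows * window_size * channels * bytes_per_value

print(f"raw series tensor : {n_timestamps * channels * bytes_per_value / 1e9:.2f} GB")
print(f"all windows tensor: {window_tensor_bytes / 1e9:.2f} GB")
# ~0.06 GB vs ~9.1 GB on my numbers, which would explain the OOM regardless of batch_size
```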
My question is: why does training_step in the BaseWindows class receive the entire series instead of a batch_size-sized slice? Thanks.
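For completeness, this is what I plan to try next, assuming windows_batch_size and inference_windows_batch_size are the arguments that cap the number of windows per step in the installed neuralforecast version; I am not certain they behave that way, which is really what I am asking:

```python
# Hypothetical next attempt: cap the number of windows sampled per training /
# inference step instead of (or in addition to) lowering batch_size.
# I am assuming TFT forwards windows_batch_size / inference_windows_batch_size
# to BaseWindows in the version I have installed.
model = [TFT(
    input_size=hist_length,
    h=horizon,
    max_steps=30,
    hist_exog_list=ex_columns,
    batch_size=1,                      # series per batch (there is only 1 unique_id anyway)
    windows_batch_size=256,            # windows sampled per training step
    inference_windows_batch_size=256,  # windows per forward pass at predict time
    accelerator='gpu',
    devices=2,
)]
```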