# neural-forecast

Phil

10/13/2023, 8:55 PM
Hi everyone, I trained an NHITS model on a larger dataset and am trying to tune it on a separate, smaller dataset. I'm trying to change the number of `early_stop_patience_steps`, but it does not seem to work for me. I have the following function from this post: https://nixtlacommunity.slack.com/archives/C031M8RLC66/p1689171619028769
```python
import torch
from pytorch_lightning.callbacks import TQDMProgressBar
from pytorch_lightning.callbacks.early_stopping import EarlyStopping

def set_trainer_kwargs(nf, max_steps, early_stop_patience_steps):
    ## Trainer arguments ##
    # Max steps, validation steps and check_val_every_n_epoch
    trainer_kwargs = {'max_steps': max_steps}

    if 'max_epochs' in trainer_kwargs.keys():
        raise Exception('max_epochs is deprecated, use max_steps instead.')

    # Callbacks
    if trainer_kwargs.get('callbacks', None) is None:
        callbacks = [TQDMProgressBar()]
        # Early stopping
        if early_stop_patience_steps > 0:
            callbacks += [EarlyStopping(monitor='ptl/val_loss',
                                        patience=early_stop_patience_steps)]

        trainer_kwargs['callbacks'] = callbacks

    # Add GPU accelerator if available
    if trainer_kwargs.get('accelerator', None) is None:
        if torch.cuda.is_available():
            trainer_kwargs['accelerator'] = "gpu"
    if trainer_kwargs.get('devices', None) is None:
        if torch.cuda.is_available():
            trainer_kwargs['devices'] = -1

    # Avoid saturating local memory; disable fit model checkpoints
    if trainer_kwargs.get('enable_checkpointing', None) is None:
        trainer_kwargs['enable_checkpointing'] = False

    nf.models[0].trainer_kwargs = trainer_kwargs
    nf.models_init[0].trainer_kwargs = trainer_kwargs
```
Adding `early_stop_patience_steps` to `trainer_kwargs` gives me the error
```python
nf.fit(Y_df_train, use_init_models=False, val_size=180)
...
TypeError: Trainer.__init__() got an unexpected keyword argument 'early_stop_patience_steps'
```
When I try the following:
```python
nf.models[0].val_check_steps = 3
nf.models[0].start_padding_enabled = False
nf.models[0].early_stop_patience_steps = 1
```
It seems to work for the `val_check_steps` parameter, but it does not seem to work for `early_stop_patience_steps`. How do I do this?

Cristian (Nixtla)

10/15/2023, 5:28 PM
Hi @Phil!
`nf.models[0].early_stop_patience_steps = 1` won't work because it is an argument of the `callbacks` object of the Trainer, not a model attribute. The function in that post should still work; is it giving an error?

Phil

10/16/2023, 9:12 PM
Hi Cristian, sorry for the delay in my response. It's been a chaotic morning at LinkedIn. I managed to make it work; I adapted the function above to this:
```python
import torch
from typing import Optional

from neuralforecast import NeuralForecast
from pytorch_lightning.callbacks import TQDMProgressBar
from pytorch_lightning.callbacks.early_stopping import EarlyStopping

def set_trainer_kwargs(
    nf: NeuralForecast,
    max_steps: int,
    early_stop_patience_steps: int,
    val_check_steps: Optional[int] = None) -> None:
    """Set trainer arguments for fine-tuning a pre-trained NeuralForecast model.

    Args:
        nf: A pre-trained NeuralForecast model.
        max_steps: The maximum number of training steps.
        early_stop_patience_steps: Patience for early stopping (0 to disable).
        val_check_steps: The frequency of validation checks during training.

    Returns:
        None

    Example usage:
        trained_model_path = "./results/12315464155/"
        nf = load_neural_forecast_model(model_path=trained_model_path)
        set_trainer_kwargs(nf=nf, max_steps=1000, early_stop_patience_steps=3, val_check_steps=35)
        nf.fit(df=new_df, use_init_models=False, val_size=nf.models[0].h)
    """
    # Trainer arguments.
    trainer_kwargs = {
        # The maximum number of training steps.
        "max_steps": max_steps,
        # Display a progress bar during training.
        "callbacks": [TQDMProgressBar()],
        # Use GPU if available, or "auto" to decide automatically.
        "accelerator": "gpu" if torch.cuda.is_available() else "auto",
        # Use all GPUs if available, or 1 CPU if not.
        "devices": -1 if torch.cuda.is_available() else 1,
        # Disable model checkpointing.
        "enable_checkpointing": False,
    }

    # Early stopping callback.
    # Stop training early if validation loss doesn't improve for 'patience' steps.
    if early_stop_patience_steps > 0:
        trainer_kwargs["callbacks"].append(
            EarlyStopping(monitor="ptl/val_loss", patience=early_stop_patience_steps)
        )
    # Set custom validation check frequency.
    if val_check_steps:
        nf.models[0].val_check_steps = val_check_steps

    # Update trainer arguments for the model and its initialization.
    nf.models[0].trainer_kwargs = trainer_kwargs
    nf.models_init[0].trainer_kwargs = trainer_kwargs
```
If I put `val_check_steps` inside the `trainer_kwargs`, it throws an error. I had to do it like the code above shows and set it on the model instead:
```python
nf.models[0].val_check_steps = val_check_steps
```
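One way to read this thread (an assumption inferred from the errors above, not verified against the library source): `val_check_steps` and `early_stop_patience_steps` are model-level settings in neuralforecast, while `max_steps`, `callbacks`, `accelerator`, etc. are Lightning Trainer settings, so any model-level name that leaks into `trainer_kwargs` reaches `Trainer.__init__` and raises the `TypeError` seen earlier. A small sketch of keeping the two namespaces apart:

```python
# Sketch: split a flat settings dict into Trainer kwargs and model attributes.
# The classification below is an assumption based on this thread.
TRAINER_LEVEL = {"max_steps", "callbacks", "accelerator", "devices",
                 "enable_checkpointing"}
MODEL_LEVEL = {"val_check_steps", "early_stop_patience_steps"}

def split_settings(settings):
    """Return (trainer_kwargs, model_attrs) so that model-level names
    never reach Trainer.__init__, which rejects unknown keywords."""
    unknown = set(settings) - TRAINER_LEVEL - MODEL_LEVEL
    if unknown:
        raise ValueError(f"Unrecognized settings: {sorted(unknown)}")
    trainer_kwargs = {k: v for k, v in settings.items() if k in TRAINER_LEVEL}
    model_attrs = {k: v for k, v in settings.items() if k in MODEL_LEVEL}
    return trainer_kwargs, model_attrs

trainer_kwargs, model_attrs = split_settings(
    {"max_steps": 1000, "val_check_steps": 35}
)
# model_attrs would then be applied with setattr on nf.models[0], as above.
```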