Isaac
03/26/2024, 2:30 PM
I can limit training with the steps argument, but I don't see a way to do the same with the validation set.

Marco
03/26/2024, 2:38 PM
You could try n_windows. Not sure if this is what you are looking for, so let me know if it helps!
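
A minimal sketch of the n_windows suggestion, assuming Nixtla's neuralforecast library; the NBEATS model, horizon, and synthetic data are illustrative, not from the thread:

```python
import pandas as pd
from neuralforecast import NeuralForecast
from neuralforecast.models import NBEATS

# Illustrative long-format input: one dataframe with unique_id, ds, y columns.
df = pd.DataFrame({
    "unique_id": ["series_1"] * 48,
    "ds": pd.date_range("2020-01-31", periods=48, freq="M"),
    "y": [float(i) for i in range(48)],
})

nf = NeuralForecast(models=[NBEATS(h=12, input_size=24, max_steps=10)], freq="M")

# n_windows sets how many rolling validation windows cross_validation evaluates,
# so a small value keeps the evaluation pass short.
cv_df = nf.cross_validation(df=df, n_windows=1, step_size=12)
```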

Isaac
03/26/2024, 3:00 PM
With steps I can limit training to a smaller subset of my training data. Is there a way to do the same for validation? As in, not run through the entire validation set so I can speedily test my script?

Marco
03/26/2024, 3:10 PM
Can you set val_size to 0?

Isaac
03/26/2024, 4:31 PM

Isaac
03/27/2024, 12:33 AM

Cristian (Nixtla)
03/27/2024, 4:18 PM
Are you using fit or cross_validation? In both cases, you only pass one dataframe with continuous time series, and you define the length of the validation set with val_size, as Marco said. We currently do not support splitting in the unique_id dimension.
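
As a sketch of the val_size behaviour described above, reusing the imports and the illustrative df from the earlier sketch: the last val_size timestamps of every series are held out for validation, and there is no argument for holding out whole unique_ids.

```python
# Reusing df, NeuralForecast and NBEATS from the sketch above.
nf = NeuralForecast(models=[NBEATS(h=12, input_size=24, max_steps=10)], freq="M")

# fit: the last 12 timestamps of each series become the validation set.
nf.fit(df=df, val_size=12)
preds = nf.predict()
```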

Cristian (Nixtla)
03/27/2024, 4:18 PM
You can also increase val_check_steps, so that validation is performed less frequently.
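
A short illustration of that knob, assuming val_check_steps is passed to the model constructor; the values are illustrative and the setup is the same as in the first sketch:

```python
# Run the validation loop only every 500 training steps instead of the default
# schedule, so a longer fit spends less time validating.
model = NBEATS(h=12, input_size=24, max_steps=1000, val_check_steps=500)
nf = NeuralForecast(models=[model], freq="M")
nf.fit(df=df, val_size=12)
```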

Isaac
03/27/2024, 5:29 PM
I'm testing predict, so I'm trying to quickly run through fit to get straight to predict. Since I use a batch size of 128, I can set steps to 2 and only "fit" on the first 256 unique ids. However, the validation part of fit runs through the entire set of unique IDs every time. I'd like to bypass it, finish the fitting, and go straight into predicting. Is there a way to do that? Does that make sense?

Cristian (Nixtla)
03/27/2024, 9:47 PM
val_size=0 plus num_sanity_val_steps=0 should work.
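
A sketch of that combination for the quick-debug case described above, assuming extra keyword arguments on the model (such as num_sanity_val_steps) are forwarded to the underlying PyTorch Lightning Trainer; the numbers are illustrative and the setup is reused from the first sketch:

```python
# Reusing df and the imports from the first sketch.
model = NBEATS(
    h=12,
    input_size=24,
    max_steps=2,             # a couple of training steps, as in the debug run above
    num_sanity_val_steps=0,  # forwarded to the Lightning Trainer: skip the sanity validation pass
)
nf = NeuralForecast(models=[model], freq="M")
nf.fit(df=df, val_size=0)    # val_size=0: no validation split at all
preds = nf.predict()
```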

Cristian (Nixtla)
03/27/2024, 9:48 PM
Or set val_check_steps larger than the number of steps.

Isaac
03/27/2024, 9:59 PM