# support
t
personalt@mailfence.com has a question about data with timestamps at 1-min intervals. He's using raw HTTP posts to generate the forecasts and is getting an output that doesn't make sense to him. Question in his thread. Maybe @Yibei or @José Morales you have thoughts? Thanks so much.
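(For reference, a minimal sketch of the kind of raw HTTP post being described, with 12 points at 1-minute intervals like this user's data. The endpoint URL and all payload field names are assumptions for illustration only, not the confirmed API:)

```python
# Hypothetical raw HTTP forecast request with 1-minute-interval data.
# The endpoint URL and every field name below are ASSUMPTIONS for
# illustration; only the "raw HTTP post" workflow comes from the thread.
import requests

timestamps = [f"2024-01-01 00:{m:02d}:00" for m in range(12)]  # 12 samples, 1 min apart
values = [10.2, 10.5, 10.1, 10.8, 11.0, 10.7, 10.9, 11.2, 11.1, 10.8, 11.3, 11.5]

payload = {
    "timestamps": timestamps,  # assumed field name
    "values": values,          # assumed field name
    "fh": 5,                   # forecast horizon; assumed field name
}

resp = requests.post(
    "https://api.example.com/forecast",  # placeholder endpoint
    json=payload,
    headers={"Authorization": "Bearer <API_KEY>"},  # placeholder auth
)
print(resp.json())
```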
j
oh wait, that wasn't the problem, was it? the response looks ok (it's incrementing by 1 minute). is that person asking about the forecasted values?
t
yes, they're saying the forecasted values don't make sense.
But I don't think this person has a lot of experience with forecasting.
j
I got distracted by the fact that the frequency was also wrong. Setting finetune_steps will help
"finetune_steps": 10
or similar
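(Concretely, that would mean adding the field to the same JSON request body. A sketch, where everything other than "finetune_steps" is an assumed field name:)

```python
# Sketch: enable fine-tuning by adding "finetune_steps" to the request
# body. "finetune_steps" comes from this thread; the other field names
# are assumptions for illustration.
payload = {
    "timestamps": ["2024-01-01 00:00:00", "2024-01-01 00:01:00"],  # 1-min data (truncated)
    "values": [10.2, 10.5],
    "fh": 5,
    "finetune_steps": 10,  # number of fine-tuning iterations
}
```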
y
I guess forecasting higher-frequency data like this (minute-level) could be more challenging and require more input? Fine-tuning the model could help improve accuracy in this scenario
j
oh but I think that model requires 28 samples, so it'll return an error, since that person is sending 12 samples. That's part of the reason the forecast isn't great: the model is using 16 zeros and 12 real values. Ask the person if they can provide more samples; if they have more than 28 they can also finetune
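(A toy illustration of why that padding skews things: the padded series the model sees sits at a very different level than the real data.)

```python
# Toy example of the zero-padding effect described above: 12 real values
# padded with 16 zeros to reach the model's 28-sample minimum.
real = [10.2, 10.5, 10.1, 10.8, 11.0, 10.7, 10.9, 11.2, 11.1, 10.8, 11.3, 11.5]
padded = [0.0] * 16 + real

print(sum(real) / len(real))      # ~10.8: the level of the real data
print(sum(padded) / len(padded))  # ~4.6: the level the model actually sees
```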
t
Thanks, here's a draft response
You can also improve your forecasts by sending more samples and fine-tuning.
To fine-tune, set
"finetune_steps": 10
or something similar. Note that the model requires at least 28 samples; if you send fewer than 28, it will fill in with zeros, so the results won't be very accurate.
Can you provide more than 28 samples when you're doing your forecasts? Then you should see improved results and be able to use fine-tuning.
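(We could also suggest a quick client-side check like this before posting, so short series get caught early. The 28-sample minimum is from this thread; the helper itself is just a sketch:)

```python
# Sketch: warn before sending a series shorter than the 28-sample
# minimum mentioned above, since the model would pad it with zeros.
MIN_SAMPLES = 28

def check_series(values: list[float]) -> None:
    if len(values) < MIN_SAMPLES:
        print(f"Warning: only {len(values)} samples; "
              f"{MIN_SAMPLES - len(values)} zeros would be padded in.")
    else:
        print(f"OK: {len(values)} samples (>= {MIN_SAMPLES}).")

check_series([10.2, 10.5, 10.1])  # example: far below the minimum
```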
👍 2
Thanks @José Morales and @Yibei!
This user had a follow-up question. I think the answer to 'reduce the number of data points required' is 'no'. But is there any other advice we can offer him?
I tried to add "finetune_steps":10 and received the following:
Minimum number of samples by id required for finetuning is 841, got 2
Is there any way to reduce the number of data points required?
j
840 is the long horizon model; the regular model ("model": "timegpt-1") requires 120 (I got this wrong yesterday). We can't reduce that at the moment.
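(For the record, a sketch of how the model choice maps to the fine-tuning minimums discussed here. "model": "timegpt-1" is confirmed in this thread; the long-horizon model's exact identifier isn't given, so it appears as a placeholder:)

```python
# Fine-tuning minimums as stated in this thread (the user's error
# message actually reported 841 for the long-horizon model).
MIN_FINETUNE_SAMPLES = {
    "timegpt-1": 120,             # regular model
    "<long-horizon-model>": 840,  # placeholder name; not given in-thread
}

payload = {
    "model": "timegpt-1",  # field/value confirmed in this thread
    "finetune_steps": 10,
    # ...plus the series data, as in the earlier sketches
}
```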
Also, are they only providing 2 samples now? That's going to be even worse haha
😅 1
t
oh dear. He's trialing Premium support, so I just added him to a Slack channel where he can talk with Yibei and Cristian. 🙂 I'll answer him about the long-horizon model and the 120-sample minimum, and then we can follow up with him in Slack to try to figure things out.
@Yibei that's the mailfence channel I just added you to.
y
I see, thanks @Tracy Teal.