# squads
m
Hi team, I have a question regarding TimeGPT in AWS.
h
for example, we can ask for the typical time series length and the parameters they use
and create 200-300 synthetic time series to test on an 8 GB machine
to see if we can reproduce the issue, something like this:
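rough sketch of what I mean, just so we're on the same page. The series count, length (1,000 points), and daily frequency below are placeholders until the client tells us their real setup; the point is to build the long-format (`unique_id`, `ds`, `y`) dataframe that Nixtla tooling expects and check whether it even fits comfortably on an 8 GB box.
```python
# Sketch: generate ~250 synthetic series in long format and estimate memory.
# SERIES_LEN and the daily frequency are assumptions -- confirm with the client.
import numpy as np
import pandas as pd

N_SERIES = 250       # "200-300 synthetic ts"
SERIES_LEN = 1_000   # assumed length per series

rng = np.random.default_rng(42)
dates = pd.date_range("2020-01-01", periods=SERIES_LEN, freq="D")

frames = []
for i in range(N_SERIES):
    trend = np.linspace(0, rng.uniform(5, 20), SERIES_LEN)
    season = 10 * np.sin(2 * np.pi * np.arange(SERIES_LEN) / 7)  # weekly seasonality
    noise = rng.normal(0, 2, SERIES_LEN)
    frames.append(pd.DataFrame({
        "unique_id": f"series_{i}",
        "ds": dates,
        "y": trend + season + noise,
    }))

df = pd.concat(frames, ignore_index=True)
print(df.shape)
print(f"~{df.memory_usage(deep=True).sum() / 1e6:.1f} MB in memory")
```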
m
We actually have some of their data, so we can use that for testing.
h
ok, so how do we reproduce their issue? How do we run TimeGPT with their spec?
m
We have almost everything (data and code), so if it is ok with you, we can work on that next Monday.
h
Oh, I'm totally ok with that
I'm just curious how you are going to run TimeGPT on an 8 GB machine that has a GPU?
m
yes, I was planning to do two things: 1. Ask the client for details on their AWS configuration. 2. Try to replicate their issue using the instances Nixtla has available to see if we hit the same memory limitations.
and as I said before, I think this is a difficult case to debug, but they’re one of our new enterprise clients, so we’re trying to help them implement TimeGPT in production asap
h
The reason I'm asking is that I think we need a simple way to create a similar environment in order to debug these issues
One thing I have tried is a Databricks notebook, where you can pick whatever AWS instance type with a GPU
but how to pull a specific version of TimeGPT there is still an open question
alternatively, given a TimeGPT Docker image, could we quickly deploy it to AWS ECS and test on it interactively? Something like the sketch below:
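very rough sketch of the ECS idea with boto3, assuming Nixtla gives us an image URI. The image URI, cluster name, task family, port, and GPU count are all placeholders; the real entrypoint and ports depend on whatever the shipped image actually exposes.
```python
# Rough sketch: register a GPU task definition for a (hypothetical) TimeGPT
# image and run it on an ECS cluster that already has a GPU container instance.
# GPU tasks need the EC2 launch type (Fargate does not support GPUs).
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.register_task_definition(
    family="timegpt-debug",
    requiresCompatibilities=["EC2"],
    networkMode="bridge",
    containerDefinitions=[{
        "name": "timegpt",
        "image": "<account>.dkr.ecr.us-east-1.amazonaws.com/timegpt:<tag>",  # placeholder
        "memory": 7168,  # ~7 GB, to mimic the client's 8 GB instance
        "portMappings": [{"containerPort": 8000, "hostPort": 8000}],  # assumed API port
        "resourceRequirements": [{"type": "GPU", "value": "1"}],
    }],
)

ecs.run_task(
    cluster="timegpt-debug-cluster",  # placeholder cluster name
    taskDefinition="timegpt-debug",
    launchType="EC2",
    count=1,
)
```
for the "test on it interactively" part we'd probably also want ECS Exec (pass `enableExecuteCommand=True` to `run_task`) so we can shell into the running container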
m
yes, I think we can do that. Plus, we already have some of the client's data and even their code, as they sent it to us for review.