I’ve submitted many requests, each for one year of daily data, but the system I ran the requests on has a 24-hour compute limit, so many jobs hit that limit before the retrieval finished and the download could start. The completed requests showed up a day or two later under Your Requests in the Climate Data Store, but downloading them one by one by hand is quite slow. If I have the list of request IDs, is there a way to download all the completed files in Python? I looked through this post but have not managed to get quite the right syntax.
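For reference, this is the kind of loop I have been trying. Treat it as a sketch rather than working code: I am assuming the newer cdsapi exposes the underlying data stores client as `Client().client`, that it has a `get_remote(request_uid)` method for re-attaching to an existing request, and that the returned object has `.status` and `.download()`; the attribute and method names may well be wrong, which is exactly the syntax I can't pin down from the documentation.

```python
# Sketch only: method/attribute names below are my assumptions about the
# new CDS API wrapper, not confirmed against the documentation.
import cdsapi

# Request IDs copied by hand from the "Your requests" page (placeholders here)
request_ids = [
    "aaaaaaaa-1111-2222-3333-bbbbbbbbbbbb",
    "cccccccc-4444-5555-6666-dddddddddddd",
]

c = cdsapi.Client()

for rid in request_ids:
    try:
        # Assumed: re-attach to an already-submitted request by its ID
        remote = c.client.get_remote(rid)
    except Exception as exc:
        print(f"{rid}: could not re-attach ({exc})")
        continue

    # Assumed: status strings like "successful" / "failed" / "running"
    if remote.status == "successful":
        remote.download(f"request_{rid}.nc")  # target filename chosen by me
        print(f"{rid}: downloaded")
    else:
        print(f"{rid}: status is {remote.status!r}, skipping")
```

If the right incantation for re-attaching by request ID exists, pointing me at the correct method name would be enough.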
From what I can tell this isn’t possible (anymore). The old API supported it; the new one doesn’t. You have to keep track of all the job IDs yourself in order to track their status, and there doesn’t seem to be a way to query them all in one pass. The documentation of the API is severely lacking, so I might have missed something, but I don’t think so. I’ve implemented the tracking of job IDs as a state variable in R, and I won’t make the effort in Python without transparent documentation.