I’ve submitted many requests, each for one year of daily data, but the system I ran them on has a 24-hour compute limit, so many jobs timed out before the retrieval could finish and download. The completed requests appeared a day or two later under Your Requests in the Climate Data Store, but downloading them one by one, by hand, is quite slow. Given the list of request IDs, is there a way to download all the completed files in Python? I looked through this post but have not managed to get quite the right syntax.
From what I could tell, this isn’t possible (anymore). The old API implemented this; the new one doesn’t. You always have to keep track of all the job IDs yourself in order to track their status — there seems to be no way to query them all in one pass. The documentation of the API is severely lacking, so I might have missed something, but I don’t think so. I’ve implemented the tracking of job IDs as a state variable in R; I won’t make the effort in Python without better-documented APIs.
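That said, for anyone who does want this bookkeeping in Python, here is a rough sketch. It assumes the jobs endpoint (the same one used in the answer below) returns OGC-API-Processes-style JSON with a "status" field; the state file and token are placeholders, and there is still one GET per job because the API offers no bulk query:

import json
import pathlib
import requests

STATE_FILE = pathlib.Path("job_ids.json")  # hypothetical local state file
API = "https://ads.atmosphere.copernicus.eu/api/retrieve/v1/jobs"
TOKEN = "YOUR-PERSONAL-ACCESS-TOKEN"  # placeholder

def load_job_ids() -> list[str]:
    # Job IDs must be tracked on our side; here they live in a JSON file.
    return json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else []

def poll_jobs(job_ids: list[str]) -> dict[str, str]:
    # One request per job -- there is no single call covering all of them.
    statuses = {}
    for job_id in job_ids:
        r = requests.get(f"{API}/{job_id}", headers={"PRIVATE-TOKEN": TOKEN})
        r.raise_for_status()
        statuses[job_id] = r.json().get("status", "unknown")
    return statuses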
I managed to download my completed requests on the Atmospheric Data Store by their IDs using the following function:
# Imports assumed: Remote and Client come from the ecmwf-datastores-client
# package (the exact import path may differ between versions), and
# multiurl provides the progress bar.
import multiurl.base
from ecmwf.datastores import Client, Remote

# 'client' must be an authenticated client for the Atmospheric Data Store,
# e.g. client = Client(url="https://ads.atmosphere.copernicus.eu/api",
#                      key="YOUR-PERSONAL-ACCESS-TOKEN")

def download_ADS_completed_request(request_id: str, target_name: str) -> None:
    # Rebuild a Remote handle for an already-completed job from its request ID.
    request = Remote(
        url=f"https://ads.atmosphere.copernicus.eu/api/retrieve/v1/jobs/{request_id}",
        headers={"User-Agent": "ecmwf-datastores-client/0.2.0", "PRIVATE-TOKEN": client.key},
        session=client.session,
        retry_options={"maximum_tries": 500, "retry_after": 120},
        request_options={"timeout": 60, "verify": True},
        download_options={"progress_bar": multiurl.base.progress_bar},
        sleep_max=120,
        cleanup=False,
        log_callback=None,
    )
    # Blocks until the result file has been written to target_name.
    request.download(target=target_name)
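Given the list of request IDs from the question, the function can then simply be called in a loop; the IDs and output file names below are placeholders:

# Hypothetical usage: download every completed request in one go.
request_ids = ["<request-id-1>", "<request-id-2>"]  # your saved job IDs
for i, request_id in enumerate(request_ids):
    download_ADS_completed_request(request_id, target_name=f"result_{i}.nc")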
Note that this only works if your personal access token does not contain a ‘:’, because in that case the client would not be built the same way.
This may come too late for you, sorry, but maybe it can help others.