This certainly works; however, I am retrieving and downloading with independent scripts. To avoid tying up the job queue with a job that is simply waiting for a request to process and complete, I send the requests first, log the request IDs, and then run a separate script to download the completed requests. Here is a snippet showing what I am doing now (aside from error checking, etc.):
import requests

request_id = "..."  # request ID logged by the submission script
api_key = "..."     # your CDS API key

session = requests.Session()
session.headers.update({'User-Agent': 'datapi/0.1.1', 'PRIVATE-TOKEN': api_key})

url = f"https://cds.climate.copernicus.eu/api/retrieve/v1/jobs/{request_id}"
s = session.get(url)
# some checking here to make sure the request exists and is complete

# Query the results to get the URL for download
r = session.get(url + "/results")
# more checking; 'asset' is parsed from the results payload

result = session.get(asset['href'])
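For the submission side, something along these lines works for me. This is only a minimal sketch: the .../processes/<dataset>/execution path and the {"inputs": ...} payload are assumptions based on the OGC API - Processes layout that the /jobs endpoints above follow, and I am assuming the response carries the same "jobID" field that the job list returns, so check both against the CDS API docs:

import requests

api_key = "..."   # your CDS API key
dataset = "..."   # dataset (process) name
request_body = {"inputs": {}}  # assumed payload shape; fill in the usual CDS request keys

session = requests.Session()
session.headers.update({'User-Agent': 'datapi/0.1.1', 'PRIVATE-TOKEN': api_key})

# Assumed OGC API - Processes style execution endpoint; verify the exact path
url = f"https://cds.climate.copernicus.eu/api/retrieve/v1/processes/{dataset}/execution"
r = session.post(url, json=request_body)

# Log the returned job/request ID so the separate download script can pick it up later
request_id = r.json()["jobID"]
with open("requests.log", "a") as f:
    f.write(request_id + "\n")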
That’s what I meant, or at least what I see as the only option. There does not seem to be a way to outright list all request statuses, as there was before, where the request ID was included. If you do find one, let me know.
import requests

with requests.Session() as session:
    session.headers = {
        "PRIVATE-TOKEN": YOUR_PRIVATE_KEY
    }
    # List all jobs (requests) and print their statuses
    r = session.get('https://cds.climate.copernicus.eu/api/retrieve/v1/jobs')
    result = r.json()
    for job in result.get("jobs", []):
        print(job["jobID"], job["status"])
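If it helps, this can be combined with the results endpoint from the snippet above to pull down everything that has finished. A sketch, assuming the finished status string is "successful" and that the results payload exposes the download URL under an asset entry with an href field (adjust the parsing to the actual payload):

import requests

base = "https://cds.climate.copernicus.eu/api/retrieve/v1"

with requests.Session() as session:
    session.headers = {"PRIVATE-TOKEN": YOUR_PRIVATE_KEY}

    # List all jobs and keep only the finished ones
    jobs = session.get(f"{base}/jobs").json().get("jobs", [])
    for job in jobs:
        if job["status"] != "successful":  # assumed status string for completed jobs
            continue

        # Fetch the results for each completed job to get the download URL
        results = session.get(f"{base}/jobs/{job['jobID']}/results").json()
        href = results["asset"]["value"]["href"]  # assumed layout of the results payload

        # Stream the file to disk under its remote filename
        with session.get(href, stream=True) as r:
            with open(href.rsplit("/", 1)[-1], "wb") as f:
                for chunk in r.iter_content(chunk_size=1 << 20):
                    f.write(chunk)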