How to create efficient MARS archive requests using cdsapi/Python

Hi! I am new to this forum, and I hope this is the appropriate place for my inquiry. I am looking for assistance in formatting efficient requests to download 15 recent years of archived ERA5 data (forecast mean flux data on model levels). From my reading of the documentation (which I may not have understood correctly), all variables from one month of forecast data are usually stored on a single tape. However, my requests always fail if they cover more than one or two days of data. Combined with 12+ hours spent in the queue, downloading 15 years this way appears unsustainable. I am hoping to improve throughput by formatting my requests as efficiently as possible, and I would appreciate guidance from someone more knowledgeable about the MARS system in making sure my requests are well-formed. I have attached a sample script to this post that details what I am trying to download; my questions are summarized below.
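For context, here is a trimmed-down sketch of the kind of request my script makes. The param code, steps, and dates below are placeholders rather than the exact values from the attached script:

```python
import cdsapi

c = cdsapi.Client()

# One month of forecast, model-level data in a single request --
# this is the request size that keeps failing for me.
c.retrieve(
    "reanalysis-era5-complete",  # ERA5 model-level data is served via this dataset
    {
        "class": "ea",
        "type": "fc",                 # forecast fields
        "stream": "oper",
        "expver": "1",
        "levtype": "ml",              # model levels
        "levelist": "1/to/137",       # placeholder: all 137 model levels
        "param": "235.128",           # placeholder param code; the real list is in my script
        "date": "2020-01-01/to/2020-01-31",
        "time": "06:00:00/18:00:00",  # ERA5 forecasts initialise at 06 and 18 UTC
        "step": "3/6/9/12",           # placeholder steps giving 4x-daily coverage
        "grid": "0.25/0.25",          # regular 0.25 deg lat-lon grid
        "format": "netcdf",           # see question 2: would omitting this (GRIB) help?
    },
    "era5_fc_ml_202001.nc",
)
```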

  1. How much data can I request from a single tape at one time? I have tried requesting one month of four 4D fields, at 4x-daily frequency and 0.25 deg lat-lon resolution, as a netCDF file, which I thought should all be on the same tape. A one-month request does not work, and neither does two weeks; the request only succeeds for 1-2 days of data. This seems inefficient: if more of the data sits on one tape, I do not understand why I cannot download all of it at once.
  2. Does requesting GRIB files rather than netCDF allow for larger downloads? I understand that converting to netCDF adds some processing time, but that does not seem to be where my requests fail; they appear to fail before the data is even retrieved from the archive. Still, if switching to GRIB would let me bundle my requests into larger (i.e., month-long, tape-sized) and more efficient ones, I would do so.
  3. What is the trade-off in queue time for having many requests active? I know I will need many simultaneous requests to be efficient, and I understand the limit is 50, but I do not know whether my jobs are demoted in the queue for having more than a few. I would like to optimize both the size and the number of my requests for the best total throughput (the batching pattern I have in mind is sketched after this list).
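For question 3, here is the batching pattern I have in mind: one month-sized request per job, with a handful of requests in flight at once. The fetch_month helper, worker count, and param values are my own placeholders, and I am assuming that running several blocking cdsapi clients in separate threads is a reasonable way to keep multiple requests queued:

```python
import calendar
import cdsapi
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch_month(year: int, month: int) -> str:
    """Submit one month-sized request; blocks until the download finishes."""
    last_day = calendar.monthrange(year, month)[1]
    target = f"era5_fc_ml_{year}{month:02d}.nc"
    c = cdsapi.Client()  # one client per thread, to be safe
    c.retrieve(
        "reanalysis-era5-complete",
        {
            # Same body as the single-month request above, with the
            # date range swapped in for each month.
            "class": "ea",
            "type": "fc",
            "stream": "oper",
            "expver": "1",
            "levtype": "ml",
            "levelist": "1/to/137",
            "param": "235.128",  # placeholder param code
            "date": f"{year}-{month:02d}-01/to/{year}-{month:02d}-{last_day}",
            "time": "06:00:00/18:00:00",
            "step": "3/6/9/12",
            "grid": "0.25/0.25",
            "format": "netcdf",
        },
        target,
    )
    return target

# Keep e.g. 10 requests in flight at once -- well under the 50-request limit.
months = [(y, m) for y in range(2008, 2023) for m in range(1, 13)]
with ThreadPoolExecutor(max_workers=10) as pool:
    futures = {pool.submit(fetch_month, y, m): (y, m) for y, m in months}
    for fut in as_completed(futures):
        y, m = futures[fut]
        try:
            print(f"done: {fut.result()}")
        except Exception as exc:
            print(f"failed: {y}-{m:02d}: {exc}")
```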

I have tried, to the best of my ability, to answer these questions from the documentation and through trial and error, but because my requests sit in the queue for a day before failing, trial and error may not be the optimal route. Any assistance or guidance is much appreciated!


get_era5_data.py