QBOi Experiments

Updates for the QBOi discussion forum ‘QBOi Discussions’ can be found at qboiexperiments.blogspot.co.uk

 


5. Output and diagnostics

A discussion around the choice of diagnostics was initiated by Francois Lott and can be found on the QBOi blogging site: http://qboiexperiments.blogspot.co.uk/. This and subsequent discussions led to the consensus on output diagnostics described here. The spatial and temporal resolution of the requested diagnostic variables is described first (Sec. 5.1), followed by the requested output periods for the different experiments (Sec. 5.2). This is followed by brief comments on data format and the location of the archive (Sec. 5.3), and then a summary table listing the requested output variables, including their standardized names (Sec. 5.4).


5.1 Spatial and temporal resolution of the requested diagnostics


For the five core experiments, good vertical resolution is required for the output diagnostics. Accordingly, the following extended set of 30 pressure levels is requested:


Pext (hPa) ∈ {1000, 925, 850, 700, 600, 500, 400, 300, 250, 200, 175, 150, 120, 100, 85, 70, 60, 50, 40, 30, 20, 15, 10, 7, 5, 3, 2, 1.5, 1.0, 0.4}


These pressures are adapted from the extended level set requested by CMIP6. They have been slightly modified to obtain a vertical resolution of 1.0 to 1.5 km over the range 200 hPa to 40 hPa, i.e. through the upper tropical troposphere and lower stratosphere.
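
As a guide to how the requested diagnostics might be prepared, the following is a minimal sketch (not part of the protocol) that defines the Pext levels and interpolates a field, already on pressure levels, onto them with xarray; the variable and coordinate names ("ua", "plev") are assumptions to be adapted to each model's output.

```python
import numpy as np
import xarray as xr

# The 30 requested pressure levels (hPa), as listed above
PEXT_HPA = np.array([1000, 925, 850, 700, 600, 500, 400, 300, 250, 200,
                     175, 150, 120, 100, 85, 70, 60, 50, 40, 30,
                     20, 15, 10, 7, 5, 3, 2, 1.5, 1.0, 0.4])

def to_qboi_levels(da: xr.DataArray, plev_dim: str = "plev") -> xr.DataArray:
    """Interpolate a field already on pressure levels (hPa) onto Pext.

    Interpolation is done linearly in log-pressure, a common choice for
    smoothly varying stratospheric fields.
    """
    da = da.assign_coords({plev_dim: np.log(da[plev_dim].values)})
    out = da.interp({plev_dim: np.log(PEXT_HPA)})
    return out.assign_coords({plev_dim: PEXT_HPA})

# Example usage (file and variable names are hypothetical):
# ua_qboi = to_qboi_levels(xr.open_dataset("ua_model_output.nc")["ua"])
```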


All output variables are requested to use this standard set of pressure levels, for ease of comparison between models. There are two exceptions, however:


  (1) Data to be used for calculating equatorial wave spectra (6-hourly instantaneous fields) should be provided at a vertical resolution equivalent to the model resolution, as discussed in more detail below. High vertical resolution is needed to ensure accurate calculation of QBO wave forcing (e.g. see Kim and Chun 2015).

  (2) Daily-mean 3D variables should be provided on the set of 8 pressure levels used by CMIP5: 1000, 850, 700, 500, 250, 100, 50, 10 hPa. This will reduce data volume, and since it is anticipated that these data will be used to examine the QBO influence on other regions of the atmosphere (e.g. on the NAO), there is no need for high vertical resolution.

There is no prescribed horizontal grid on which data are to be provided. Data should be provided on a latitude-longitude grid at a resolution that is ideally representative of the actual model resolution. For models whose horizontal resolution is so high that the size of the dataset becomes prohibitive, use of a coarser grid is acceptable. In this case the reduction method should be documented so that there is no confusion between the model’s resolution and the diagnostics’ resolution.
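
Where a coarser grid is used, the reduction could be performed and documented along the following lines. This is an illustrative sketch only, not a prescribed method; the choice of bilinear interpolation, the target spacing, and the coordinate names ("lat", "lon") are assumptions.

```python
import numpy as np
import xarray as xr

def coarsen_lat_lon(da: xr.DataArray, dlat: float = 2.0, dlon: float = 2.5) -> xr.DataArray:
    """Interpolate a diagnostic onto a regular dlat x dlon grid and record the reduction."""
    new_lat = np.arange(-90.0, 90.0 + dlat, dlat)
    new_lon = np.arange(0.0, 360.0, dlon)
    out = da.interp(lat=new_lat, lon=new_lon)
    # Document the reduction so the diagnostic grid cannot be confused with the model grid
    out.attrs["horizontal_regridding"] = (
        f"bilinear interpolation from native grid to {dlat} x {dlon} degree grid"
    )
    return out
```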


Models may not simulate the QBO for the right (or similar) reasons; in particular, the fraction of resolved and parameterised waves will most likely differ between models. To examine the zonal-mean QBO momentum budget, EP fluxes (EPF), the EP-flux divergence (DIVF), and other terms in the TEM zonal momentum equation are requested. Although requested as both daily-mean and monthly-mean fields, EPF-derived diagnostics should be calculated using, at a minimum, 6-hourly model wind and temperature fields. However, DIVF alone may not be a sufficient diagnostic, as the EP fluxes can include large opposing contributions from different wave types. To examine the dependence of QBO wave driving on different types of equatorial waves, wavenumber-frequency spectra of EPF can be calculated. This requires storage of instantaneous values of u, v, w, and T every 6 hours on model levels, or on pressure levels at roughly equivalent vertical resolution to the model levels. To reduce the size of the dataset, these data will be saved on a reduced range of vertical levels, from 100 hPa to 0.4 hPa. Note that the vertical resolution within this range will be as high as possible, since the data will be on model levels or on pressure levels with similar vertical resolution. To further reduce the size of the dataset, the 6-hourly output should be provided only on a reduced set of latitudes near the equator, Φred (degrees), specifically:


Φred : 15°S ≤ Φ ≤ 15°N
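
As an illustration of the zonal-mean momentum-budget diagnostics requested above, the sketch below computes quasi-geostrophic EP-flux components and the associated zonal force from 6-hourly u, v and T on pressure levels. It is a simplified stand-in for the full primitive-equation TEM expressions of Andrews et al. (1987), and the dimension names, units (plev in hPa) and constants are assumptions.

```python
import numpy as np
import xarray as xr

A_EARTH = 6.371e6   # Earth radius (m)
OMEGA = 7.292e-5    # Earth rotation rate (s^-1)
P0 = 1000.0         # reference pressure (hPa)
KAPPA = 0.286       # R/cp for dry air

def qg_ep_flux(u, v, T):
    """Quasi-geostrophic EP flux and zonal force from 6-hourly u, v, T.

    Inputs are xarray DataArrays with dims (time, plev, lat, lon), plev in hPa.
    Returns (F_phi, F_p, force), where force is the EP-flux divergence expressed
    as a zonal acceleration in m s^-1 day^-1.
    """
    phi = np.deg2rad(u["lat"])
    f = 2.0 * OMEGA * np.sin(phi)
    coslat = np.cos(phi)

    theta = T * (P0 / T["plev"]) ** KAPPA            # potential temperature
    ub, vb, thb = (x.mean("lon") for x in (u, v, theta))
    up, vp, thp = u - ub, v - vb, theta - thb        # eddy (deviation from zonal mean)

    uv = (up * vp).mean("lon")                       # eddy momentum flux
    vth = (vp * thp).mean("lon")                     # eddy heat flux
    dthdp = thb.differentiate("plev") / 100.0        # d(theta)/dp in K Pa^-1

    F_phi = -A_EARTH * coslat * uv                   # meridional component
    F_p = A_EARTH * coslat * f * vth / dthdp         # vertical (pressure) component

    # EP-flux divergence and the corresponding zonal force (valid away from the poles)
    div = ((F_phi * coslat).differentiate("lat") * 180.0 / np.pi / (A_EARTH * coslat)
           + F_p.differentiate("plev") / 100.0)
    force = div / (A_EARTH * coslat) * 86400.0
    return F_phi, F_p, force
```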


It should be noted that high vertical resolution of the 6-hourly dataset is desirable for the following reasons: (1) to improve the representation and evolution of spectra as described in Horinouchi et al. (2003) and Lott et al. (2014), for model simulations having a QBO; (2) to better understand how quickly the equatorial waves dissipate as they propagate upward; and (3) to understand the behaviour of equatorial waves near the Tropical Tropopause Layer (TTL) and in the SAO region. Differences between vertical levels may also help reduce the contribution of tidal signals in the time-longitude spectra, something that can be problematic at sub-diurnal periods.
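
The following is a minimal sketch of one way to form the zonal wavenumber-frequency power spectrum from the 6-hourly equatorial data at a single latitude and pressure level. The method (2D FFT with a Hann taper in time) and the dimension names are assumptions rather than a prescription, and the mapping of sign combinations to eastward versus westward propagation should be verified against a known signal (e.g. the eastward-propagating Kelvin wave).

```python
import numpy as np
import xarray as xr

def wavenumber_frequency_spectrum(da, dt_hours=6.0):
    """Space-time power spectrum of a (time, lon) field at fixed latitude and level.

    Returns power with coordinates of signed zonal wavenumber and frequency in
    cycles per day. Which sign combinations correspond to eastward or westward
    propagation depends on the FFT conventions and should be checked before
    interpretation.
    """
    anom = (da - da.mean("lon") - da.mean("time")).transpose("time", "lon").values
    nt, nlon = anom.shape
    anom = anom * np.hanning(nt)[:, None]            # taper in time to reduce leakage

    z = np.fft.fft2(anom)                            # FFT in time and longitude
    power = np.fft.fftshift(np.abs(z) ** 2) / (nt * nlon)
    freq = np.fft.fftshift(np.fft.fftfreq(nt, d=dt_hours / 24.0))   # cycles per day
    wavenum = np.fft.fftshift(np.fft.fftfreq(nlon, d=1.0 / nlon))   # integer wavenumbers

    return xr.DataArray(power, dims=("frequency", "wavenumber"),
                        coords={"frequency": freq, "wavenumber": wavenum})
```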


If the 6-hourly data are provided on model levels, the accompanying data needed to convert the data from model levels to pressure levels must also be provided. For example, models with a height-based vertical coordinate should also upload pressure or density fields. However, if a group decides to provide the 6-hourly data on a set of pressure levels at a vertical resolution equivalent to the model resolution, then the accompanying conversion data need not be provided.
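
For reference, a minimal sketch of the model-level to pressure conversion for a hybrid sigma-pressure coordinate is given below; the coefficient names ("a", "b", "p0", "ps") and the formula convention differ between models and must be taken from each model's documentation, which is why the accompanying data are requested.

```python
import xarray as xr

def pressure_on_model_levels(ds: xr.Dataset) -> xr.DataArray:
    """3D pressure (Pa) on hybrid model levels, reconstructed from surface pressure.

    Assumes the convention p(k) = a(k)*p0 + b(k)*ps; coefficient names and the
    convention itself vary between models. Height-coordinate models would instead
    upload the 3D pressure (or density) field directly.
    """
    return ds["a"] * ds["p0"] + ds["b"] * ds["ps"]
```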


5.2 Output periods


Monthly-mean output should be provided for the full requested durations of all ensemble members of all experiments. The full requested durations for each experiment are:


EXP 1:       1 Jan 1979 - 28 Feb 2009

EXP 2 - 4:   30 years

EXP 5:       9 - 12 months


Notes on these requested durations:


  (1) For EXPT 1, the requested period facilitates comparison with ERA-Interim and other recent reanalyses such as MERRA, NCEP-CFSR, and JRA-55. All of these reanalyses begin in Jan 1979, with the exception of JRA-55, which begins in 1958, and extend to the present day. (For an overview of current reanalyses, see https://reanalyses.org/atmosphere/overview-current-reanalyses.)

  (2) As noted above in Sec. 3, 6-month integrations are acceptable for EXPT 5 if the requested 9-12 month integrations are prohibitively expensive or otherwise unfeasible.

Daily-mean output should be provided for the full requested durations of each experiment for the following ensemble members:


EXP 1 - 4:  1 ensemble member

EXP 5:      All ensemble members


The full requested durations are identical to those requested for monthly-mean data.


High-frequency (6-hourly) diagnostics for calculating equatorial wave spectra should be provided for the following periods and ensemble members of each experiment:


EXP 1: 1997-2002, first ensemble member


EXP 2-4: years 1-4, first ensemble member


EXP 5: first 3 months, all ensemble members


Notes re. the 6-hourly diagnostics:


  (1) If it is feasible for a group to save 6-hourly diagnostics for all 9-12 months of each ensemble member for each start date, this is encouraged. However, saving the first three months of each ensemble member is the minimum requirement.

  (2) The 1997-2002 period is suggested for EXPT 1 because this period encompasses positive, negative and neutral ENSO phases.

5.3 Data Format & Storage


Storage for the duration of the project is available at BADC, and groups are strongly encouraged to upload their data to BADC in a common format (CF-compliant netCDF). This involves registering for an account and preparing datasets in the specified common format (https://badc.nerc.ac.uk/help/formats/netcdf/index_cf.html). Please notify the Project Coordinators before registering.
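
As an illustration of the expected format, the sketch below builds a small CF-compliant netCDF file with xarray. The grid, file name and global attribute values are placeholders; the authoritative variable names are those listed in the table in Sec. 5.4.

```python
import numpy as np
import pandas as pd
import xarray as xr

lat = np.arange(-88.75, 90.0, 2.5)
lon = np.arange(0.0, 360.0, 2.5)
plev = np.array([100.0, 70.0, 50.0, 30.0, 10.0]) * 100.0          # Pa
time = pd.date_range("1979-01-01", periods=2, freq="MS")

ua = xr.DataArray(
    np.zeros((len(time), len(plev), len(lat), len(lon)), dtype="float32"),
    dims=("time", "plev", "lat", "lon"),
    coords={"time": time, "plev": plev, "lat": lat, "lon": lon},
    name="ua",
    attrs={"standard_name": "eastward_wind", "long_name": "Eastward Wind",
           "units": "m s-1"},
)
ua["plev"].attrs.update(standard_name="air_pressure", units="Pa", positive="down")
ua["lat"].attrs.update(standard_name="latitude", units="degrees_north")
ua["lon"].attrs.update(standard_name="longitude", units="degrees_east")

ds = ua.to_dataset()
ds.attrs.update(Conventions="CF-1.6",
                source="model name and version",
                institution="modelling group",
                experiment_id="QBOi EXPT 1")
ds.to_netcdf("ua_Amon_QBOi-EXPT1.nc")                             # placeholder file name
```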


5.4 Table of requested output variables


Below is a table listing the variables to be saved for both standard and “high-frequency” (i.e. 6-hourly) diagnostics. It is anticipated that modelling groups may locally store a more comprehensive diagnostic set than that requested below. The chosen diagnostics list also aligns with requests from CCMi and the DYNVAR Diagnostic MIP.


Notes re. the following diagnostics table:


  (1) “Monthly mean & daily mean” indicates that both monthly mean and daily mean variables are to be saved.

  (2) The dimensions are denoted “XYT” for lon-lat-time, “XYPT” for lon-lat-pressure-time, etc.

  (3) The horizontal grid for all diagnostics is not prescribed (as discussed in Sec. 5.1, above). It should be appropriate for the model, and ideally close to the model’s native horizontal resolution, although a reduced grid is acceptable if this produces impractically large data files. If the horizontal grid is reduced, it should be noted how this was done.

  (4) The vertical grid for all diagnostics is the prescribed set of standard pressure levels, with the exceptions for 6-hourly data and daily-mean 3D data; see Sec. 5.1 for further details.

QBOi experimental protocol

Version 1.0  Drafted by John Scinocca, Tim Stockdale & Francois Lott

Version 1.21 Drafted by John Scinocca, Tim Stockdale, Francois Lott, Scott Osprey, Neal Butchart, Andrew Bushell, and James Anstey.

Version 1.22 Update to include ozone dataset recommended for high-top models

Version 1.23 Clarification of update to high-top models, also including recommendation for ozone climatology. Update of short-name for convective precipitation flux (prc)

Version 1.24 Adding suggested experiment extensions

(10-10-2016)


1. Overview

This is the protocol for a set of five QBO experiments. It is based on the outcome of discussions at, and following, the QBO Modelling and Reanalyses Workshop (Victoria, March 2015), briefly summarised in Anstey et al. (2015) and Hamilton et al. (2015). The motivations and goals of the experiments are described below, followed by the technical specification of the experiments and information on data and diagnostics. The experiments themselves are designed to be simple and accessible to a wide range of groups.

It is expected that each group will submit a set of results from all the experiments, made with a single “best shot” model version. Use of the same model version for the different experiments is crucial for learning the most from this study.





2. Experiment list and goals


a) Present-Day Climate: Identify and distinguish the properties of and mechanisms underlying the different model simulations of the QBO in present-day conditions:


EXPERIMENT 1: AMIP – specified interannually varying SSTs, sea ice, and external forcings


EXPERIMENT 2: 1xCO2 - identical to the AMIP simulation above except employing a repeated annual cycle of SSTs, sea ice, and external forcings


These experiments will allow an evaluation of the realism of modelled QBOs under present-day climate conditions, employing diagnostics and metrics discussed in Section 5. The impact of interannual forcing on the model QBO can also be assessed, and Experiment 2 is a control for the climate projection experiments.


b) Climate Projections: Subject each modelled QBO contribution to an external forcing that is similar to that typically applied for climate projections:


EXPERIMENT 3: 2xCO2 - identical to Experiment 2, but with a change in CO2 concentration and specified SSTs appropriate for a 2xCO2 world


EXPERIMENT 4: 4xCO2 - identical to Experiment 2 but with a change in CO2 concentration and specified SSTs appropriate for a 4xCO2 world


The response of the QBO, its forcing mechanisms, and its impact/influence (2xCO2 - 1xCO2 and 4xCO2 - 1xCO2) will be evaluated using the same set of diagnostics as for Experiments 1 and 2. Obvious questions that will arise:


    - What is the spread/uncertainty of the forced model response?

    - Do different model contributions cluster in any particular way?

    - Can a connection/correlation be made between QBOs with similar metrics/diagnostics in present-day climate and their response to CO2 forcing?


The hope is that these experiments may indicate what aspects of modelled QBOs determine the spread, or uncertainty, of the QBO response to CO2 forcing. These aspects should receive the most attention by QBOi in order to reduce uncertainty in future projections. Such experiments will also inform the community of the general uncertainty in future predictions for state-of-the-art QBOs in CMIP6 projection experiments.


c) QBO Hindcast and process study: Evaluate and compare the predictive skill of modelled QBOs in a seasonal prediction hindcast context, and study the model processes driving the evolution of the QBO.


EXPERIMENT 5: A set of initialized QBO hindcasts, with 9-12 month range.  Observed SSTs and forcings specified as in Experiment 1, with reanalysis providing atmospheric initial conditions for a set of given start dates.


These are not strictly prediction experiments in the seasonal forecast sense (they use prescribed observed SST), but still represent a challenge as to how well the models can predict the evolution of the QBO from specified initial conditions. Obvious questions that will arise:


    - How much does model prediction skill vary between models, and to what extent are models able to predict the QBO evolution correctly at different vertical levels and different phases of the QBO?

    - How does the forecast skill relate to the behaviour of the QBO in Experiment 1? Does a realistic QBO in a long model run guarantee good predictions, or vice versa, or neither?

    - Do the models that cluster and/or do well in the prediction experiments cluster in the CO2 forcing experiments?


The hope is that these experiments might indicate what aspects of modelled QBOs determine the quality of QBO prediction, so that these aspects can receive attention in order to improve prediction.  Alternatively, the hindcast framework may be helpful for directly assessing model changes, to help drive improvements in free-running models. Can these experiments help narrow the range of plausible models for climate change experiments?


Process Studies: Experiment 5 has a dual purpose: it not only provides information on the predictive capabilities of the models, but also offers a unique opportunity to investigate and evaluate differences in wave dissipation and momentum deposition, so as to understand the processes driving the QBO in each model. The initialization of the seasonal forecasts will necessarily present each QBO contribution with the same initial basic state. The evolution of that state immediately after the start of the forecast offers an opportunity to compare and contrast the properties of wave dissipation and momentum deposition between different models given an identical basic state. Specifying the same observed SST in all models (rather than allowing each model to predict its own SST evolution) helps focus attention on the model mechanisms that drive the QBO, and the extent to which they are correctly represented.


It is likely that any focus on processes driving the QBO will benefit from including a special set of high-frequency diagnostic output. See Sec. 5, below, for specifications of this output.



3. Experiment details


Five sets of simulations/experiments have been defined above:


    - EXPERIMENT 1 - AMIP, interannually varying SSTs, sea ice, and external forcings

    - EXPERIMENT 2 - 1xCO2, repeated annual cycle SSTs, sea ice, and external forcings

    - EXPERIMENT 3 - 2xCO2, as EXPT 2 with +2K SST perturbation and 2xCO2

    - EXPERIMENT 4 - 4xCO2, as EXPT 2 with +4K SST perturbation and 4xCO2

    - EXPERIMENT 5 - QBO hindcasts, with reanalysis initial conditions on specified start dates.


For each experiment it is requested that all modelling groups use the same set of SST and sea ice boundary conditions, as specified below. External forcings should be followed to the extent possible, although it is recognized that models may vary in how they specify aerosols, volcanic forcing etc. For the purposes of these experiments (sensitivity studies of the QBO), what matters is that the external forcing remains constant when it is supposed to be constant, and varies as realistically as the model allows when it is supposed to vary. In all cases, the intention is for the experiments to be made using only reasonable efforts. Experimental details should be documented by all groups, and any changes to prescribed forcings should be highlighted.

Ensemble sizes are given as a range, from minimum to preferred size. Each group should assess what is reasonable, given costs, resources and expected results (e.g. some models may have a highly regular or phase-locked QBO).


EXPERIMENT 1 - AMIP

Cost: 30-90y


This is based on CMIP5 Expt 3.3.


Period:  30y, using SSTs and sea ice from 1 Jan 1979 to 28 Feb 2009.


Ensemble size: 1-3


Boundary Conditions: CMIP5 interannually varying sea ice and SSTs obtained from:


http://www-pcmdi.llnl.gov/projects/amip/AMIP2EXPDSN/BCS/amipbc_dwnld.php


External Forcings: CMIP5 external forcings for radiative trace gas concentrations, aerosols, solar, explosive volcanoes etc. obtained from:


http://cmip-pcmdi.llnl.gov/cmip5/forcing.html#amip


Ozone forcing datasets appropriate for use in high-top models can be obtained from:


ftp://ftp.atm.ox.ac.uk/pub/user/sosprey/QBOi_O3


Atmospheric initial conditions: Not prescribed. Modellers may initialize as they see fit, such as from a spun-up QBO run, from a default set of initial conditions used by the model, or from reanalysis data consistent with the timing of the SSTs and sea ice (e.g. begin the model run with 1 Jan 1979 SSTs, sea ice, and atmospheric conditions).


Notes:


  (1) 1 Jan 1979 to 28 Feb 2009 is the date range requested for model output to be uploaded to the common QBOi archive (see Sec. 5 for further details on diagnostics). Modellers may wish to begin their runs earlier than 1 Jan 1979 if spin-up time is required.


EXPERIMENT 2 - 1xCO2

Cost: 30-90y


Repeated annual cycle simulation.


Period:  30y, after a suitable spinup (5y).


Ensemble size: 1-3


Boundary Conditions: CMIP5 SST climatology (1988-2007) and sea ice climatology (1988-2007) obtained from:


http://www-pcmdi.llnl.gov/projects/amip/AMIP2EXPDSN/BCS/amipbc_dwnld.php


External Forcings: repeated annual cycle forcings. Ideally these would be climatological forcings averaged over the 30-year period used in EXPT 1, but no such dataset exists. The suggestion is to use year 2002 of the CMIP5 external forcings:


http://cmip-pcmdi.llnl.gov/cmip5/forcing.html#amip


and an ozone forcing dataset appropriate for use in high-top models obtained from:


ftp://ftp.atm.ox.ac.uk/pub/user/sosprey/QBOi_O3


Note that the year 2002 has neutral ENSO, neutral PDO, and is well away from any historical explosive volcanoes; therefore we anticipate that any possible effects of SST interannual variability on the external forcings – e.g. an ENSO influence on the ozone distribution – would be minimized for that year. This is desirable since the SST and sea ice boundary conditions use climatology. Since this experiment will be the base for the 2xCO2/4xCO2 experiments, a constant value of CO2 corresponding to the average over the year 2002 should be used.


For ozone, modelling groups may find it most appropriate to use a climatology of zonal-mean ozone instead of the 3D ozone distribution from the year 2002, since this is the year of the SH SSW, and in general a single year of ozone data will reflect the meteorology of that year. The ozone dataset at the above link contains 1850-2099 monthly-mean 3D ozone concentration on pressure levels, from which a zonal-mean climatology can be constructed.
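
One possible way to construct such a zonal-mean climatology from the monthly 3D ozone files is sketched below; the file name, variable name ("o3") and averaging period are illustrative assumptions only.

```python
import xarray as xr

# File and variable names are hypothetical; check the actual dataset contents.
o3 = xr.open_dataset("QBOi_O3_monthly.nc")["o3"]

# Zonal-mean monthly climatology over an illustrative multi-year window
o3_clim = (o3.sel(time=slice("1988", "2007"))
             .mean("lon")
             .groupby("time.month")
             .mean("time"))
o3_clim.to_netcdf("o3_zonal_mean_climatology.nc")
```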


Note that although these choices are not ideal (the 30-year comparison period, the 20-year SST climatology and the 2002 fixed forcing are all inconsistent with each other), the observed dependence of the QBO on changing climate through this period appears to be negligible. Thus for QBO purposes (and in particular for comparing model responses) the protocol is believed adequate, if all models use the same approach.


Atmospheric initial conditions: As with EXPT1, not prescribed.


EXPERIMENTS 3 and 4 - 2xCO2/4xCO2

Cost: 60-180y


Period:  30y, after suitable spinup


Ensemble size: 1-3


Boundary Conditions: SST as in EXPT 2 but with a spatially uniform +2/+4K perturbation added for EXPT 3/4. Sea ice identical to EXPT 2.
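
A minimal sketch of how the perturbed SST boundary condition could be constructed from the EXPT 2 climatology is given below; the file and variable names ("tosbcs") are assumptions, and the handling of sea-ice and land points should follow each group's usual practice.

```python
import xarray as xr

sst = xr.open_dataset("amipbc_sst_climatology.nc")     # hypothetical file name
for dK, outfile in [(2.0, "sst_plus2K.nc"), (4.0, "sst_plus4K.nc")]:
    perturbed = sst.copy()
    perturbed["tosbcs"] = sst["tosbcs"] + dK           # spatially uniform warming
    perturbed.to_netcdf(outfile)
```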


External Forcings: the forcings in these two experiments should be exactly the same as used in EXPT 2 except for the CO2 concentration, which should be doubled and quadrupled. Only CO2 forcings should be changed, not other radiatively active trace species. Please note that this includes the ozone distribution – it should be identical to that used in EXPT 2. These are sensitivity experiments, not attempts to predict specific periods in the future.


Atmospheric initial conditions: As with EXPT1, not prescribed.


EXPERIMENT 5 - QBO hindcasts

Cost: 68-150y


These are atmosphere-only experiments, initialized from re-analysis data, providing multiple short integrations from a relatively large set of start dates sampling different phases of the QBO.


Start dates: 1 May and 1 November in each of the years 1993-2007 (15 years, 30 start dates)


Hindcast length: 9-12 months


Ensemble size: 3-5 members


The boundary conditions and forcings for this experiment closely follow the prescription of the AMIP experiment (EXPT 1).


Boundary Conditions: CMIP5 interannually varying sea ice and SSTs obtained from:


http://www-pcmdi.llnl.gov/projects/amip/AMIP2EXPDSN/BCS/amipbc_dwnld.php


External Forcings: CMIP5 external forcings for radiative trace gas concentrations, aerosols, solar, explosive volcanoes etc. obtained from:


http://cmip-pcmdi.llnl.gov/cmip5/forcing.html#amip


Initial data for these dates should be taken from the ERA-Interim reanalysis. ERA-Interim data are available for download from apps.ecmwf.int/datasets (registration is required). If downloading many start dates from this site, it may be easier to use the “batch access” method described there, although interactive download of each date is also possible. Data are available on either standard pressure levels or original model levels, and in either GRIB or netCDF format. Try to download only the data you need, e.g. at 00Z on the 1st of the month.
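
For illustration, a batch retrieval for a single start date using the ECMWF Web API Python client might look like the following; the request keywords and, in particular, the parameter codes are assumptions that should be checked against the dataset documentation at apps.ecmwf.int/datasets before use.

```python
from ecmwfapi import ECMWFDataServer

server = ECMWFDataServer()       # requires registration and an API key
server.retrieve({
    "class": "ei",
    "dataset": "interim",
    "expver": "1",
    "stream": "oper",
    "type": "an",
    "date": "1993-05-01",        # one of the QBOi start dates
    "time": "00:00:00",
    "levtype": "pl",             # standard pressure levels ("ml" for model levels)
    "levelist": "all",
    "param": "130.128/131.128/132.128/133.128",  # T/U/V/Q; verify codes before use
    "grid": "1.0/1.0",
    "format": "netcdf",
    "target": "erai_19930501_00.nc",
})
```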


The ensemble is expected to be generated by perturbing each ensemble member with a small anomaly, which need do no more than change the bit pattern of the simulation.
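
One possible implementation of such a perturbation is sketched below; the variable name ("ta"), the amplitude and the use of a member-dependent random seed are illustrative assumptions, not part of the protocol.

```python
import numpy as np
import xarray as xr

def perturb_initial_state(ds: xr.Dataset, member: int, amplitude: float = 1e-4) -> xr.Dataset:
    """Add a tiny, member-dependent random anomaly (K) to the temperature field.

    The anomaly only needs to be large enough to change the bit pattern of the
    simulation; each ensemble member uses a different seed for reproducibility.
    """
    rng = np.random.default_rng(seed=member)
    out = ds.copy()
    out["ta"] = ds["ta"] + amplitude * rng.standard_normal(ds["ta"].shape)
    return out
```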



4. Additional experiments

Some groups may want to conduct additional experiments, to provide further information on the sensitivity of the results to various factors.


EXPERIMENT 5A: As EXPT5, but using a coupled ocean-atmosphere model and predicting the SST, instead of specifying observed values. External forcings could also be fixed so as not to use future information. This is then a true forecast experiment for the QBO, and can be compared with the results of EXPT5.


Further, groups may want to run some or all of the experiments with multiple model versions, to explore the sensitivity of some of the results e.g. to vertical resolution or physics package. Although ideally all experiments would be re-run for any given model version, this may not be practical. Model versions for which complete experiment sets are available are likely to be considered the “primary” results when analysis takes place. If a group does run experiments using more than one model version, the different model versions should be given distinct names, such as when labelling output data files, to prevent ambiguities arising in the analysis of results.


Other extensions to experiments could include:

Extending ensemble size and/or run length:

  1. Experiment 2 for Holton-Tan

  2. Experiment 5 for longer range predictability

Sensitivity tests:

  1. separating stratospheric and tropospheric climate change effects

  2. El Nino/La Nina perturbations

Role of ozone:

  1. Ozone recovery specified in Experiments 3 and 4

  2. Ozone feedback on QBO (models that can run chemistry)




6. Project Participation & Acknowledgements


Groups interested in actively participating in the project, including running the core experiments, uploading data and leading or participating in analyses, should contact the Project Coordinators. You should identify the name of your model and modelling group, the experiments undertaken and those individuals to be listed as PIs. A list of participating groups will be updated on the project webpages.


It is both anticipated and encouraged that publications will arise from analysis of the project data. Please inform the Project Coordinators and PIs of the status of any planned work (including start and submission dates and further relevant details). It is envisaged that all participating modelling group PIs should be offered co-authorship on publications for an identified period of time. Following this period, the offer of co-authorship will be relaxed, but there should still be due acknowledgement. Details of the embargo period and suggested wording for publication acknowledgements will be issued in due course.


References:


Andrews, D. G., J. R. Holton, and C. B. Leovy, 1987: Middle Atmosphere Dynamics. Academic Press, San Diego.


Anstey, J., K. Hamilton, S. Osprey, N. Butchart, and L. Gray, 2015: Report on the 1st QBO Modelling and Reanalyses Workshop, 16-18 March 2015, Victoria, BC, Canada. SPARC Newsletter, 45, July 2015.


Hamilton, K., S. Osprey, and N. Butchart, 2015: Modeling the stratosphere’s “heartbeat”. Eos, 96, 2 July 2015, doi:10.1029/2015EO032301.


Horinouchi, T., S. Pawson, K. Shibata, E. Manzini, M. A. Giorgetta, F. Sassi, R. J. Wilson, K. Hamilton, J. de Grandpré, and A. A. Scaife, 2003: Tropical cumulus convection and upward propagating waves in middle-atmospheric GCMs. J. Atmos. Sci., 60, 2765-2782.


Kim, Y.-H., and H.-Y. Chun, 2015: Momentum forcing of the quasi-biennial oscillation by equatorial waves in recent reanalyses. Atmos. Chem. Phys., 15, 6577-6587, doi:10.5194/acp-15-6577-2015.


Lott, F., S. Denvil, N. Butchart, C. Cagnazzo, M. Giorgetta, S. Hardiman, E. Manzini, T. Krismer, J.-P. Duvel, P. Maury, J. Scinocca, S. Watanabe, and S. Yukimoto, 2014: Kelvin and Rossby-gravity wave packets in the lower stratosphere of some high-top CMIP5 models. J. Geophys. Res., 119, 2156-2173, doi:10.1002/2013JD020797.


