In recent years there has been a growing appreciation of the potential advantages of using a seamless approach to weather and climate prediction. However, what exactly should this mean in practice? To help address this question, we document some of the experiences already gathered over 25 years of developing and using the Met Office Unified Model (MetUM) for both weather and climate prediction. Overall, taking a unified approach has given enormous benefits, both scientific and in terms of efficiency, but we also detail some of the challenges it has presented and the approaches taken to overcome them.
Practical experience in developing and using the U.K. Met Office Unified Model for both weather and climate prediction provides lessons about both the benefits and challenges of seamless prediction.
The concept of a unified or seamless framework for weather and climate prediction has attracted a lot of attention in the last few years (Hurrell et al. 2009; Brunet et al. 2010; Shapiro et al. 2010; Nobre et al. 2010; Hazeleger et al. 2010; Senior et al. 2011). Traditionally, weather and climate prediction have been seen as separate disciplines. Numerical weather prediction (NWP) is crucially dependent on defining an accurate initial state and running at the highest possible resolutions, while climate prediction has sought to incorporate the full complexity of the Earth system in order to accurately capture the long time-scale variations and feedbacks determining the current climate and potential climate change. The case for unifying modeling and prediction across time scales stems from a recognition that the evolution of weather and climate is linked by the same physical processes in the atmosphere–ocean–land–cryosphere system operating across multiple space and time scales. In addition, there is an increasing requirement to include Earth system complexity in NWP models (e.g., atmospheric chemistry for air quality predictions) and growing evidence that improvements to the resolution and initialization of coupled climate models are required to accurately capture important modes of atmospheric and oceanic variability on monthly to decadal time scales (e.g., Scaife et al. 2011).
What does seamless prediction look like in practice? The aim of this paper is to discuss the Met Office experiences over the last 25 years as we have moved toward a fully unified framework for our global and regional atmospheric, land, and ocean prediction systems, highlighting the clear benefits but also the potential drawbacks and pitfalls encountered along the way. We will also discuss the current status of our unified prediction systems and vision for the future.
HISTORICAL DEVELOPMENT OF THE MET OFFICE WEATHER AND CLIMATE MODELS.
Phase 1 (1960–90): Separate NWP and climate models.
As in most other modeling centers, the Met Office's initial development of numerical models for weather forecasting and for climate was entirely separate. The main reason was that in the 1960s numerical weather forecasting was limited to short-range prediction over small areas, while climate prediction was quickly recognized as an inherently global problem.
In the Met Office the forecast models were based on the model of Bushby and Timpson (1967) and written on a map projection, while the climate model was based on that of Corby et al. (1977). In the move to global forecast models it was natural to use the climate model as the basis, since, as well as providing global coverage, it had a more sophisticated treatment of physical processes. However, a very efficient integration scheme had been developed for the forecast model (Gadd 1978) to allow timely production of results. At the time, this was not seen as an imperative for the climate model, which instead had to enforce conservation laws more accurately.
The result was that in the 1980s the Met Office had similar but separate global forecast and climate models. In addition, a nonhydrostatic mesoscale model originally developed by Tapp and White (1976) was introduced for operational use over the United Kingdom (Golding 1990). All these models were continually enhanced and increased in complexity. Additionally, the global models were written in a manufacturer-specific language for efficiency on the CYBER 205 supercomputer made by Control Data Limited, and it was realized that reliance on multiple models with machine-specific codes was no longer a viable strategy because of the cost and time taken to rewrite the models for new computers. A gradual convergence of the global forecast and climate model formulations also made a unified strategy sensible. However, it is not clear whether this would actually have been achieved on a reasonable time scale without the withdrawal in 1989 by Control Data Limited of the computer that was the designated replacement for the CYBER 205. The sudden urgent need for a new model code forced the decision to create the Met Office Unified Model (MetUM) on a 2-yr time scale.
Phase 2 (1990–2010): Unified models.
Under the circumstances a minimum-risk strategy was followed of incorporating the efficient integration scheme of the forecast model into a conservative finite-volume dynamical formulation (Cullen and Davies 1991). The physics was largely taken from the climate model. After an intensive period of work, the global forecast model became operational in June 1991. However, it took a further two years before the climate model was considered acceptable, at which point its performance was documented in Cullen (1993). The operational mesoscale model also switched to using the new Unified Model at the end of 1992. This new mesoscale system had advantages relative to its predecessor in having a more complete data assimilation system and some improved physics (e.g., a more complete representation of surface processes from an old climate model scheme was beneficial for some near-surface weather parameters). However, this came at the cost of the loss of nonhydrostatic capability and some other physics [e.g., a cumulus convection scheme able to represent shower advection (Golding 1990) that had been developed for the old model]. The loss of nonhydrostatic capability was addressed in 2002, when a new dynamical formulation (semi-implicit, semi-Lagrangian) (Davies et al. 2005) was introduced for all configurations.
Once the global climate and weather models were using the Unified Model system, the strategy was to keep the atmosphere components aligned with one another. Major physics developments introduced in one were usually introduced to the other within a year or two. One example was the development of a new orographic drag scheme, implemented with beneficial effect in both NWP (Milton and Wilson 1996) and climate (Gregory et al. 1998). Similarly, a new nonlocal boundary layer scheme with an explicit representation of entrainment was introduced for both weather (global and mesoscale) and climate at the same time as the dynamical core upgrade in 2002 (Lock et al. 2000; Martin et al. 2000).
With this approach the physics of the weather and climate models was kept broadly in step, and periodically they would be very close to each other. For example, the atmospheric physics of the Hadley Centre Atmosphere Model 2b (HadAM2b) climate model (Stratton 1999) was essentially the same as the NWP model of 1996, while the Hadley Centre Global Environmental Model version 1 (HadGEM1) climate model (Martin et al. 2006) shared atmospheric physics with the NWP model of 2005. Global and regional ensemble systems [Met Office Global and Regional Ensemble Prediction System (MOGREPS)-G and -R] have also been developed in the same framework (Bowler et al. 2008), and the dynamical seasonal forecasting system was brought into the fold with the introduction of the Met Office Global Seasonal Forecast System, version 4 (GloSea4) system (Arribas et al. 2011). Accordingly, as illustrated schematically in Fig. 1, the Unified Model is now used for a wide range of applications, with grid resolutions varying from 1.5 km in operational NWP (hundreds of meters in research mode) to 300 km for low-resolution climate simulations, for time scales from hours to centuries, and with varying levels of Earth system complexity.
Phase 3 (2010 onward): Unified, seamless, and traceable prediction systems.
We believe that the level of seamlessness applied in phase 2 gave significant advantages—both scientific and in terms of efficiency. Nevertheless, the separate evaluation and adoption of developments in NWP and climate did increase the risk that a given change might become well established in one configuration but difficult to integrate into the other because of a degradation of performance. In turn this could lead to divergence of configurations, with associated increased maintenance costs and dilution of valuable model developer effort. Accordingly, in this third phase, we have chosen to go one step further and are attempting to embrace the seamless concept fully. This involves trying to build unified prediction systems that are developed from the outset to operate across time scales with, as far as possible, the same physical and dynamical formulation. To help with this, we have restructured our internal management so that the same group is now responsible for the development of the atmosphere model for all time scales. More details on our model development process, and on how seamless we aspire to be in practice, are given in the "How seamless to be?" section.
A further important evolution through phases 2 and 3 has been the recognition that the demands of developing and maintaining a complex modeling system are best met not by one organization in isolation, but by various parties working together (both on the technical infrastructure and the scientific analysis and development). We have therefore developed strategic relationships with a number of United Kingdom universities, with the Natural Environment Research Council (NERC), which funds much relevant academic research in the United Kingdom, and with a number of international centers [the Centre for Australian Weather and Climate Research (CAWCR), Korean Meteorological Agency (KMA), Norwegian Meteorological Institute (met.no), National Centre for Medium Range Weather Forecasting (India) (NCMRWF), New Zealand National Institute of Water and Atmospheric Research (NIWA), and South African Weather Service (SAWS)] that have adopted the model and now contribute to its improvement. These partnerships are contributing valuable insights into various aspects of the model, and, in some instances, major components of it [e.g., the Joint U.K. Land Environment Simulator (JULES) land surface scheme (Best et al. 2011) developed jointly by the Met Office and U.K. academia].
SCIENTIFIC ADVANTAGES OF A UNIFIED FRAMEWORK.
The scientific advantages of a unified modeling and prediction framework have been highlighted in many other recent review papers on seamless prediction. One clear advantage of having a single numerical model at the heart of the prediction systems is that we can trace the evolution of error growth across multiple space and time scales to help unravel the key error sources responsible for the failure of models to adequately capture major modes of climate variability.
As an example of model error across time scales, the MetUM shows a remarkable similarity in the mean precipitation biases on the very largest scales between the average day 1 NWP forecast errors and a 20-yr climate integration (Fig. 2). The dominant error is for excessive rainfall over the tropical oceans. A closer examination of the variability of modeled tropical precipitation (not shown) reveals that excess mean precipitation is delivered by the well-known bias of the model precipitating too often and with too little intensity (Trenberth et al. 2003; Stephens et al. 2010), even in short-range NWP forecasts. Elsewhere over land we see common error signatures between time scales, with excessive precipitation over major orography and dry biases over India and South America. The similarity of short-range error to that at longer time scales (also seen in some studies with other modeling systems; e.g., Klein et al. 2006) strongly suggests a major role for local physical processes and that real progress can be made from studying the error growth at the short-range prediction time scales. However, there are also significant differences in structure as the errors at short time scales evolve toward a climate equilibrium state that involves the interaction of fast and long time-scale processes and local and nonlocal (remotely forced) systematic errors. For example, the drying over the Asian monsoon region is more extensive in the climate simulation compared to the NWP forecast. Even with a seamless prediction framework the unraveling of error sources is still nontrivial and requires sophisticated diagnostic approaches. We believe that the benefits of a seamless approach for understanding and alleviating model errors still remain to be fully exploited, but good progress is being made across a number of research areas. Some recent examples include the following:
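The comparison underlying Fig. 2 can be summarized quantitatively with an area-weighted pattern correlation between the bias maps at the two time scales. The sketch below is illustrative only: the fields are synthetic stand-ins for the day-1 and climate-mean precipitation biases (a shared tropical-ocean wet bias), and the cosine-latitude weighting follows standard practice rather than any specific Met Office diagnostic.

```python
import numpy as np

def pattern_correlation(bias_a, bias_b, lat):
    """Area-weighted (cos-latitude) pattern correlation of two bias maps.

    bias_a, bias_b: 2D arrays (lat, lon) of model-minus-observed values.
    lat: 1D array of latitudes in degrees.
    """
    w = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(bias_a)

    def wmean(x):
        return np.sum(w * x) / np.sum(w)

    a = bias_a - wmean(bias_a)
    b = bias_b - wmean(bias_b)
    return np.sum(w * a * b) / np.sqrt(np.sum(w * a**2) * np.sum(w * b**2))

# Synthetic example: the climate-mean bias shares the structure of the
# day-1 error (excess tropical-ocean rain) but with larger amplitude.
lat = np.linspace(-89, 89, 90)
lon = np.linspace(0, 358, 180)
tropics = np.exp(-(lat[:, None] / 15.0) ** 2) * np.ones((1, lon.size))
day1_bias = 2.0 * tropics      # mm/day, hypothetical day-1 forecast error
climate_bias = 2.5 * tropics   # mm/day, hypothetical 20-yr climate bias
r = pattern_correlation(day1_bias, climate_bias, lat)
```

A correlation near one, as in this constructed case, is the kind of signal that motivates studying climate biases through short-range error growth.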
Horizontal-resolution studies. Global NWP case studies have been run at horizontal resolutions from 300 to 17 km. This provides a model framework strongly constrained by data assimilation and observations, allowing a systematic study of the relative impacts of resolution and parameterized physical processes on model error before nonlocal error sources become important (e.g., Klinker and Sardeshmukh 1992; Rodwell and Palmer 2007). Parallel studies of coupled climate predictions are also being made in collaboration with U.K. universities across a range of resolutions from 150 to 25 km to systematically explore the role of resolution at seasonal to decadal prediction time scales.
Convective-scale modeling across time scales. A 1.5-km convection-permitting version of the MetUM is operational for short-range weather prediction over the United Kingdom and provides a continuous time series of detailed simulations with which to assess the larger-scale models' parameterized convective processes. A series of climate change experiments are also being undertaken with the same 1.5-km model over the United Kingdom to explore the role of resolving convection on potential climate change signals at the regional scale (Kendon et al. 2012). Convection-resolving experiments are also being run over large tropical domains with the MetUM as part of the Cascade consortium project being carried out in collaboration with U.K. universities. The aim of this project is to use the simulations to provide improved understanding of the convective scale and interactions with larger scales, enabling subsequent development of an improved representation of the key processes for the operational models.
Evaluation of physical processes against field experiments. This is an active area of research for scientists developing parameterizations working together with observations experts and model evaluation teams. Within an NWP framework one can evaluate individual weather systems at regional scales against high-quality observations. Recent examples include i) the Variability of the American Monsoon Systems (VAMOS) Ocean–Cloud–Atmosphere–Land Study (VOCALS) experiment to study the coupled ocean–atmosphere–land system on diurnal to interannual time scales and, of particular interest to us, the study of marine stratocumulus and aerosols over the eastern subtropical Pacific (Abel et al. 2010); ii) several aircraft campaigns to measure dust over the West African region (see case study 1 in the appendix); and iii) the Arctic Summer Cloud Ocean Study (ASCOS) experiment that took place in the Arctic in summer 2008 and was used to evaluate both the MetUM NWP and climate predictions. This revealed issues with surface albedo, surface temperature prediction, and modeling of cloud in stable boundary layers (Birch et al. 2009).
Evaluation of models against Earth observations. Williams and Brooks (2008) carried out a systematic study of cloud regimes against International Satellite Cloud Climatology Project (ISCCP) data from short-range NWP and climate simulations, showing great similarity between the two time scales. New Earth observation platforms such as CloudSat and Meteosat Second Generation (MSG) [Geostationary Earth Radiation Budget (GERB) and Spinning Enhanced Visible and Infrared Imager (SEVIRI) instruments] have been used to evaluate physical parameterizations in the MetUM (Bodas-Salcedo et al. 2008; Allan et al. 2007).
Stochastic physics across space and time scales and model traceability. Effort is being invested in exploring stochastic approaches to the parameterization problem both for purposes of generating more realistic ensemble spread (Shutts 2005) and to try to make performance of lower-resolution climate models traceable to higher-resolution models (Sanchez et al. 2012).
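As a concrete illustration of the stochastic approach, the sketch below applies an SPPT-style multiplicative perturbation, in the spirit of Shutts (2005): a temporally correlated random pattern scales the parameterized tendencies, injecting spread that represents subgrid uncertainty. All function names and parameter values here are hypothetical, not those of the actual MOGREPS stochastic physics.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_pattern(prev, phi=0.95, sigma=0.3):
    """Evolve a temporally correlated AR(1) perturbation field r(t).

    phi: one-step temporal autocorrelation; sigma: stationary standard
    deviation (the sqrt(1 - phi**2) factor keeps variance fixed at sigma**2).
    """
    noise = rng.standard_normal(prev.shape)
    return phi * prev + sigma * np.sqrt(1.0 - phi**2) * noise

def perturb_tendency(tendency, r, mu=1.0):
    """SPPT-style multiplicative perturbation: T' = (mu + r) * T,
    with r clipped so the multiplier stays positive."""
    return (mu + np.clip(r, -0.9, 0.9)) * tendency

# Spin the pattern up to its stationary state, then perturb a
# hypothetical parameterized heating tendency for one time step.
r = np.zeros((10, 10))
for _ in range(100):
    r = ar1_pattern(r)
tendency = np.full((10, 10), 1.5)   # K/day, illustrative value
perturbed = perturb_tendency(tendency, r)
```

Real implementations generate the pattern with prescribed spatial correlations (e.g., in spectral space); the AR(1) recurrence above captures only the temporal part of that design.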
Two more detailed examples are also discussed. Case study 1 looks at the crossover of Earth system complexity (dust prediction) from climate to NWP versions of the MetUM, while in case study 2 the growing interest in coupling the atmosphere to the ocean for weather predictions is discussed.
COMPROMISES AND CHALLENGES.
There is no doubt that while moving to a seamless approach can bring significant advantages, it is not without its challenges. In order to present a balanced picture, we detail some of these below.
First, the initial move from multiple modeling systems to a single one inevitably involves the loss of models in which a lot of investment has been made, and to which many scientists may have a personal commitment. This can therefore be a difficult decision both to make and to implement successfully. It was probably relatively easy for the Met Office, because weather and climate modeling sit within the same organization—although even here it is apparent that the need to produce new models in short order following the collapse of a computer procurement helped precipitate the decision to go for a unified approach. Where weather and climate are modeled using different systems in separate organizations, the decision is potentially more difficult, although the likely need for significant reformulation of both weather and climate models to cope with new computer architectures in the future may provide opportunities and motivation for coming together.
Second, in order to gain scientific and efficiency advantages of seamlessness, it is likely to be necessary sometimes to make some decisions that are not absolutely optimal for all applications. For example, the initial move to the Unified Model involved the loss of nonhydrostatic capability from the mesoscale model, and, relative to the previous machine-specific code, some loss of efficiency in the global forecast model. Similarly, the later change to a new dynamical core in 2002 involved the adoption of a semi-Lagrangian advection scheme that, although accurate and efficient, does not exactly respect local conservation laws. This is an issue particularly for long-lived tracers in climate simulations.
Major upgrades also become potentially more difficult to achieve, as having to show satisfactory performance across multiple metrics and multiple applications at the same time is clearly challenging—both scientifically and in terms of having the technical capability and computer resources to efficiently run and analyze trials. However, the alternative of allowing major upgrades to one application while accepting major degradations in another would run the risk of significant long-term divergence of configurations. One specific lesson that we have learned concerns the number of major changes to make at one time. When work was under way to upgrade the MetUM to use a new dynamical core (Davies et al. 2005), it was decided to make significant changes to the physics at the same time. The argument was that it made more sense to make all the changes together so that final performance only had to be optimized once. With the benefit of hindsight, our view is that this was a bad decision: with many things changing at once, it was more difficult to track down the causes of any problems, and hence the operational implementation of the new core took longer than it might otherwise have done. Accordingly, for future major upgrades to the model dynamical formulation we will look to make them without changes to the physics, except for any minor changes or tunings required to optimize performance of the new system.
HOW SEAMLESS TO BE?
Whatever the merits of seamless prediction, it is clear that any dogmatic insistence on weather and climate prediction systems being identical in all respects would be doomed to failure. For example, the high resolutions that are found to be extremely beneficial for many NWP applications will typically be unaffordable for most climate computations (although, with expanding computer power, climate can certainly learn from NWP experience). Similarly, the computational cost incurred when including many processes interactively in climate models cannot be justified for NWP (although we are starting to see the import of some additional complexity from climate models to NWP). The real question, then, is what level of seamlessness to strive for in order to gain maximum benefit. The current Met Office position reflects our judgment on this question, based on our long experience of using the MetUM. It is this practical experience that has led us to evolve from our "phase 2" level of seamlessness to "phase 3" (i.e., we have been changing our working practices and management structures to be more, rather than less, seamless).
In terms of the traditional atmospheric physics parameterizations used for both weather and climate, our approach now is to keep them, as far as possible, identical for use on all time scales. This has recently been formalized through the annual definition of a single global atmosphere (GA) configuration (Walters et al. 2011). Candidate changes to such a configuration are tested on, and expected to perform adequately on, all time scales (including coupled climate simulations). Ideally, the final configuration is acceptable for all applications. However, we do pragmatically accept that there will be times when it is necessary to maintain a branch off the main development path (Fig. 3) with minor modifications that optimize performance in a given application. This is currently the case where the global NWP model is using GA3.1, whereas the seasonal system and the development version of the coupled climate model use GA3.0. Nevertheless, it is important to emphasize that the changes between GA3.0 and GA3.1 are minor—primarily an increase in the strength of mixing in the stable atmospheric boundary layer and a computationally cheaper version of the radiation scheme (i.e., the weather and climate models are very close to using the same physics schemes). See Walters et al. (2011) for more details. Our vision is that successive releases (GA4.0, GA5.0, etc.) will always be developed from a starting point of the previous main release (e.g., GA3.0 and not GA3.1), as this will maximize the chances of not perpetuating differences between NWP and climate, with branches only reintroduced (e.g., GA4.1) from the new cycle if necessary.
Of course, the global model is run across a wide range of horizontal resolutions, from 25 km in operational NWP (and higher resolution in research) to 150 or even 300 km in climate. Our basic philosophy is to use the same schemes across this range, arguing, for example, that the problems of parameterizing boundary layer turbulence and cloud are essentially the same across this resolution range. Resolution dependence is thus largely limited at present to some numerical choices (e.g., diffusion settings and the choice of a less diffusive interpolation method in the lowest-resolution climate simulations to maintain reasonable levels of eddy kinetic energy) and some changes to the time scale of the convection scheme (and the mountain drag scheme, which, in a sense, switches itself off as resolution increases, since the variance of the subgrid orography on which it depends decreases). It is possible that other horizontal-resolution-dependent features (e.g., related to physical length scales of inhomogeneity) may be introduced in the future, where they can be physically motivated. In terms of vertical resolution, the approach is to keep the tropospheric vertical resolution the same in all global configurations, although we allow different resolutions in the stratosphere. The operational NWP model uses 70 levels up to a top at 80 km, while the seasonal and climate models use 85 levels with a top at 85 km and the same tropospheric levels/resolution as the NWP model below about 18 km. Experimentation is under way to see whether switching the NWP vertical resolution to match that of the climate model gives sufficient benefits (in terms of improved results, improved potential to use the short-range errors to understand longer-range ones, or simply reduced maintenance cost) to justify the increased computational cost.
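The few resolution-dependent numerical choices described above could be managed as a simple lookup keyed by grid spacing, so that a single scheme set runs everywhere while a small table carries the per-resolution tuning. This is purely an illustrative sketch: the parameter names and values below are invented for the example, not actual MetUM settings.

```python
# Hypothetical per-resolution numerical settings. The scheme code is
# shared across resolutions; only these few values change with grid.
GA_RESOLUTION_SETTINGS = {
    # grid spacing (km): convection closure time scale (s), diffusion coeff.
    25:  {"cape_timescale_s": 1800, "diffusion_coeff": 0.05},
    60:  {"cape_timescale_s": 2700, "diffusion_coeff": 0.10},
    150: {"cape_timescale_s": 3600, "diffusion_coeff": 0.15},
    300: {"cape_timescale_s": 5400, "diffusion_coeff": 0.20},
}

def settings_for(grid_km):
    """Pick the finest tabulated resolution not finer than grid_km, so an
    intermediate grid inherits the settings of the next-coarser entry."""
    candidates = [k for k in sorted(GA_RESOLUTION_SETTINGS) if k >= grid_km]
    key = candidates[0] if candidates else max(GA_RESOLUTION_SETTINGS)
    return GA_RESOLUTION_SETTINGS[key]
```

The design point is that resolution dependence lives in one small, inspectable table rather than being scattered through the scheme code, which keeps the "same physics at all resolutions" philosophy auditable.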
For the limited-area models (for weather prediction or climate downscaling) operating at around 10-km resolution or coarser, our strategy is to use the GA configurations. For convection-permitting models (e.g., the 1.5-km-resolution U.K. model), more significant changes are made to the physics [e.g., turning the convection parameterization down or off, and currently using the simple statistical cloud scheme of Smith (1990) rather than the prognostic scheme used in the global models]. Hence, the "seamlessness" with the global models is less tight than that between global models for different time scales, although our philosophy remains to use the same components where possible (e.g., the nonlocal boundary layer scheme and many aspects of radiation).
A similar approach is being taken for defining a physical global land (GL) surface configuration of the JULES model (Best et al. 2011) suitable for use in weather and climate, with GL3.0 currently used in climate, and a slightly different version, GL3.1, in NWP. Again it is important to emphasize that the differences between the two configurations are minor (details in Walters et al. 2011) and that most aspects are identical in weather and climate. We also aim to bring them completely together, or at least keep them very close to each other, as without doing that it is much harder to take advantage of the extra insights that can be gained through a systematic examination of the growth of model errors across time scales.
For extra processes that are required in long-term climate simulations (e.g., inclusion of an interactive ocean or detailed treatment of aerosols and chemistry) the approach we take for shorter time scales is to consider on a case-by-case basis whether that process should be included, completely omitted, or whether a traceable simplification of the full climate scheme should be used. An example of the last approach is for aerosols, where the NWP and seasonal models recently switched to use monthly varying aerosol climatologies derived from using the full interactive aerosol scheme in the climate model. Work is now ongoing to assess the cost benefit of switching to a prognostic representation of at least some of the aerosols in the NWP and seasonal configurations (e.g., Milton et al. 2008 and case study 1).
As discussed in case study 2, we are also investigating the use of an interactive ocean on NWP time scales, both to see if it gives benefits and also as a way of gaining insights into biases seen at longer range. Already it seems clear that many of the issues affecting the quality of coupled NWP and climate predictions (and offline ocean-only simulations) are common; for example, in addition to atmospheric forcing errors, problems associated with the representation of mixing in the oceanic boundary layer (e.g., Belcher et al. 2012) leading to mixed layer depth and sea surface temperature errors are relevant for all time scales. Again, this encourages a seamless approach, and we envisage defining a global ocean (GO) configuration for use for all time scales.
One issue that we still find difficult is the question of how long to use and support old versions. We envisage that a new version of the global atmosphere configuration will be developed and released each year (and would expect it to be adopted for operational applications), but a number of research projects will need to use a fixed model version for a greater length of time [e.g., performing and analyzing a major set of climate simulations for an international intercomparison such as the Coordinated Regional Climate Downscaling Experiment (CORDEX), or an individual doctoral study where a model change midstudy is undesirable]. Maintaining many old versions enables greater continuity of science projects, but it also discourages adoption of newer (and hopefully better!) models and, importantly, the older the model, the less likely any analysis performed with it will be relevant for the improvement of the current model. The most extreme example of this is probably the third climate configuration of the Met Office Unified Model (HadCM3) (Gordon et al. 2000), the climate model of the late 1990s, which was expected to be retired some years ago. However, it is still extensively used for cutting-edge climate research in academia—in part because huge investment in its analysis over many years means that it has a known pedigree and characteristics, and in part because it is significantly cheaper computationally than the more recent models. The widely used Providing Regional Climates for Impacts Studies (PRECIS) regional climate model is also based on HadCM3.
We have attempted in this paper to give an overview of our practical experiences of a seamless prediction approach. It is clear that it is not a panacea—if improving some aspect of modeling such as tropical convection is a difficult problem for weather modelers and for climate modelers, then it is still a difficult problem for seamless modelers! Nevertheless, the approach does provide clear efficiencies in terms of code development and maintenance, and a scientific framework that can help understand errors and thus make progress more likely—and we believe that there are increasing numbers of examples of this approach delivering real benefit.
Much of our application of the seamless approach up to now has been around modeling of the atmosphere. However, our experiences have encouraged us to take this further, both in terms of truly integrating our atmosphere model development for all time scales, and also expanding into other Earth system components. Hence, without underestimating the difficulties to be overcome, we conclude that the seamless vision espoused in a number of recent papers in BAMS (e.g., Brunet et al. 2010; Shapiro et al. 2010) is appropriate and worth striving for.
APPENDIX: CASE STUDIES
Case study 1: Dust prediction—An interplay of climate and NWP.
As part of Earth system model developments for climate prediction, the Met Office has included aerosol prediction components since HadAM4 in the early 2000s (e.g., Jones et al. 2001) and has continued to improve the representation of aerosol processes for climate prediction (Martin et al. 2006; Bellouin et al. 2007). However, during 2004–06 there was a growing customer requirement for short-range (1–3 days) predictions of dust. In response, we were able to quickly implement the dust parameterization scheme developed for the climate model (Woodward 2001) in a regional NWP model. This included full dust–radiative interactions and was evaluated extensively against field experiments over West Africa (Greed et al. 2008).
The global NWP model in the mid-2000s still had a very simple representation of aerosol based on a land–sea split developed for an early version of the climate model (Cusack et al. 1998). During the African Monsoon Multidisciplinary Analysis (AMMA) campaign in 2006, work began to evaluate the global NWP version of MetUM over the West African and Saharan regions, using data from multiple observing platforms including MSG satellite SEVIRI and GERB instruments; the mobile Atmospheric Radiation Measurement Program (ARM) sites at Niamey and Banizoumbou, Niger (Miller and Slingo 2007); and from aircraft campaigns during AMMA to measure the properties of mineral dust [Dust and Biomass Experiment (DABEX) (Haywood et al. 2008)]. Evaluation studies (Haywood et al. 2005; Milton et al. 2008) demonstrated that the failure to represent mineral dust (and biomass-burning aerosol) in the global NWP model was a significant source of systematic errors in the short-term radiative balance over West Africa, the Sahara, and the Atlantic Ocean (Figs. A1a–c). This impacted near-surface temperatures and the atmospheric circulation. There was also evidence of clear predictability out to 5 days ahead (Fig. A1d) in representing major large-scale dust events such as that observed during the AMMA campaign in March 2006 (Slingo et al. 2006; Milton et al. 2008).
Since then, dust has been implemented as a tracer in the operational global NWP model, and work is in progress to make it a fully radiatively active component of the model forecasts. Having dust prediction in MetUM NWP versions has also allowed the real-time predictions to be exploited, in combination with observations, to suggest improvements to the dust prediction scheme. Indeed, in 2007 the Met Office, in collaboration with U.K. universities, ran a further observational campaign, the Geostationary Earth Radiation Budget Intercomparison of Longwave and Shortwave Radiation (GERBILS), specifically designed to target the known systematic error in the model's radiative balance over West Africa (Fig. A1b) associated with poor representation of dust (Haywood et al. 2011). Based on these studies, developments are being made to the dust scheme (e.g., improved uplift and dust–vegetation interactions), which we expect to implement in both NWP and climate configurations. Hence the work will have come full circle: the initial import of a scheme from climate to NWP, followed by developments made in an NWP framework, then feeding back to the climate prediction problem.
Case study 2: Ocean–atmosphere interactions across weather and climate time scales.
At short-range prediction time scales (1–5 days), the Met Office currently runs independent atmospheric and ocean numerical forecast models. The atmosphere is forced with fixed sea surface temperatures (SSTs) from the initial analysis time, and the ocean is forced with fluxes, wind stresses, and precipitation from the global atmospheric forecast. The medium-range ensemble prediction system (MOGREPS; Bowler et al. 2008) currently persists the SST anomaly from the initial analysis time, while the seasonal prediction system (GloSea4; Arribas et al. 2011) is a fully coupled atmosphere–ocean (AO) system. Our aim in the near term is to rationalize the ensemble systems to be fully coupled AO systems from days to decades. Ultimately, we would also envisage coupling the atmosphere, ocean, and wave models for the global deterministic NWP forecasts.
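The two uncoupled SST treatments described above differ in how the lower boundary evolves during the forecast: the deterministic system freezes the analysed SST, while the persisted-anomaly approach lets the climatological seasonal cycle evolve underneath a fixed initial anomaly. A minimal sketch of both (function names and the climatology interface are illustrative assumptions, not the MOGREPS code):

```python
import numpy as np

def fixed_sst(sst_analysis_t0, lead_days):
    """Deterministic short-range style boundary: SST frozen at its
    analysed value for the whole forecast."""
    return sst_analysis_t0

def persisted_anomaly_sst(sst_analysis_t0, climatology, lead_days):
    """Persisted-anomaly style boundary: the initial SST anomaly
    (relative to climatology) is carried forward on top of the
    evolving climatological seasonal cycle.

    sst_analysis_t0 : analysed SST field at forecast start (K)
    climatology     : callable, day -> climatological SST field (K)
    lead_days       : forecast lead time in days from start
    """
    anomaly = sst_analysis_t0 - climatology(0)   # anomaly at analysis time
    return climatology(lead_days) + anomaly      # anomaly persisted forward
```

For 1–5-day forecasts the two approaches differ little, but at medium range the persisted-anomaly boundary avoids the seasonal-cycle error that a frozen SST would accumulate, which is why it is the natural choice for an ensemble system run to longer lead times.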
A research project is underway to explore the benefits of atmosphere–ocean interactions on short- and medium-range prediction time scales using a 60-km, 85-level atmosphere and a 0.25° version of the Nucleus for European Modelling of the Ocean (NEMO) model that will form the basis of the GloSea seasonal prediction system in 2012–13. Air–sea interactions have been shown to be important in a number of phenomena, from tropical cyclones (Goni and Trinanes 2003) to the Madden–Julian oscillation (Kim et al. 2010). However, coupled predictions also rapidly develop large systematic biases, or drifts, in the basic mean state, which can degrade the performance and predictability of forecasts. Understanding the SST drifts is key, and preliminary investigation (Fig. A2) has shown strong qualitative similarity between the error patterns seen in forecasts of just a few days in length and those seen in extended climate runs. This suggests that a seamless approach offers considerable promise for improving coupled simulations, with analysis of short-range error growth in coupled AO simulations likely to be an effective way to understand the source of at least some of the drifts on seasonal to decadal time scales.
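The drift diagnosis described here amounts to averaging forecast-minus-analysis differences at fixed lead times over many forecast start dates, so that the systematic component emerges from the initial-condition noise. A minimal sketch (the array layout and function name are illustrative assumptions, not a MetUM diagnostic):

```python
import numpy as np

def mean_sst_drift(forecasts, analyses):
    """Estimate systematic SST drift as the mean forecast-minus-analysis
    error at each lead time, averaged over many forecast start dates.

    forecasts : array (n_starts, n_leads, ny, nx) of coupled SST forecasts
    analyses  : array (n_starts, n_leads, ny, nx) of verifying SST analyses

    Returns an array (n_leads, ny, nx): the mean drift pattern at each
    lead time, which can be compared with the mean-state bias of an
    extended climate run.
    """
    # Average over start dates; random forecast errors cancel and the
    # systematic drift pattern remains.
    return (forecasts - analyses).mean(axis=0)
```

Comparing the spatial pattern of this short-range drift with the bias of a long climate integration (as in Fig. A2) is what motivates the seamless argument: where the patterns match, the climate bias is already seeded in the first few days and can be studied cheaply with initialized forecasts.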