Search Results

You are looking at items 11–20 of 22 for:

  • Author or Editor: Richard Wilson
  • Refine by Access: All Content
Laurence J. Wilson, Stephane Beauregard, Adrian E. Raftery, and Richard Verret

Abstract

Bayesian model averaging (BMA) has recently been proposed as a way of correcting underdispersion in ensemble forecasts. BMA is a standard statistical procedure for combining predictive distributions from different sources. The output of BMA is a probability density function (pdf), which is a weighted average of pdfs centered on the bias-corrected forecasts. The BMA weights reflect the relative contributions of the component models to the predictive skill over a training sample. The variance of the BMA pdf is made up of two components, the between-model variance and the within-model error variance, both estimated from the training sample. This paper describes the results of experiments with BMA to calibrate surface temperature forecasts from the 16-member Canadian ensemble system. Using one year of ensemble forecasts, BMA was applied for different training periods ranging from 25 to 80 days. The method was trained on the most recent forecast period, then applied to the next day’s forecasts as an independent sample. This process was repeated through the year, and forecast quality was evaluated using rank histograms, the continuous ranked probability score, and the continuous ranked probability skill score. An examination of the BMA weights provided a useful comparative evaluation of the component models, both for the ensemble itself and for the ensemble augmented with the unperturbed control forecast and the higher-resolution deterministic forecast. Training periods around 40 days provided a good calibration of the ensemble dispersion. Both full regression and simple bias-correction methods worked well to correct the bias, except that the full regression failed to completely remove seasonal trend biases in spring and fall. Simple correction of the bias was sufficient to produce positive forecast skill with respect to climatology out to 10 days, and the BMA improved this skill further.
The addition of the control forecast and the full-resolution model forecast to the ensemble produced modest improvement in the forecasts for ranges out to about 7 days. Finally, BMA produced significantly narrower 90% prediction intervals compared to a simple Gaussian bias correction, while achieving similar overall accuracy.
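The BMA predictive pdf described above (a weighted average of Gaussian kernels centered on bias-corrected member forecasts) can be sketched as follows. This is illustrative only: the member forecasts, weights, and spread are invented, and the paper's training procedure for estimating the weights and variances is not shown.

```python
import numpy as np

def bma_pdf(x, forecasts, weights, sigma):
    """BMA predictive density at x: a weighted average of Gaussian pdfs
    centered on the bias-corrected member forecasts.  The common spread
    sigma plays the role of the within-model error variance; the spread
    of the member means supplies the between-model variance."""
    kernels = np.exp(-0.5 * ((x - forecasts) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    return float(np.dot(weights, kernels))

# Illustrative 3-member ensemble (values are made up, not from the paper)
forecasts = np.array([14.2, 15.0, 15.8])   # bias-corrected member forecasts, deg C
weights   = np.array([0.5, 0.3, 0.2])      # BMA weights, estimated over a training sample; sum to 1
sigma     = 1.2                            # within-model spread, also from training

density = bma_pdf(16.0, forecasts, weights, sigma)
```

Because the weights sum to one and each kernel is a proper density, the mixture integrates to one, so quantities such as the 90% prediction intervals mentioned above can be read directly off the mixture.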

Richard G. Williams, Chris Wilson, and Chris W. Hughes

Abstract

Signatures of eddy variability and vorticity forcing are diagnosed in the atmosphere and ocean from weather center reanalysis and altimetric data broadly covering the same period, 1992–2002. In the atmosphere, there are localized regions of eddy variability referred to as storm tracks. At the entrance of the storm track the eddies grow, providing a downgradient heat flux and accelerating the mean flow eastward. At the exit and downstream of the storm track, the eddies decay and instead provide a westward acceleration. In the ocean, there are similar regions of enhanced eddy variability along the extension of midlatitude boundary currents and the Antarctic Circumpolar Current. Within these regions of high eddy kinetic energy, there are more localized signals of high Eady growth rate and downgradient eddy heat fluxes. As in the atmosphere, there are localized regions in the Southern Ocean where ocean eddies provide statistically significant vorticity forcing, which acts to accelerate the mean flow eastward, provide torques to shift the jet, or decelerate the mean flow. These regions of significant eddy vorticity forcing are often associated with gaps in the topography, suggesting that the ocean jets are being locally steered by topography. The eddy forcing may also act to assist in the separation of boundary currents, although the diagnostics of this study suggest that this contribution is relatively small when compared with the advection of planetary vorticity by the time-mean flow.

Kevin Hamilton, R. John Wilson, and Richard S. Hemler

Abstract

The large-scale circulation in the Geophysical Fluid Dynamics Laboratory “SKYHI” troposphere–stratosphere–mesosphere finite-difference general circulation model is examined as a function of vertical and horizontal resolution. The experiments examined include one with horizontal grid spacing of ∼35 km and another with ∼100 km horizontal grid spacing but very high vertical resolution (160 levels between the ground and about 85 km). The simulation of the middle-atmospheric zonal-mean winds and temperatures in the extratropics is found to be very sensitive to horizontal resolution. For example, in the early Southern Hemisphere winter the South Pole near 1 mb in the model is colder than observed, but the bias is reduced with improved horizontal resolution (from ∼70°C in a version with ∼300 km grid spacing to less than 10°C in the ∼35 km version). The extratropical simulation is found to be only slightly affected by enhancements of the vertical resolution. By contrast, the tropical middle-atmospheric simulation is extremely dependent on the vertical resolution employed. With level spacing in the lower stratosphere ∼1.5 km, the lower stratospheric zonal-mean zonal winds in the equatorial region are nearly constant in time. When the vertical resolution is doubled, the simulated stratospheric zonal winds exhibit a strong equatorially centered oscillation with downward propagation of the wind reversals and with formation of strong vertical shear layers. This appears to be a spontaneous internally generated oscillation and closely resembles the observed QBO in many respects, although the simulated oscillation has a period less than half that of the real QBO.

Richard Wilson, Hubert Luce, Francis Dalaudier, and Jacques Lefrère

Abstract

The Thorpe analysis is a recognized method used to identify and characterize turbulent regions within stably stratified fluids. By comparing an observed profile of potential temperature or potential density to a reference profile obtained by sorting the data, overturns resulting in statically unstable regions, mainly because of turbulent patches and Kelvin–Helmholtz billows, can be identified. However, measurement noise may induce artificial inversions of potential temperature or density, which can be very difficult to distinguish from real (physical) overturns.
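The sort-and-compare step of the Thorpe analysis can be sketched as follows. This is a simplified illustration, not the authors' code: displacements are expressed in grid points rather than meters of altitude, and the overturn criterion (cumulative displacement sum not yet returned to zero) is a common simplified convention.

```python
import numpy as np

def thorpe_overturns(theta):
    """Identify overturns in a potential-temperature profile by comparing
    it with its sorted (statically stable) reference profile.
    Returns Thorpe displacements (in grid points) and a boolean mask of
    points lying inside an overturn."""
    order = np.argsort(theta, kind="stable")           # indices that sort the profile
    displacements = order - np.arange(theta.size)      # Thorpe displacements
    # Simplified criterion: a point is inside an overturn while the
    # cumulative displacement sum has not yet returned to zero.
    cum = np.cumsum(displacements)
    in_overturn = cum != 0
    return displacements, in_overturn

# Toy profile: statically stable except for one inverted segment
theta = np.array([10.0, 10.2, 10.4, 10.9, 10.7, 10.5, 11.0, 11.2])
disp, mask = thorpe_overturns(theta)
```

In a stable profile all displacements are zero; the inverted segment above produces nonzero displacements, which is exactly where measurement noise can mimic a physical overturn.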

A method for selecting real overturns is proposed. The method is based on the data range statistics; the range is defined as the difference between the maximum and the minimum of the values in a sample. A statistical hypothesis test on the range is derived and evaluated through Monte Carlo simulations. Basically, the test relies on a comparison of the range of a data sample with the range of a normally distributed population of the same size as the data sample. The power of the test, that is, the probability of detecting the existing overturns, is found to be an increasing function of both the trend-to-noise ratio (tnr) and the overturn size. A threshold for the detectable size of the overturns as a function of tnr is derived. For very low tnr data, the test is shown to be unreliable whatever the size of the overturns. In such a case, a procedure aimed at increasing the tnr, based mainly on subsampling, is described.
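The range-based hypothesis test can be sketched with a Monte Carlo comparison along the lines described above: the observed range of a sample is compared against the distribution of ranges produced by same-size Gaussian noise. The noise level, sample size, and significance level below are illustrative assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(0)

def range_test(sample, sigma_noise, n_mc=10000, alpha=0.01):
    """Return True if the range (max - min) of `sample` exceeds what
    Gaussian measurement noise of standard deviation `sigma_noise`
    could plausibly produce in a sample of the same size, at
    significance level `alpha` (Monte Carlo estimate)."""
    n = sample.size
    observed_range = float(sample.max() - sample.min())
    # Ranges of n_mc synthetic noise-only samples of the same size
    noise_ranges = np.ptp(rng.normal(0.0, sigma_noise, size=(n_mc, n)), axis=1)
    threshold = float(np.quantile(noise_ranges, 1.0 - alpha))
    return observed_range > threshold

# A strong trend (high tnr) is flagged as a real structure
detected = range_test(np.linspace(0.0, 10.0, 20), sigma_noise=1.0)
```

A flat (zero-range) sample can never exceed the noise threshold, which mirrors the finding above that low-tnr data make the test unreliable and motivate the subsampling procedure.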

The selection procedure is applied to atmospheric data collected during a balloon flight with low and high vertical resolutions. The fraction of the vertical profile selected as being unstable (turbulent) is 47% (27%) from the high (low) resolution dataset. Furthermore, relatively small tnr measurements are found to give rise to a poor estimation of the vertical extent of the overturns.

Kory J. Priestley, Bruce R. Barkstrom, Robert B. Lee III, Richard N. Green, Susan Thomas, Robert S. Wilson, Peter L. Spence, Jack Paden, D. K. Pandey, and Aiman Al-Hajjah

Abstract

Each Clouds and the Earth’s Radiant Energy System (CERES) instrument contains three scanning thermistor bolometer radiometric channels. These channels measure broadband radiances in the shortwave (0.3–5.0 μm), total (0.3–>100 μm), and water vapor window regions (8–12 μm). Ground-based radiometric calibrations of the CERES flight models were conducted by TRW Inc.’s Space and Electronics Group of Redondo Beach, California. On-orbit calibration and vicarious validation studies have demonstrated radiometric stability, defined as long-term repeatability when measuring a constant source, at better than 0.2% for the first 18 months of science data collection. This stability is 2.5 to 5 times better than the prelaunch radiometric performance goals, which the CERES Science Team set at 0.5% for terrestrial energy flows and 1.0% for solar energy flows. The current effort describes the radiometric performance of the CERES Proto-Flight Model on the Tropical Rainfall Measuring Mission spacecraft over the first 19 months of scientific data collection.

Richard Swinbank, Masayuki Kyouda, Piers Buchanan, Lizzie Froude, Thomas M. Hamill, Tim D. Hewson, Julia H. Keller, Mio Matsueda, John Methven, Florian Pappenberger, Michael Scheuerer, Helen A. Titley, Laurence Wilson, and Munehiko Yamaguchi

Abstract

The International Grand Global Ensemble (TIGGE) was a major component of The Observing System Research and Predictability Experiment (THORPEX) research program, whose aim is to accelerate improvements in forecasting high-impact weather. By providing ensemble prediction data from leading operational forecast centers, TIGGE has enhanced collaboration between the research and operational meteorological communities and enabled research studies on a wide range of topics.

The paper covers the objective evaluation of the TIGGE data. For a range of forecast parameters, it is shown to be beneficial to combine ensembles from several data providers in a multimodel grand ensemble. Alternative methods to correct systematic errors, including the use of reforecast data, are also discussed.

TIGGE data have been used for a range of research studies on predictability and dynamical processes. Tropical cyclones are the most destructive weather systems in the world and are a focus of multimodel ensemble research. Their extratropical transition also has a major impact on the skill of midlatitude forecasts. We also review how TIGGE has added to our understanding of the dynamics of extratropical cyclones and storm tracks.

Although TIGGE is a research project, it has proved invaluable for the development of products for future operational forecasting. Examples include forecasts of tropical cyclone tracks, heavy rainfall, and strong winds, as well as flood prediction through the coupling of hydrological models to ensembles.

Finally, the paper considers the legacy of TIGGE. We discuss the priorities and key issues in predictability and ensemble forecasting, including the new opportunities of convective-scale ensembles, links with ensemble data assimilation methods, and extension of the range of useful forecast skill.

Clifford F. Mass, Mark Albright, David Ovens, Richard Steed, Mark Maciver, Eric Grimit, Tony Eckel, Brian Lamb, Joseph Vaughan, Kenneth Westrick, Pascal Storck, Brad Colman, Chris Hill, Naydene Maykut, Mike Gilroy, Sue A. Ferguson, Joseph Yetter, John M. Sierchio, Clint Bowman, Richard Stender, Robert Wilson, and William Brown

This paper examines the potential of regional environmental prediction by focusing on the local forecasting effort in the Pacific Northwest. A consortium of federal, state, and local agencies has funded the development and operation of a multifaceted numerical prediction system centered at the University of Washington that includes atmospheric, hydrologic, and air quality models, the collection of real-time regional weather data sources, and a number of real-time applications using both observations and model output. The manuscript reviews Northwest modeling and data collection systems, describes the funding and management system established to support and guide the effort, provides some examples of regional real-time applications, and examines the national implications of regional environmental prediction.

David Gochis, Russ Schumacher, Katja Friedrich, Nolan Doesken, Matt Kelsch, Juanzhen Sun, Kyoko Ikeda, Daniel Lindsey, Andy Wood, Brenda Dolan, Sergey Matrosov, Andrew Newman, Kelly Mahoney, Steven Rutledge, Richard Johnson, Paul Kucera, Pat Kennedy, Daniel Sempere-Torres, Matthias Steiner, Rita Roberts, Jim Wilson, Wei Yu, V. Chandrasekar, Roy Rasmussen, Amanda Anderson, and Barbara Brown

Abstract

During the second week of September 2013, a seasonally uncharacteristic weather pattern stalled over the Rocky Mountain Front Range region of northern Colorado bringing with it copious amounts of moisture from the Gulf of Mexico, Caribbean Sea, and the tropical eastern Pacific Ocean. This feed of moisture was funneled toward the east-facing mountain slopes through a series of mesoscale circulation features, resulting in several days of unusually widespread heavy rainfall over steep mountainous terrain. Catastrophic flooding ensued within several Front Range river systems that washed away highways, destroyed towns, isolated communities, necessitated days of airborne evacuations, and resulted in eight fatalities. The impacts from heavy rainfall and flooding were felt over a broad region of northern Colorado leading to 18 counties being designated as federal disaster areas and resulting in damages exceeding $2 billion (U.S. dollars). This study explores the meteorological and hydrological ingredients that led to this extreme event. After providing a basic timeline of events, synoptic and mesoscale circulation features of the event are discussed. Particular focus is placed on documenting how circulation features, embedded within the larger synoptic flow, served to funnel moist inflow into the mountain front driving several days of sustained orographic precipitation. Operational and research networks of polarimetric radar and surface instrumentation were used to evaluate the cloud structures and dominant hydrometeor characteristics. The performance of several quantitative precipitation estimates, quantitative precipitation forecasts, and hydrological forecast products are also analyzed with the intention of identifying what monitoring and prediction tools worked and where further improvements are needed.

Philippe Bougeault, Zoltan Toth, Craig Bishop, Barbara Brown, David Burridge, De Hui Chen, Beth Ebert, Manuel Fuentes, Thomas M. Hamill, Ken Mylne, Jean Nicolau, Tiziana Paccagnella, Young-Youn Park, David Parsons, Baudouin Raoult, Doug Schuster, Pedro Silva Dias, Richard Swinbank, Yoshiaki Takeuchi, Warren Tennant, Laurence Wilson, and Steve Worley

Ensemble forecasting is increasingly accepted as a powerful tool to improve early warnings for high-impact weather. Recently, ensembles combining forecasts from different systems have attracted a considerable level of interest. The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE) project, a prominent contribution to THORPEX, has been initiated to enable advanced research and demonstration of the multimodel ensemble concept and to pave the way toward operational implementation of such a system at the international level. The objectives of TIGGE are 1) to facilitate closer cooperation between the academic and operational meteorological communities by expanding the availability of operational products for research, and 2) to facilitate exploring the concept and benefits of multimodel probabilistic weather forecasts, with a particular focus on high-impact weather prediction. Ten operational weather forecasting centers producing daily global ensemble forecasts to 1–2 weeks ahead have agreed to deliver in near–real time a selection of forecast data to the TIGGE data archives at the China Meteorological Administration, the European Centre for Medium-Range Weather Forecasts, and the National Center for Atmospheric Research. The volume of data accumulated daily is 245 GB (1.6 million global fields). This is offered to the scientific community as a new resource for research and education. The TIGGE data policy is to make each forecast accessible via the Internet 48 h after it was initially issued by each originating center. Quicker access can also be granted for field experiments or projects of particular interest to the World Weather Research Programme and THORPEX. A few examples of initial results based on TIGGE data are discussed in this paper, and the case is made for additional research in several directions.

Leo J. Donner, Bruce L. Wyman, Richard S. Hemler, Larry W. Horowitz, Yi Ming, Ming Zhao, Jean-Christophe Golaz, Paul Ginoux, S.-J. Lin, M. Daniel Schwarzkopf, John Austin, Ghassan Alaka, William F. Cooke, Thomas L. Delworth, Stuart M. Freidenreich, C. T. Gordon, Stephen M. Griffies, Isaac M. Held, William J. Hurlin, Stephen A. Klein, Thomas R. Knutson, Amy R. Langenhorst, Hyun-Chul Lee, Yanluan Lin, Brian I. Magi, Sergey L. Malyshev, P. C. D. Milly, Vaishali Naik, Mary J. Nath, Robert Pincus, Jeffrey J. Ploshay, V. Ramaswamy, Charles J. Seman, Elena Shevliakova, Joseph J. Sirutis, William F. Stern, Ronald J. Stouffer, R. John Wilson, Michael Winton, Andrew T. Wittenberg, and Fanrong Zeng

Abstract

The Geophysical Fluid Dynamics Laboratory (GFDL) has developed a coupled general circulation model (CM3) for the atmosphere, oceans, land, and sea ice. The goal of CM3 is to address emerging issues in climate change, including aerosol–cloud interactions, chemistry–climate interactions, and coupling between the troposphere and stratosphere. The model is also designed to serve as the physical system component of earth system models and models for decadal prediction in the near-term future—for example, through improved simulations in tropical land precipitation relative to earlier-generation GFDL models. This paper describes the dynamical core, physical parameterizations, and basic simulation characteristics of the atmospheric component (AM3) of this model. Relative to GFDL AM2, AM3 includes new treatments of deep and shallow cumulus convection, cloud droplet activation by aerosols, subgrid variability of stratiform vertical velocities for droplet activation, and atmospheric chemistry driven by emissions with advective, convective, and turbulent transport. AM3 employs a cubed-sphere implementation of a finite-volume dynamical core and is coupled to LM3, a new land model with ecosystem dynamics and hydrology. Its horizontal resolution is approximately 200 km, and its vertical resolution ranges approximately from 70 m near the earth’s surface to 1 to 1.5 km near the tropopause and 3 to 4 km in much of the stratosphere. Most basic circulation features in AM3 are simulated as realistically as, or more realistically than, in AM2. In particular, dry biases have been reduced over South America. In coupled mode, the simulation of Arctic sea ice concentration has improved. AM3 aerosol optical depths, scattering properties, and surface clear-sky downward shortwave radiation are more realistic than in AM2. The simulation of marine stratocumulus decks remains problematic, as in AM2. The most intense 0.2% of precipitation rates occur less frequently in AM3 than observed.
The last two decades of the twentieth century warm in CM3 by 0.32°C relative to 1881–1920. The Climate Research Unit (CRU) and Goddard Institute for Space Studies analyses of observations show warming of 0.56° and 0.52°C, respectively, over this period. CM3 includes anthropogenic cooling by aerosol–cloud interactions, and its warming by the late twentieth century is somewhat less realistic than in CM2.1, which warmed 0.66°C but did not include aerosol–cloud interactions. The improved simulation of the direct aerosol effect (apparent in surface clear-sky downward radiation) in CM3 evidently acts in concert with its simulation of cloud–aerosol interactions to limit greenhouse gas warming.
