Search Results

You are looking at 1 - 10 of 33 items for

  • Author or Editor: Rebecca E. Morss
  • User-accessible content
Rebecca E. Morss

Atmospheric science information is a component of numerous public policy decisions. Moreover, many resources for atmospheric science are allocated by governments, in other words, through public policy decisions. Thus, all atmospheric scientists—those interested in helping address societal problems, and those interested primarily in advancing science—have a stake in public policy decisions. Yet atmospheric science and public policy are sufficiently different that atmospheric scientists often find it challenging to contribute effectively to public policy. To help reduce this gap, this article examines the area where atmospheric science, public policy research, and public policy decisions intersect. Focusing on how atmospheric science and public policy inform each other, the article discusses and illustrates a key concept in public policy—the importance of problem definition—using an atmospheric science policy issue of current interest: observing-system design for weather prediction. To help the atmospheric science community participate more effectively in societal decision making (on observing-system design and other topics), the article closes with three suggestions for atmospheric scientists considering policy issues.

Full access
Rebecca E. Morss and Fuqing Zhang

After the 2005 hurricane season, several meteorology students at Texas A&M University became interested in understanding Hurricane Rita's forecasts and societal impacts in greater depth. In response to the students' interest, we developed a collaborative student research study associated with an undergraduate course in the spring semester of 2006. The study included both a meteorological component and an interdisciplinary component in which students performed an in-person survey of Texas Gulf Coast residents. Students were involved in multiple phases of the research, from design to implementation to dissemination of results. This collaborative research model engaged and motivated the students, providing substantial educational benefits. The study and class linked the students' classroom knowledge to reality while generating new knowledge about the societal aspects of Hurricane Rita and other hurricanes. This paper reviews key aspects of the study and class, presenting a prototype integrated research-education model for others interested in incorporating active learning, collaborative inquiry, and interdisciplinary study into undergraduate classrooms. The model can be implemented at both colleges and research universities for a variety of topics of interest to students, teachers, the research community, and society.

Full access
Rebecca E. Morss, Chris Snyder, and Richard Rotunno

Abstract

Results from homogeneous, isotropic turbulence suggest that predictability behavior is linked to the slope of a flow’s kinetic energy spectrum. Such a link has potential implications for the predictability behavior of atmospheric models. This article investigates these topics in an intermediate context: a multilevel quasigeostrophic model with a jet and temperature perturbations at the upper surface (a surrogate tropopause). Spectra and perturbation growth behavior are examined at three model resolutions. The results augment previous studies of spectra and predictability in quasigeostrophic models, and they provide insight that can help interpret results from more complex models. At the highest resolution tested, the slope of the kinetic energy spectrum is approximately −5/3 at the upper surface but −3 or steeper at all but the uppermost interior model levels. Consistent with this, the model’s predictability behavior exhibits key features expected for flow with a shallower than −3 slope. At the highest resolution, upper-surface perturbation spectra peak at scales below the energy-containing scales, and the error growth rate decreases as small scales saturate. In addition, as model resolution is increased and smaller scales are resolved, the peak of the upper-surface perturbation spectra shifts to smaller scales and the error growth rate increases. The implications for potential predictive improvements are not as severe, however, as in the standard picture of flows exhibiting a finite predictability limit. At the highest resolution, the model also exhibits periods of much faster-than-average perturbation growth that are associated with faster growth at smaller scales, suggesting predictability behavior that varies with time.
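
The spectral-slope comparison underlying these results is conceptually simple: fit a power law to the kinetic energy spectrum over a range of wavenumbers and compare the fitted exponent to the −5/3 and −3 reference slopes. The sketch below is only an illustration (not code from the paper); the synthetic −5/3 spectrum and the fitting range are assumptions made for the example.

    # Minimal sketch: diagnose a kinetic energy spectrum slope with a
    # log-log least-squares fit. The synthetic E(k) ~ k^(-5/3) spectrum and
    # the fitting range are illustrative assumptions, not values from the paper.
    import numpy as np

    rng = np.random.default_rng(0)

    k = np.arange(1, 513).astype(float)            # nondimensional wavenumbers
    true_slope = -5.0 / 3.0
    E = k**true_slope * np.exp(0.2 * rng.standard_normal(k.size))  # noisy spectrum

    # Fit log E = slope * log k + intercept over an assumed inertial range,
    # then compare the fitted exponent with the -5/3 and -3 reference slopes.
    fit = (k >= 10) & (k <= 300)
    slope, intercept = np.polyfit(np.log(k[fit]), np.log(E[fit]), 1)

    print(f"fitted spectral slope: {slope:.2f} (reference values: -5/3, -3)")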

Full access
Rebecca E. Morss and David S. Battisti

Abstract

The Tropical Atmosphere Ocean (TAO) array of moored buoys in the tropical Pacific Ocean is a major source of data for understanding and predicting the El Niño–Southern Oscillation (ENSO). Despite the importance of the TAO array, limited work has been performed to date on the number and locations of observations required to predict ENSO effectively. To address this issue, this study performs a series of observing system simulation experiments (OSSEs) with a linearized intermediate coupled ENSO model, stochastically forced. ENSO forecasts are simulated for a number of observing network configurations, and forecast skill averaged over 1000 years of simulated ENSO events is compared.

The experiments demonstrate that an OSSE framework can be used with a linear, stochastically forced ENSO model to provide useful information about requirements for ENSO prediction. To the extent that the simplified model dynamics represent ENSO dynamics accurately, the experiments also suggest which types of observations in which regions are most important for ENSO prediction. The results indicate that, using this model and experimental setup, subsurface ocean observations are relatively unimportant for ENSO prediction when good information about sea surface temperature (SST) is available; adding subsurface observations primarily improves forecasts initialized in late summer. For short lead-time (1–2 month) forecasts, observations within approximately 3° of the equator are most important for skillful forecasts, while for longer lead-time forecasts, forecast skill is increased by including information at higher latitudes. For forecasts longer than a few months, the most important region for observations is the eastern equatorial Pacific, south of the equator; a secondary region of importance is the western equatorial Pacific. These regions correspond to those where the leading singular vector for the ENSO model has a large amplitude. In a continuation of this study, these results will be used to develop efficient observing networks for forecasting ENSO in this system.
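
As a schematic illustration of the OSSE framework described above, and only that, the Python sketch below compares mean forecast error for a dense and a sparse observing network using a damped, stochastically forced linear model. The dynamics, the analysis step, and all numbers are invented stand-ins, not the intermediate coupled ENSO model or its configuration.

    # Toy OSSE sketch under strong assumptions: a damped, stochastically forced
    # linear model stands in for the coupled ENSO model, and the "analysis"
    # simply replaces observed components with noisy observations while leaving
    # unobserved components at climatology (zero). Purely illustrative.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 8                                            # state dimension (illustrative)
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    A = 0.9 * Q                                      # stable stand-in for the linear dynamics

    def simulate_truth(steps):
        """Nature run: damped linear dynamics with stochastic forcing."""
        x = np.zeros(n)
        states = []
        for _ in range(steps):
            x = A @ x + 0.3 * rng.standard_normal(n)
            states.append(x.copy())
        return np.array(states)

    def forecast_rmse(obs_idx, lead=6, cases=2000, obs_err=0.1):
        """Mean forecast RMSE at a fixed lead time for a given observing network."""
        truth = simulate_truth(cases + lead)
        sq_errs = []
        for t in range(cases):
            analysis = np.zeros(n)                   # climatological background
            analysis[obs_idx] = truth[t, obs_idx] + obs_err * rng.standard_normal(len(obs_idx))
            xf = analysis
            for _ in range(lead):                    # deterministic forecast from the analysis
                xf = A @ xf
            sq_errs.append(np.mean((xf - truth[t + lead]) ** 2))
        return np.sqrt(np.mean(sq_errs))

    dense = list(range(n))                           # observe every state component
    sparse = [0, 3, 6]                               # observe only a few components
    print("forecast RMSE, dense network :", round(forecast_rmse(dense), 3))
    print("forecast RMSE, sparse network:", round(forecast_rmse(sparse), 3))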

Full access
Rebecca E. Morss and William H. Hooke

In many respects, the prospects for U.S. meteorological research have never been brighter. Knowledge is advancing rapidly, as are supporting observing and information technologies. The accuracy, timeliness, and information content of forecasts are improving year by year. As a result, new and growing markets eagerly await the products of weather research, and opportunities for commercialization abound. Furthermore, no end to the progress of knowledge is in sight; there is plenty of interesting research left to do.

Other trends, however, give cause for concern. In particular, the growing value of weather services and science is straining long-established public–private and international partnerships that are vital to our field. Closer to home, the meteorological community can see nascent signs of some of the same commercialization-related difficulties that now challenge biotechnology.

In fact, the biotechnology community's experience with commercialization of research teaches valuable lessons. Attention to these issues now, and appropriate early action, may help the meteorological community benefit from commercialization while avoiding similar pitfalls. This would not only serve our field well but also ensure that society continues to benefit from meteorological research advances in the decades to come.

Full access
Rebecca E. Morss and F. Martin Ralph

Abstract

Winter storms making landfall in western North America can generate heavy precipitation and other significant weather, leading to floods, landslides, and other hazards that cause substantial damage and loss of life. To help alleviate these negative impacts, the California Land-falling Jets (CALJET) and Pacific Land-falling Jets (PACJET) experiments collected additional meteorological observations in the coastal region to investigate key research questions and to aid operational West Coast 0–48-h weather forecasting. This article presents results from a study of how information provided by CALJET and PACJET was used by National Weather Service (NWS) forecasters and forecast users. The primary study methodology was analysis of qualitative data collected from observations of forecasters and from interviews with NWS personnel, CALJET–PACJET researchers, and forecast users. The article begins by documenting and discussing the many types of information that NWS forecasters combine to generate forecasts. Within this context, the article describes how forecasters used CALJET–PACJET observations to fill in key observational gaps. It then discusses researcher–forecaster interactions and examines how weather forecast information is used in emergency management decision making. The results elucidate the important role that forecasters play in integrating meteorological information and translating forecasts for users. More generally, the article illustrates how CALJET and PACJET benefited forecasts and society in real time, and it can inform future efforts to improve human-generated weather forecasts and future studies of the use and value of meteorological information.

Full access
Rebecca E. Morss and David S. Battisti

Abstract

The Tropical Atmosphere Ocean (TAO) array of moored buoys in the tropical Pacific Ocean is a major source of data for understanding and predicting El Niño–Southern Oscillation (ENSO). Despite the importance of the TAO array, limited work has been performed to date on where observations are most important for predicting ENSO effectively. To address this issue, this study performs a series of observing system simulation experiments (OSSEs) with a linearized intermediate coupled ENSO model, stochastically forced. ENSO forecasts are simulated for a variety of observing network configurations, and forecast skill averaged over many simulated ENSO events is compared.

The first part of this study examined the relative importance of sea surface temperature (SST) and subsurface ocean observations, requirements for spacing and meridional extent of observations, and important regions for observations in this system. Using these results as a starting point, this paper develops efficient observing networks for forecasting ENSO in this system, where efficient is defined as providing reasonably skillful forecasts for relatively few observations. First, efficient networks that provide SST and thermocline depth data at the same locations are developed and discussed. Second, efficient networks of only thermocline depth observations are addressed, assuming that many SST observations are available from another source (e.g., satellites). The dependence of the OSSE results on the duration of the simulated data record is also explored. The results suggest that several decades of data may be sufficient for evaluating the effects of observing networks on ENSO forecast skill, despite being insufficient for evaluating the long-term potential predictability of ENSO.
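
One simple way to make "efficient" concrete, shown purely as an illustration and not as the procedure used in the paper, is greedy forward selection: repeatedly add the candidate observation site that most reduces a forecast-error measure. The Python sketch below applies this idea to a toy error metric; the site values and correlation structure are invented for the example.

    # Greedy forward selection of observation sites, an illustrative way
    # (not necessarily the paper's procedure) to build a small, efficient network.
    import numpy as np

    rng = np.random.default_rng(2)
    n_locations = 12                                  # candidate observation sites (illustrative)

    # Toy ingredients: an information "value" per site and a correlation matrix
    # that penalizes redundant, nearby sites. Both are invented for this example.
    value = rng.uniform(0.2, 1.0, n_locations)
    dist = np.abs(np.subtract.outer(np.arange(n_locations), np.arange(n_locations)))
    corr = np.exp(-dist / 2.0)

    def toy_forecast_error(network):
        """Error-like score in (0, 1]: lower when the network covers high-value,
        weakly correlated sites (v^T C^{-1} v as the information content)."""
        C = corr[np.ix_(network, network)]
        v = value[network]
        info = float(v @ np.linalg.solve(C, v))
        return 1.0 / (1.0 + info)

    # Greedy loop: at each step, add the site giving the largest error reduction.
    network = []
    for _ in range(5):
        remaining = [i for i in range(n_locations) if i not in network]
        best = min(remaining, key=lambda i: toy_forecast_error(network + [i]))
        network.append(best)
        print(f"added site {best:2d} -> toy forecast error {toy_forecast_error(network):.3f}")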

Full access
Thomas M. Hamill, Chris Snyder, and Rebecca E. Morss

Abstract

A perfect model Monte Carlo experiment was conducted to explore the characteristics of analysis error in a quasigeostrophic model. An ensemble of cycled analyses was created, with each member of the ensemble receiving different observations and starting from different forecast states. Observations were created by adding random error (consistent with observational error statistics) to vertical profiles extracted from truth run data. Assimilation of new observations was performed every 12 h using a three-dimensional variational analysis scheme. Three observation densities were examined: a low-density network (one observation ∼ every 20² grid points), a moderate-density network (one observation ∼ every 10² grid points), and a high-density network (one observation ∼ every 5² grid points). Error characteristics were diagnosed primarily from a subset of 16 analysis times taken every 10 days from a long time series, with the first sample taken after a 50-day spinup. The goal of this paper is to understand the spatial, temporal, and some dynamical characteristics of analysis errors.

Results suggest a nonlinear relationship between observational data density and analysis error; there was a much greater reduction in error from the low- to moderate-density networks than from moderate to high density. Errors in the analysis reflected both structured errors created by the chaotic dynamics and random observational errors. The correction of the background toward the observations reduced the error but also randomized the prior dynamical structure of the errors, though there was a dependence of error structure on observational data density. Generally, the more observations, the more homogeneous the errors were in time and space and the less the analysis errors projected onto the leading backward Lyapunov vectors. Analyses provided more information at higher wavenumbers as data density increased. Errors were largest in the upper troposphere and smallest in the mid- to lower troposphere. Relatively small ensembles were effective in capturing a large percentage of the analysis-error variance, though more members were needed to capture a specified fraction of the variance as observation density increased.
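
The cycled, perfect-model experimental design described above can be illustrated with a much smaller toy problem. The Python sketch below is only a schematic analogue: a one-dimensional periodic "model", a scalar blend of background and observations standing in for the three-dimensional variational scheme, and invented grid sizes, error variances, and observation spacings. It shows how mean analysis error can be compared across observation densities in a cycled experiment.

    # Toy cycled, perfect-model assimilation experiment. All specifics are
    # illustrative assumptions, not values or methods from the paper.
    import numpy as np

    rng = np.random.default_rng(3)
    nx, cycles, obs_err = 100, 200, 0.5              # illustrative sizes and error level

    forcing = 0.05 * np.sin(np.linspace(0.0, 2.0 * np.pi, nx, endpoint=False))

    def step(x):
        """Toy 'model': advect the field one grid point and add a fixed forcing.
        The same model advances both the nature run and the forecasts (perfect model)."""
        return np.roll(x, 1) + forcing

    def mean_analysis_rmse(obs_every):
        obs_idx = np.arange(0, nx, obs_every)
        truth = rng.standard_normal(nx)              # arbitrary initial true state
        xa = np.zeros(nx)                            # first-guess analysis
        rmse = []
        for _ in range(cycles):
            truth = step(truth)                      # nature run
            xb = step(xa)                            # background forecast from the last analysis
            y = truth[obs_idx] + obs_err * rng.standard_normal(obs_idx.size)
            xa = xb.copy()
            # Stand-in for the variational update: blend background and observations
            # at observed points with equal (assumed) error variances, i.e. weight 0.5.
            xa[obs_idx] = 0.5 * xb[obs_idx] + 0.5 * y
            rmse.append(np.sqrt(np.mean((xa - truth) ** 2)))
        return np.mean(rmse[50:])                    # discard spinup cycles

    for spacing in (20, 10, 5):                      # low-, moderate-, high-density networks
        print(f"obs every {spacing:2d} grid points -> mean analysis RMSE {mean_analysis_rmse(spacing):.3f}")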

Full access
Rebecca E. Morss, Julie L. Demuth, and Jeffrey K. Lazo

Abstract

Weather forecasts are inherently uncertain, and meteorologists have information about weather forecast uncertainty that is not readily available to most forecast users. Yet effectively communicating forecast uncertainty to nonmeteorologists remains challenging. Improving forecast uncertainty communication requires research-based knowledge that can inform decisions on what uncertainty information to communicate, when, and how to do so. To help build such knowledge, this article explores the public’s perspectives on everyday weather forecast uncertainty and uncertainty information using results from a nationwide survey. By contributing to the fundamental understanding of laypeople’s views on forecast uncertainty, the findings can inform both uncertainty communication and related research.

The article uses empirical data from a nationwide survey of the U.S. public to investigate beliefs commonly held among meteorologists and to explore new topics. The results show that when given a deterministic temperature forecast, most respondents expected the temperature to fall within a range around the predicted value. In other words, most people read uncertainty into the deterministic forecast. People’s preferences for deterministic versus nondeterministic forecasts were examined in two situations; in both, a significant majority of respondents liked weather forecasts that expressed uncertainty, and many preferred such forecasts to single-valued forecasts. The article also discusses people’s confidence in different types of forecasts, their interpretations of probability of precipitation forecasts, and their preferences for how forecast uncertainty is conveyed. Further empirical research is needed to study the article’s findings in other contexts and to continue exploring perception, interpretation, communication, and use of weather forecast uncertainty.

Full access
Hyun Mee Kim, Michael C. Morgan, and Rebecca E. Morss

Abstract

The structure and evolution of analysis error and adjoint-based sensitivities [potential enstrophy initial singular vectors (SVs) and gradient sensitivities of the forecast error to initial conditions] are compared for a developing cyclone in a three-dimensional quasigeostrophic channel model. The results show that the projection of the evolved SV onto the forecast error increases during the evolution.

Based on the similarities of the evolved SV to the forecast error, use of the evolved SV is suggested as an adaptive observation strategy. The use of the evolved SV strategy for adaptive observations is evaluated by performing observation system simulation experiments using a three-dimensional variational data assimilation scheme under the perfect model assumption. Adaptive strategies using the actual forecast error, gradient sensitivity, and initial SV are also tested. The observation system simulation experiments are implemented for five simulated synoptic cases with two different observation spacings and three different configurations of adaptive observation location densities (sparse, dense, and mixed), and the impact of the adaptive strategies is compared with that of the nonadaptive, fixed observations.

The impact of adaptive strategies varies with the observation density. For a small number of observations, several of the adaptive strategies tested reduce forecast error more than the nonadaptive strategy. For a large number of observations, it is more difficult to reduce forecast errors using adaptive observations. The evolved SV strategy performs as well as or better than the adjoint-based strategies for both observation densities. The benefit of using the evolved SVs rather than the adjoint-based sensitivities for adaptive observations is largest when many observation stations are available, a situation in which reducing forecast error with adjoint-based adaptive strategies is difficult.
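
For readers unfamiliar with singular-vector-based targeting, the Python sketch below illustrates the core computation under strong simplifications: a random matrix stands in for the tangent-linear propagator, and the Euclidean norm replaces the potential enstrophy norm; neither the quasigeostrophic model nor the assimilation experiments are reproduced. The leading right singular vector is the initial SV, its image under the propagator is the evolved SV, and candidate adaptive observation sites are taken where the evolved SV has the largest amplitude.

    # Minimal singular-vector targeting sketch. The propagator, norm, and grid
    # are illustrative stand-ins, not the configuration used in the paper.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 50                                           # grid points (illustrative)

    # Stand-in tangent-linear propagator over the optimization interval.
    M = rng.standard_normal((n, n)) / np.sqrt(n)

    U, s, Vt = np.linalg.svd(M)
    initial_sv = Vt[0]                               # leading initial singular vector
    evolved_sv = M @ initial_sv                      # its structure at the end of the interval
    evolved_sv /= np.linalg.norm(evolved_sv)         # equals +/- U[:, 0]

    n_adaptive = 5
    targets = np.argsort(np.abs(evolved_sv))[::-1][:n_adaptive]

    print(f"leading singular value (growth factor): {s[0]:.2f}")
    print("candidate adaptive observation sites:", sorted(targets.tolist()))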

Full access