Search Results

Showing 1–10 of 29 items for Author or Editor: Michael E. Baldwin

S. Lakshmivarahan, Michael E. Baldwin, and Tao Zheng

Abstract

The goal of this paper is to provide a complete picture of the long-term behavior of Lorenz’s maximum simplification equations along with the corresponding meteorological interpretation for all initial conditions and all values of the parameter.
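Lorenz's maximum simplification system is a three-mode quadratic ("triad") set of ordinary differential equations in which each spectral amplitude is forced by the product of the other two. As a hedged illustration only (the coefficients below are placeholders, not the wavenumber-dependent values analyzed in the paper), the long-term behavior for a given initial condition can be explored numerically:

```python
import numpy as np

# Generic quadratic triad of the maximum-simplification type: each
# amplitude is forced by the product of the other two. The coefficients
# here are illustrative; in the actual system they are fixed by the
# wavenumbers of the three retained spectral modes.
A = (1.0, -2.0, 1.0)

def rhs(x):
    a1, a2, a3 = A
    return np.array([a1 * x[1] * x[2],
                     a2 * x[2] * x[0],
                     a3 * x[0] * x[1]])

def rk4(x, dt, nsteps):
    """Fourth-order Runge-Kutta integration of the triad system."""
    for _ in range(nsteps):
        k1 = rhs(x)
        k2 = rhs(x + 0.5 * dt * k1)
        k3 = rhs(x + 0.5 * dt * k2)
        k4 = rhs(x + dt * k3)
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

x0 = np.array([1.0, 0.1, 0.0])
xT = rk4(x0, dt=0.01, nsteps=1000)
print(xT)
```

A quick sanity check on such an integration is conservation of the quadratic invariants of the triad (e.g. x1²/a1 − x2²/a2), which RK4 preserves to high accuracy at this step size.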

Benjamin R. J. Schwedler and Michael E. Baldwin

Abstract

While the use of binary distance measures has a substantial history in the field of image processing, these techniques have only recently been applied in the area of forecast verification. Designed to quantify the distance between two images, these measures can easily be extended for use with paired forecast and observation fields. The behavior of traditional forecast verification metrics based on the dichotomous contingency table continues to be an area of active study, but the sensitivity of image metrics has not yet been analyzed within the framework of forecast verification. Four binary distance measures are presented and the response of each to changes in event frequency, bias, and displacement error is documented. The Hausdorff distance and its derivatives, the modified and partial Hausdorff distances, are shown to be sensitive only to changes in base rate, bias, and displacement between the forecast and observation. In addition to its sensitivity to these three parameters, the Baddeley image metric is also sensitive to additional aspects of the forecast situation. It is shown that the Baddeley metric depends not only on the spatial relationship between a forecast and observation but also on the location of the events within the domain. This behavior may have considerable impact on the results obtained when using this measure for forecast verification. For ease of comparison, a hypothetical forecast event is presented to quantitatively analyze the various sensitivities of these distance measures.
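For concreteness, here is a minimal sketch (not code from the paper) of the classic Hausdorff distance applied to a pair of binary forecast and observation fields; the modified, partial, and Baddeley variants build on the same ingredients. Both fields are assumed to contain at least one event point.

```python
import numpy as np

def hausdorff(a, b):
    """Hausdorff distance between two binary event fields, in grid units."""
    pa = np.argwhere(a).astype(float)   # event coordinates in field a
    pb = np.argwhere(b).astype(float)   # event coordinates in field b
    # Pairwise Euclidean distances between every pair of event points
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    # Directed distances in each direction; the Hausdorff distance
    # is the larger of the two.
    return max(d.min(axis=1).max(), d.min(axis=0).max())

obs = np.zeros((50, 50), dtype=bool)
fcst = np.zeros((50, 50), dtype=bool)
obs[10:15, 10:15] = True
fcst[10:15, 20:25] = True   # same feature displaced 10 columns east
print(hausdorff(fcst, obs))  # 10.0 for a pure translation
```

For a pure displacement like this, the measure responds linearly to the displacement, which is the kind of sensitivity the abstract documents.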

Michael E. Baldwin and John S. Kain

Abstract

The sensitivity of various accuracy measures to displacement error, bias, and event frequency is analyzed for a simple hypothetical forecasting situation. Each measure is found to be sensitive to displacement error and bias, but probability of detection and threat score do not change as a function of event frequency. On the other hand, equitable threat score, true skill statistic, and odds ratio skill score behave differently with changing event frequency. A newly devised measure, here called the bias-adjusted threat score, does not change with varying event frequency and is relatively insensitive to bias. Numerous plots are presented to allow users of these accuracy measures to make quantitative estimates of sensitivities that are relevant to their particular application.
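The measures discussed are all functions of the 2×2 dichotomous contingency table. A minimal sketch of the standard definitions (not code from the paper; the bias-adjusted threat score itself is defined in the paper and is not reproduced here):

```python
def scores(hits, false_alarms, misses, correct_negatives):
    """Standard accuracy measures from the 2x2 contingency table."""
    n = hits + false_alarms + misses + correct_negatives
    pod = hits / (hits + misses)                    # probability of detection
    ts = hits / (hits + false_alarms + misses)      # threat score (CSI)
    # Hits expected by chance, used by the equitable threat score
    hits_random = (hits + false_alarms) * (hits + misses) / n
    ets = (hits - hits_random) / (hits + false_alarms + misses - hits_random)
    bias = (hits + false_alarms) / (hits + misses)  # frequency bias
    return {"POD": pod, "TS": ts, "ETS": ets, "bias": bias}

print(scores(hits=50, false_alarms=25, misses=25, correct_negatives=900))
```

Note that POD and TS ignore the correct negatives entirely, which is why they are insensitive to event frequency, whereas ETS depends on it through the chance-hits term.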

Michael E. Baldwin, John S. Kain, and Michael P. Kay

Abstract

The impact of parameterized convection on Eta Model forecast soundings is examined. The Betts–Miller–Janjić parameterization used in the National Centers for Environmental Prediction Eta Model introduces characteristic profiles of temperature and moisture in model soundings. These specified profiles can provide misleading representations of various vertical structures and can strongly affect model predictions of parameters that are used to forecast deep convection, such as convective available potential energy and convective inhibition. The specific procedures and tendencies of this parameterization are discussed, and guidelines for interpreting Eta Model soundings are presented.
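Convective available potential energy (CAPE), one of the forecast parameters mentioned, is the vertical integral of positive parcel buoyancy. A toy sketch with an idealized sounding (the profile values are invented for illustration and are not Eta Model output):

```python
import numpy as np

G = 9.81  # gravitational acceleration, m s^-2

def cape(z, t_parcel, t_env):
    """CAPE (J kg^-1) by trapezoidal integration of parcel buoyancy.

    z: heights (m); t_parcel, t_env: virtual temperatures (K) at those
    levels. Only levels where the parcel is positively buoyant contribute.
    """
    buoy = G * (t_parcel - t_env) / t_env
    buoy = np.maximum(buoy, 0.0)  # keep the positive area only
    # Trapezoidal rule over the column
    return float(np.sum(0.5 * (buoy[1:] + buoy[:-1]) * np.diff(z)))

# Toy sounding: parcel 2 K warmer than the environment through a 5-km layer
z = np.linspace(0.0, 5000.0, 51)
t_env = 290.0 - 0.0065 * z        # roughly standard lapse rate
t_parcel = t_env + 2.0
print(round(cape(z, t_parcel, t_env)))
```

Because the Betts–Miller–Janjić scheme relaxes model soundings toward reference profiles, quantities like this integral can shift even when the scheme produces little precipitation, which is the interpretation issue the abstract raises.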

Elizabeth E. Ebert, Ulrich Damrath, Werner Wergen, and Michael E. Baldwin

Twenty-four-hour and 48-h quantitative precipitation forecasts (QPFs) from 11 operational numerical weather prediction models have been verified for a 4-yr period against rain gauge observations over the United States, Germany, and Australia to assess their skill in predicting the occurrence and amount of daily precipitation.

Model QPFs had greater skill in winter than in summer, and greater skill in midlatitudes than in the Tropics, where they performed only marginally better than “persistence.” The best agreement among models, as well as the best ability to discriminate raining areas, occurred for a low rain threshold of 1–2 mm d⁻¹. In contrast, the skill for forecasts of rain greater than 20 mm d⁻¹ was generally quite low, reflecting the difficulty in predicting precisely when and where heavy rain will fall. The location errors for rain systems, determined using pattern matching with the observations, were typically about 100 km for 24-h forecasts, with smaller errors occurring for the heaviest rain systems.

It does not appear that model QPFs improved significantly during the four years examined. As new model versions were introduced their performance changed, not always for the better. The process of improving model numerics and physics is a complicated juggling act, and unless the accurate prediction of rainfall is made a top priority, improvements in model QPF will continue to come only slowly.

Nathan M. Hitchens, Michael E. Baldwin, and Robert J. Trapp

Abstract

Extreme precipitation was identified in the midwestern United States using an object-oriented approach applied to the NCEP stage-II hourly precipitation dataset. This approach groups contiguous areas that exceed a user-defined threshold into “objects,” which then allows object attributes to be diagnosed. Those objects with precipitation maxima in the 99th percentile (>55 mm) were considered extreme, and there were 3484 such objects identified in the midwestern United States between 1996 and 2010. Precipitation objects ranged in size from hundreds to over 100 000 km², and the maximum precipitation within each object varied between 55 and 104 mm. The majority of occurrences of extreme precipitation were in the summer (June, July, and August), and peaked in the afternoon into night (1900–0200 UTC) in the diurnal cycle. Consistent with the previous work by the authors, this study shows that the systems that produce extreme precipitation in the midwestern United States vary widely across the convective-storm spectrum.
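The thresholding-and-grouping step of an object-oriented approach like this can be sketched with connected-component labeling; the 55-mm threshold matches the 99th percentile quoted above, but the stage-II data handling and percentile computation are omitted, and the sample field is invented:

```python
import numpy as np
from scipy.ndimage import label

def precip_objects(field, threshold=55.0):
    """Group contiguous grid boxes exceeding `threshold` into objects.

    Returns (size_in_gridboxes, max_precip) for each object found.
    """
    labeled, nobj = label(field > threshold)   # 4-connected by default
    out = []
    for i in range(1, nobj + 1):
        mask = labeled == i
        out.append((int(mask.sum()), float(field[mask].max())))
    return out

# Tiny synthetic precipitation field (mm) with two distinct objects
field = np.zeros((8, 8))
field[1:3, 1:3] = 60.0
field[5, 5:7] = [80.0, 70.0]
print(precip_objects(field))
```

Once objects are in hand, attributes such as area and maximum intensity fall out directly, as in the size and 55–104-mm intensity ranges reported above.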

John S. Kain, Stephen M. Goss, and Michael E. Baldwin

Abstract

The process of atmospheric cooling due to melting precipitation is examined to evaluate its contribution to determining precipitation type. The “melting effect” is typically of second-order importance compared to other processes that influence the lower-tropospheric air temperature and hence the type of precipitation that reaches the ground. In some cases, however, cooling due to melting snowflakes can emerge as the dominant agent of temperature change, occasionally surprising forecasters (and the public) by inducing an unexpected changeover from rain to heavy snow. One such case occurred on 3–4 February 1998 in east-central Tennessee and surrounding areas.

Commonly applied considerations for predicting precipitation type had convinced forecasters that significant snowfall was not likely with this event. However, real-time observations and a postevent analysis by forecasters at the Storm Prediction Center led to the hypothesis that the melting effect must have provided the cooling necessary to allow widespread heavy snowfall. To test this hypothesis, the Pennsylvania State University–NCAR Mesoscale Model was used to generate a mesoscale-resolution, four-dimensional dataset for this event. Diagnostic analysis of the model output confirmed that cooling due to melting snowflakes was of a sufficient magnitude to account for the disparity between observed and forecast lower-tropospheric temperatures in this case.

A simple formula is derived to provide a “rule of thumb” for anticipating the potential impact of the melting effect. In addition, guidelines are provided for identifying meteorological patterns that favor a predominance of the melting effect.
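The paper's rule-of-thumb formula is not reproduced in the abstract, but a standard back-of-envelope estimate of melting-induced cooling (the latent heat of fusion is drawn from the air layer in which the snow melts) gives the flavor of such a calculation; the layer depth and snowfall amount below are illustrative:

```python
L_F = 3.34e5   # latent heat of fusion of water, J kg^-1
C_P = 1004.0   # specific heat of dry air at constant pressure, J kg^-1 K^-1

def melting_cooling(liquid_equiv_mm, layer_depth_m, air_density=1.2):
    """Cooling (K) if snow of the given liquid-equivalent depth melts
    uniformly within an air layer of the given depth.

    Per unit area, 1 mm of liquid equivalent corresponds to 1 kg of water.
    """
    water_mass = liquid_equiv_mm * 1.0       # kg m^-2
    air_mass = air_density * layer_depth_m   # kg m^-2
    return L_F * water_mass / (C_P * air_mass)

# Example: 10 mm of liquid equivalent melting within a 1-km-deep layer
print(round(melting_cooling(10.0, 1000.0), 2))  # 2.77 K of cooling
```

A cooling of a few kelvins concentrated near the melting level is exactly the magnitude that can flip a marginal rain event over to heavy snow, as in the February 1998 case.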

Nicole P. Kurkowski, David J. Stensrud, and Michael E. Baldwin

Abstract

One of the challenges in land surface modeling involves accurately specifying the initial state of the land surface. Most efforts have focused upon using a multiyear climatology to specify the fractional coverage of vegetation. For example, the National Centers for Environmental Prediction (NCEP) Eta Model uses a 5-yr satellite climatology of monthly normalized difference vegetation index (NDVI) values to define the fractional vegetation coverage, or greenness, at 1/8° (approximately 14 km) resolution. These data are valid on the 15th of every month and are interpolated temporally for daily runs. Yet vegetation characteristics change from year to year and are influenced by short-lived events such as fires, crop harvesting, droughts, floods, and hailstorms that are missed using a climatological database. To explore the importance of the initial vegetation state to operational numerical weather forecasts, the response of the Eta Model to initializing fractional vegetation coverage directly from the National Oceanic and Atmospheric Administration's Advanced Very High Resolution Radiometer (AVHRR) data is investigated. Numerical forecasts of the Eta Model, using both climatological and near-real-time values of fractional vegetation coverage, are compared with observations to examine the potential importance of variations in vegetation to forecasts of 2-m temperatures and dewpoint temperatures from 0 to 48 h for selected days during the 2001 growing season. Results show that use of the near-real-time vegetation fraction data improves the forecasts of both the 2-m temperature and dewpoint temperature for much of the growing season, highlighting the need for this type of information to be included in operational forecast models.
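The temporal interpolation described (monthly greenness values valid on the 15th, interpolated to the forecast day) can be sketched as follows; the greenness fractions are invented for illustration, not values from the NDVI climatology:

```python
import datetime as dt

# Illustrative mid-month greenness fractions, keyed by month number
monthly = {4: 0.35, 5: 0.55, 6: 0.75, 7: 0.80}

def greenness(day, table):
    """Linearly interpolate mid-month greenness to an arbitrary day."""
    mid = sorted(dt.date(day.year, m, 15) for m in table)
    for lo, hi in zip(mid, mid[1:]):
        if lo <= day <= hi:
            w = (day - lo).days / (hi - lo).days
            return (1 - w) * table[lo.month] + w * table[hi.month]
    raise ValueError("day outside the interpolation table")

print(round(greenness(dt.date(2001, 5, 30), monthly), 3))
```

A near-real-time initialization would replace the climatological table with the latest satellite-derived values, which is the change whose forecast impact the paper evaluates.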

Kimberly L. Elmore, David M. Schultz, and Michael E. Baldwin

Abstract

A previous study of the mean spatial bias errors associated with operational forecast models motivated an examination of the mechanisms responsible for these biases. One hypothesis for the cause of these errors is that mobile synoptic-scale phenomena are partially responsible. This paper explores this hypothesis using 24-h forecasts from the operational Eta Model and an experimental version of the Eta run with Kain–Fritsch convection (EtaKF).

For a sample of 44 well-defined upper-level short-wave troughs arriving on the west coast of the United States, 70% were underforecast (as measured by the 500-hPa geopotential height), a likely result of being undersampled by the observational network. For a different sample of 45 troughs that could be tracked easily across the country, consecutive model runs showed that the height errors associated with 44% of the troughs generally decreased in time, 11% increased in time, 18% had relatively steady errors, 2% were uninitialized entering the West Coast, and 24% exhibited some other kind of behavior. Thus, landfalling short-wave troughs were typically underforecast (positive errors, heights too high), but these errors tended to decrease as they moved across the United States, likely a result of being better initialized as the troughs became influenced by more upper-air data. Nevertheless, some errors in short-wave troughs were not corrected as they fell under the influence of supposedly increased data amount and quality. These results indirectly show the effect that the amount and quality of observational data have on the synoptic-scale errors in the models. On the other hand, long-wave ridges tended to be underforecast (negative errors, heights too low) over a much larger horizontal extent.

These results are confirmed in a more systematic manner over the entire dataset by segregating the model output at each grid point by the sign of the 500-hPa relative vorticity. Although errors at grid points with positive relative vorticity are small but positive in the western United States, the errors become large and negative farther east. Errors at grid points with negative relative vorticity, on the other hand, are generally negative across the United States. A large negative bias observed in the Eta and EtaKF over the southeast United States is believed to be due to an error in the longwave radiation scheme interacting with water vapor and clouds. This study shows that model errors may be related to the synoptic-scale flow, and even large-scale features such as long-wave troughs can be associated with significant large-scale height errors.
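The segregation by vorticity sign described here amounts to conditional averaging of the height-error field; a sketch with synthetic data (the field values are illustrative, not Eta or EtaKF output):

```python
import numpy as np

def bias_by_vorticity_sign(height_error, vorticity):
    """Mean 500-hPa height error at grid points with positive (cyclonic)
    and nonpositive (anticyclonic) relative vorticity."""
    pos = vorticity > 0
    return {
        "positive_vorticity": float(height_error[pos].mean()),
        "negative_vorticity": float(height_error[~pos].mean()),
    }

rng = np.random.default_rng(0)
err = rng.normal(-5.0, 10.0, size=(50, 50))   # synthetic height errors (m)
vort = rng.normal(0.0, 1e-5, size=(50, 50))   # synthetic vorticity (s^-1)
print(bias_by_vorticity_sign(err, vort))
```

Applied to real model output, averages like these expose whether the bias is tied to troughs (positive vorticity) or ridges (negative vorticity), which is the diagnostic used in this paragraph.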

Melissa S. Bukovsky, John S. Kain, and Michael E. Baldwin

Abstract

Bowing, propagating precipitation features that sometimes appear in NCEP's North American Mesoscale model (NAM; formerly called the Eta Model) forecasts are examined. These features are shown to be associated with an unusual convective heating profile generated by the Betts–Miller–Janjić convective parameterization in certain environments. A key component of this profile is a deep layer of cooling in the lower to middle troposphere. This strong cooling tendency induces circulations that favor expansion of parameterized convective activity into nearby grid columns, which can lead to growing, self-perpetuating mesoscale systems under certain conditions. The propagation characteristics of these systems are examined and three contributing mechanisms of propagation are identified. These include a mesoscale downdraft induced by the deep lower-to-middle tropospheric cooling, a convectively induced buoyancy bore, and a boundary layer cold pool that is indirectly produced by the convective scheme in this environment. Each of these mechanisms destabilizes the adjacent atmosphere and decreases convective inhibition in nearby grid columns, promoting new convective development, expansion, and propagation of the larger system. These systems appear to show a poor correspondence with observations of bow echoes on time and space scales that are relevant for regional weather prediction, but they may provide important clues about the propagation mechanisms of real convective systems.
