Search Results

You are looking at 1–10 of 13 items for:

  • Author or Editor: Zoltan Toth
  • Bulletin of the American Meteorological Society
  • All content
Feifan Zhou and Zoltan Toth

Abstract

The success story of numerical weather prediction is often illustrated with the dramatic decrease of errors in tropical cyclone track forecasts over the past decades. In a recent essay, Landsea and Cangialosi, however, note a diminishing trend in the reduction of perceived positional error (PPE; difference between forecast and observed positions) in National Hurricane Center tropical cyclone (TC) forecasts as they contemplate whether “the approaching limit of predictability for tropical cyclone track prediction is near or has already been reached.” In this study we consider a different interpretation of the PPE data. First, we note that PPE is different from true positional error (TPE; difference between forecast and true positions) as it is influenced by the error in the observed position of TCs. PPE is still customarily used as a proxy for TPE since the latter is not directly measurable. As an alternative, TPE is estimated here with an inverse method, using PPE measurements and a theoretically based assumption about the exponential growth of TPE as a function of lead time. Eighty-nine percent of the variance in the behavior of 36–120-h lead-time 2001–17 seasonally averaged PPE measurements is explained with an error model using just four parameters. Assuming that the level of investments and the pace of improvements to the observing, modeling, and data assimilation systems continue unabated, the four-parameter error model indicates that the time limit of predictability at the 181 nautical mile error level (n mi; 1 n mi = 1.85 km), reached at day 5 in 2017, may be extended beyond 6 and 8 days in 10 and 30 years’ time, respectively.
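As an illustration of the inverse approach described above, the sketch below fits a hypothetical four-parameter error model to synthetic seasonal-mean PPE values: true positional error grows exponentially with lead time and shrinks from season to season, while a constant observed-position error adds in quadrature. The functional form, parameter names, and numbers are assumptions for illustration, not the actual model or data of the paper.

```python
# Hedged illustration: fit an assumed four-parameter error model to
# seasonally averaged perceived positional errors (PPE), separating an
# exponentially growing true positional error (TPE) from a constant
# observed-position error. This is a sketch, not the paper's model.
import numpy as np
from scipy.optimize import curve_fit

def ppe_model(X, e0, decay, growth, sigma_obs):
    """Assumed form:
      TPE(lead, year) = e0 * exp(-decay * (year - 2001)) * exp(growth * lead)
      PPE = sqrt(TPE**2 + sigma_obs**2)   # observation error adds in quadrature
    """
    lead, year = X
    tpe = e0 * np.exp(-decay * (year - 2001.0)) * np.exp(growth * lead)
    return np.sqrt(tpe**2 + sigma_obs**2)

# Synthetic stand-in for 36-120-h, 2001-17 seasonal-mean PPE data (n mi).
leads = np.tile(np.arange(36, 121, 12), 17)
years = np.repeat(np.arange(2001, 2018), 8)
rng = np.random.default_rng(0)
ppe_obs = ppe_model((leads, years), 60.0, 0.03, 0.012, 20.0)
ppe_obs = ppe_obs * (1.0 + 0.05 * rng.standard_normal(ppe_obs.size))

params, _ = curve_fit(ppe_model, (leads, years), ppe_obs,
                      p0=[50.0, 0.02, 0.01, 15.0])
e0, decay, growth, sigma_obs = params
print(f"fitted: e0={e0:.1f} n mi, decay={decay:.3f}/yr, "
      f"growth={growth:.4f}/h, sigma_obs={sigma_obs:.1f} n mi")

# Lead time at which the estimated TPE reaches 181 n mi for a given season:
year = 2017
t_limit = (np.log(181.0 / (e0 * np.exp(-decay * (year - 2001)))) / growth) / 24.0
print(f"estimated predictability limit in {year}: ~{t_limit:.1f} days")
```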

Full access
Zoltan Toth and Eugenia Kalnay

On 7 December 1992, the National Meteorological Center (NMC) started operational ensemble forecasting. The ensemble forecast configuration implemented provides 14 independent forecasts every day verifying on days 1–10. In this paper we briefly review existing methods for creating perturbations for ensemble forecasting. We point out that a regular analysis cycle is a “breeding ground” for fast-growing modes. Based on this observation, we devise a simple and inexpensive method to generate growing modes of the atmosphere.

The new method, “breeding of growing modes,” or BGM, consists of one additional, perturbed short-range forecast, introduced on top of the regular analysis in an analysis cycle. The difference between the control and perturbed six-hour (first guess) forecast is scaled back to the size of the initial perturbation and then reintroduced onto the new atmospheric analysis. Thus, the perturbation evolves along with the time-dependent analysis fields, ensuring that after a few days of cycling the perturbation field consists of a superposition of fast-growing modes corresponding to the contemporaneous atmosphere, akin to local Lyapunov vectors.
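A minimal sketch of this breeding cycle is shown below, with the Lorenz-63 system standing in for the forecast model and the control forecast standing in for each new analysis; the toy model, the RMS rescaling norm, and the cycle length are illustrative assumptions rather than the operational NMC configuration.

```python
# Sketch of a breeding cycle on the Lorenz-63 system: the difference between
# a control and a perturbed short-range forecast is rescaled back to the
# initial perturbation size and reintroduced onto the next "analysis".
import numpy as np

def lorenz63(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """One forward-Euler step of the Lorenz-63 model (toy 'forecast model')."""
    dx = sigma * (x[1] - x[0])
    dy = x[0] * (rho - x[2]) - x[1]
    dz = x[0] * x[1] - beta * x[2]
    return x + dt * np.array([dx, dy, dz])

def forecast(x0, nsteps):
    x = x0.copy()
    for _ in range(nsteps):
        x = lorenz63(x)
    return x

rng = np.random.default_rng(1)
pert_size = 0.1          # fixed initial perturbation amplitude (assumed RMS norm)
steps_per_cycle = 50     # stands in for a 6-h first-guess forecast
analysis = np.array([1.0, 1.0, 20.0])
perturbation = pert_size * rng.standard_normal(3)

for cycle in range(200):
    # Control and perturbed short-range forecasts from the current analysis.
    control = forecast(analysis, steps_per_cycle)
    perturbed = forecast(analysis + perturbation, steps_per_cycle)
    diff = perturbed - control
    # Rescale the grown difference back to the initial perturbation size ...
    perturbation = diff * (pert_size / np.sqrt(np.mean(diff**2)))
    # ... and carry it onto the next "analysis" (here: the control forecast,
    # standing in for the new atmospheric analysis).
    analysis = control

# After spinup, the bred vector points along fast-growing directions of the
# contemporaneous flow (akin to local Lyapunov vectors).
print("bred vector (normalized):", perturbation / np.linalg.norm(perturbation))
```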

The breeding cycle has been designed to model how the growing errors are “bred” and maintained in a conventional analysis cycle through the successive use of short-range forecasts. The bred modes should thus offer a good estimate of possible growing error fields in the analysis. Results from extensive experiments indicate that ensembles of just two BGM forecasts achieve better results than much larger random Monte Carlo or lagged average forecast (LAF) ensembles. Therefore, the operational ensemble configuration at NMC is based on the BGM method to generate efficient initial perturbations.

The only two methods explicitly designed to generate perturbations that contain fast-growing modes corresponding to the evolving atmosphere are the BGM and the method of Lorenz, which is based on the singular modes of the linear tangent model. This method has been adopted operationally at the European Centre for Medium-Range Weather Forecasts (ECMWF) for ensemble forecasting. Both the BGM and the ECMWF methods seem promising, but since it has not yet been possible to compare their operational performance in detail, we limit ourselves to pointing out some of their similarities and differences.

Full access
Zoltan Toth, Steve Albers, and Yuanfu Xie

No abstract available.

Full access
Yuejian Zhu, Zoltan Toth, Richard Wobus, David Richardson, and Kenneth Mylne

The potential economic benefit associated with the use of an ensemble of forecasts versus an equivalent or higher-resolution control forecast is discussed. Neither forecast system is postprocessed, except for a simple calibration applied to make them reliable. A simple decision-making model is used where all potential users of weather forecasts are characterized by the ratio between the cost of their action to prevent weather-related damages, and the loss that they incur in case they do not protect their operations. It is shown that the ensemble forecast system can be used by a much wider range of users. Furthermore, for many users, and beyond a 4-day lead time for all users, the ensemble provides greater potential economic benefit than a control forecast, even if the latter is run at higher horizontal resolution. It is argued that the added benefits derive from 1) the fact that the ensemble provides a more detailed forecast probability distribution, allowing the users to tailor their weather forecast–related actions to their particular cost–loss situation, and 2) the ensemble's ability to differentiate between high- and low-predictability cases. While single forecasts can be statistically supplemented by more detailed probability distributions, it is not clear whether with more sophisticated postprocessing they can identify more and less predictable forecast cases as successfully as ensembles do.
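The cost–loss framework underlying this comparison can be sketched as follows: a user with cost/loss ratio C/L protects whenever the forecast probability of adverse weather exceeds C/L, and potential economic value measures the resulting savings relative to climatology and a perfect forecast. The verification counts below are hypothetical and only illustrate the calculation, not results from the paper.

```python
# Sketch of the simple cost-loss decision model and potential economic value.
def economic_value(hit_rate, false_alarm_rate, base_rate, cost_loss_ratio):
    """Potential economic value for a user with the given cost/loss ratio."""
    a, s = cost_loss_ratio, base_rate
    e_climate = min(a, s)                          # protect always or never
    e_perfect = s * a                              # protect exactly when needed
    e_forecast = (false_alarm_rate * (1 - s) * a   # needless protection
                  + hit_rate * s * a               # justified protection
                  + (1 - hit_rate) * s)            # misses incur the full loss
    denom = e_climate - e_perfect
    return (e_climate - e_forecast) / denom if denom > 0 else 0.0

# Hypothetical verification statistics at one probability threshold for an
# ensemble-based probability forecast vs. a single control forecast.
base_rate = 0.2
ensemble = dict(hit_rate=0.85, false_alarm_rate=0.25)
control = dict(hit_rate=0.70, false_alarm_rate=0.15)

for cl in (0.05, 0.2, 0.5, 0.8):
    v_ens = economic_value(**ensemble, base_rate=base_rate, cost_loss_ratio=cl)
    v_ctl = economic_value(**control, base_rate=base_rate, cost_loss_ratio=cl)
    print(f"C/L={cl:.2f}:  value(ensemble)={v_ens:.2f}  value(control)={v_ctl:.2f}")
```

In a fuller treatment each user would act on the probability threshold that matches their own C/L, which is where the ensemble's advantage over a single forecast comes from.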

Full access
Edmund K. M. Chang, Malaquías Peña, and Zoltan Toth
Full access
Joseph J. Barsugli, Jeffrey S. Whitaker, Andrew F. Loughe, Prashant D. Sardeshmukh, and Zoltan Toth

Can an individual weather event be attributed to El Niño? This question is addressed quantitatively using ensembles of medium-range weather forecasts made with and without tropical sea surface temperature anomalies. The National Centers for Environmental Prediction (NCEP) operational medium-range forecast model is used. It is found that anomalous tropical forcing affects forecast skill in midlatitudes as early as the fifth day of the forecast. The effect of the anomalous sea surface temperatures in the medium range is defined as the synoptic El Niño signal. The synoptic El Niño signal over North America is found to vary from case to case and sometimes can depart dramatically from the pattern classically associated with El Niño. This method of parallel ensembles of medium-range forecasts provides information about the changing impacts of El Niño on timescales of a week or two that is not available from conventional seasonal forecasts.
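As a rough sketch of this diagnostic, the synoptic El Niño signal can be computed as the difference between the means of the two parallel ensembles, one forced with observed tropical SST anomalies and one without; the random arrays below merely stand in for forecast fields, and the ensemble size, grid, and significance screen are assumptions for illustration.

```python
# Sketch: diagnose a "synoptic El Nino signal" from two parallel ensembles.
import numpy as np

rng = np.random.default_rng(2)
n_members, nlat, nlon = 20, 73, 144   # hypothetical ensemble size and grid

# (member, lat, lon) forecast fields at one lead time (e.g., day 7).
with_anom = rng.standard_normal((n_members, nlat, nlon)) + 0.5  # SST anomalies on
without_anom = rng.standard_normal((n_members, nlat, nlon))     # climatological SSTs

# Synoptic El Nino signal: difference of the two ensemble means.
signal = with_anom.mean(axis=0) - without_anom.mean(axis=0)

# A simple significance screen: compare the signal to the member-to-member
# spread (a two-sample t-like statistic).
pooled_std = np.sqrt(with_anom.var(axis=0, ddof=1) / n_members
                     + without_anom.var(axis=0, ddof=1) / n_members)
t_stat = signal / pooled_std
print("grid points with |t| > 2:", int(np.sum(np.abs(t_stat) > 2)))
```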

Knowledge of the synoptic El Niño signal can be used to attribute aspects of individual weather events to El Niño. Three large-scale weather events are discussed in detail: the January 1998 ice storm in the northeastern United States and southeastern Canada, the February 1998 rains in central and southern California, and the October 1997 blizzard in Colorado. Substantial impacts of El Niño are demonstrated in the first two cases. The third case is inconclusive.

Full access
Zoltan Toth, Mark Tew, Daniel Birkenheuer, Steve Albers, Yuanfu Xie, and Brian Motta
Full access
Sharanya J. Majumdar, Edmund K. M. Chang, Malaquías Peña, Renee Tatusko, and Zoltan Toth
Full access
Thomas M. Hamill, Michael J. Brennan, Barbara Brown, Mark DeMaria, Edward N. Rappaport, and Zoltan Toth

Uncertainty information from ensemble prediction systems can enhance and extend the suite of tropical cyclone (TC) forecast products. This article will review progress in ensemble prediction of TCs and the scientific issues in ensemble system development for TCs. Additionally, it will discuss the needs of forecasters and other users for TC uncertainty information and describe some ensemble-based products that could be disseminated in the near future. We hope these proposals will jump-start a community-wide discussion of how to leverage ensemble-based uncertainty information for TC prediction.

A supplement to this article is available online (10.1175/2011BAMS3106.2)

Full access
Hongli Jiang, Steve Albers, Yuanfu Xie, Zoltan Toth, Isidora Jankov, Michael Scotten, Joseph Picca, Greg Stumpf, Darrel Kingfield, Daniel Birkenheuer, and Brian Motta

Abstract

The accurate and timely depiction of the state of the atmosphere on multiple scales is critical to enhance forecaster situational awareness and to initialize very short-range numerical forecasts in support of nowcasting activities. The Local Analysis and Prediction System (LAPS) of the Earth System Research Laboratory (ESRL)/Global Systems Division (GSD) is a numerical data assimilation and forecast system designed to serve such very finescale applications. LAPS is used operationally by more than 20 national and international agencies, including the NWS, where it has been operational in the Advanced Weather Interactive Processing System (AWIPS) since 1995.

Using computationally efficient and scientifically advanced methods such as a multigrid technique that adds observational information on progressively finer scales in successive iterations, GSD recently introduced a new, variational version of LAPS (vLAPS). Surface and 3D analyses generated by vLAPS were tested in the Hazardous Weather Testbed (HWT) to gauge their utility in both situational awareness and nowcasting applications. On a number of occasions, forecasters found that the vLAPS analyses and ensuing very short-range forecasts provided useful guidance on developing severe weather events, including tornadic storms, while in other cases the guidance was less useful.
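The multigrid idea of adding observational information on progressively finer scales in successive iterations can be sketched in one dimension as below; the Gaussian weighting, scale sequence, and successive-correction form are illustrative assumptions, not the vLAPS formulation.

```python
# One-dimensional sketch of a multigrid-style successive-scale analysis:
# each pass spreads the remaining observation increments onto the grid with
# a progressively smaller influence radius, so large scales are fitted first
# and finer detail is added in later iterations.
import numpy as np

rng = np.random.default_rng(3)
grid = np.linspace(0.0, 100.0, 201)           # analysis grid (km)
truth = np.sin(2 * np.pi * grid / 50.0) + 0.3 * np.sin(2 * np.pi * grid / 8.0)

obs_x = np.sort(rng.uniform(0.0, 100.0, 60))  # irregular observation locations
obs = np.interp(obs_x, grid, truth) + 0.05 * rng.standard_normal(obs_x.size)

analysis = np.zeros_like(grid)                # flat first guess

for radius in (25.0, 10.0, 4.0):              # coarse -> fine influence radii (km)
    innovations = obs - np.interp(obs_x, grid, analysis)
    weights = np.exp(-0.5 * ((grid[:, None] - obs_x[None, :]) / radius) ** 2)
    # Weighted average of the remaining innovations at each grid point.
    increment = (weights @ innovations) / (weights.sum(axis=1) + 1e-3)
    analysis += increment

rmse = np.sqrt(np.mean((analysis - truth) ** 2))
print(f"analysis RMSE vs. truth after 3 passes: {rmse:.3f}")
```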

Full access