Search Results

Showing items 31–40 of 42 for Author or Editor: Zoltan Toth (all content).
Istvan Szunyogh, Zoltan Toth, Aleksey V. Zimin, Sharanya J. Majumdar, and Anders Persson

Abstract

The propagation of the effect of targeted observations in numerical weather forecasts is investigated, based on results from the 2000 Winter Storm Reconnaissance (WSR00) program. In this field program, nearly 300 dropsondes were released adaptively at selected locations over the northeast Pacific on 12 separate flight days, with the aim of reducing the risk of major failures in severe winter storm forecasts over the United States. The data impact was assessed by analysis–forecast experiments carried out with the T62 horizontal resolution, 28-level version of the operational global Medium-Range Forecast (MRF) system of the National Centers for Environmental Prediction.

In some cases, storms that reached the West Coast or Alaska were observed in an earlier phase of their development, while at other times the goal was to improve the prediction of storms that formed far downstream of the targeted region. Changes in the forecasts were the largest when landfalling systems were targeted and the baroclinic energy conversion was strong in the targeted region.

As expected from the experience accumulated during the 1999 Winter Storm Reconnaissance (WSR99) program, downstream baroclinic development played a major role in propagating the influence of the targeted data over North America. The results also show, however, that predicting the location of significant forecast changes due to the targeted data can be difficult in the presence of a nonzonal large-scale flow. The strong zonal variations in the large-scale flow over the northeast Pacific during WSR00 did not reduce the positive forecast impact of the targeted data. On the contrary, the overall impact of the dropsonde data was more positive than during WSR99, when the large-scale flow was predominantly zonal on the flight days. This can be attributed to the improved prediction of the large-scale flow, which led to additional improvements in the prediction of the synoptic-scale waves.

Full access
Sharanya J. Majumdar, Edmund K. M. Chang, Malaquías Peña, Renee Tatusko, and Zoltan Toth
Full access
Mio Matsueda, Masayuki Kyouda, Zoltan Toth, H. L. Tanaka, and Tadashi Tsuyuki

Abstract

Atmospheric blocking occurred over the Rocky Mountains at 1200 UTC 15 December 2005. The operational medium-range ensemble forecasts of the Canadian Meteorological Centre (CMC), the Japan Meteorological Agency (JMA), and the National Centers for Environmental Prediction (NCEP), initialized at 1200 UTC 10 December 2005, showed remarkable differences for this event. All of the NCEP members failed to predict the correct location of the blocking, whereas almost all of the JMA members and most of the CMC members predicted it successfully. The present study investigated the factors behind the incorrect NCEP prediction of the blocking location, based on an ensemble-based sensitivity analysis and on JMA global spectral model (GSM) multianalysis ensemble forecasts initialized with the NCEP, regionally amplified NCEP, and globally amplified NCEP analyses.

A sensitive area for the blocking formation was detected over the central North Pacific. In this area, the NCEP control analysis had difficulty handling a cutoff cyclone, and the NCEP initial perturbations were ineffective in reducing the uncertainties in the control analysis. The JMA GSM multianalysis ensemble forecasts revealed that regional amplification of the initial perturbations over the sensitive area could improve forecasts over the blocking region without degrading forecasts over the Northern Hemisphere (NH), whereas global amplification improved forecasts over the blocking region but degraded forecasts over the NH. This finding may suggest that excessive amplification of initial perturbations over nonsensitive areas is undesirable, and that case-dependent rescaling of initial perturbations may be of value compared with the climatology-based rescaling widely used in current operational ensemble prediction systems.
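
As an illustration of the regional-rescaling idea, the following minimal Python sketch (not from the paper; the Gaussian mask, the amplification factor, and the toy grid are all hypothetical choices) amplifies ensemble initial perturbations only within a diagnosed sensitive area:

```python
import numpy as np

def regionally_amplify(perturbations, mask, factor=2.0):
    """Scale each member's initial perturbation by `factor` inside the
    sensitive area, blending smoothly toward 1 at the mask edges.

    perturbations : (n_members, ny, nx) initial perturbation fields
    mask          : (ny, nx) weights in [0, 1]; 1 inside the sensitive area
    """
    scale = 1.0 + (factor - 1.0) * mask           # 1 outside, `factor` inside
    return perturbations * scale[None, :, :]

# Toy example: 10 members on a 50 x 100 grid, with a hypothetical Gaussian
# "sensitive area" standing in for the diagnosed central North Pacific region.
rng = np.random.default_rng(0)
perts = rng.normal(size=(10, 50, 100))
yy, xx = np.mgrid[0:50, 0:100]
mask = np.exp(-(((yy - 25) / 8.0) ** 2 + ((xx - 30) / 12.0) ** 2))
amplified = regionally_amplify(perts, mask, factor=2.0)
```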

Full access
Thomas M. Hamill, Michael J. Brennan, Barbara Brown, Mark DeMaria, Edward N. Rappaport, and Zoltan Toth

Uncertainty information from ensemble prediction systems can enhance and extend the suite of tropical cyclone (TC) forecast products. This article reviews progress in ensemble prediction of TCs and the scientific issues in developing ensemble systems for TCs. It also discusses the needs of forecasters and other users for TC uncertainty information and describes some ensemble-based products that could be disseminated in the near future. We hope these proposals will jump-start a community-wide discussion of how to leverage ensemble-based uncertainty information for TC prediction.

A supplement to this article is available online (DOI: 10.1175/2011BAMS3106.2).

Full access
Zhao-Xia Pu, Eugenia Kalnay, David Parrish, Wanshu Wu, and Zoltan Toth

Abstract

The errors in the first guess (forecast field) of an analysis system vary from day to day, but, as in all current operational data assimilation systems, forecast error covariances are assumed to be constant in time in the NCEP operational three-dimensional variational analysis system (known as the spectral statistical interpolation, or SSI). This study focuses on the impact on the analysis system of modifying the error statistics to include effects of these “errors of the day.” An estimate of forecast uncertainty, derived from the bred growing vectors of the NCEP operational global ensemble forecast, is applied in the NCEP operational SSI analysis system. The growing vectors are used to estimate the spatially and temporally varying degree of uncertainty in the first-guess forecasts used in the analysis. The measure of uncertainty is defined as the ratio of the local amplitude of the growing vectors to a background amplitude measured over a large area. This ratio is used in the SSI system to adjust the observational error term, giving more weight to observations in regions of larger forecast errors. Preliminary experiments with the low-resolution global system show a positive impact of this virtually cost-free method on the quality of the analyses and medium-range weather forecasts, encouraging further tests for operational use. The results of a 45-day parallel run and a discussion of other methods to take advantage of the day-to-day variation in forecast uncertainty provided by the NCEP ensemble forecast system are also presented.
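
To make the reweighting concrete, here is a minimal Python sketch (an assumed implementation, not the operational SSI code; the smoothing windows and the square-root scaling law are illustrative choices) of shrinking observation-error standard deviations where the bred-vector amplitude is large relative to its large-area background value:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adjusted_obs_error(sigma_o, bred_amplitude, background_window=25):
    """Return locally adjusted observation-error standard deviations.

    sigma_o        : (ny, nx) nominal observation-error std dev
    bred_amplitude : (ny, nx) local amplitude of the bred (growing) vectors
    """
    local = uniform_filter(bred_amplitude, size=3)        # local amplitude
    background = uniform_filter(bred_amplitude, size=background_window)
    ratio = local / np.maximum(background, 1e-12)         # errors-of-the-day ratio
    # Larger ratio -> larger first-guess uncertainty there -> shrink the
    # effective observation error so those observations get more weight.
    return sigma_o / np.sqrt(np.maximum(ratio, 1e-3))

rng = np.random.default_rng(1)
amplitude = np.abs(rng.normal(size=(64, 128)))   # stand-in bred-vector amplitude
sigma = np.ones((64, 128))                       # nominal obs error field
sigma_adj = adjusted_obs_error(sigma, amplitude)
```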

Full access
Jeffrey S. Whitaker, Thomas M. Hamill, Xue Wei, Yucheng Song, and Zoltan Toth

Abstract

Real-data experiments with an ensemble data assimilation system using the NCEP Global Forecast System model were performed and compared with the NCEP Global Data Assimilation System (GDAS). All observations in the operational data stream were assimilated for the period 1 January–10 February 2004, except satellite radiances. Because of computational resource limitations, the comparison was done at lower resolution (triangular truncation at wavenumber 62 with 28 levels) than the GDAS real-time NCEP operational runs (triangular truncation at wavenumber 254 with 64 levels). The ensemble data assimilation system outperformed the reduced-resolution version of the NCEP three-dimensional variational data assimilation system (3DVAR), with the biggest improvement in data-sparse regions. Ensemble data assimilation analyses yielded a 24-h improvement in forecast skill in the Southern Hemisphere extratropics relative to the NCEP 3DVAR system (the 48-h forecast from the ensemble data assimilation system was as accurate as the 24-h forecast from the 3DVAR system). Improvements in the data-rich Northern Hemisphere, while still statistically significant, were more modest. It remains to be seen whether the improvements seen in the Southern Hemisphere will be retained when satellite radiances are assimilated. Three different parameterizations of background errors unaccounted for in the data assimilation system (including model error) were tested. Adding scaled random differences between adjacent 6-hourly analyses from the NCEP–NCAR reanalysis to each ensemble member (additive inflation) performed slightly better than the other two methods (multiplicative inflation and relaxation-to-prior).
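
The additive inflation step can be illustrated with a minimal Python sketch (a synthetic archive stands in for the NCEP–NCAR reanalysis here, and the scale factor is a hypothetical tuning parameter): after each analysis, a randomly drawn, scaled difference between adjacent archived states is added to each member.

```python
import numpy as np

def additive_inflation(members, archive, scale=0.3, rng=None):
    """Add scaled differences between randomly chosen adjacent archive
    states to each ensemble member.

    members : (n_members, ny, nx) analysis ensemble
    archive : (n_times, ny, nx) archived analyses, nominally 6 h apart
    """
    if rng is None:
        rng = np.random.default_rng()
    idx = rng.integers(0, archive.shape[0] - 1, size=members.shape[0])
    diffs = archive[idx + 1] - archive[idx]       # adjacent-analysis differences
    return members + scale * diffs

rng = np.random.default_rng(2)
archive = rng.normal(size=(200, 32, 64))          # synthetic stand-in archive
ens = rng.normal(size=(20, 32, 64))
ens_inflated = additive_inflation(ens, archive, scale=0.3, rng=rng)
```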

Full access
Hongli Jiang, Steve Albers, Yuanfu Xie, Zoltan Toth, Isidora Jankov, Michael Scotten, Joseph Picca, Greg Stumpf, Darrel Kingfield, Daniel Birkenheuer, and Brian Motta

Abstract

The accurate and timely depiction of the state of the atmosphere on multiple scales is critical to enhance forecaster situational awareness and to initialize very short-range numerical forecasts in support of nowcasting activities. The Local Analysis and Prediction System (LAPS) of the Earth System Research Laboratory (ESRL)/Global Systems Division (GSD) is a numerical data assimilation and forecast system designed to serve such very finescale applications. LAPS is used operationally by more than 20 national and international agencies, including the NWS, where it has been operational in the Advanced Weather Interactive Processing System (AWIPS) since 1995.

Using computationally efficient and scientifically advanced methods, such as a multigrid technique that adds observational information on progressively finer scales in successive iterations, GSD recently introduced a new, variational version of LAPS (vLAPS). Surface and 3D analyses generated by vLAPS were tested in the Hazardous Weather Testbed (HWT) to gauge their utility for both situational awareness and nowcasting. On a number of occasions, forecasters found that the vLAPS analyses and ensuing very short-range forecasts provided useful guidance on the development of severe weather events, including tornadic storms, while in other cases the guidance was less useful.
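
A minimal sketch of the coarse-to-fine flavor of such an analysis (not the vLAPS code; this is a simple successive-correction scheme, and the Gaussian spreading of residuals and the sequence of length scales are illustrative assumptions) might look like:

```python
import numpy as np

def multiscale_analysis(background, obs_ij, obs_val, scales=(16.0, 8.0, 4.0, 2.0)):
    """Successive-correction analysis: each pass spreads the remaining
    observation residuals with a shorter Gaussian length scale, so large
    scales are fitted first and finer detail is added in later passes."""
    ny, nx = background.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    analysis = background.copy()
    for L in scales:                               # coarse -> fine
        resid = obs_val - analysis[obs_ij[:, 0], obs_ij[:, 1]]
        num = np.zeros_like(analysis)
        den = np.zeros_like(analysis)
        for (i, j), r in zip(obs_ij, resid):
            w = np.exp(-((yy - i) ** 2 + (xx - j) ** 2) / (2.0 * L * L))
            num += w * r
            den += w
        analysis = analysis + num / np.maximum(den, 1e-6)
    return analysis

rng = np.random.default_rng(3)
bg = np.zeros((40, 60))
obs_ij = rng.integers(0, [40, 60], size=(25, 2))   # random observation gridpoints
obs_val = rng.normal(size=25)
ana = multiscale_analysis(bg, obs_ij, obs_val)
```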

Full access
T. N. Krishnamurti, K. Rajendran, T. S. V. Vijaya Kumar, Stephen Lord, Zoltan Toth, Xiaolei Zou, Steven Cocke, Jon E. Ahlquist, and I. Michael Navon

Abstract

This paper addresses the anomaly correlation of 500-hPa geopotential heights from a suite of global models and from a model-weighted ensemble mean called the superensemble. The procedure follows a number of ongoing studies on weather and seasonal climate forecasting, but differs slightly from that used in other current experimental forecasts for other variables. Here a superensemble is constructed for ∇² of the geopotential, based on the daily forecasts of the 500-hPa geopotential field, and the superensemble geopotential is then recovered by solving the Poisson equation. This procedure appears to improve skill at those scales where the variance of the geopotential is large, and it contributes to a marked improvement in the anomaly correlation skill; especially large improvements are noted over the Southern Hemisphere. Consistent day-6 forecast skill above 0.80 is achieved on a day-to-day basis, and the superensemble skills are higher than those of the best model and the ensemble mean. For days 1–6, the percent improvements in anomaly correlation of the superensemble over the best model are 0.3, 0.8, 2.25, 4.75, 8.6, and 14.6, respectively, for the Northern Hemisphere; the corresponding numbers for the Southern Hemisphere are 1.12, 1.66, 2.69, 4.48, 7.11, and 12.17. The largest improvements in anomaly correlation skill are thus realized by the superensemble at forecast days 5 and 6. The collective regional strengths of the member models, which are reflected in the superensemble, provide a consensus product that may be useful for future operational guidance.
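
As a rough illustration of the ∇²-space superensemble, the following Python sketch (with simplifying assumptions: a doubly periodic grid, an FFT-based Poisson solver, and plain least-squares weights standing in for the paper's training-phase regression) fits model weights in Laplacian space and recovers the geopotential by inverting ∇²:

```python
import numpy as np

def laplacian(z):
    """Five-point discrete Laplacian on a doubly periodic grid."""
    return (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
            np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4.0 * z)

def solve_poisson(f):
    """Invert the discrete Laplacian via FFT (the domain mean is set to 0)."""
    ny, nx = f.shape
    ky = 2.0 * np.pi * np.fft.fftfreq(ny)
    kx = 2.0 * np.pi * np.fft.fftfreq(nx)
    denom = (2.0 * (np.cos(ky)[:, None] - 1.0) +
             2.0 * (np.cos(kx)[None, :] - 1.0))
    denom[0, 0] = 1.0                      # avoid dividing the mean mode by 0
    zhat = np.fft.fft2(f) / denom
    zhat[0, 0] = 0.0
    return np.real(np.fft.ifft2(zhat))

def superensemble_weights(train_fc, train_truth):
    """Least-squares model weights fitted in Laplacian space.

    train_fc    : (n_models, n_cases, ny, nx) training forecasts
    train_truth : (n_cases, ny, nx) verifying analyses
    """
    A = np.stack([np.concatenate([laplacian(f).ravel() for f in model])
                  for model in train_fc], axis=1)
    b = np.concatenate([laplacian(t).ravel() for t in train_truth])
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

def superensemble_forecast(forecasts, w):
    """Combine the models in del^2 space, then recover z via Poisson."""
    f = sum(wi * laplacian(z) for wi, z in zip(w, forecasts))
    return solve_poisson(f)

rng = np.random.default_rng(4)
truth = rng.normal(size=(30, 16, 32))              # synthetic training "analyses"
fcsts = truth[None] + 0.5 * rng.normal(size=(3, 30, 16, 32))  # 3 noisy "models"
w = superensemble_weights(fcsts, truth)
z_super = superensemble_forecast(fcsts[:, 0], w)   # combine one case's forecasts
```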

Full access
Dingchen Hou, Mike Charles, Yan Luo, Zoltan Toth, Yuejian Zhu, Roman Krzysztofowicz, Ying Lin, Pingping Xie, Dong-Jun Seo, Malaquías Peña, and Bo Cui

Abstract

Two widely used precipitation analyses are the Climate Prediction Center (CPC) unified global daily gauge analysis and the Stage IV analysis, which is based on quantitative precipitation estimates from multisensor observations. The former is based on gauge records with uniform quality control across the entire domain and thus inspires more confidence, but it provides only 24-h accumulations at ⅛° resolution. The Stage IV dataset, on the other hand, has higher spatial and temporal resolution, but is subject to different methods of quality control and adjustment by the different River Forecast Centers. This article describes a methodology for generating a new dataset that takes advantage of both: the Stage IV 6-h accumulations are adjusted based on the available joint samples of the two analyses. A simple linear regression model is fitted to the archived historical Stage IV and CPC datasets after the former is aggregated to the CPC grid and to daily accumulations. The aggregated Stage IV analysis is then adjusted with this linear model and downscaled back to its original resolution. The new dataset, named the Climatology-Calibrated Precipitation Analysis (CCPA), retains the spatial and temporal patterns of the Stage IV analysis while bringing its long-term average and climatological probability distribution closer to those of the CPC analysis. The main limitation of the methodology at some locations concerns heavy to extreme precipitation events, which the Stage IV dataset tends to underestimate; CCPA cannot effectively correct this because of the linearity of the regression model and the relative scarcity of heavy precipitation in the training sample.
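
A minimal Python sketch of this aggregate–regress–downscale pipeline follows (illustrative only; the grid handling, the per-gridpoint regression, and the ratio-based downscaling rule are simplified assumptions, and synthetic fields stand in for the real datasets):

```python
import numpy as np

def aggregate(stage4_6h, block=4):
    """Aggregate (n_days*4, NY, NX) 6-h fields to (n_days, NY/block, NX/block)
    daily accumulations on a coarser grid (sum in time, mean in space)."""
    n, Y, X = stage4_6h.shape
    daily = stage4_6h.reshape(n // 4, 4, Y, X).sum(axis=1)
    return daily.reshape(-1, Y // block, block, X // block, block).mean(axis=(2, 4))

def fit_and_adjust(stage4_6h, cpc_daily, block=4):
    agg = aggregate(stage4_6h, block)              # coarse daily predictor
    # Per-gridpoint linear fit over the training days: cpc ~ a + b * agg.
    x_mean, y_mean = agg.mean(0), cpc_daily.mean(0)
    slope = (((agg - x_mean) * (cpc_daily - y_mean)).mean(0) /
             np.maximum(((agg - x_mean) ** 2).mean(0), 1e-9))
    intercept = y_mean - slope * x_mean
    adj_daily = intercept + slope * agg            # adjusted coarse daily field
    # Downscale: rescale the 6-h fields so they re-aggregate to adj_daily.
    ratio = adj_daily / np.maximum(agg, 1e-9)
    ratio_hi = np.kron(ratio, np.ones((block, block)))   # back to the fine grid
    return stage4_6h * np.repeat(ratio_hi, 4, axis=0)    # one ratio per day

rng = np.random.default_rng(5)
stage4 = rng.gamma(2.0, 1.0, size=(40, 16, 16))    # 10 days of 6-h fields
cpc = 1.2 * aggregate(stage4) + 0.1                # synthetic "gauge" analysis
ccpa_like = fit_and_adjust(stage4, cpc)
```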

Full access
Edward I. Tollerud, Brian Etherton, Zoltan Toth, Isidora Jankov, Tara L. Jensen, Huiling Yuan, Linda S. Wharton, Paula T. McCaslin, Eugene Mirvis, Bill Kuo, Barbara G. Brown, Louisa Nance, Steven E. Koch, and F. Anthony Eckel
Full access