
The Use of Hourly Model-Generated Soundings to Forecast Mesoscale Phenomena. Part II: Initial Assessment in Forecasting Nonconvective Strong Wind Gusts

Robert E. Hart and Gregory S. Forbes

Department of Meteorology, The Pennsylvania State University, University Park, Pennsylvania

Abstract

This paper presents results from pilot studies of the use of model-generated hourly soundings to forecast nonconvectively produced strong wind gusts. Model soundings from the operational Eta and Meso Eta Models were used for a period of 14 months in 1996 and 1997. Skill does exist in forecasting strong to damaging surface wind gusts, although the forecasts are at the mercy of the model-based boundary layer stability forecast. The wind gust forecasts are more accurate during the daytime, when the boundary layer depth and stability are more accurately forecasted and also more conducive to vertical mixing of boundary layer winds. The results of this preliminary evaluation show that the model sounding–based forecasts provide a reasonable prediction tool for nonconvective strong wind gusts. Additionally, the results warrant more complete evaluations once the dataset has grown to sufficient size.

Corresponding author address: Mr. Robert E. Hart, Department of Meteorology, The Pennsylvania State University, 503 Walker Bldg., University Park, PA 16802-5013.

Email: hart@ems.psu.edu


1. Introduction

In a previous paper (Hart et al. 1998) the utility of hourly model-generated forecast soundings and derived products in forecasting summer (warm season) phenomena was examined. The model soundings were found to be useful in forecasting the timing and initiation of convection, with lesser skill in predicting the intensity of convection. Skill was found in using the products to forecast low-level fog and stratus removal along with episodes of clear-air turbulence (CAT). In addition, the soundings were exceptionally useful when synthesized with hourly real-time surface observations to produce model-enhanced analyses of convective potential and thermodynamic fields.

This paper continues the examination of model sounding applications. Here we examine the utility of the high-resolution hourly model forecasts in predicting the probability of strong or damaging surface wind gusts not produced by thunderstorms. Additional studies are under way that examine the utility of the model soundings in forecasting precipitation type and mesoscale precipitation banding, such as from conditional symmetric instability or frontogenesis.

The hourly resolution of the model forecasts provides forecasters with the ability to predict far more precisely the timing of frontal passages and low-level jets, and the midlevel winds associated with these events. While the 10-m sustained wind velocity is predicted by the model, surface wind gusts are not. Often, these wind gusts can produce tree and structural damage during extreme events such as those observed on 22 February 1997 and 6 March 1997 in the northeastern United States. Typically, the question is not whether midlevel winds are sufficiently strong, but whether boundary layer stability is sufficiently low and vertical wind shear is sufficiently large to allow transport of high-momentum air to the surface as strong (even damaging) wind gusts. This paper illustrates how the model-forecast boundary layer wind velocity can be used to anticipate strong surface wind gusts. The methodology is presented in section 2 and results are given in section 3. A more detailed examination of these results can be found in Hart (1997). A concluding summary is given in section 4, with suggestions for future research on this topic in section 5.

2. Methodology

The utility of hourly model-generated soundings in forecasting nonconvective strong wind gusts was examined for a 14-month period (February 1996–March 1997). The soundings were provided by the National Centers for Environmental Prediction (NCEP) through its anonymous file transfer protocol (FTP) server. Model sounding data from both the Eta (Black et al. 1993) and Meso Eta (MESO) Models (Black 1994) were used. The models were examined both in an operational setting [in cooperation with the National Weather Service (NWS) in State College, PA] and in a research setting. The raw soundings were then displayed graphically as time–height cross sections, time series plots, and skew T–logp animations. A detailed description of the types, advantages, and disadvantages of these formats is given in Hart et al. (1998). All these forecast products were made available to forecasters through the World Wide Web (http://www.ems.psu.edu/wx/etats.html).

The wind gust probability product was one of several experimental products developed during the evaluation period using the raw hourly model sounding data. A detailed description of the other forecast products, their generation procedures, and the software used can be found in Hart (1997) and Hart et al. (1998). Wind gust probability forecasts were generated based on hourly sounding data for the 12 stations shown in Fig. 1. At the end of the evaluation period, a statistical analysis of the forecasting accuracy of the model soundings and experimental products was performed. Verification of experimental products was performed by comparing them to the nearest available surface station. As described in Hart (1997) and Hart et al. (1998), this station–model sounding displacement was occasionally significant (>50 km) and could involve elevation differences of hundreds of meters.

The Eta and MESO operational models have sufficient vertical resolution that the influence of friction on reducing boundary layer wind speeds is realistically parameterized. However, the model might not correctly predict the low-level stability, potentially impacting the accuracy of the forecast low-level winds. With this in mind, an experimental surface wind gust forecast product was derived under the assumption that the wind speeds in the upper regions of the boundary layer are accurately forecasted, but that the surface gusts might not relate best to the 2-m forecast winds. The model layer wind speed that was most correlated to the observed wind gust speed was determined and identified as the “source layer.” Probabilities of surface wind gusts reaching several speed thresholds were then determined empirically based upon the velocity of the wind in the source layer. During this statistical analysis, events were excluded when convectively induced downdrafts were suspected. The empirical prediction equations were then tested on an independent dataset from October 1997 through March 1998.

3. Results

a. Development of forecast equations

Figure 1 shows the model sounding stations (and corresponding observing stations) that were used in the analysis. All forecasts with a lead time of 24 h or less were used. Figure 2 shows the root-mean-square error (rmse) between a model layer forecast wind speed and the observed surface wind gust. For the 48-km Eta Model, the minimum rmse (1.7 m s−1) was found in the second model layer above the ground at any given location. For the MESO, the wind speed in the same (second) layer was most closely related to the surface gust speed (rmse of 1.6 m s−1). In both models, this second layer is therefore deemed the source layer, the one most reliable for forecasting observed surface wind gusts.
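
The source-layer selection described here amounts to computing, for each candidate model layer, the rmse between that layer's forecast wind speed and the observed surface gust, and keeping the layer with the smallest error. A minimal sketch of that calculation is given below; the synthetic arrays and variable names are illustrative only and are not the authors' actual processing code.

```python
import numpy as np

# Synthetic stand-ins for the paired forecast/observation data: one observed
# surface gust per pair, and one forecast wind speed per model layer per pair.
rng = np.random.default_rng(0)
n_pairs = 1000
layer_noise = np.array([2.3, 1.7, 2.0, 2.4, 2.8, 3.2, 3.6, 4.0])  # m/s per layer
observed_gusts = rng.gamma(4.0, 2.0, n_pairs)                     # m/s, synthetic
forecast_winds = observed_gusts[:, None] + rng.normal(
    0.0, layer_noise, (n_pairs, layer_noise.size))                # (pairs, layers)

# rmse of each layer's forecast wind against the observed gust; the layer with
# the smallest error is the "source layer".
rmse = np.sqrt(np.mean((forecast_winds - observed_gusts[:, None]) ** 2, axis=0))
source_layer = int(np.argmin(rmse))  # 0-based; the paper's layer 2 is index 1
print("rmse by layer (m/s):", np.round(rmse, 2))
print("source layer index:", source_layer)
```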

Once the source layer in the Eta and MESO was selected, the forecast model layer 2 wind speed was compared to the observed surface wind gust. From several thousand forecast–observation pairs, probabilities of surface wind gust speed were derived as a function of the model forecast layer 2 wind speed. The results of this analysis are given in Table 1. Empirical curves were then fit to these data to determine the probability of given gust speeds over a continuous range of forecast wind speeds (Table 2).
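
In spirit, the Table 1 probabilities and the Table 2 curves come from a binning step followed by an exponential regression. The sketch below assumes paired layer-2 forecast wind speeds and observed gusts already expressed in mph (the regression variable S in Table 2 is in mph); the function and variable names are ours, not the authors'.

```python
import numpy as np
from scipy.optimize import curve_fit

def empirical_gust_probabilities(layer2_mph, gust_mph, thresholds=(30, 40, 50),
                                 bin_width=5):
    """Round the layer-2 forecast speed to the nearest bin_width mph and return
    the observed frequency (%) of gusts reaching each threshold in each bin."""
    centers = np.round(np.asarray(layer2_mph) / bin_width) * bin_width
    gusts = np.asarray(gust_mph)
    return {c: {t: 100.0 * np.mean(gusts[centers == c] >= t) for t in thresholds}
            for c in np.unique(centers)}

def fit_exponential_curve(prob_table, threshold):
    """Fit P(%) = a * exp(b * S) to the nonzero bin frequencies, as described
    for Table 2 (0% points were ignored in the regression)."""
    s = np.array(sorted(prob_table))
    p = np.array([prob_table[c][threshold] for c in s])
    keep = p > 0.0
    (a, b), _ = curve_fit(lambda x, a, b: a * np.exp(b * x),
                          s[keep], p[keep], p0=(1.0, 0.1), maxfev=10000)
    return a, b
```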

Examination of Table 2 reveals a subtle difference between the two model regression equations. For a given value of “S” (model forecast of sustained wind speed at second layer above the surface), the MESO produces higher probabilities than the Eta. This is partly the result of the MESO layer 2 being slightly closer (2–3 hPa) to the ground, on average, than the Eta Model layer 2. It is also a consequence of the smaller rmse of the MESO layer 2 winds (Fig. 2). This greater reliability of the MESO wind forecasts is, in turn, converted to a higher expected gust for an identical mean wind speed. Thus, for a given value of S, the MESO model indicates a greater surface wind gust potential than the Eta.

For each forecast hour of each station available on the Web site, wind gust probabilities are presented to forecasters in time series format. These probabilities, which are derived using the equations in Table 2, are for gusts of 13 m s−1 (30 mph), 18 m s−1 (40 mph), and 22 m s−1 (50 mph). An example of such a plot is shown in Fig. 3. Examining the probability plots from one model run to the next gives forecasters a feel for the model trends in surface wind gust potential. Forecasters should recognize that elevated locations are likely to have higher gust probabilities, since the source layer there lies higher in the atmosphere (and therefore, on average, within stronger winds). This is consistent with observations of higher wind gust speeds at mountain ridges than in sheltered valleys. Locations prone to orographic channeling may also experience gusts more frequently than the probabilities indicate.
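
At display time, the Table 2 equations are simply evaluated at each hourly layer-2 wind speed and capped at 100%, which is how a probability time series like Fig. 3 can be assembled. The coefficients below are placeholders for illustration, not the published regression values.

```python
import numpy as np

def gust_probability(S_mph, a, b):
    """Exponential regression P(%) = a * exp(b * S), capped at 100% as noted in
    the Table 2 caption. S_mph is the forecast layer-2 wind speed in mph; a and
    b here are hypothetical placeholder coefficients, not the published values."""
    return np.minimum(100.0, a * np.exp(b * np.asarray(S_mph, dtype=float)))

hourly_layer2_mph = [18, 22, 27, 33, 38, 41, 36, 30]   # example hourly forecast
print(np.round(gust_probability(hourly_layer2_mph, a=0.5, b=0.12), 1))
```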

Probabilities of strong wind gusts provide the forecaster with a relative sense of the timing and magnitude of the strong wind threat. However, the question of which threshold probabilities are most appropriate for issuing advisories or warnings remains open. After examination of numerous high wind events in the Northeast, the 40% threshold is recommended for forecasters as a flag for a likely occurrence of such gusts. Figure 4 examines empirically the accuracy of this 40% threshold forecast probability. The probability of detection (POD), false alarm rate (FAR), and critical success index (CSI) are plotted as functions of forecast threshold probability. As expected, both the POD and FAR decrease with increasing threshold probability. However, the maximum CSI (indicating maximum forecast skill) for 13 m s−1 (30 mph) gusts was at the 40% threshold probability (CSI = 0.26; Fig. 4a), confirming the subjective evaluation performed during the research. For 18 m s−1 (40 mph) gusts, the maximum skill was found at a forecast probability threshold of 30% (CSI = 0.07; Fig. 4b). These statistics were obtained by applying the equations in Table 2 to an independent dataset, which is examined in more detail next.
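
The curves in Fig. 4 correspond to sweeping a probability threshold across the forecast–observation pairs and scoring each resulting yes/no contingency table. A hedged sketch follows; FAR is computed here as false alarms divided by the total number of "yes" forecasts, which matches the behavior described in the text (both POD and FAR falling as the threshold rises), and the synthetic data are illustrative only.

```python
import numpy as np

def pod_far_csi(prob_pct, observed_yes, threshold_pct):
    """Score a binary forecast: predict 'yes' when the gust probability meets
    or exceeds threshold_pct."""
    f_yes = np.asarray(prob_pct) >= threshold_pct
    o_yes = np.asarray(observed_yes, dtype=bool)
    hits = np.sum(f_yes & o_yes)
    misses = np.sum(~f_yes & o_yes)
    false_alarms = np.sum(f_yes & ~o_yes)
    pod = hits / max(hits + misses, 1)
    far = false_alarms / max(hits + false_alarms, 1)
    csi = hits / max(hits + misses + false_alarms, 1)
    return pod, far, csi

# Sweep thresholds to find the most skillful one, as in Fig. 4 (synthetic data).
rng = np.random.default_rng(1)
prob = rng.uniform(0, 100, 5000)
obs = rng.random(5000) < prob / 150.0
best = max(range(5, 100, 5), key=lambda t: pod_far_csi(prob, obs, t)[2])
print("threshold with maximum CSI:", best, "%")
```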

b. Testing of equations on independent dataset

The equations developed in section 3a were tested on an independent dataset spanning six months (October 1997–March 1998). Beyond the need for independence from the developmental dataset, this testing was required for several additional reasons. First, the vertical resolution of the Eta Model has increased since the developmental dataset was obtained, pushing the second model level (above ground) slightly closer to the ground. Second, the Eta and MESO boundary layer and radiation parameterizations have since been improved to reduce biases that were present during the developmental dataset period, as noted in Hart et al. (1998).

Table 3 presents the results for this independent dataset and uses the method described for Table 1. The shorter time period of the independent dataset (number of forecast–observation pairs N ≈ 14 000, compared with N ≈ 35 000 for the developmental dataset) resulted in fewer strong wind gusts in the independent sample. In particular, very few 22 m s−1 (50 mph) wind gusts were observed during any MESO forecast period (Table 3b). Consequently, meaningful independent results for 22 m s−1 (50 mph) gusts could not be presented for that model.

The results are more easily interpreted when plotted as a forecast reliability diagram (Fig. 5). The data in Table 3 are plotted (solid lines) along with the regression-based predicted frequency (dashed lines) derived from Table 2. Squares represent the 13 m s−1 (30 mph) gust probability data and circles represent the 18 m s−1 (40 mph) gust probability data. The agreement between the Eta 13 m s−1 (30 mph) independent data and the developmental regression is quite good overall. However, the Eta 18 m s−1 (40 mph) gust independent sample deviates moderately from the developmental sample when the level 2 wind speed exceeds 18 m s−1 (40 mph). For example, the developmental sample crosses the 50% threshold for 18 m s−1 (40 mph) gusts at a model level 2 wind speed of 20 m s−1 (45 mph), while the independent sample crosses the same threshold at 25 m s−1 (55 mph). Thus, in the independent sample a stronger model level 2 wind speed is required to produce an 18 m s−1 (40 mph) surface gust than is the case in the developmental sample.

The MESO (Fig. 5b) exhibits patterns similar to the Eta, although small sample sizes at higher forecast wind speeds make the verification suspect. Regardless, the systematic differences between the two datasets (developmental and independent) appear to be independent of model type (Eta vs MESO).
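
The reliability comparison in Fig. 5 reduces to the following: for each 2 m s−1 (5 mph) layer-2 speed bin, compare the observed gust frequency in the independent sample with the probability the developmental regression predicts at that bin center. A minimal sketch under those assumptions (the coefficients a and b passed in would come from the Table 2 fits; here they are placeholders):

```python
import numpy as np

def reliability_points(layer2_mph, gust_mph, gust_threshold_mph, a, b,
                       bin_width=5):
    """For each layer-2 speed bin, return (bin center, observed frequency %,
    regression-predicted probability %). a and b are the exponential-fit
    coefficients for this gust threshold (placeholders here)."""
    centers = np.round(np.asarray(layer2_mph) / bin_width) * bin_width
    gusts = np.asarray(gust_mph)
    rows = []
    for c in np.unique(centers):
        observed = 100.0 * np.mean(gusts[centers == c] >= gust_threshold_mph)
        predicted = min(100.0, a * np.exp(b * c))
        rows.append((c, observed, predicted))
    return np.array(rows)
```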

Explanations for these differences go beyond the typical problems with using independent datasets. First, the vertical resolution of the Eta has increased by about 25% during the period between the two datasets. Consequently, the second model layer above the ground is slightly closer to the ground (2–3 hPa, on average; ∼15–30 m) than during the developmental dataset period. As a result, the average forecast wind speed at this level is slightly lower, explaining why a stronger wind is required in the independent sample to produce the same surface wind gust probability as in the developmental sample. Second, the boundary layer parameterizations have changed considerably since the developmental sample (Hart et al. 1998). It was noted in Hart et al. (1998) that erroneous radiation and boundary layer parameterizations produced boundary layer wind speeds in the developmental sample that were, on average, too high. The developmental regression is therefore likely affected by this parameterization bias. With the bias decreased since then, the independent sample will undoubtedly produce lower frequencies of 13 m s−1 (30 mph) and 18 m s−1 (40 mph) wind gusts based on the same model level.

c. Implications of variable static stability

One important factor that has become apparent during the operational use of this product is the static stability of the boundary layer. Quite often, erroneous wind gust forecasts appeared to result from not fully accounting for changes in low-level static stability. Therefore, the impact of static stability on the forecast probability needs to be determined. It was shown in section 3a that the 40% and 30% probability forecasts should be used as thresholds for the occurrence of 13 m s−1 (30 mph) and 18 m s−1 (40 mph) wind gusts, respectively. Thus, we used the 40% threshold as a binary verification method for 13 m s−1 (30 mph) gusts (forecast probability <40% = no, forecast probability >40% = yes) and then examined the static stability statistics of the resulting successful and failed forecasts (Fig. 6).

Each forecast (F)–observation (O) pair was classified into one of four mutually exclusive groups: predicted events (F = yes; O = yes), missed events (F = no; O = yes), false alarms (F = yes; O = no), and nonevents (F = no; O = no). Forecasts from both models during the independent period were evaluated for all 12 stations. The circles represent the mean model forecast static stability between the first and second model levels above the ground. The bars above and below the circles indicate the range of one standard deviation. The predicted events fell predominantly within typical atmospheric lapse rates, with only 20% occurring during superadiabatic or inversion lapse rates. In contrast, a significant fraction (40%) of missed events occurred during adiabatic or superadiabatic lapse rates. Therefore, the regression equations likely underestimate the probability of 13 m s−1 (30 mph) wind gusts when the lower atmosphere is statically unstable and mixing is strong. An overwhelming 60% of false alarms occurred during inversion lapse rates and almost none during superadiabatic lapse rates. Physically, this makes good sense, since increased static stability acts to inhibit mixing of the strong winds at model level 2 down to the surface. Therefore, the regression equations (which were developed independent of lapse rate) appear to overestimate the probability of a 13 m s−1 (30 mph) gust when the lapse rate indicates an inversion layer; conversely, they underestimate the probability when the lapse rate is adiabatic or superadiabatic. Finally, the nonevents had no clear preference for low-level lapse rate and occurred frequently in all three stability regimes, presumably because of weak level 2 winds.
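
The stratification behind Fig. 6 can be written as a small classification step: apply the 40% threshold to the 13 m s−1 (30 mph) probability, compare with the observation, and accumulate the model layer 1–2 lapse rate for each outcome. A sketch under those assumptions (the array names and lapse-rate units are illustrative):

```python
import numpy as np

def stability_by_outcome(prob30_pct, gust_mph, lapse_rate):
    """Group forecast-observation pairs into the four Fig. 6 categories using
    the 40% threshold for 13 m/s (30 mph) gusts, and return the mean and
    standard deviation of the low-level lapse rate for each group."""
    f_yes = np.asarray(prob30_pct) >= 40.0
    o_yes = np.asarray(gust_mph) >= 30.0
    lapse = np.asarray(lapse_rate, dtype=float)
    groups = {
        "predicted events": f_yes & o_yes,
        "missed events": ~f_yes & o_yes,
        "false alarms": f_yes & ~o_yes,
        "nonevents": ~f_yes & ~o_yes,
    }
    return {name: (lapse[sel].mean(), lapse[sel].std(ddof=1))
            for name, sel in groups.items() if sel.sum() > 1}
```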

The hypothesized impact of lapse rate on wind gust forecast accuracy is examined quantitatively in Fig. 7. In this figure, the FAR, POD, and CSI are calculated for each of the three stability regimes using all 12 stations and both models for the independent dataset. The FAR is clearly increased when the forecast lapse rate indicates an inversion layer and is a minimum when the forecast lapse rate is superadiabatic. This confirms the argument that the low-level stability strongly affects the ability of boundary layer winds to reach the surface. However, the POD is a maximum when the lapse rate is neither an inversion nor superadiabatic. Explanations for this POD response are not immediately evident. Regardless, the CSI is also a maximum when the lapse rate lies between inversion and superadiabatic; extreme lapse-rate forecasts correspond to reduced skill in the gust forecasts.

To further assess the significance of lapse rate for the forecasts, t tests were performed between the four populations in Fig. 6. All four populations were statistically distinct from one another at the 99% confidence level, with the exception of the predicted events and nonevents, which were not statistically distinct even at the 80% confidence level. These results show clearly that the forecast low-level static stability must be accounted for in future revisions of this forecast method, and in forecasts of surface wind gusts in general.
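
The significance statement above corresponds to two-sample t tests between the lapse-rate populations of the four Fig. 6 groups. A sketch with SciPy is given below; Welch's variant (unequal variances) is assumed here, since the exact form of the original test is not stated.

```python
import itertools
from scipy import stats

def pairwise_lapse_rate_ttests(groups):
    """groups: dict mapping outcome name -> 1-D array of model lapse rates.
    Returns the two-sided p-value for every pair of groups; a pair is distinct
    at the 99% confidence level when its p-value falls below 0.01."""
    results = {}
    for (name_a, a), (name_b, b) in itertools.combinations(groups.items(), 2):
        t_stat, p_val = stats.ttest_ind(a, b, equal_var=False)  # Welch's t test
        results[(name_a, name_b)] = p_val
    return results
```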

Therefore, in an operational setting these plots must be used with caution when the model's forecast low-level thermodynamic structure is in question. If the model appears to have underforecast the strength of low-level stability resulting from a maritime inversion, nocturnal inversion, cold-air damming, or cloudiness, then the forecast probability is likely overestimated. Conversely, if a low-level deck of stratocumulus clouds breaks up and the boundary layer becomes better mixed than the model predicted, then the model forecast probability likely underestimates the gust potential. In general, daytime forecasts of wind gust potential were more accurate than nighttime forecasts, likely a result of the stronger and more variable boundary layer stability at night. Thus, it is again necessary to emphasize the need to use the time–height cross-section fields together with two-dimensional grids and observations when examining the hourly model output.

d. Preliminary set of secondary prediction equations

The biases shown in Figs. 5 and 7, especially for the 18 m s−1 (40 mph) wind gust probability, are very significant when compared to the magnitude of the probabilities themselves. This relative error greatly limits the utility of the 18 m s−1 (40 mph) forecast probabilities, with CSI values of less than 10% (Fig. 7). Since systematic model biases resulting from model changes are the major cause of this error, it is worthwhile to develop a preliminary set of secondary equations for wind gust prediction based solely on the independent dataset. These equations are shown in Table 4. However, we must emphasize strongly that these equations have not been tested against another independent dataset and, thus, must be used with caution. It is quite likely, however, that this second set of equations will provide forecast skill for 18 m s−1 (40 mph) gusts that is significantly higher than that provided by the equations in Table 2. Once these secondary equations have been tested against an independent dataset, they will replace the equations currently used to produce the Web-based daily output.

4. Concluding summary and forecaster guidelines

This paper examined the application of model soundings to forecast nonconvective strong wind gusts. Forecasts for this phenomenon are produced for each operational model run of the Eta and MESO Models. The output from these forecasts is made available to forecasters daily through the World Wide Web. Forecasters have found this product useful in anticipating the duration and intensity of strong surface wind gusts, especially when attempting to determine a window of greatest threat.

Forecast probabilities of surface wind gust potential were derived from forecast boundary layer wind speeds. The forecast probabilities have proven to be a moderately reliable method of anticipating strong to damaging wind gusts during frontal passages, explosive cyclogenesis, and low-level jets. The largest factor in creating false alarms of high winds or missed high wind events is the static stability in the lowest model levels. Future changes to this prediction method, as well as other wind gust prediction methods, must take into account static stability when using model-based wind speeds to forecast surface wind gusts. Consequently, the product is more accurate during the daytime, when the boundary layer stability is less likely to inhibit momentum transfer to the surface.

The independent test has pointed out that empirical forecast products are quite vulnerable to changes in model physics. Forecasters are encouraged to monitor the performance of empirical forecast tools for biases that may develop as models evolve. At present, forecasters can make reasonably confident use of the experimental 13 m s−1 (30 mph) gust probabilities displayed on the Web.

5. Suggestions for future research

The results presented here are preliminary and intended as a pilot project to demonstrate the forecast potential that exists in using model soundings. Future work in the forecasting of surface-based wind gusts must take into account the lowest-level static stability. Further work on this topic is encouraged once the dataset becomes sufficiently large and Eta and MESO have stabilized.

Based on the results shown here and the satisfaction of forecasters and students alike in using the model soundings, it is strongly suggested that other regions of the United States set up a similar real-time Web-based display of the hourly model forecast products. As the number and complexity of models increase, real-time model comparison will become increasingly valuable in the forecasting process and also increasingly time consuming. One way to reduce the time spent on model comparison is through the use of hourly displays of the type shown here and in Hart et al. (1998).

It is recommended that NCEP provide hourly soundings from the Rapid Update Cycle (RUC) model, as well as from the Regional Spectral Model (RSM). These forecasts would allow further model comparison with the Eta and Meso Eta. Also, since the RUC is run six times each day, its profiles would provide an excellent means of monitoring temporal changes in model synoptic-scale and mesoscale phenomena. Several universities across the United States, including The Pennsylvania State University (Penn State), are running real-time versions of the Penn State–National Center for Atmospheric Research (NCAR) Mesoscale Model version 5 (MM5) and placing output on the World Wide Web. In addition to the standard gridded output, it is suggested that these sites also provide hourly profiles of output so that models with other parameterizations can be compared to the Eta and Meso Eta Models on an hourly basis. Ultimately, these forecast methods will benefit the forecaster most when they can be applied to full-resolution hourly model grids. Then forecasters can choose any column of grid points from which to produce a high-resolution model sounding.

Acknowledgments

The authors gratefully acknowledge the support of the National Weather Service and COMET in sponsoring this research through an NWS/COMET Fellowship. In particular, funds were provided by the University Corporation for Atmospheric Research (UCAR) Subawards UCAR S95-59695 and UCAR S96-75664 and pursuant to the National Oceanic and Atmospheric Administration (NOAA) Awards NA37WD0018-01V and NA57GP0576. Participation by the second author was supported in part by the NWS Cooperative Institute at Penn State through NOAA Award NA77WA0566. The views expressed herein are those of the authors and do not necessarily reflect the views of NOAA, its subagencies, or UCAR.

Numerous scientists, students, and forecasters provided invaluable guidance and support during the two years of research, without whom this research could not have been as complete. Keith Brill and Dr. Geoff DiMego of NCEP provided the dataset on which this research was based. Their efforts to constantly improve the quality, completeness, and timeliness of the dataset are greatly appreciated. The efforts to modify the scope of the dataset to meet the needs of this research are also appreciated, including the addition of several model sounding stations in the Pennsylvania region. Finally, the authors thank the Center for Ocean–Land–Atmosphere Studies (COLA) of the University of Maryland for the use of the GrADS software package. Additionally, we would like to thank Dr. Mike Fiorino of the Lawrence Livermore National Laboratory for his help with the package during the early stages of this research.

We would also like to thank the many forecasters at the National Weather Service Office in State College, Pennsylvania, who provided valuable and critical feedback on the forecast products.

REFERENCES

  • Black, T. L., 1994: The new NMC mesoscale Eta Model: Description and forecast examples. Wea. Forecasting, 9, 265–278.

  • ——, D. G. Deaven, and G. DiMego, 1993: The step-mountain eta coordinate model: 80-km “early” version and objective verifications. Technical Procedures Bull. 412, NOAA/NWS, 31 pp.

  • Hart, R. E., 1997: Forecasting studies using hourly model-generated soundings. M.S. thesis, Dept. of Meteorology, The Pennsylvania State University, 166 pp. [Available from The Pennsylvania State University, 503 Walker Building, University Park, PA 16802.].

  • ——, G. S. Forbes, and R. H. Grumm, 1998: The use of hourly model-generated soundings to forecast mesoscale phenomena. Part I: Initial assessment in forecasting warm-season phenomena. Wea. Forecasting,13, 1165–1185.

Fig. 1.

Model sounding stations used in the development of the nonconvective wind gust probability product.


Fig. 2.

Rmse (m s−1) statistics for a comparison between a given model layer forecast wind speed, Eta (+) and MESO (○), and the corresponding observed surface wind gust in nonconvective events. The analysis was performed using 14 months of forecast and observational data and excluded apparent convectively driven events.


Fig. 3.

Example experimental forecast output for surface wind gust potential from Eta.


Fig. 4.

POD, FAR, and CSI as functions of forecast threshold probability for (a) 13 m s−1 (30 mph) and (b) 18 m s−1 (40 mph) gusts. All 12 stations and both Eta and MESO have been used from the independent dataset period. The 40% threshold probability was found to be the threshold producing the greatest forecast skill for 13 m s−1 (30 mph) gusts, and 30% for the 18 m s−1 (40 mph) gust.


Fig. 5.

Forecast reliability of the wind gust product when applied to an independent dataset for (a) Eta and (b) MESO. Each forecast model layer 2 wind speed was rounded to the nearest 2 m s−1 (5 mph) for purposes of verifying by 5-mph intervals. For each 2 m s−1 (5 mph) forecast wind speed group, we determined the occurrence frequency of three different surface wind gust values. These observed probability values (solid lines) are then compared to the predicted probabilities given from the dependent dataset (dashed lines) obtained from Table 2.


Fig. 6.

Analysis of the impact of lower atmosphere static stability on forecast accuracy using the independent dataset. A critical threshold 30-mph forecast gust probability of 40% was used as a binary predictor of 30-mph gusts. Forecast (F)–observed (O) wind gust pairs were divided into four independent groups: predicted events (F = yes; O = yes), missed events (F = no; O = yes), false alarms (F = yes; O = no), and nonevents (F = no; O = no). The mean lapse rate between the first and second model layers above ground for each of the four groups is plotted by a filled circle. The range of lapse rates for each group is represented by the bars at one standard deviation above and below the mean.


Fig. 7.

The impact of low-level forecast lapse rate on POD, FAR, and CSI for (a) 13 m s−1 (30 mph) and (b) 18 m s−1 (40 mph) gusts using the independent dataset.


Table 1.

Statistical results of forecast surface wind speed analysis used in generation of experimental wind gust probability prediction algorithm for (a) Eta and (b) MESO. Each forecast model layer 2 wind speed was rounded to the nearest 2 m s−1 (5 mph). For each 2 m s−1 (5 mph) forecast wind speed group, we determined the percentage occurrence of three different surface wind gust values. Thus, for MESO Model (b), if the model layer 2 wind speed was 16 m s−1 (35 mph), we can expect a surface wind gust of 13 m s−1 (30 mph) slightly more than half the time (51.1%).

Table 2.

Results of regression analysis performed on data presented in Table 1. The best equation fits were produced through exponential regression. Data points of 0% were ignored in the exponential regression. The correlation values shown are for comparison of the above equations to the original data in Table 1. The “S” in each equation is the forecast model layer 2 wind speed in mph. The equations produce a value of forecast probability for each of the three surface wind gust categories. Forecast probabilities greater than 100% are by-products of the regression approach and are forced to 100.

Table 3.

As in Table 1 except analysis performed using an independent dataset from 1997 to 1998.

Table 4.

As in Table 2 except equations were developed using independent dataset (Table 3). Since this independent dataset was six months long, the equations presented here should be used with caution. Further, the equations shown may not be representative of the future model state due to further model changes and improvements.
