  • Andra, D. L., Jr., E. M. Quoetone, and W. F. Bunting, 2002: Warning decision making: The relative roles of conceptual models, technology, strategy, and forecaster expertise on 3 May 1999. Wea. Forecasting, 17, 559–566, doi:10.1175/1520-0434(2002)017<0559:WDMTRR>2.0.CO;2.

  • Boyatzis, R. E., 1998: Transforming Qualitative Information: Thematic Analysis and Code Development. Sage Publications, 184 pp.

  • Creswell, J. W., 2002: Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. Sage Publications, 227 pp.

  • Gao, J., and D. J. Stensrud, 2012: Assimilation of reflectivity data in a convective-scale, cycled 3DVAR framework with hydrometeor classification. J. Atmos. Sci., 69, 1054–1065, doi:10.1175/JAS-D-11-0162.1.

  • Gao, J., M. Xue, K. Brewster, and K. K. Droegemeier, 2004: A three-dimensional variational data analysis method with recursive filter for Doppler radars. J. Atmos. Oceanic Technol., 21, 457–469, doi:10.1175/1520-0426(2004)021<0457:ATVDAM>2.0.CO;2.

  • Gao, J., and Coauthors, 2013: A real-time weather-adaptive 3DVAR analysis system for severe weather detections and warnings with automatic storm positioning capability. Wea. Forecasting, 28, 727–745, doi:10.1175/WAF-D-12-00093.1.

  • Ge, G., J. Gao, and M. Xue, 2012: Diagnostic pressure equation as a weak constraint in a storm-scale three-dimensional variational radar data assimilation system. J. Atmos. Oceanic Technol., 29, 1075–1092, doi:10.1175/JTECH-D-11-00201.1.

  • Heinselman, P. L., B. L. Cheong, R. D. Palmer, D. Bodine, and K. Hondl, 2009: Radar refractivity retrievals in Oklahoma: Insights into operational benefits and limitations. Wea. Forecasting, 24, 1345–1361, doi:10.1175/2009WAF2222256.1.

  • Heinselman, P. L., D. S. LaDue, and H. Lazrus, 2012: Exploring impacts of rapid-scan radar data on NWS warning decisions. Wea. Forecasting, 27, 1031–1044, doi:10.1175/WAF-D-11-00145.1.

  • Lusk, C., P. Kucera, W. Roberts, and L. Johnson, 1999: The process and methods used to evaluate prototype operational hydrometeorological workstations. Bull. Amer. Meteor. Soc., 80, 57–64, doi:10.1175/1520-0477(1999)080<0057:TPAMUT>2.0.CO;2.

  • Morss, R. E., and F. M. Ralph, 2007: Use of information by National Weather Service forecasters and emergency managers during CALJET and PACJET-2001. Wea. Forecasting, 22, 539–555, doi:10.1175/WAF1001.1.

  • Smith, T. M., and Coauthors, 2013: Examination of a real-time 3DVAR analysis system in the Hazardous Weather Testbed. Wea. Forecasting, 29, 63–77, doi:10.1175/WAF-D-13-00044.1.

  • Stensrud, D. J., and Coauthors, 2009: Convective-scale warn-on-forecast system. Bull. Amer. Meteor. Soc., 90, 1487–1499, doi:10.1175/2009BAMS2795.1.

  • Stensrud, D. J., and Coauthors, 2013: Progress and challenges with warn-on-forecast. Atmos. Res., 123, 2–16, doi:10.1016/j.atmosres.2012.04.004.

  • Stewart, T. R., W. R. Moninger, J. Grassia, R. H. Brady, and F. H. Merrem, 1989: Analysis of expert judgment in a hail forecasting experiment. Wea. Forecasting, 4, 24–34, doi:10.1175/1520-0434(1989)004<0024:AOEJIA>2.0.CO;2.
  • Fig. 1. Example domain size (inner 200 km × 200 km box) and region of radar selection (outer 500 km × 500 km box).

  • Fig. 2. Forecaster responses (%) on the usefulness of specific 3DVAR products during operations for a particular event. Forecasters were provided a range of choices from not at all useful (gray) to extremely useful (dark navy blue); if a forecaster did not examine or have access to a product during an event, N/A was chosen (off white). Average ranking refers to the mean value of all responses when usefulness is rated on a numeric scale of 1 (not at all) to 9 (extremely useful); the N/A category is not included in the numeric average.

  • Fig. 3. Forecaster screenshot of regional activity at 1900 UTC 10 May 2012 after the first warnings were issued on shift. (top) The 0.5° elevation reflectivity from KCRP. (bottom) Four panels: (top left) max updraft composite, (top right) max vorticity below 3 km, (bottom left) reflectivity at the −20°C isothermal level, and (bottom right) MESH. The two highlighted storms (circle, square) were the first to receive tornado warnings at the beginning of the operational shift.

  • Fig. 4. Screenshot of a forecaster’s desktop during interrogation of a supercell near Loma Alta, TX, at 2300 UTC 10 May 2012. Shown are (top left) 1-km composite updraft max, (top right) 0.5° reflectivity, (bottom left) 1-km surface max vorticity, and (bottom right) 1–5-km updraft helicity.

  • Fig. 5. Forecaster screenshots of wind event evolution on 14 Jun 2012 near KOAX. (left) Radial velocity at 0.5° elevation from KOAX. (right) The 3DVAR winds at 1 km for the same time periods (approximately 5-min latency is inherent in the analysis product, and all screenshots are time stamped to their arrival time within the AWIPS2 display). Contoured grid values are in meters per second; wind barbs are in knots (kt; 1 kt = 0.51 m s−1).

  • Fig. 6. Forecaster AWIPS display at 2111 UTC 11 May 2011. (top left) 3DVAR 3–7-km vertical vorticity; (top right) storm-relative velocity at 1.8° elevation from the Oklahoma City, OK, radar (KTLX); (bottom left) storm-relative velocity at 2.4° elevation from KTLX; and (bottom right) storm-relative velocity at 3.1° elevation from KTLX.


Forecaster Use and Evaluation of Real-Time 3DVAR Analyses during Severe Thunderstorm and Tornado Warning Operations in the Hazardous Weather Testbed

  • 1 Cooperative Institute for Mesoscale Meteorological Studies, University of Oklahoma, and NOAA/OAR/National Severe Storms Laboratory, Norman, Oklahoma
  • 2 NOAA/OAR/National Severe Storms Laboratory, Norman, Oklahoma

Abstract

A weather-adaptive three-dimensional variational data assimilation (3DVAR) system was included in the NOAA Hazardous Weather Testbed as a first step toward introducing warn-on-forecast initiatives into operations. NWS forecasters were asked to incorporate the data, in conjunction with single-radar and multisensor products in the Advanced Weather Interactive Processing System (AWIPS), into their warning-decision process for real-time events across the United States. During the 2011 and 2012 experiments, forecasters examined more than 36 events, including tornadic supercells, severe squall lines, and multicell storms. Products from the 3DVAR analyses were available to forecasters at 1-km horizontal resolution every 5 min, with a 4–6-min latency, incorporating data from the national Weather Surveillance Radar-1988 Doppler (WSR-88D) network and the North American Mesoscale model. Forecasters found the updraft, vertical vorticity, and storm-top divergence products the most useful for storm interrogation and for quickly visualizing storm trends, often using these tools to increase confidence in a warning decision and/or issue the warning slightly earlier. The 3DVAR analyses were most consistent and reliable when the storm of interest was in close proximity to one of the assimilated WSR-88Ds or when data from multiple radars were incorporated into the analysis. The latter was extremely useful to forecasters because it blended the data rather than requiring separate analysis of multiple radars, especially when range folding obscured the data from one or more of them. The largest hurdle for real-time use of 3DVAR or similar data assimilation products by forecasters is the data latency, as even 4–6 min reduces the utility of the products once new radar scans are available.

Corresponding author address: Kristin Calhoun, National Severe Storms Laboratory, 120 David L. Boren Blvd., Norman, OK 73072. E-mail: kristin.kuhlman@noaa.gov


1. Introduction

The primary objective of the Experimental Warning Program (EWP) in the National Oceanic and Atmospheric Administration (NOAA) Hazardous Weather Testbed (HWT) is to evaluate the accuracy and operational utility of new science, technology, and products in a test bed setting in order to gain feedback for improvements prior to their potential implementation into National Weather Service (NWS) operations. This combination of early feedback and testing is a vital aspect of a successful transition from research to operations. The focus for this experiment in 2011 and 2012 was the incorporation of three-dimensional variational data assimilation (3DVAR) analyses into an operational setting to test their ability to detect supercells as well as other hazardous weather and determine if these analyses can be utilized in operational forecasts and warnings.

The HWT facilities are located in the National Weather Center in Norman, Oklahoma, directly between the operations rooms of the NOAA/NWS Storm Prediction Center and the Norman NWS Weather Forecast Office (WFO). These facilities support collaboration between researchers and operational forecasters as new products and services are developed, evaluated, and disseminated. Every spring, approximately 20–30 NWS operational forecasters from WFOs around the country come to the HWT as part of the EWP to evaluate new products and concepts geared toward severe and hazardous weather, including satellite, radar, lightning, and numerical weather prediction products. This feedback supports testing and future development on multiple fronts: forecasters get early, direct access to the latest research and products; their comments help shape the final products; and researchers gain knowledge of the challenges and constraints of daily operations.

During the spring EWP experiments of 2011 and 2012, a 3DVAR analysis was created in real time with high horizontal resolution and high time frequency using operationally available radar data from the WSR-88D network for evaluation in the HWT (Gao et al. 2013; Smith et al. 2013). NWS forecasters were asked to evaluate the 3DVAR system by considering how the storm structure and morphology derived from the 3DVAR analyses compared to typical data analyses during current forecast/warning operations. Did it provide a useful integration of multiple data streams? Did it produce realistic values of vertical vorticity and updraft intensity? How could such products affect the warning–decision making process? How could the current products be improved?

This research and testing is an initial step in the long-term warn-on-forecast research program to enhance severe weather warning lead times. The current warning paradigm is primarily a warn-on-detection system: human forecasters make a warning decision by mentally integrating multiple environmental and radar proxies for severe weather development and deciding to warn, not to warn, or to warn at the time of visual confirmation (the last option providing zero or negative lead time). Warn on forecast may improve lead times by assimilating multiple data sources into a dynamically consistent analysis that provides the initial conditions for storm-scale numerical model forecasts that forecasters can then use and evaluate (Stensrud et al. 2009, 2013), in addition to providing 1-h forecasts of convective evolution and severe weather hazards. The rapid development of local severe and high-impact weather (such as tornadoes) will require expeditious data assimilation, analysis, and high-resolution NWP forecasts in order to be useful to the forecaster. Examining data from the first step (e.g., current analyses from data assimilation) in an operational environment such as the HWT allows researchers to discover possible difficulties and obtain forecaster input on design early in the process.

2. Methods and available products

The real-time data assimilation method used for evaluation within this project was the Advanced Regional Prediction System (ARPS) 3DVAR formulated in an incremental form (Gao et al. 2004, 2013). This system has the advantage of being able to analyze single-, dual-, or multiple-Doppler wind observations as well as conventional observations (e.g., Oklahoma Mesonet data) and includes background fields from forecast models. The ARPS 3DVAR itself has several features that make it suitable for radar data assimilation. It uses a multipass strategy where observations representing different spatial scales can be analyzed in different passes using appropriate error correlation scales, and an anisotropic recursive filter to model the effect of flow-dependent background error covariance. Recent developments include direct variational analysis of reflectivity, additional equation constraints, and the building of radar beam pattern weighting into the observation operator (Gao and Stensrud 2012; Ge et al. 2012). For this experiment, data from nearby WSR-88D and National Centers for Environmental Prediction (NCEP) North American Mesoscale (NAM) model 12-km-resolution numerical weather prediction products (used as a background field) were included in the analysis evaluated by forecasters in the HWT; a more detailed description of this system is provided by both Gao et al. (2013) and Smith et al. (2013).
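Schematically, an incremental 3DVAR analysis of this kind minimizes a cost function over the analysis increment relative to the background. The form below is the standard textbook statement consistent with the references above, not a full specification of the ARPS implementation:

```latex
J(\delta\mathbf{x}) \;=\; \tfrac{1}{2}\,\delta\mathbf{x}^{\mathrm{T}}\mathbf{B}^{-1}\,\delta\mathbf{x}
\;+\; \tfrac{1}{2}\,\bigl(\mathbf{H}\,\delta\mathbf{x}-\mathbf{d}\bigr)^{\mathrm{T}}\mathbf{R}^{-1}\bigl(\mathbf{H}\,\delta\mathbf{x}-\mathbf{d}\bigr)
\;+\; J_{c}(\delta\mathbf{x}),
\qquad \mathbf{d}=\mathbf{y}-H(\mathbf{x}_{b}),
```

where $\delta\mathbf{x}=\mathbf{x}-\mathbf{x}_{b}$ is the increment from the background $\mathbf{x}_{b}$ (here the NAM forecast), $\mathbf{y}$ the radar and conventional observations, $\mathbf{H}$ ($H$) the linearized (nonlinear) observation operator, $\mathbf{B}$ and $\mathbf{R}$ the background and observation error covariances, and $J_{c}$ the weak equation constraints (e.g., the diagnostic pressure equation of Ge et al. 2012). The multipass anisotropic recursive filter described above models the effect of $\mathbf{B}$ at the different correlation scales of each pass.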

Four separate movable 200 km × 200 km 3DVAR domains with 1-km resolution were maintained throughout the evaluation period. During the 2011 experiment, an automated process positioned three of the domains over regions of high reflectivity using the Warning Decision Support System–Integrated Information (WDSSII) merged composite reflectivity product created for the conterminous United States (CONUS). The fourth domain was chosen by a scientist or forecaster in the test bed specific to the domain of operations. Throughout the 2012 experiment, all four domains were controlled by either a scientist or forecaster in the test bed, defaulting to the automated system only after an hour without human interaction. A larger, 500 km × 500 km window surrounding the initial domain was used to determine the WSR-88Ds that were incorporated into the assimilation (Fig. 1).

Fig. 1.

Example domain size (inner 200 km × 200 km box) and region of radar selection (outer 500 km × 500 km box).

Citation: Weather and Forecasting 29, 3; 10.1175/WAF-D-13-00107.1
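The automated placement of domains over high-reflectivity regions can be sketched as a greedy search over a merged composite reflectivity grid. This is an illustrative sketch only; the function name, thresholds, and non-overlap logic are assumptions, as the actual WDSSII-based placement algorithm is not described in detail here:

```python
import numpy as np

def place_domains(refl, n_domains=3, box_km=200, grid_km=1):
    """Greedy sketch of automated domain placement: repeatedly center a
    box_km x box_km domain on the current composite-reflectivity maximum,
    then suppress that region before choosing the next domain.
    refl is a 2D composite reflectivity grid (dBZ) at grid_km spacing."""
    field = refl.astype(float).copy()
    half = box_km // (2 * grid_km)          # half-width of a domain in grid cells
    centers = []
    for _ in range(n_domains):
        j, i = np.unravel_index(np.argmax(field), field.shape)
        if field[j, i] <= 0:                # no remaining significant echo
            break
        centers.append((j, i))
        # mask the chosen domain so subsequent picks do not overlap it
        field[max(0, j - half):j + half, max(0, i - half):i + half] = -np.inf
    return centers
```

With two well-separated storm clusters on the grid, the sketch returns two domain centers, one per cluster, and stops once no echo remains.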

The forecasters received a 3DVAR analysis every 5 min within the operational Advanced Weather Interactive Processing System (AWIPS) in 2011 and within the second-generation AWIPS2 in 2012. Data latency relative to the current WSR-88D display was about 5–6 min in 2011 and 4–5 min in 2012. The products available to forecasters within AWIPS/AWIPS2 included simulated reflectivity, updraft (composite, every 1 km, and 60-/120-min tracks of composite), downdraft (composite and 60-/120-min tracks; available only in 2011), and vertical vorticity (0–3 km, 3–7 km, composite, and 60-/120-min tracks). During the 2011 experiment, three-dimensional wind vectors were also available to forecasters via the WDSSII platform and were displayed in real time on the situational awareness display, or on a secondary display for interrogation, using the WDSSII Graphical User Interface (GUI) in the HWT. The wind vectors were incorporated into the operational display following the installation of AWIPS2 in the test bed in early 2012. Additionally, following feedback from early forecaster evaluations, products for updraft helicity and storm-top divergence (maximum divergence above 8 km) were created for the 2012 experiment. Forecasters were not limited to using only the 3DVAR data in the warning-decision process; they had access to all base and derived products from the local WSR-88D for the chosen WFO of operations, as well as other products typically available in an operational environment.
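The updraft helicity product combines the updraft and vertical vorticity fields into a single grid. Its conventional definition is the layer integral below; the 1–5-km bounds are taken from the layer shown in the forecaster displays and are assumed here rather than confirmed as the exact implementation:

```latex
\mathrm{UH} \;=\; \int_{z_{0}}^{z_{1}} w\,\zeta \,dz,
\qquad z_{0} = 1\ \mathrm{km}, \quad z_{1} = 5\ \mathrm{km},
```

where $w$ is vertical velocity and $\zeta$ is vertical vorticity; large positive values flag a rotating updraft (mesocyclone), which is why this single field could substitute for separate inspection of the updraft and vorticity products.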

a. Data collection from forecasters

The primary feedback from the forecasters was qualitative in nature and acquired through a variety of processes, including participant (forecaster) observation by the project leads (e.g., Morss and Ralph 2007), forecaster surveys (e.g., Lusk et al. 1999; Heinselman et al. 2009), discussion and interviews between the lead scientists and forecasters during and postevent (e.g., Heinselman et al. 2012), and direct forecaster comments and screenshots collected via blogs posted by the individual forecasters during an event [capturing the event evolution and warning-decision process similar to Andra et al. (2002)]. Each of these methods is described in greater detail below within the context of daily operations in the HWT.

Forecasters typically worked in pairs, with each pair acting as a different NWS WFO from around the CONUS. On extremely busy weather days, two pairs of forecasters might work different sectors of a single WFO. Forecasters were asked not only to issue relevant warnings using WarnGen (a warning generation tool) within the AWIPS platform, but also to blog about their warning decisions or items they found interesting during storm interrogation. Typically, one forecaster would concentrate on the warning products while the other generated the blog posts (note: use of the blog was increased in 2012 following an initial review of data from the 2011 experiment). The forecasters had the ability to take screenshots of their desktops (and were encouraged to do so anytime they saw a feature of interest) and include that image or series of images in a blog post. These blog posts provided not only an opportunity to understand what products or combinations of products forecasters were using in real time, but also a description of why forecasters were using those specific products and a deeper understanding of how they were using the products within their warning-decision process.

Throughout an event, project scientists and developers were also able to have brief conversations with the forecasters, either to clarify what the forecasters were examining or to understand why a forecaster found a product useful (or not). Additionally, forecasters completed online surveys at the end of every day; the surveys contained a series of product rankings (using a Likert scale) and open-ended questions about product use and understanding. Because the majority of the survey questions were open ended, analysis of these data was completed by coding and categorizing the forecaster responses (e.g., Boyatzis 1998; Creswell 2002). Responses were incrementally categorized using two techniques: first, an automated text analysis in which repeated answers (single words or phrases) were grouped together, followed by manual categorization of repeated themes; second, the codes (automated and manually constructed) were organized in a tree structure to easily view grouped themes. Examples of repeated themes included warning decision, time or time scale, situational awareness, visualization, trends, latency, and confidence. Every shift began with a debriefing session covering the previous day of operations, focused on a review of the 3DVAR products and how they were utilized within the context of the particular event. Finally, every week of the experiment concluded with an open discussion among all the scientists/developers and forecasters to gather final thoughts and opinions from using the data in a number of different settings and events.
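The two-step coding of open-ended responses (automated grouping of repeated words, then organization into a theme tree) can be sketched as follows. The theme keywords here are hypothetical stand-ins drawn from the themes listed above; the study's actual codes were built manually:

```python
from collections import Counter

# Hypothetical theme -> keyword mapping; the study's real code tree was
# constructed manually from the survey responses.
THEMES = {
    "warning decision": ["warning", "warn", "decision"],
    "time/latency": ["latency", "lag", "time", "delay"],
    "confidence": ["confidence", "confident", "trust"],
}

def code_responses(responses):
    """First pass: automated grouping of repeated words across the
    open-ended answers. Second pass: roll the word counts up into a
    simple theme 'tree' (theme -> keyword -> count), mimicking the
    two-step categorization described in the text."""
    words = Counter(w for r in responses for w in r.lower().split())
    tree = {}
    for theme, keywords in THEMES.items():
        hits = {k: words[k] for k in keywords if words[k] > 0}
        if hits:                      # keep only themes that occurred
            tree[theme] = hits
    return tree
```

Running this over a handful of answers yields a nested structure whose top level matches the grouped themes, which is the easy-to-view tree the text describes.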

This study accepts a trade-off between realism and control that numerous operational impact studies have faced in the past (e.g., Stewart et al. 1989): in this case, control was given up to produce a realistic operational setting for forecaster evaluation. This issue was alleviated slightly by the number and variety of cases evaluated by a wide range of NWS forecasters (in skill and experience), such that the feedback should be applicable to the larger field of NWS severe weather operations, not just one or two types of case studies. However, the current implementation carries the inherent limitation that quantifiable statistics cannot be produced as they could be from a controlled, repeatable, real-time simulation. Finally, the forecasts and warnings issued by the HWT participants were not compared to those issued by the actual WFO, nor were verification skill scores computed, because of the variety of uncontrolled factors involved in HWT operations, including, but not limited to, individual warning-forecaster experience, knowledge of the local area, and differences in regional storm spotter and emergency management information.

b. Weekly schedule and forecaster expertise

During the EWP, between four and six NWS forecasters visited the test bed each week. In 2011, the forecasters were split into shifts, early (1000–1800 local time) and late (1300–2100 local time) Tuesday–Thursday, with the late shift typically focusing on 3DVAR and other convection- and warning-based products. Monday was used primarily as a training day, as forecasters were introduced to each of the new products. This period included discussion with the project scientists, followed by hands-on training within the AWIPS platform using the Warning Event Simulator (WES) to examine the products within the context of the 19 May 2010 tornadic supercell event in central Oklahoma. In 2012, the schedule was updated so that the forecasters all worked the same 8-h shift, beginning as early as noon or as late as 1400 local time depending on the expected timing of severe weather on a given day. Additionally, in 2012 the majority of the training, including the hands-on WES event (switched to the 24 May 2011 central Oklahoma tornadic supercell event), was completed prior to arrival in Norman, freeing an additional day for real-time product evaluation.

There was a wide range of warning expertise among the HWT participants, who included NWS journeyman and lead forecasters as well as science and operations officers and meteorologists-in-charge. The overall experience level was weighted toward forecasters with more years of NWS warning experience. In 2011, 27.3% of surveys were completed by forecasters with more than 20 yr of experience, 13.6% with 16–20 yr, 20.5% with 11–15 yr, 31.8% with 6–10 yr, and 6.8% with 5 yr or less of warning experience. The distribution of experience shifted slightly in 2012: 28.2% of surveys were completed by forecasters with more than 20 yr of experience, 19.4% with 16–20 yr, 6.8% with 11–15 yr, 21.4% with 6–10 yr, and 24.3% with 5 yr or less of warning experience.

c. Cases evaluated

A variety of weather regimes and storm modes were examined during the 2011 and 2012 EWPs. On almost all operational days in the HWT, we were able to operate in a region where severe or marginally severe activity occurred. In addition to the training cases reviewed by all forecasters, forecasters evaluated 14 different real-time events occurring on nine separate days during the 2011 EWP and 26 different events occurring over 18 days during the 2012 EWP. Some bias in forecaster expectations of the data may have been introduced because the training case for all forecasters was an ideal supercell event surrounded by multiple radars, whereas the real-time events included tornadic supercells, squall lines, and multicell and isolated storms with varying radar coverage patterns; the dates and event types are summarized in Table 1.

Table 1. Events (date, location, and storm mode) in which forecasters evaluated 3DVAR analyses during the 2011 and 2012 EWPs.

3. Forecaster use and evaluation

a. Forecaster surveys

At the end of every shift in the HWT, forecasters were asked to complete an online survey regarding the various experimental products they evaluated that day. The 3DVAR section contained six questions, each to be evaluated within the context of the particular event worked during that shift. Five of the six questions were open ended:

  1. Did the 3DVAR products provide a realistic visualization of the storm structure and morphology? Why or why not?

  2. Did access to the 3DVAR products add value to your warning–decision making process? Please explain.

  3. What do you feel were the strengths and weaknesses of the 3DVAR analysis?

  4. Please describe any additional related features, visualizations, or products that you would like to see developed.

  5. In future years, we will be implementing a forecast step that will provide similar types of products that you have seen this week from the 3DVAR analysis (e.g., updraft intensity, vorticity, wind field, etc.), but from ensembles of 5–20 min in the future. Do you envision your warning–decision making process changing with access to these products? If so, how?

This final question was only answered once by every forecaster on the final survey of the week. All other questions were available after every event. In addition to the open-ended questions, forecasters were also asked to rank each product from “not at all useful” to “extremely useful,” choosing from nine different options, as well as “N/A” if a forecaster did not evaluate that product for a particular shift. In total, the survey was completed 41 times in 2011 and 88 times in 2012.

Results from the Likert-scale question are shown in Fig. 2. The average rankings can help quantify when each product was used and how frequently forecasters found it useful. Because of the lack of use combined with low average rankings, the downdraft, downdraft track, and updraft volume products were removed between the 2011 and 2012 experiments. Overall, the updraft products were the highest rated over both years of the experiment, with a majority of respondents ranking them between “very useful” and “extremely useful.” During 2011, the vorticity products (composite, low level, midlevel, and tracks) were used more often and ranked markedly higher than in the 2012 responses. This is likely due in part to the development of an updraft helicity product (which combines updraft and vorticity into a single grid) between the 2011 and 2012 experiments, but possibly also to the slightly higher percentage (44% versus 37%) of supercell-focused events in 2011. In 2011, the high N/A response on the usefulness of the 2D wind vectors was primarily because the product was not available in AWIPS, though forecasters could view it on the situational awareness display in the HWT via the WDSSII platform. In 2012, with the introduction of AWIPS2 into the HWT, the vectors could be viewed directly on the forecaster display and overall utilization of the product increased, though the responses showed a more varied degree of actual usefulness. Though the majority of forecasters used the simulated reflectivity product, there was a large spread in how useful they actually found it, with the average rating only “moderately useful.”
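The averaging convention for the rankings (a 1–9 numeric scale with N/A responses excluded) can be sketched in a few lines. This is a minimal illustration of the stated convention, not the survey tooling itself:

```python
def average_rating(responses):
    """Mean usefulness on a 1 (not at all useful) to 9 (extremely useful)
    scale, skipping 'N/A' responses per the stated convention; returns
    None when every response was N/A (no numeric average exists)."""
    scores = [r for r in responses if r != "N/A"]
    return sum(scores) / len(scores) if scores else None
```

For example, responses of 9, 7, N/A, and 8 average to 8.0 over the three numeric answers, while the N/A entry affects only the usage percentage, not the mean.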

Fig. 2.

Forecaster responses (%) on the usefulness of specific 3DVAR products during operations for a particular event. Forecasters were provided a range of choices from not at all useful (gray) to extremely useful (dark navy blue); if a forecaster did not examine or have access to a product during an event, N/A was chosen (off white). Average ranking refers to the mean value of all responses when usefulness is rated on a numeric scale of 1 (not at all) to 9 (extremely useful); the N/A category is not included in the numeric average.

Citation: Weather and Forecasting 29, 3; 10.1175/WAF-D-13-00107.1

From the coded responses to the open-ended questions, we find that forecasters felt the 3DVAR products provided a realistic representation of the storms viewed, with some qualifications. One representative forecaster response exemplified these qualifications: “Yes [the 3DVAR products were realistic], but only when convection was relatively close to the radar, and two or more radars were able to look at the storm. Storms too far away from the radar (i.e., lowest slice greater than 8 kft) were not represented well by 3DVAR.” Almost all of the forecaster responses noted that the 3DVAR products added value to the warning-decision process through added confidence and continuity in the warnings; this added confidence was cited in survey responses during both years of the experiment. Specifically, forecasters found that the 3DVAR products “acted like an algorithm that helped highlight attention to areas that deserved closer inspection using the base data” and that “the combined nature of the products (using multiple radars) also served to highlight storms of interest that might not be well sampled by one radar, but were by another.”

However, the main weakness of the product, as noted by multiple respondents, was its latency. In 2011, multiple survey responses noted that it was difficult to use the products within a warning context specifically because of latency issues. Additionally, some forecasters noted that “severe thresholds” varied within the products from case to case. One respondent worried that access to the 3DVAR synthesis products and trends might discourage forecasters from performing a complete analysis of the storms using base radar data such as reflectivity and radial velocity. A final weakness noted in the survey results concerned spatial resolution: some forecasters believed the spatial resolution of the products to be too low (especially compared to current WSR-88D data) and desired increased resolution in future implementations (though not at the cost of additional latency).

The majority of forecasters either noted that they were happy with the current display options or skipped the question regarding additional visualizations and features. The answers that were provided, however, contained two recurring themes: a simple visualization of the “goodness” of the analysis (e.g., how many radars are incorporated into the current analysis) and an easy way to visualize the three-dimensional nature of the products (e.g., an AWIPS-style “all tilts” visualization). The former would give the forecaster an understanding of how much he or she could trust the current analysis; the latter would allow the forecaster to scroll through all levels instead of loading each separately in different panels.

Responses to the final question (answered 14 times in 2011 and 28 times in 2012) noted that added confidence and situational awareness would be the likely uses of 15–20-min forecast ensemble information presented in the same format. While none of the forecasters could see themselves removing radar base data from the actual warning-decision process, numerous respondents mentioned the likelihood of additional lead time, assistance in determining the type of warning (e.g., severe thunderstorm versus tornado), and guidance on whether to extend or terminate a current warning. These statements were often made with the qualification of “if the system is proven reliable.” Multiple respondents also noted that the products and display would need to be simple and easy to understand; if not, they could muddle the analysis and actually delay warnings.

b. Examples from real-time analyses

As previously noted, the real-time blogging by forecasters allowed the project scientists to review cases postevent with forecaster screenshots and notes providing unbiased insights into which products were used and how they were utilized during storm interrogation. Forecasters were encouraged to utilize the blog during both years; however, after reviewing 2011 data, project scientists in the EWP placed greater emphasis on the blogging aspect in 2012. Typically, forecaster comments concentrated on current analyses from storm interrogation and often explained which factors and products contributed to a warning decision. Below are two examples of forecaster comments within the context of the actual operational event.

1) 10 May 2012: South Texas

Convection initiation had already occurred in south Texas when operations began in the HWT on 10 May 2012. Daytime heating of a moist air mass near a surface baroclinic zone, combined with winds veering with height over the region, provided an unstable environment favorable for thunderstorms with rotating updrafts. Supercell storms developed quickly over the Corpus Christi (CRP), San Antonio, and Brownsville county warning areas (CWAs). One forecast team was assigned to the CRP CWA and quickly issued tornado warnings for three supercell storms over the northern part of the CWA and a single supercell southwest of the Corpus Christi city limits (Fig. 3). Because of the intensity of the storms, the northeastern supercell (in the line of three) and the storm southwest of Corpus Christi were given first priority. Throughout this event, the forecasters chose to integrate the 3DVAR data with the radar base data to examine storm trends. The three supercell storms near the CRP–San Antonio CWA border remained a focus for interrogation over the next couple of hours as each storm cycled through periods of strengthening and weakening. The 3DVAR updraft composite and low-level vorticity were utilized during this event to judge trends in storm intensity, particularly as storm trends appeared to change rapidly and tornadoes developed quickly but were short lived. The forecasters used these particular products not only to issue the warnings, but also to determine whether they should maintain or discontinue them.

Fig. 3.

Forecaster screenshot of regional activity at 1900 UTC 10 May 2012 after the first warnings were issued on shift. (top) The 0.5° elevation reflectivity from KCRP. (bottom four panels) (top left) Max updraft composite, (top right) max vorticity below 3 km, (bottom left) reflectivity at the −20°C isothermal level, and (bottom right) MESH. The two highlighted storms (circle, square) were the first to receive tornado warnings at the beginning of the operational shift.

Citation: Weather and Forecasting 29, 3; 10.1175/WAF-D-13-00107.1

Later in the evening, an outflow boundary from the earlier storms remained an area of focus for storm initiation and development farther southwest in the CRP CWA. A forecaster chose to issue and then continue a tornado warning for a supercell storm near Loma Alta at 2300 UTC, noting that “the 3DVAR data [were] extremely useful in identifying where the strongest low-level rotation is located and the potential for a tornado.” Additionally, the forecaster noted that the low-level rotation products were particularly helpful because the 0.5° elevation velocity data from the nearest radar were “contaminated and not very useful” for the storm he was interested in. In Fig. 4, we see a screenshot of the forecaster’s display showing the products he utilized for this event, including the 1-km composite updraft maximum (top left), 0.5° elevation reflectivity (top right), 1-km surface maximum vorticity (bottom left), and 1–5-km updraft helicity (bottom right). Both periods of storm activity on this date emphasize the type of event that this system was originally developed to handle: supercell storms, near one or more radars, with the 3DVAR products utilized to quickly compare trends in storm dynamics (Gao et al. 2013). Overall, during HWT operations on this day, eight different tornadoes were recorded in Storm Data in association with supercell storms over the CRP CWA, the majority of which lasted only briefly and were rated EF0.

Fig. 4.

Screenshot of a forecaster’s desktop during interrogation of a supercell near Loma Alta, TX, at 2300 UTC 10 May 2012. Shown are (top left) 1-km composite updraft max, (top right) 0.5° reflectivity, (bottom left) 1-km surface max vorticity, and (bottom right) 1–5-km updraft helicity.


2) 14 June 2012: Central plains

By 2000 UTC 14 June 2012, a cold front moving across South Dakota and Nebraska, supported by a progressive shortwave trough, served as the focus for convection initiation. A pair of forecasters was assigned the Omaha CWA, where convection initially remained isolated. At the beginning of warning operations, severe hail was the primary concern with these storms. As forecasters examined the 3DVAR products early in the day, they noted how mass continuity was represented in the evolution of the products over time. Specifically, forecaster confidence in increasing storm intensity grew when an increase in the 3DVAR updraft was followed by an increase in storm-top divergence and later by increased values in the multiradar maximum expected size of hail (MESH) product, which was in turn followed by storm reports of 1.5-in. hail at the surface in Butler County, Nebraska. Forecasters also noted that the lack of any rotation signal in the vorticity or updraft helicity products provided confidence that these storms had no supercellular characteristics and that tornadoes were not a threat across the area of operations, though they remained a possibility over nearby CWAs.

Throughout the afternoon, storms continued to grow upscale over the region, transitioning to a line of severe storms with strong winds as the primary threat. With this transition, the forecasters incorporated the 3DVAR low-level winds (1 km AGL) into their analysis. Figure 5 depicts a series of screenshots from the forecaster on shift. Both the Omaha, Nebraska (KOAX), WSR-88D and the 3DVAR analyses captured the possibility of a damaging wind event across southern Omaha and Offutt Air Force Base (AFB). However, as noted by the warning forecaster, the 3DVAR wind products had the advantage of synthesizing multiple radars, with the outflow boundary, peak winds, and wind direction easily evident in the 3DVAR surface wind analysis. Even though the 3DVAR analysis had a latency of approximately 4 min relative to the raw radar data, the forecaster noted in the blog: “Only if the warning forecaster was paying close attention to the base data from KOAX, would he/she have caught the event much ahead of 3DVAR.” The blog also stated that the 3DVAR products captured well the quick evolution from a hail threat to a heavy rain and damaging wind threat.

Fig. 5.

Forecaster screenshots of wind event evolution on 14 Jun 2012 near KOAX. (left) Radial velocity at 0.5° elevation from KOAX. (right) The 3DVAR winds at 1 km for the same time periods (an approximately 5-min latency is inherent in the analysis product, and all screenshots are time stamped to their arrival time within the AWIPS2 display). Contoured grids are in meters per second. Wind barbs are in knots (kt; 1 kt = 0.51 m s−1).


c. Synthesis of forecaster discussion

Discussion with forecasters postevent and during the weekly debriefing session concentrated on how they integrated the 3DVAR products into their warning procedures and how, if at all, the availability of these products modified their storm interrogation and warning-decision process, as well as any difficulties they encountered. Similar to details provided via the surveys and blog posts, the discussion illustrated that forecasters found the 3DVAR products provided a clearer situational awareness picture, were easy to integrate, and provided additional confidence during the warning-decision process.

Most forecasters made heavy use of both the updraft and vorticity products during warning operations. The updraft helicity and storm-top divergence products introduced in 2012 were also well utilized. Forecasters found that these products best highlighted the most intense areas of a storm and provided information on cycling mesocyclones. Forecasters turned to these products (both alone and in combination) particularly when trying to diagnose a large number of storms and when “sitting on the fence” about issuing a warning. Additionally, forecasters mentioned that both the updraft and vertical vorticity products were more efficient for diagnosing storm intensity and rotation than existing algorithms.

Within AWIPS/AWIPS2, forecasters had the option of combining the 3DVAR products with a multitude of sensors and products, both experimental and those currently available in NWS forecast offices. Many forecasters found that combining the updraft or updraft track products with lightning (total and cloud to ground) and/or MESH plots (instant and track) provided a more complete view of storm intensity and trends. The correlations they saw between these different products typically increased confidence in the storm diagnosis or warning decision. However, some forecasters were concerned about the time lag and resulting displacement relative to current WSR-88D base data, while a few found the latency severe enough that the 3DVAR data could not be used in a real-time sense and were not useful for issuing warnings.

Another display combination that was often utilized included radial velocity from multiple elevations from the nearest WSR-88D radar and vertical vorticity from the 3DVAR analysis (e.g., Fig. 6). Forecasters relied on this type of display to provide guidance on how the 3DVAR analyses of vertical vorticity compared to single-radar velocity trends, as well as to provide a visualization of the latency of the product relative to the current WSR-88D scans.

Fig. 6.

Forecaster AWIPS display at 2111 UTC 11 May 2011. (top left) 3DVAR 3–7-km vertical vorticity; (top right) storm relative velocity at 1.8° elevation from the Oklahoma City, OK, radar (KTLX); (bottom left) storm relative velocity at 2.4° elevation from KTLX; and (bottom right) storm relative velocity at 3.1° elevation from KTLX.


A recurring theme in the weekly debriefs was the variance in specific values and in the quality of the 3DVAR products depending on whether storms were close to a radar or multiple radars were incorporated into the analysis. Forecasters noted that the supercell events on 24 May and 9 June 2011 seemed to be handled particularly well within the analyses, perhaps because of their close proximity to the WSR-88D. Specifically, the rapid increase in strength of both the vorticity and vertical velocity for the 9 June 2011 supercell was the deciding factor for one forecaster to issue a tornado warning on the storm approaching Wichita, Kansas. Because the 3DVAR analyses were developed with the intent of diagnosing supercells (Gao et al. 2013), it is not surprising that the system handled these events much better than other cases. However, the forecasters were not limited to supercell events, and these other storm modes revealed some of the current limitations of the data. Storms that were less intense or at farther distances from the radar often exhibited analysis artifacts. Specifically, on 17 May 2011 in the mid-Atlantic region, interacting storms and shallow convection often produced large up- and downdrafts in regions of clear air between storms or at unrealistic heights (e.g., a large downdraft above storm top). It is likely that the mass-continuity constraint in the 3DVAR cost function produced some of these artifacts, particularly in regions of poor resolution far from a radar. Additionally, sidelobe contamination and other radar data quality issues would occasionally propagate into the analysis. However, none of the forecasters considered these problems significant enough to remove all value from the 3DVAR analyses.
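
The role of the mass-continuity constraint can be made concrete with a schematic form of the analysis cost function (after Gao et al. 2004; Ge et al. 2012). The symbols below are generic placeholders rather than the exact operational formulation:

```latex
J(\mathbf{x}) = \tfrac{1}{2}(\mathbf{x}-\mathbf{x}_b)^{\mathrm{T}}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
 + \tfrac{1}{2}\sum_{i}\left[H_i(\mathbf{x})-\mathbf{y}_i\right]^{\mathrm{T}}\mathbf{R}_i^{-1}\left[H_i(\mathbf{x})-\mathbf{y}_i\right]
 + \frac{\lambda_c}{2}\sum_{\mathrm{grid}}\left(\frac{\partial(\bar{\rho}u)}{\partial x}
 + \frac{\partial(\bar{\rho}v)}{\partial y}
 + \frac{\partial(\bar{\rho}w)}{\partial z}\right)^{2}
```

where $\mathbf{x}_b$ is the background state, $\mathbf{y}_i$ are the radar observations with forward operators $H_i$, and $\lambda_c$ weights the anelastic mass-continuity penalty. Because this penalty is enforced even where radar coverage is sparse, the minimization can compensate for poorly observed horizontal divergence with spurious vertical motion, a plausible mechanism for the clear-air and above-storm-top artifacts the forecasters described.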

Multiple forecasters commented that they would like the ability to display and toggle through the 3DVAR products in a manner similar to an all-tilts feature available for radar data in AWIPS. The all-tilts option allows the forecaster to simply hit the up arrow to move spatially from one radar tilt (e.g., 0.5°) to the next highest (0.9°). Similarly, forecasters would like to move from one level of vertical velocity grids to the next highest with just the up and down arrows.

Discussion periods with the forecasters in 2011 resulted in a recommendation to create updraft helicity and storm-top divergence products, the former similar to that currently available from ensemble and short-range forecast models such as the High-Resolution Rapid Refresh (HRRR). As seen in the surveys, these products were particularly well utilized during 2012 operations. Given the limited screen space a single forecaster has during warning operations, combination products such as updraft helicity can quickly become heavily relied upon.
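
For reference, updraft helicity is commonly defined as the vertical integral of vertical velocity times vertical vorticity over a layer, here 1–5 km AGL to match the product described above. The sketch below illustrates that column-wise computation under this definition; the function name, level spacing, and sample values are invented for the example and are not the operational HWT code:

```python
import numpy as np

def updraft_helicity(w, zeta, z, z_bot=1000.0, z_top=5000.0):
    """Layer-integrated updraft helicity, UH = integral of w * zeta dz.

    w    : vertical velocity (m s-1) at each analysis level of one grid column
    zeta : vertical vorticity (s-1) at the same levels
    z    : level heights (m AGL), ascending
    Returns UH (m^2 s-2) over [z_bot, z_top] (default 1-5 km AGL).
    """
    mask = (z >= z_bot) & (z <= z_top)
    wz = w[mask] * zeta[mask]
    dz = np.diff(z[mask])
    # Trapezoidal integration over the retained layer
    return float(np.sum(0.5 * (wz[:-1] + wz[1:]) * dz))

# Hypothetical column on 1-km analysis levels (values invented for illustration)
z = np.arange(0.0, 8001.0, 1000.0)
w = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 15.0, 10.0, 5.0, 0.0])  # m s-1
zeta = np.full_like(w, 0.005)                                     # s-1
uh = updraft_helicity(w, zeta, z)  # 275.0 m^2 s-2 for this column
```

A positive UH marks a rotating updraft; a gridded product would presumably evaluate the same integral at every horizontal point of the 3DVAR analysis.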

Finally, the weekly summaries captured the ease with which the forecasters were able to integrate the 3DVAR products into the warning-decision process. It was noted, “the multiple-radar integration really helped fill in gaps of having to analyze multiple radars separately, especially where storms from one radar were in the purple haze of the range-folded obscuration.” Additionally, the vorticity products (both low and midlevel) reinforced where the strongest rotation was tracking and were of particular value in determining whether storm tracks were deviating, which guided polygon placement.

4. Conclusions

Overall, forecasters concluded that the 3DVAR products held much promise for warning operations, in particular the updraft, vorticity, and storm-top divergence products. Multiple forecasters noted that the 3DVAR analyses provided increased confidence in their warning decisions, offered guidance in warning product choices (e.g., severe or tornado) and warning continuance, and allowed them to issue warnings slightly earlier than if using base radar data alone. However, the data latency was an issue for many forecasters, as were some of the unrealistic artifacts and the variance in product values for convective modes other than supercellular or at far distances from a radar. Currently, these factors place some limitations on the usefulness of the data in real-time warning operations. But as computation time continues to decrease and display methods are enhanced, this type of information could become key to tying together data from multiple sources, reducing the amount of information the forecaster must sort through at any given time and streamlining the warning-decision process.

Future additions in the HWT include reducing the data latency and cycling the 3DVAR analysis to improve the estimates of other variables, such as temperature and water vapor. As computational speed increases over the next decade and we move in the direction of a warn-on-forecast system, we expect that these same types of synthesis products will be made available to NWS forecasters via a short-term (0–1 h) ensemble forecast. To quantify the effect of the 3DVAR analyses on warning performance in terms of current NWS statistics, such as lead time or probability of detection, a future controlled experiment using a simulated real-time event will be necessary. In the meantime, based on forecaster feedback from real-time warning evaluation, the 3DVAR analyses appear to be a promising addition to operations.

Acknowledgments

We thank all the forecasters that visited the test bed during the experimental warning program and provided the feedback discussed in this paper. Every comment discussing workload and color scale was useful toward making better and more functional tools. Additionally, Karen Cooper and Brett Morrow at NSSL/INDUS were instrumental in getting the system operational and keeping it going throughout the experiment. We also thank the three anonymous reviewers for comments that helped improve the quality of the manuscript. Finally, we also want to acknowledge the other scientists and support staff that organized, tested, and participated in HWT activities during both years of 3DVAR testing in the EWP including Greg Stumpf, Kiel Ortega, Jim LaDue, Gabe Garfield, and Kevin Manross. Funding was provided by NOAA/Office of Oceanic and Atmospheric Research under NOAA–University of Oklahoma Cooperative Agreement NA11OAR4320072, U.S. Department of Commerce.

REFERENCES

  • Andra, D. L., Jr., E. M. Quoetone, and W. F. Bunting, 2002: Warning decision making: The relative roles of conceptual models, technology, strategy, and forecaster expertise on 3 May 1999. Wea. Forecasting, 17, 559–566, doi:10.1175/1520-0434(2002)017<0559:WDMTRR>2.0.CO;2.

  • Boyatzis, R. E., 1998: Transforming Qualitative Information: Thematic Analysis and Code Development. Sage Publications, 184 pp.

  • Creswell, J. W., 2002: Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. Sage Publications, 227 pp.

  • Gao, J., and D. J. Stensrud, 2012: Assimilation of reflectivity data in a convective-scale, cycled 3DVAR framework with hydrometeor classification. J. Atmos. Sci., 69, 1054–1065, doi:10.1175/JAS-D-11-0162.1.

  • Gao, J., M. Xue, K. Brewster, and K. K. Droegemeier, 2004: A three-dimensional variational data analysis method with recursive filter for Doppler radars. J. Atmos. Oceanic Technol., 21, 457–469, doi:10.1175/1520-0426(2004)021<0457:ATVDAM>2.0.CO;2.

  • Gao, J., and Coauthors, 2013: A real-time weather-adaptive 3DVAR analysis system for severe weather detections and warnings with automatic storm positioning capability. Wea. Forecasting, 28, 727–745, doi:10.1175/WAF-D-12-00093.1.

  • Ge, G., J. Gao, and M. Xue, 2012: Diagnostic pressure equation as a weak constraint in a storm-scale three-dimensional variational radar data assimilation system. J. Atmos. Oceanic Technol., 29, 1075–1092, doi:10.1175/JTECH-D-11-00201.1.

  • Heinselman, P. L., B. L. Cheong, R. D. Palmer, D. Bodine, and K. Hondl, 2009: Radar refractivity retrievals in Oklahoma: Insights into operational benefits and limitations. Wea. Forecasting, 24, 1345–1361, doi:10.1175/2009WAF2222256.1.

  • Heinselman, P. L., D. S. LaDue, and H. Lazrus, 2012: Exploring impacts of rapid-scan radar data on NWS warning decisions. Wea. Forecasting, 27, 1031–1044, doi:10.1175/WAF-D-11-00145.1.

  • Lusk, C., P. Kucera, W. Roberts, and L. Johnson, 1999: The process and methods used to evaluate prototype operational hydrometeorological workstations. Bull. Amer. Meteor. Soc., 80, 57–64, doi:10.1175/1520-0477(1999)080<0057:TPAMUT>2.0.CO;2.

  • Morss, R. E., and F. M. Ralph, 2007: Use of information by National Weather Service forecasters and emergency managers during CALJET and PACJET-2001. Wea. Forecasting, 22, 539–555, doi:10.1175/WAF1001.1.

  • Smith, T. M., and Coauthors, 2013: Examination of a real-time 3DVAR analysis system in the Hazardous Weather Testbed. Wea. Forecasting, 29, 63–77, doi:10.1175/WAF-D-13-00044.1.

  • Stensrud, D. J., and Coauthors, 2009: Convective-scale warn-on-forecast system. Bull. Amer. Meteor. Soc., 90, 1487–1499, doi:10.1175/2009BAMS2795.1.

  • Stensrud, D. J., and Coauthors, 2013: Progress and challenges with warn-on-forecast. Atmos. Res., 123, 2–16, doi:10.1016/j.atmosres.2012.04.004.

  • Stewart, T. R., W. R. Moninger, J. Grassia, R. H. Brady, and F. H. Merrem, 1989: Analysis of expert judgment in a hail forecasting experiment. Wea. Forecasting, 4, 24–34, doi:10.1175/1520-0434(1989)004<0024:AOEJIA>2.0.CO;2.

    • Search Google Scholar
    • Export Citation