• Anderson-Berry, L., T. Keenan, J. Bally, R. Pielke Jr., R. Leigh, and D. King, 2004: The societal, social, and economic impacts of the World Weather Research Programme Sydney 2000 Forecast Demonstration Project (WWRP S2000 FDP). Wea. Forecasting, 19, 168–178.

  • Andra, D. L., Jr., E. M. Quoetone, and W. F. Bunting, 2002: Warning decision making: The relative roles of conceptual models, technology, strategy, and forecaster expertise on 3 May 1999. Wea. Forecasting, 17, 559–566.

  • Arndt, D. S., J. B. Basara, R. A. McPherson, B. G. Illston, G. D. McManus, and D. B. Demko, 2009: Observations of the overland intensification of Tropical Storm Erin (2007). Bull. Amer. Meteor. Soc., 90, 1079–1093.

  • Bernard, R., 2006: Research Methods in Anthropology: Qualitative and Quantitative Approaches. 4th ed. AltaMira Press, 803 pp.

  • Brotzge, J., and S. Erickson, 2010: Tornadoes without NWS warning. Wea. Forecasting, 25, 159–172.

  • Burgess, D., R. J. Donaldson, T. Sieland, J. Hinkelman, D. Sirmans, K. Shreeve, K. Glover, and I. Goldman, 1979: Final report on the Joint Doppler Operational Project (JDOP) 1976–1979. NOAA Tech. Memo. ERL NSSL-86, 84 pp. [NTIS PB80-107188/AS.]

  • Curtis, C. D., and S. M. Torres, 2011: Adaptive range oversampling to achieve faster scanning on the National Weather Radar Testbed Phased Array Radar. J. Atmos. Oceanic Technol., 28, 1581–1597.

  • Emersic, C., P. L. Heinselman, D. R. MacGorman, and E. C. Bruning, 2011: Lightning activity in a hail-producing storm observed with phased-array radar. Mon. Wea. Rev., 139, 1809–1825.

  • Endsley, M. R., 1988: Design and evaluation for situation awareness enhancement. Proc. Human Factors Society 32nd Annual Meeting, Santa Monica, CA, Human Factors Society, 97–101.

  • Evans, C., R. S. Schumacher, and T. J. Galarneau Jr., 2011: Sensitivity in the overland reintensification of Tropical Cyclone Erin (2007) to near-surface soil moisture characteristics. Mon. Wea. Rev., 139, 3848–3870.

  • Friday, E. W., Jr., 1994: The modernization and associated restructuring of the National Weather Service: An overview. Bull. Amer. Meteor. Soc., 75, 43–52.

  • Hahn, B. B., E. Rall, and D. W. Klinger, 2003: Cognitive task analysis of the warning forecaster task. Klein Associates Inc., 15 pp. [Available from Klein Associates Inc., 1750 Commerce Center Blvd. North, Fairborn, OH 45324-6362.]

  • Heinselman, P. L., and S. M. Torres, 2011: High-temporal-resolution capabilities of the National Weather Radar Testbed Phased-Array Radar. J. Appl. Meteor. Climatol., 50, 579–593.

  • Heinselman, P. L., D. L. Priegnitz, K. L. Manross, T. M. Smith, and R. W. Adams, 2008: Rapid sampling of severe storms by the National Weather Radar Testbed Phased Array Radar. Wea. Forecasting, 23, 808–824.

  • Hoffman, R. R., J. W. Coffey, K. M. Ford, and J. D. Novak, 2006: A method for eliciting, preserving, and sharing the knowledge of forecasters. Wea. Forecasting, 21, 416–428.

  • Joslyn, S., and D. Jones, 2008: Strategies in naturalistic decision making: A cognitive task analysis of naval weather forecasting. Naturalistic Decision Making and Macrocognition, J. M. Schraagen et al., Eds., Ashgate Publishing, 183–201.

  • Klein, G., 1998: Sources of Power: How People Make Decisions. The MIT Press, 330 pp.

  • LaDue, D. S., P. L. Heinselman, and J. F. Newman, 2010: Strengths and limitations of current radar systems for two stakeholder groups in the southern plains. Bull. Amer. Meteor. Soc., 91, 899–910.

  • Lakshmanan, V., T. Smith, G. Stumpf, and K. Hondl, 2007: The Warning Decision Support System–Integrated Information. Wea. Forecasting, 22, 596–612.

  • Lusk, C. M., T. R. Stewart, K. R. Hammond, and R. J. Potts, 1990: Judgment and decision making in dynamic tasks: The case of forecasting the microburst. Wea. Forecasting, 5, 627–639.

  • Magsig, M. A., and E. M. Page, 2002: Development and implementation of the NWS warning event simulator version 1.0. Preprints, Interactive Symp. on AWIPS, Orlando, FL, Amer. Meteor. Soc., J236–J238.

  • Mertens, D. M., 2005: Research and Evaluation in Education and Psychology. 2nd ed. Sage Publications, 507 pp.

  • Moller, A. R., C. A. Doswell III, M. P. Foster, and G. R. Woodall, 1994: The operational recognition of supercell thunderstorm environments and storm structures. Wea. Forecasting, 9, 327–347.

  • Monteverdi, J. P., and R. Edwards, 2010: The redevelopment of a warm core structure in Erin: A case of inland tropical storm formation. Electron. J. Severe Storms Meteor., 5, 1–18.

  • Morss, R. E., and F. M. Ralph, 2007: Use of information by National Weather Service forecasters and emergency managers during CALJET and PACJET-2001. Wea. Forecasting, 22, 539–555.

  • Morss, R. E., O. V. Wilhelmi, and M. W. Downton, 2005: Flood risk, uncertainty, and scientific information for decision making: Lessons learned from an interdisciplinary project. Bull. Amer. Meteor. Soc., 86, 1593–1601.

  • Newman, J. F., and P. L. Heinselman, 2011: Evolution of a quasi-linear convective system sampled by phased-array radar. Preprints, 27th Conf. on Interactive Information Processing Systems (IIPS), Seattle, WA, Amer. Meteor. Soc., 13B.5. [Available online at http://ams.confex.com/ams/91Annual/webprogram/Paper184493.html.]

  • Newman, J. F., and P. L. Heinselman, 2012: Evolution of a quasi-linear convective system sampled by phased array radar. Mon. Wea. Rev., in press.

  • NOAA, 2006: Operational modes and volume coverage patterns. Doppler Radar Meteorological Observations: Federal Meteorological Handbook 11; Part C: WSR-88D Products and Algorithms, FCM-H11C-2006, Office of the Federal Coordinator for Meteorological Services and Supporting Research, Rockville, MD, 5-1–5-23.

  • OFCM, 2006: Federal research and development needs and priorities for phased array radar. Rep. FCM-R25-2006, Interdepartmental Committee for Meteorological Services and Supporting Research, Committee for Cooperative Research Joint Action Group for Phased Array Radar Project, 62 pp. [Available online at www.ofcm.gov/r25-mpar/fcm-r25.htm.]

  • Ortega, K. L., T. M. Smith, K. L. Manross, A. G. Kolodziej, K. A. Scharfenberg, A. Witt, and J. J. Gourley, 2009: The Severe Hazards Analysis and Verification Experiment. Bull. Amer. Meteor. Soc., 90, 1519–1530.

  • Pliske, R., D. W. Klinger, R. Hutton, B. Crandall, B. Knight, and G. Klein, 1997: Understanding skilled weather forecasting: Implications for training and the design of forecasting tools. Final Contractor Rep. AL/HR-CR-1997-0003, 122 pp.

  • Roebber, P. J., D. M. Schultz, and R. Romero, 2002: Synoptic regulation of the 3 May 1999 tornado outbreak. Wea. Forecasting, 17, 399–429.

  • Scharfenberg, K. A., and Coauthors, 2005: The Joint Polarization Experiment: Polarimetric radar in forecasting and warning decision making. Wea. Forecasting, 20, 775–788.

  • Schraagen, J. M., L. G. Militello, T. Ormerod, and R. Lipshitz, Eds., 2008: Naturalistic Decision Making and Macrocognition. Ashgate Publishing, 411 pp.

  • Singleton, R. A., and B. Straits, 2005: Approaches to Social Research. 4th ed. Oxford University Press, 640 pp.

  • Smith, T. M., and K. L. Elmore, 2004: The use of radial velocity derivatives to diagnose rotation and divergence. Preprints, 11th Conf. on Aviation, Range, and Aerospace, Hyannis, MA, Amer. Meteor. Soc., P5.6. [Available online at http://ams.confex.com/ams/pdfpapers/81827.pdf.]

  • Speheger, D. A., C. A. Doswell III, and G. J. Stumpf, 2002: The tornadoes of 3 May 1999: Event verification in central Oklahoma and related issues. Wea. Forecasting, 17, 362–381.

  • Steadham, R., 2008: 2008 National Weather Service field study. Part 1: Volume coverage pattern usage. Radar Operations Center, Norman, OK, 28 pp. [Available from WSR-88D Radar Operations Center, 1200 Westheimer Dr., Norman, OK 73069.]

  • Whiton, R. C., P. L. Smith, S. G. Bigler, K. E. Wilk, and A. C. Harbuck, 1998: History of operational use of weather radar by U.S. Weather Services. Part II: Development of operational Doppler weather radars. Wea. Forecasting, 13, 244–252.

  • Yu, T.-Y., M. B. Orescanin, C. D. Curtis, D. S. Zrnić, and D. E. Forsyth, 2007: Beam multiplexing using the phased-array weather radar. J. Atmos. Oceanic Technol., 24, 616–626.

  • Zrnić, D. S., and Coauthors, 2007: Agile-beam phased array radar for weather observations. Bull. Amer. Meteor. Soc., 88, 1753–1766.


Exploring Impacts of Rapid-Scan Radar Data on NWS Warning Decisions

  • 1 NOAA/OAR/National Severe Storms Laboratory, Norman, Oklahoma
  • 2 Center for Analysis and Prediction of Storms, University of Oklahoma, Norman, Oklahoma
  • 3 Social Science Woven into Meteorology, Cooperative Institute for Mesoscale Meteorological Studies, University of Oklahoma, Norman, Oklahoma

Abstract

Rapid-scan weather radars, such as the S-band phased array radar at the National Weather Radar Testbed in Norman, Oklahoma, improve precision in the depiction of severe storm processes. To explore potential impacts of such data on forecaster warning decision making, 12 National Weather Service forecasters participated in a preliminary study with two control conditions: 1) when radar scan time was similar to volume coverage pattern 12 (4.5 min) and 2) when radar scan time was faster (43 s). Under these control conditions, forecasters were paired and worked a tropical tornadic supercell case. Their decision processes were observed and audio was recorded, interactions with data displays were video recorded, and the products were archived. A debriefing was conducted with each of the six teams independently and jointly, to ascertain the forecaster decision-making process. Analysis of these data revealed that teams examining the same data sometimes came to different conclusions about whether and when to warn. Six factors contributing toward these differences were identified: 1) experience, 2) conceptual models, 3) confidence, 4) tolerance of possibly missing a tornado occurrence, 5) perceived threats, and 6) software issues. The three 43-s teams issued six warnings: three verified, two did not verify, and one event was missed. Warning lead times were the following: tornado, 18.6 and 11.5 min, and severe, 6 min. The three tornado warnings issued by the three 4.5-min teams verified, though warning lead times were shorter: 4.6 and 0 min (two teams). In this case, use of rapid-scan data showed the potential to extend warning lead time and improve forecasters’ confidence, compared to standard operations.

Current affiliation: National Center for Atmospheric Research, Boulder, Colorado.

Corresponding author address: Pam Heinselman, NOAA/NSSL, 120 David L. Boren Blvd., Norman, OK 73072. E-mail: pam.heinselman@noaa.gov


1. Introduction

The development of rapid-scan capabilities with S-band phased array radar (PAR) at the National Weather Radar Testbed in Norman, Oklahoma (Yu et al. 2007; Zrnić et al. 2007; Heinselman and Torres 2011; Curtis and Torres 2011), presents new opportunities for the advancement of weather sensing. Several studies indicate more frequent scanning of storms could improve National Weather Service (NWS) operations: a survey of volume coverage pattern (VCP) needs conducted by the National Oceanic and Atmospheric Administration’s (NOAA) NWS Radar Operations Center (Steadham 2008), a radar-needs assessment conducted by the Office of the Federal Coordinator for Meteorology (OFCM 2006), and a strengths and limitations study of operational radar systems (LaDue et al. 2010). Further, a study of NWS warning performance (2000–04) found a relative lack of warnings issued for tornadoes rated as 0 and 1 on the enhanced Fujita scale (27% of EF0 and EF1 tornadoes unwarned) versus EF3 and EF4 tornadoes (8.6% unwarned) (Brotzge and Erickson 2010). The shorter lifetimes typical of EF0 and EF1 tornadoes present the opportunity to assess improvement in their probability of detection with more rapid scanning.

The PAR is capable of better sampling the rapid evolution of a variety of severe weather events than the Weather Surveillance Radar-1988 Doppler (WSR-88D) (Heinselman et al. 2008; Emersic et al. 2011; Newman and Heinselman 2011; Newman and Heinselman 2012). We do not yet know how such improvements in the depiction of storm development affect conceptual models and warning decision processes of forecasters. Historically, assessment of operational benefit(s) of new radar capabilities has been essential to the technology development process. During development of the WSR-88D, scientists of the Joint Doppler Operational Project (JDOP; Burgess et al. 1979; Whiton et al. 1998) assessed the benefit of using Doppler data in warning decisions by comparing verification statistics from severe thunderstorm and tornado warnings produced in real time, either primarily with or without use of Doppler data (springs of 1977 and 1978). All JDOP pseudowarning decisions employed Doppler data, while most Oklahoma City, Oklahoma, NWS warning decisions did not. Use of Doppler data improved performance by 1) increasing tornado warning lead time (2–20 min), 2) reducing the false alarm ratio for severe thunderstorm and tornado warnings, and 3) increasing the probability of detection for severe thunderstorms (Burgess et al. 1979). Twenty-five years later, the decision to move forward with the polarization upgrade to the WSR-88D network was supported by findings from the Joint Polarization Experiment (JPOLE; Scharfenberg et al. 2005). Observations made by JPOLE scientists of operational use of polarimetric radar products, along with written reports from NWS forecasters, provided cases where these products aided one or more of the following: understanding of storm severity, warning decisions, the wording in follow-up statements, or forecaster confidence. These studies differ from the rapid-scan advancement of PAR in that they provided forecasters with new radar-derived measurements. The PAR, in its current design, provides more frequent sampling of traditional (nonpolarimetric) radar-derived measurements of severe storms. The challenge is to understand the potential benefit of PAR to NWS forecasters’ warning decision making.
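For readers unfamiliar with these verification metrics, the sketch below computes probability of detection, false alarm ratio, and warning lead time (negative when a warning follows the event, as in section 6b) from simple warning and event records. The record layout and function names are illustrative assumptions, not the JDOP procedure.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Hypothetical record layouts for illustration; JDOP worked from
# operational warning logs and storm reports, not this structure.
@dataclass
class Warning:
    issued: datetime
    expires: datetime
    verified: bool  # at least one matching severe weather report

@dataclass
class Event:
    time: datetime
    warning_issued: Optional[datetime]  # issue time of covering warning, if any

def pod(events):
    """Probability of detection: fraction of events covered by a warning."""
    return sum(e.warning_issued is not None for e in events) / len(events)

def far(warnings):
    """False alarm ratio: fraction of warnings with no verifying event."""
    return sum(not w.verified for w in warnings) / len(warnings)

def lead_time_min(event):
    """Lead time in minutes: event time minus warning issue time.

    Negative if the warning came after the event began."""
    if event.warning_issued is None:
        return None
    return (event.time - event.warning_issued).total_seconds() / 60.0
```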

There are a few descriptions and studies of NWS forecaster decision making. Andra et al. (2002) described observations of the live, warning decision experience at the Norman, Oklahoma, NWS Forecast Office during the 3 May 1999 tornado outbreak (Roebber et al. 2002; Speheger et al. 2002). Five warning decision factors Andra et al. (2002) highlighted are 1) use of scientifically based conceptual models, 2) focus on two primary datasets—Doppler radar and ground truth, 3) efficient use of workstations and software, 4) use of a proactive strategy, and 5) forecaster expertise. Researchers from Klein Associates, Inc., have explored NWS forecaster expertise. Pliske et al. (1997) added to their study of military forecasters by gathering data from 13 Olympic Support forecasters at the NWS Atlanta, Georgia, Forecast Office using small group interviews. Hahn et al. (2003) used cognitive task analysis (interviews) to capture the expertise of a small set of warning forecasters (seven), all but one of whom were highly experienced science operations officers or meteorologists in charge. Both studies identified use of conceptual models, participant identification of the “problem of the day” to focus their analysis, and a constant vigilance for signals the weather was evolving outside their expectations. Hahn et al. (2003) also identified expert forecasters as seeking information on the public impact of weather, acknowledging technology’s strengths and limitations, being aware of weather before arriving at work, and continually checking data against their expectations as a warning unfolds. Finally, during the Pacific Land-Falling Jets (PACJET) 2001 field experiment, Morss and Ralph (2007) used participant observation and structured interviews to study how information was used to generate long- and short-term forecasts. They then used that baseline understanding to assess the operational benefit of forecaster use of offshore gap-filling observations provided during the California Land-Falling Jets Experiment (CALJET). Morss and Ralph (2007) concluded that use of additional data appeared to help forecasters produce more specific forecasts (e.g., timing, location, and intensity). In cases where the initial forecast was fairly accurate, the additional data appeared to increase forecaster confidence. These studies and Andra et al.’s (2002) observations, though small in number, provide expectations of qualities expert forecasters might exhibit, and an approach to studying the impact of an experimental dataset.

The long-term goal for the PAR technology development effort is to understand how new weather radar information provided by rapid scanning might benefit the decision making of NWS warning forecasters. In the first step toward this goal, we conducted a study in the spring of 2010 that explored potential impacts of rapid-scan data on forecaster warning decision making using two control conditions: 1) when scan time is similar to the shortest VCP employed on the WSR-88D (VCP 12, ~4.5 min) and 2) when scan time is significantly faster than VCP 12 (43 s). To observe forecaster warning decision processes in greater detail, our experiment design employed data collection techniques similar to those employed by Morss and Ralph (2007), but in an experiment setting. Our study added audio and video recording. This study was part of the 2010 Phased Array Radar Innovative Sensing Experiment (PARISE).

2. Participant selection and demographics

NWS forecasters were invited to apply for participation in one of three 2010 NOAA Hazardous Weather Testbed Experimental Warning Program (EWP) projects via e-mail. The e-mail was distributed nationally to NWS Forecast Offices by the six NWS Regional Offices. The invitation asked respondents to select projects and explain their interest in participating. The resulting applicant pool for PARISE contained 34 NWS forecasters; 94% were from offices located in or east of the Rockies. We chose participants for PARISE based on the content of their written interest statements, location of their home office, gender, and experience with radar data. Forecasters who demonstrated experience or interest in evaluating weather products and display tools were given priority. We used information on years of experience to make a first guess at forming teams with similar levels of expertise.

The 12 participants included three females and nine males from NWS offices located in 10 states in or east of the Rocky Mountains (Fig. 1). Diversity in office locations brought together forecasters with experience issuing warnings on storms whose development is impacted by different climatologic conditions and terrain features. Additionally, all but one participant had worked at offices in multiple geographic regions. Years of NWS work experience ranged from 5 to 23, with an average of ~12 yr of service (Fig. 1). Four had held positions in private industry for 1.5–5 yr prior to employment with the NWS. At the time of the experiment, all but one was in a forecaster position; the other was a meteorologist in charge. All participants had experience issuing warnings for severe weather, and more than half (seven) had several years’ experience consistently working the warning desk during severe events. The sample of forecasters is neither representative nor broadly generalizable.

Fig. 1. Circles indicate the states, but not the specific city, where participants’ home offices were located. The number of yr employed with the NWS is color coded according to the grayscale.

3. PAR data

PAR data were used to create two datasets: one with full temporal resolution (43-s volume scan) and the other with simulated WSR-88D-like temporal resolution (~4.5-min volume scan with elevations updating through that time period). This approach controlled for any real or perceived differences between the PAR and WSR-88D data. The 4.5-min scan time closely matched the sampling time of the Oklahoma City WSR-88D on the case date. The simulated 4.5-min volume scans were constructed by 1) determining 4.5-min scan times for each elevation (14 total) over the case duration, 2) matching the nearest-in-time PAR elevation to these revised times, and 3) assigning the new time stamp to each elevation. Time stamps of the simulated data differed from those of the full-temporal-resolution data by 2–20 s. During one simulated 4.5-min volume scan, the 43-s team usually received five volumetric updates.
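A minimal sketch of this three-step construction follows, assuming each elevation’s 43-s PAR scan times are stored as sorted lists of seconds from case start; the names and data layout are ours, not the experiment software’s.

```python
import bisect

def simulate_vcp12(par_scans, volume_period=270.0, n_elevations=14):
    """Build simulated ~4.5-min volumes from 43-s PAR data.

    par_scans: dict mapping elevation index -> sorted list of scan times
               (seconds from case start) for that elevation.
    Returns a list of simulated volumes; each volume is a list of
    (elevation index, original scan time, reassigned time stamp).
    """
    volumes = []
    t = 0.0
    case_end = max(max(times) for times in par_scans.values())
    while t < case_end:
        volume = []
        for elev in range(n_elevations):
            # Step 1: revised time for this elevation within the 4.5-min volume.
            target = t + volume_period * elev / n_elevations
            times = par_scans[elev]
            # Step 2: nearest-in-time PAR scan of this elevation.
            i = bisect.bisect_left(times, target)
            candidates = times[max(i - 1, 0):i + 1]
            nearest = min(candidates, key=lambda s: abs(s - target))
            # Step 3: reassign the new time stamp.
            volume.append((elev, nearest, target))
        volumes.append(volume)
        t += volume_period
    return volumes
```

Under this scheme a 4.5-min team sees one volume while roughly five 43-s volumes arrive, consistent with the update counts described above.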

This method was applied to a ~45-min period (0113:29–0159:39 UTC) on 19 August 2007 when the PAR sampled two supercells as they moved northward toward west-central Oklahoma (Fig. 2). Data were collected using a 60° sector and the same elevation angles as VCP 12 (NOAA 2006), with 0.5° overlapped azimuthal sampling (Heinselman and Torres 2011). Beam multiplexing (Yu et al. 2007) resulted in 43-s volumetric updates. The supercells formed a few hours prior to the reintensification of Tropical Storm Erin (Arndt et al. 2009; Monteverdi and Edwards 2010; Evans et al. 2011). A subsequent damage survey, conducted by scientists involved in the Severe Hazards Analysis and Verification Experiment (Ortega et al. 2009), indicated a short-lived (~3 min) EF1 tornado occurred during the case. An examination of the PAR data along this damage path indicates the tornado was produced by the northern supercell at ~0144–0147 UTC 19 August 2007. The use of this event provided the opportunity to explore how rapid-scan radar data may aid forecaster warning decisions in tropical environments where supercells may produce weak, short-lived tornadoes.

Fig. 2. Time series of 0.5° (a)–(e) velocity and (f)–(j) reflectivity fields seen by 4.5-min teams from 0113 to 0155 UTC 19 Aug 2007. The PAR is located in the direction of the upper-right-hand corner. Radar range in the lower-left-hand corner is ~113 km. The circulation associated with the EF1 tornado is enclosed by a white circle.

To aid the reader’s understanding of how each team’s warning decision(s) related to the supercell evolution depicted by the 43-s and 4.5-min PAR data, we computed case-long time series of maximum 0.5° azimuthal shear (Fig. 3) found within each supercell’s primary circulation. Azimuthal shear was computed based on the local, linear least squares derivatives (LLSD) method; the method and its strengths and limitations are described in Smith and Elmore (2004). Computations were limited to the 0.5° tilt due to the shallow depth of the circulations and forecaster tendency to focus their velocity analysis on that elevation. During the case, the PAR sampled circulations at heights of 0.7–0.9 km above radar level (ARL) in the northern storm, and at heights of 1.0–1.3 km (ARL) in the southern storm; beam widths ranged from 1.8 to 2.0 km in the northern storm and from 2.1 to 2.4 km in the southern storm. Hence, mesoscale circulations were fairly well resolved, whereas tornadoes were not.
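The azimuthal shear computation can be illustrated with a simplified sketch. Smith and Elmore (2004) fit a local two-dimensional least squares plane to the velocity field; the version below reduces that to a one-dimensional least squares derivative across azimuth at fixed range, and the window size, array layout, and nominal beamwidth are our assumptions rather than the operational configuration.

```python
import numpy as np

def llsd_azimuthal_shear(velocity, ranges_km, d_az_deg, half_window=2):
    """Simplified LLSD-style azimuthal shear for one elevation (e.g., 0.5 deg).

    velocity: 2D array (n_azimuths, n_gates) of radial velocity (m/s),
              with NaN where data are missing.
    ranges_km: 1D array of gate ranges (km).
    d_az_deg: azimuthal spacing between rays (deg).
    Returns shear (1/s): the least squares slope of velocity versus
    azimuthal arc length over a (2*half_window + 1)-ray window.
    """
    n_az, n_gates = velocity.shape
    shear = np.full((n_az, n_gates), np.nan)
    for j in range(n_gates):
        # Arc length (m) between adjacent rays at this range.
        ds = ranges_km[j] * 1000.0 * np.deg2rad(d_az_deg)
        offsets = np.arange(-half_window, half_window + 1) * ds
        for i in range(half_window, n_az - half_window):
            v = velocity[i - half_window:i + half_window + 1, j]
            if np.any(np.isnan(v)):
                continue
            # Slope of the least squares fit v = a + b*s is the shear.
            shear[i, j] = np.polyfit(offsets, v, 1)[0]
    return shear

def beam_width_km(range_km, beamwidth_deg=1.5):
    """Cross-beam width at range: w ~ r * theta_beam (in radians).

    The 1.5-deg default is an assumed nominal beamwidth, not the PAR's
    exact, steering-angle-dependent value."""
    return range_km * np.deg2rad(beamwidth_deg)

# Tracking a storm's maximum shear per scan, e.g.
# np.nanmax(shear[az_slice, gate_slice]), yields Fig. 3-style trends.
```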

Fig. 3. Time series of warning decisions and LLSD azimuthal shear associated with (a),(b) 43-s and (c),(d) 4.5-min teams, respectively. Left side of the rectangle is when WarnGen was opened and right side is when a warning was issued; lead time is relative to tornado occurrence. Warnings issued for tornadoes are red, whereas those for severe thunderstorms are yellow. Unfilled rectangles indicate unverified warnings. Time stamps on the 4.5-min data are aligned with corresponding 43-s times.

Trends of LLSD azimuthal shear were not provided to forecasters. Rather, forecasters assessed the meteorological significance of trends they found in the velocity or storm-relative velocity data, in light of circulation structure and correspondence with features like reflectivity notches or appendages seen in the reflectivity data.

4. Experiment design

This study generally followed a matched-pairs, control-group design (Mertens 2005), though the matched units were teams of two and matching was subjectively determined. We capitalized on the preference of EWP participants to work in pairs (though this is not typical during warning operations) to help us capture their thought processes as they made decisions together. The groups were formed to have roughly equivalent radar analysis skills based upon earlier observations of them rotating through partners while working three events (severe and null). Though seniority was not prescribed, some forecasters were naturally more dominant in the decision-making process than their partners. Even so, all forecaster pairs voiced agreement prior to issuing a warning. The debriefing plans allowed room for them to tell us if they thought unintended group differences had affected the outcome; none did.

Four NWS forecasters participated each week of the experiment, which ran from midday Tuesday through Friday morning. On Tuesday they were formally introduced to our study and given the option to participate;1 all consented. Because only 2 of the 12 forecasters had some previous experience using the Warning Decision Support System–Integrated Information (WDSS-II; Lakshmanan et al. 2007) display software, they engaged in activities on Tuesday evening and Wednesday to build experience with it.

On the day of the study, each group worked through the 19 August 2007 case (see section 3) in separate rooms, issuing weather warnings and updates. As the case began, they had approximately 20 min to review weather data to gain situational awareness (Endsley 1988) using the weather event simulator (WES; Magsig and Page 2002). Included were in situ and remote sensing observations, numerical model output, and products issued by the NWS. Thereafter they wrote a discussion about what they thought would happen in the next hour or so. The case2 then ran in a displaced real-time mode. Forecasters were observed and their discussions audio recorded; their computer screens were video recorded. Computer files of the discussion text and warning-related texts were archived.
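Displaced real-time playback simply releases archived scans at the pace of the original event. The sketch below shows the basic idea, assuming scan timestamps in seconds; it is an illustration of the concept, not the WES implementation.

```python
import time

def displaced_realtime(scans, case_start_s, speed=1.0):
    """Yield archived scans as if live.

    scans: list of (timestamp_s, data) sorted by time. Each scan is
    released once the equivalent (scaled) wall-clock interval has
    elapsed since playback began.
    """
    wall_start = time.monotonic()
    for ts, data in scans:
        due = (ts - case_start_s) / speed
        delay = due - (time.monotonic() - wall_start)
        if delay > 0:
            time.sleep(delay)  # wait until this scan is "current"
        yield ts, data
```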

After the case, a debriefing was conducted with each group independently. Forecasters were invited to take notes on critical decision points to use during the joint debriefing, when the four forecasters would compare experiences. They were asked to rank factors important to their decision making and to rate their confidence relative to usual on two continua: one for the event type (tropical supercell) and one for the impact of the radar data. Next, all four participants came together in the joint debriefing to discuss their experience with the data and its impact on their warning decisions. Afterward they were asked for summary information on their work histories that might be relevant to interpreting differences in their warning decisions.

Limitations

We recognize the following limitations in this study. First, the WDSS-II was used in lieu of the Advanced Weather Interactive Processing System (AWIPS; Friday 1994), due to the operational software’s inherent inability to display data sampled with a scan time of 1 min or shorter. We sought to minimize the impact of software differences on the experiment outcomes by having a software developer upgrade the WDSS-II with WarnGen functionality similar to that of AWIPS. Second, we could not objectively match pairs because there is no known fair method of assessing warning forecaster expertise. Third, team dynamics may have affected warning decision making to a greater extent than could be gleaned from our data. However, working in teams minimized the effect of unfamiliar software, as teammates could help each other manage the added cognitive load. The software options available meant forecasters had additional tools they did not usually have, but the software also could not display data in some ways they were accustomed to.

Some researchers in the decision-making field make a strong case for studying decision making only in real work settings (Klein 1998; Joslyn and Jones 2008; Schraagen et al. 2008). We sought to have multiple instances of the same case comparison to analyze, meaning archived case data had to be used. The contrived nature of a simulation, though, tended to mean two things to participants: something was likely to happen, and it would happen in the absence of pressures only live operations have. Forecasters worked both severe and null preexperiment cases in an attempt to minimize this perspective. We simulated several aspects of an operational setting and asked forecasters to work as nearly as possible as they normally would. Forecasters reported they acted normally, though added that cases never truly simulate the tension, distractions, on-the-job conversations, consequences, and other aspects of operations.

5. Analysis methods

Data analysis involved several steps. After a student hire transcribed audio recordings, we reviewed the transcripts and corrected minor errors to ensure the data were an accurate record of the experiment (Singleton and Straits 2005). Video recordings were used to note interactions of participants with WDSS-II and WES displays, and specific radar fields, radar signatures, and other meteorological data viewed. The visual information augmented, clarified, and confirmed verbal expressions of cognitive actions and states of being of participants, as well as issues related to experimental design.

A coding scheme was developed inductively by working with the data to mark important concepts in the transcribed audio and visual recordings. These codes were then collected into the major analytic categories that emerged in our data (Bernard 2006). Examples of codes within each analytic category are, for expressions of state: excitement, frustration, surprise, uncertainty; for cognitive actions: comparing, deciding, interrogating, interpreting; for consideration of meteorological or other factors in the data: circulations, time of day, office policies; and for reflections on experimental design: wishes for functionality, software issues, overall experimental design.3 From the coded data we built timelines of warning decision making, which in turn facilitated the comparison of warning decision making across teams. The results are discussed next.
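As an illustration of this workflow, a coded transcript segment might be represented as below and sorted into per-team timelines; the categories come from the text above, while the record fields and helper function are hypothetical.

```python
from dataclasses import dataclass

# Analytic categories named in the text; the record structure itself is
# an illustrative assumption.
CATEGORIES = {
    "state": {"excitement", "frustration", "surprise", "uncertainty"},
    "cognitive_action": {"comparing", "deciding", "interrogating", "interpreting"},
    "factor": {"circulations", "time_of_day", "office_policies"},
    "design": {"functionality_wishes", "software_issues", "experiment_design"},
}

@dataclass
class CodedSegment:
    time_utc: str   # e.g., "0123:31" (zero-padded, so strings sort by time)
    team: str       # e.g., "43s-A"
    speaker: str    # pseudonym
    category: str   # one of CATEGORIES
    code: str
    excerpt: str

def build_timeline(segments, team):
    """Order one team's coded segments by time to reconstruct their
    warning decision process."""
    return sorted((s for s in segments if s.team == team),
                  key=lambda s: s.time_utc)
```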

6. Observed decision making

During the case, decisions made by each team included 1) anticipating severe weather threats, 2) deciding whether to warn, 3) determining warning type and duration, and 4) deciding when to issue severe weather statements. Tornadoes were assessed as the primary threat by five of the six teams, and as a threat secondary to damaging winds by one team. Only the latter team did not realize the environment was tropical until shortly after writing their discussion; they had not noticed or read the NWS products displayed on the WES. After doing so, the team’s threat mindset appeared unaltered.

Decisions about which of the two storms to warn on, warning type, and timing varied (Figs. 3a,c). Though all teams issued tornado warnings on the northern storm, one team (43 s) also issued a tornado warning on the southern storm (Fig. 3a). Another team issued a severe thunderstorm warning on both storms. Throughout the warning decision comparisons that follow, data values are those voiced by participants; pseudonyms are used. Additionally, time series of LLSD azimuthal shear in Figs. 3b,d are referenced for comparison when forecasters interpreted changes in circulation strength. One warning decision comparison took place each week; these are labeled A, B, and C.

a. Warning decision comparison A

In this comparison the dramatic ~19-min difference in warning issue times (Figs. 3a,c) was due, in part, to the 43-s team’s low threshold for the persistence and strength of a circulation in a tropical environment, and the 4.5-min team’s difficulty with the software. In addition, the 43-s team exhibited far more confidence in what they saw prior to the tornadic intensification than the 4.5-min team did. Teams primarily used radar cues, though the 4.5-min team used IR satellite imagery to ascertain if storms were of similar height.

The 43-s team (Bob and Joe) oriented very quickly to the software and the event. Between 0118:30 and 0122:05 UTC, they noted “a bit of rotation” collocated with a notch in reflectivity on the northern storm and began watching for an upward trend in velocity. The velocity trended upward to 25 kt by 0121:22 UTC (Fig. 3b). They agreed that with a tropical system they were unlikely to see 50 kt inbound and outbound. When the next 0.5° elevation updated (0122:05 UTC), Bob noted the southern storm was also trending upward (Fig. 3b). At this point they began to build toward the warning decision. Joe pointed out that the northern storm’s circulation was tightening a bit. Bob agreed, noting that it was persistent. They confirmed that the tightening circulation corresponded to the reflectivity notch and Bob said, “And that’s where I’m concerned … It doesn’t take much.” Joe agreed, “I’m okay with starting a tor on that.” As they initiated WarnGen, Joe added, “’Cause I think being aggressive in this environment is a good thing to do.” They briefly considered how long the warning should last and began to draft the warning on the 0123:31 UTC scan. They issued the 30-min tornado warning about 2 min later with 18.6-min lead time (Fig. 3a).

Over the next 9 min (0123:31–0132:07 UTC), their confidence was maintained. At 0132:07 UTC Bob declared, “There is absolutely something going on there [in the northern storm].” They consulted environmental data and monitored the southern storm. By scan 0135:00 UTC, Bob noted 40 kt of gate-to-gate shear on the northern storm, “pretty good for a tropical system.” On the 0139:18 UTC scan, they assessed that the southern storm continued to have weak outbound velocities, but 41 kt inbound, “something that we are going to have to consider,” Bob said. Joe agreed, “It seems there is kind of a general area of rotation there.” However, by scan 0140:44 UTC, he said the southern storm had “lost that sharp edge.”

Approximately 2.5 min before tornado occurrence (0141:27 UTC), this team noticed a tight area of circulation in the northern storm. On the next update (0142:10 UTC), they were excited by the strength of the circulation (Fig. 3b). They then initiated WarnGen to start a warning update based upon these meteorological changes in the storm, noting that velocities were now 88 kt inbound and almost 30 kt outbound, and saying, “That’s the best we’ve seen.” The severe weather statement was issued just under 2 min later, immediately before the 0145:02 UTC scan arrived.

The 4.5-min team’s (Allen and Carl) first warning decision process followed shortly after they detected a brief strengthening, then weakening, of the northern storm using a loop of storm-relative motion through scan 0126:46 UTC (Fig. 3d). Suspecting the storm may have already produced a very weak tornado, they wondered if they should issue a warning. Interrogating velocity cross sections and elevation scans up to 1.5°, Allen interpreted that the circulation still looked “kind of big.” On the 0131:13 UTC scan, Allen pointed to the couplet on the northern storm (Fig. 3d) and said, “I like that.”

Confused by the effect that storm-relative motion (SRM) had on the appearance of the two storms, he asked, “That’s the southern one, isn’t it?” Carl corrected him. Allen read off “26…27 against” at 0.9°. The circulation collocated with the reflectivity notch—weak, but strong enough—along with evidence of expected recycling, apparently were sufficient to decide to warn. Over the next ~6.1 min, however, Allen struggled to set the storm motion. Just as he finished setting a path he liked, the data updated to 0140:08 UTC (Figs. 2g and 3) and he said, “Oh, now it looks like crap again,” noting it had “lost its kidney” shape. Had this team not encountered software issues, they would likely have issued a warning about 2 min after their decision (the median time to issue a warning during PARISE 2010 was 1 min 58 s), with about 6-min lead time. When the 0.5°, 0144:35 UTC scan (Fig. 2h) arrived, they were surprised to see a sudden strengthening in the circulation on the northern storm (Fig. 3d). They quickly initiated WarnGen and hastily issued a 30-min tornado warning about 40 s later (Fig. 3c).

Allen began the 4.5-min team debriefing by discussing how software frustrations can affect warning decisions by stressing and wearing out the forecaster. He did not, however, think it made a difference here. In the end, he thought he probably saw the tight couplet he expected to see if one of these storms were to have produced a tornado, though he saw it too late. During the case, just after the third tightening of the circulation prompted their successful warning, he had reflected, “welcome to tropical stuff.”

Right from the start, the 43-s team focused on the northern storm, agreed upon what they saw, and exhibited a team dynamic of building upon each other’s thoughts to help them zero in on the correct storm and the correct threat. Each member’s confidence rankings for the event type were consistent with their performance (see Fig. 4). All four participants were more confident than usual in their understanding of the event. Members of the 43-s team, however, were much more confident than usual with what they saw in the data, whereas members of the 4.5-min team were not (marked “usual” confidence; Fig. 4).

Fig. 4. Normalized confidence ratings of forecasters relative to normal operations in their offices. Numeric values were assigned after the fact; forecasters saw “less confident” at the left end, “usual” in the middle, and “more confident” at the right. (top) Forecasters’ confidence in how well they understood the event: supercells in a tropical environment; (bottom) their confidence in how well they understood supercell signatures and evolution using PAR data.

b. Warning decision comparison B

The number and types of warnings issued by the 43-s and 4.5-min teams differed primarily with respect to the threats they perceived and how they incorporated the environment. Nonradar cues were used; for the 43-s team these were the apparent desire to add to warnings already in effect from before the case start time (those warnings were on different storms) and geographical considerations. The 4.5-min team was mainly concerned with missing “spikes” in the slower-update data. Neither team appeared very confident to the researchers. Only one forecaster had some experience with tropical tornado events.

The 43-s team (Ken and Matt) decided to issue their first warning early in the case (0118:05 UTC) after Ken interrogated velocity from 0.5° to 2.4° (4.4–14 kft AGL) and realized there were no warnings in effect north of the Red River (a warning had been issued on a storm south of the domain prior to case start time). He stated his concern that a velocity maximum associated with the southern storm, having dominant inbound base velocities up to 50 kt, was an “indication of severe wind.” Wind up to 70 mi h−1 (1 mi h−1 ≈ 0.447 m s−1) was the main threat mentioned in the 45-min severe thunderstorm warning; the polygon initially encompassed both supercells. During the following 20 min (0123:31–0143:31 UTC), Ken continuously interrogated the velocity and reflectivity signatures of both storms and became concerned with the consistent intensity of the northern supercell (Fig. 3b). He also noticed the (existing) “severe [polygon] is covering [only] half of that northern storm.” These appear to be the deciding factors that led to issuing the second 45-min severe thunderstorm warning, this time for the northern supercell, at 0137:09 UTC (Fig. 3a). Though this severe thunderstorm warning was issued with severe wind threat in mind, it verified (generically, per NWS Directive 10-1601), with a 6-min lead time (Fig. 3a) because of the development of the EF1 tornado. This team was the only team to issue severe thunderstorm warnings.

Just after a new scan arrived at 0142:53 UTC (Fig. 3b), Ken’s interrogation of the northern storm’s circulation revealed high velocities at 0.5°: “I’ve got 34 kts inbound now on this, 25 outbound.” He then examined the reflectivity data and saw correspondence of the circulation with a notch signature. Interrogating through the next three scans, Ken first remarked, “usually at 35 kts you need to start considering tor.” Further intensification of the circulation convinced Ken to issue a tornado warning. On the 0145:02 UTC scan he read: “50 [kts] outbound. [pause] 41 inbound, 64 out … don’t think I can ignore that.” The 45-min tornado warning was issued at 0147:28 UTC, but with a 3.15-min negative lead time; the tornado had likely dissipated.

In contrast, the 4.5-min team (Lisa and Ron) determined that the northern storm and short-lived tornadoes were their primary concerns. Their interrogation through the first several scans confirmed that the northern storm was better organized and stronger, though they noted 30 kt inbound on the southern storm in the 0122:19 UTC scan. Interrogating the 0126:36 UTC scan they saw that the velocities had declined (Fig. 3d). Lisa voiced concern that they were missing velocity trends due to slower updates by saying, “I bet you they’re getting spikes in their data.” The 0131:13 UTC update continued to show circulations (Fig. 3d), however, and Ron had continued confidence that the northern storm was the one to focus on. In an attempt to seek validation, Lisa asked for storm reports, but none were available. She then expressed her overall discomfort with the case: “I don’t see them getting any stronger … they are about the same rotation-wise so the question becomes, do we have a strong enough environment that we are going to issue on these weak radar signatures?” Ron agreed it was a good question, then laughed, and said, “And I wish I had the answer.” With interrogation of the 0131:13 and 0135:41 UTC scans showing continuity in the northern storm’s rotation (Fig. 3d), Lisa told Ron she was “anxious to pull [the] trigger because the environment was so good for tornadoes.” Together, the radar signatures, environment, slower updates, and content of the Storm Prediction Center’s mesoscale discussion led to the team’s tornado warning decision. Their 45-min tornado warning, issued at about 0139:40 UTC, verified with an approximate 4.6-min lead time (Fig. 3c).

During the debriefing, the 43-s team’s description of their warning decision process conveyed a radar-data-driven approach, though, as mentioned earlier, our analysis also revealed use of nonradar cues: lack of current warnings and geographical considerations. Both members of this team reported higher confidence than usual in both their understanding of the event and in what they saw in the PAR data.

In contrast, the 4.5-min team expressed the importance of storm environment, in addition to radar data, in their warning decision process. According to Lisa, though the situation on radar was below her normal warning decision threshold, she chose to warn due to the environment: she was worried they were missing tornadoes in between sampling times. The confidence rankings for their understanding of the event were the only marks “less than usual” (Fig. 4), yet this was the only 4.5-min team that had positive lead time on a tornado warning. Neither had worked this type of event before. Both forecasters reported near usual confidence in what they perceived in the PAR data, which simulated what they see in their normal operations.

c. Warning decision comparison C

These teams differed in terms of the number of warnings issued. Lead time gained on one tornado warning (43-s team) appeared primarily due to a greater willingness to warn before personal velocity thresholds were met, given the tropical environment. The 4.5-min team exhibited more confidence due to their knowledge of this type of event: one member had a clear idea of the velocity thresholds he wanted to see. Both teams used or sought nonradar cues, but their impact appeared minimal.

The 43-s team (Frank and Gina) issued two tornado warnings (Fig. 3a), one for each storm. Upon beginning the case, Frank stated that SRM was needed to account for fast storm motion and set it up. Using the high-temporal-resolution capability of the PAR to look for “any kind of feature there kind of rolling around the southwest side of the circulation,” as well as querying velocity values and examining cross sections, they found evidence for some focus in a circulation on the southern storm. Interested in achieving lead time, Gina stated she “wouldn’t mind” if they issued a tornado warning a bit early. They considered nonradar factors in determining storm motion. Their 30-min tornado warning on the southern storm was issued at 0123:31 UTC (Fig. 3a). In the following 3 min (through 0126:23 UTC; Fig. 3b), they saw the southern storm weaken, and both laughed at the irony. They later updated this warning based on elapsed time and their belief the southern storm still posed a threat.

In the northern storm they identified a broad divergent signature collocated with the appendage. They continued analysis, checking first for reports. Frank noted “only 30 kts” at the lowest elevation of the 0127:49 UTC scan. After changing SRM twice on the 0129:15 UTC scan, first by tracking the southern storm circulation, then the northern one, they determined the southern storm was weakening and the northern storm was strengthening (Fig. 3b). The inflow region was identified on reflectivity, and a cross section of SRM was used on both storms to determine that both circulations remained “shallow.” When the 0131:24 UTC scan did not alleviate their uncertainty, Gina encouraged issuing another warning given the environment. Apparently seeking to alleviate uncertainty, she looked at the LAPS data, but did not articulate anything about what she saw. Frank decided to “go ahead and get things started.” They issued a 30-min tornado warning on the northern storm at 0132:50 UTC (Fig. 3a). In a loop of reflectivity up to the 0139:18 UTC scan, they saw a curling motion indicative of a rear-flank downdraft, though an examination of a cross section two scans later maintained their uncertainty: “Still looking the right way. Nothing extra special there.” Shortly after they discussed discontinuing their warnings (0141:27 UTC), the next scan (0142:10 UTC) showed a stronger couplet, increasing one scan later to 86 kt inbound and 26 kt outbound (Fig. 3b).

In contrast, the 4.5-min team (Steve and Wendy) determined at the outset that, “If we’re going to be warning, we’ll be warning on that one.” Steve pointed to the northern storm in the velocity display. But that certainty did not come with eagerness to warn. The team examined a cross section and scans in the lowest few elevations to identify a shallow circulation and divergence. When the 0131:13 UTC scan arrived, they noticed it was “getting better as a couplet,” with 14 kt inbound collocated with an appendage on reflectivity, about 2500 ft above the ground (Figs. 2e and 3d). Steve shared his understanding of tropical mesocyclones: that 20 kt gate to gate could be sufficient for this case. In a loop of SRM up to 0131:13 UTC, they saw that the southern storm had “looked interesting” at about 0122:19 UTC. In the following scans they thought the northern cell was beginning to undergo a cell merger. Both storms seemed to be “having trouble getting their act together.” They continued interrogation. When the 0144:35 UTC scan arrived, Steve reacted with surprise (Figs. 2h and 3d). Wendy laughed and Steve said, “That was quick!” They quickly issued a 30-min tornado warning on the northern storm (Fig. 3c).

During the individual team debriefing, the 43-s team reported that they issued the first warning because they expected, given the environment, that the broad mesocyclone would tighten. They specifically sought lead time. They said they were comfortable using the rapid-update data during the case because fluid motions matched their expectation of seeing circulations cycling up and down, and that increased their confidence in their long-lead-time warnings. Their confidence was not high to begin with, however, and both marked lower confidence in both the event and in the data than did the other team members (Fig. 4).

The 4.5-min team marked high confidence in both the event and data. They also appeared very confident during the case. The signature they eventually saw matched their expectations that it would suddenly appear in one volume scan, and be associated with a shallow circulation. Based on Steve’s data interrogation, his empirically based threshold had not been met prior to that point. Steve judged the tornado had probably dissipated by the time their warning was issued. During the joint debriefing, when able to see the 43-s data for the first time, Steve thought signatures in the three to four scans prior to the shared 0144:19 UTC scan would have prompted him to warn.

7. Discussion

The three warning decision comparisons resulted in considerable variation in forecaster decision making; such variations can confound analyses of forecaster behavior (e.g., Lusk et al. 1990; Pliske et al. 1997; Hoffman et al. 2006). We identified six factors that contributed to warning decision making: 1) forecaster experience, 2) use of scientifically based conceptual models, 3) confidence, 4) tolerance of possibly missing a tornado occurrence, 5) perceived threats, and 6) software issues. The PAR data are assumed to be an inherent warning decision factor. Aspects of these factors are found in Andra et al. (2002). Several factors impacted forecasters’ interpretation of the 43-s and 4.5-min PAR data, so our ability to isolate the impact of scan time on decisions made was limited. As confounding factors exist in the real world, it is neither possible nor realistic to reduce them to zero. The identification, understanding, and accounting of confounding factors are critical for those working in weather testbeds.

a. Experience

Forecasters’ experience with tropical tornado events varied widely, from many events to none. All four participants in comparison A had experience with several events; further, one on each team had worked several hurricanes. In comparison B, one had worked a few events, one had worked just one event, and the remaining two had only worked tropical events with heavy rainfall and flooding. In comparison C, one forecaster on each team had at least one experience with a tropical tornado event; the other did not.

Forecasters’ experience with different types of weather events varies, mainly due to geographic differences in event climatology and career tracks in which forecasters change offices. Hence, the variability in experience with tropical storms seen here is likely similar to that found in the field. Variation in experience and the relative lack of training in tropical tornadoes appears to have led, in part, to the variations seen in forecaster conceptual models.

b. Conceptual models

Forecasters’ conceptual models tended to contain the following: characteristics of circulations and associated reflectivity features, atmospheric environment, and implications of scan time on detection (Table 1). Generally, forecaster conceptual models of a supercell contained aspects discussed in Moller et al. (1994). Though all forecasters expected circulations to be shallow and collocated with a notch in reflectivity, their ideas about circulation strength sufficient to warrant a tornado warning differed (Table 1). Three forecasters mentioned specific thresholds or ranges of inbound–outbound velocities, while the rest only mentioned circulations would be “weaker” than those typical of nontropical supercells. There are no specific training requirements, nor guidance from research, for tropical cyclone tornado warning decision making (J. LaDue 2011, personal communication); these issues may explain the wide variability in the circulation strengths forecasters appeared to use in their warning decisions.

Table 1. Components of forecaster conceptual models and percentage of forecasters sharing each.
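
To make the notion of a circulation-strength threshold concrete, the following minimal Python sketch computes rotational velocity from a couplet’s maximum inbound and outbound Doppler velocities and checks it against a warning threshold. The 15 m/s value is a hypothetical placeholder, not a figure from this study; as noted above, no standard threshold exists for tropical cyclone tornadoes.

    # Illustrative sketch only: couplet strength vs. a forecaster-chosen
    # threshold. The 15 m/s default is hypothetical, not from the study.
    def rotational_velocity(v_inbound, v_outbound):
        """Mean of the maximum inbound and outbound velocities (m/s)."""
        return 0.5 * (abs(v_inbound) + abs(v_outbound))

    def meets_warning_threshold(v_inbound, v_outbound, threshold=15.0):
        """True if the couplet strength meets the assumed threshold."""
        return rotational_velocity(v_inbound, v_outbound) >= threshold

    # A weak tropical-cyclone circulation: Vrot = 13.0 m/s -> no warning
    print(meets_warning_threshold(-12.0, 14.0))  # False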

Forecasters’ tornado warning decisions were made within the context of trends or persistence in the inbound and outbound velocities seen in the data. As a result, the moment at which a forecaster interpreted a circulation’s strength as significant depended on how the evolution of velocity magnitudes was sampled by the 43-s and 4.5-min PAR data (Fig. 3). During the case, all forecasters with 4.5-min data mentioned concern about missing “spikes” or intensification in circulations indicative of tornado occurrence. This concern was driven by their conceptual models: tornadoes forming in a tropical environment would likely be short lived. The implication of a longer scan time appears to have directly affected two of the 4.5-min teams’ warning decisions (the aborted warning in comparison A and the warning with lead time in comparison B). Because sampling times available for precipitating storms at the NWS range from 4 to 6 min (as of 2011), concern about missing important storm developments is likely common in similar operational situations; LaDue et al. (2010) documented a similar concern. The experiment design may have amplified the impact of sampling time because participants had examined rapid-scan data during the prior 2 days.
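
As a rough illustration of the sampling issue (not a reconstruction of Fig. 3), the sketch below samples a hypothetical short-lived velocity spike at the two scan intervals; the spike’s timing, amplitude, and duration are invented for illustration only.

    import numpy as np

    # Hypothetical short-lived intensification: a Gaussian spike in
    # rotational velocity peaking at t = 10 min and lasting ~2 min.
    def v_rot(t_min):
        return 10.0 + 12.0 * np.exp(-((t_min - 10.0) / 1.0) ** 2)

    t_fast = np.arange(0.0, 20.0, 43.0 / 60.0)  # 43-s volume scans
    t_slow = np.arange(0.0, 20.0, 4.5)          # 4.5-min volume scans

    # The 43-s cadence captures ~22 m/s; the 4.5-min cadence sees ~14 m/s.
    print("43-s peak:    %.1f m/s" % v_rot(t_fast).max())
    print("4.5-min peak: %.1f m/s" % v_rot(t_slow).max())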

Most forecasters stated that the environment was a factor in their warning decisions; sufficient low-level shear and low lifting condensation level heights were mentioned as environmental indicators of the potential for short-lived, rapidly developing tornadoes. Neither the environment nor the concept of a weak circulation strength threshold was a factor in one 43-s team’s decision making (comparison B, Table 1). As discussed previously, the environment prompted one 4.5-min team (comparison B) and one 43-s team (comparison C) to issue tornado warnings even though the velocity signatures were not as tight or as strong as desired.

c. Confidence in PAR data versus event

Rapid-scan data increased forecaster confidence. The motion and evolution of the northern storm’s circulation were clear, especially during the time of the tornado. Joe (43-s team, comparison A) remarked, “I didn’t expect to see something that strong and so persistent…” Bob replied, “And this rapid update is really giving me a clear…look how smooth it is moving to the north and west. We can follow that all the way around, look at that … how it kind of disappeared.” Gina (43-s team, comparison C) explained that being able to watch trends at smaller time steps “builds the confidence factor, because you know, you’re seeing something persistent.” Rapid-scan data also reduced ambiguity. While comparing decision making during a joint debriefing, Ken (43-s team, comparison C) said of the other team, “she’s basing [her decision] on what’s in her mind. She sees a few things and she’s having to create the rest of it.” Lisa, who had 4.5-min data, added, “I’d rather be in [Ken’s] position so I don’t feel like I have to take a gamble.”

Our analysis revealed that confidence was also affected by event type. To support quantitative comparison between the 43-s and 4.5-min teams, the confidence ratings forecasters marked relative to normal office operations were normalized (Fig. 4). The 43-s team members’ scores were all above usual confidence, and for at least half of these participants they were high (above 0.6) for both the event and the data. Increased forecaster confidence through the use of additional data was also reported by Morss and Ralph (2007). In contrast, the range of the 4.5-min team members’ confidence scores was broader. The fact that two 4.5-min team members felt “less confident than normal” in their understanding of the event (Fig. 4) suggests that, for some participants, the event type may have had a slightly larger effect on confidence than scan time did. Confidence scores from four of the six 4.5-min participants indicate that what they saw in the data was near usual, as would be expected given that the degraded PAR data simulated a WSR-88D volume scan.
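
The normalization procedure is not spelled out here; as a plausible minimal sketch, assuming ratings were marked on a 0–10 scale anchored at “usual” operational confidence (5), the mapping onto [-1, 1] could look like the following. Both the raw scale and the anchor value are assumptions for illustration.

    # Hypothetical normalization of confidence ratings to [-1, 1], where
    # 0 is "usual" operational confidence. The raw 0-10 scale with the
    # "usual" anchor at 5 is an assumption, not taken from the study.
    def normalize(rating, low=0.0, usual=5.0, high=10.0):
        if rating >= usual:
            return (rating - usual) / (high - usual)  # 0..1 above usual
        return (rating - usual) / (usual - low)       # -1..0 below usual

    print(normalize(8.0))  # 0.6  -> "high" confidence in the text's terms
    print(normalize(3.5))  # -0.3 -> less confident than normal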

d. Additional factors

The three remaining factors (tolerance of possibly missing tornado occurrence, perceived threats, and software issues) manifested in different ways. The first is the most difficult to mitigate. We speculate that forecasters are accustomed to sometimes choosing to warn early because they know radar sampling is insufficient for reliable detection of severe weather precursors. In one case (4.5-min team, comparison B), knowing that the other team had more frequent updates may have contributed, in part, to the pair’s decision to warn; this possible influence was not observed in the other 4.5-min teams’ decision processes. Experience, conceptual models, confidence, and personality may also contribute to tolerance for missing an event. We found that rapid-scan data allowed some forecasters (43-s team, comparison A) to gauge storm persistence and intensity, changes in storm motion, and other aspects of storm evolution at a level that enhanced their confidence in their decisions by conforming to their conceptual models.

The latter two factors can be mitigated more easily. To address perceived threats, we could provide a weather briefing prior to the case rather than having forecasters conduct their own environmental assessments. Finally, AWIPS II will be capable of handling PAR data; following some modifications, experiments like this one will be able to use the same software forecasters are accustomed to.

8. Conclusions

The case examined revealed some of the complexities of NWS forecasters’ warning decision processes, including radar and nonradar factors that affect their decisions. The warning decision comparisons (section 7) show that though teams examined similar reflectivity and velocity signatures, in some instances they came to different conclusions about whether and when to warn. These differences arose from six factors: 1) experience, 2) use of conceptual models, 3) confidence, 4) tolerance of possibly missing tornado occurrence, 5) perceived threats, and 6) software issues. These factors influenced how forecasters interpreted the 43-s and 4.5-min PAR data and made their decisions, confounding our ability to isolate the effect of scan time on decision making. Regardless, we found that rapid-scan data improved forecasters’ confidence in what they saw in the data, compared with normal operations (Fig. 4).

The 43-s teams issued six warnings: three verified (two tornado and one severe), two did not verify (a tornado and a severe warning on the southern storm), and one tornado event was missed (the warning was likely issued seconds after tornado dissipation). Tornado-warning lead times were 18.6 and 11.5 min; the severe-warning lead time was 6 min. All three tornado warnings issued by 4.5-min teams verified, though lead times were shorter: 4.6 min (one team) and 0 min (two teams). The NWS assigns a 0-min lead time to warnings issued while a tornado is in progress. Although the sample is small, these numbers suggest a positive effect of rapid-scan data on warning lead time for short-lived, weak tornadoes, an underwarned tornado class (Brotzge and Erickson 2010). They also suggest a possible increase in the number of unverified warnings with the use of rapid-scan data. Another possibility is that being in an experiment made forecasters more willing to warn, as they may have expected something to happen and faced no real-world consequences. Nevertheless, by understanding the avenues of forecasters’ decision-making processes, researchers can identify means through which faster radar scan times may benefit warnings for severe weather.
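
The lead-time convention described above can be made concrete with a short sketch; the timestamps below are invented for illustration, and only the 0-min floor for in-progress tornadoes comes from the NWS convention stated in the text.

    from datetime import datetime

    def lead_time_minutes(warning_issued, tornado_start):
        """Lead time in minutes, floored at 0 when the warning is
        issued with the tornado already in progress (NWS convention)."""
        delta = (tornado_start - warning_issued).total_seconds() / 60.0
        return max(delta, 0.0)

    # Invented example timestamps (UTC)
    tor = datetime(2007, 8, 19, 1, 48, 36)
    print(lead_time_minutes(datetime(2007, 8, 19, 1, 30), tor))  # 18.6
    print(lead_time_minutes(datetime(2007, 8, 19, 1, 50), tor))  # 0.0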

This study has provided a framework for future investigations of the impact of a new technology or product on user decision making. This framework is especially advantageous in a testbed environment and is applicable to the technology transfer process. Like others, we advocate testing new technologies and products throughout the development process (e.g., Anderson-Berry et al. 2004; Morss et al. 2005), as interactions between researchers and forecasters can improve operational usefulness. Future efforts employing this type of framework should strive to include a larger and more diverse set of cases (including nulls) and should specifically inform participants that the cases worked may or may not contain a severe weather event. Furthermore, rewards or penalties could be used to simulate the added pressure of real-world forecasting.

Acknowledgments

A portion of the funding for this work was provided by NOAA/Office of Oceanic and Atmospheric Research under NOAA–University of Oklahoma Cooperative Agreements NA08OAR4320904 and NA17RJ1227. We are grateful to our NWS participants and to those who assisted with meteorological software displays and data (Charles Kerr, Kevin Manross, Darrel Kingfield, and Mark Sessing); EWP recruitment and infrastructure (Greg Stumpf and Travis Smith); observations (Kim Klockow, Heather Moser, and Rachel Riley); transcription (Natalie Daab); audio quality improvements (James Murnan); and informal reviews (Liz Quoetone, Rachel Riley, Doug Speheger, and Robert Hoffman). We thank the three external reviewers for their helpful comments on the manuscript.

REFERENCES

Anderson-Berry, L., Keenan, T., Bally, J., Pielke, R., Jr., Leigh, R., and King, D., 2004: The societal, social, and economic impacts of the World Weather Research Programme Sydney 2000 Forecast Demonstration Project (WWRP S2000 FDP). Wea. Forecasting, 19, 168–178.

Andra, D. L., Jr., Quoetone, E. M., and Bunting, W. F., 2002: Warning decision making: The relative roles of conceptual models, technology, strategy, and forecaster expertise on 3 May 1999. Wea. Forecasting, 17, 559–566.

Arndt, D. S., Basara, J. B., McPherson, R. A., Illston, B. G., McManus, G. D., and Demko, D. B., 2009: Observations of the overland intensification of Tropical Storm Erin (2007). Bull. Amer. Meteor. Soc., 90, 1079–1093.

Bernard, R., 2006: Research Methods in Anthropology: Qualitative and Quantitative Approaches. 4th ed. AltaMira Press, 803 pp.

Brotzge, J., and Erickson, S., 2010: Tornadoes without NWS warning. Wea. Forecasting, 25, 159–172.

Burgess, D., Donaldson, R. J., Sieland, T., Hinkelman, J., Sirmans, D., Shreeve, K., Glover, K., and Goldman, I., 1979: Final report on the Joint Doppler Operational Project (JDOP) 1976–1979. NOAA Tech. Memo. ERL NSSL-86, 84 pp. [NTIS PB80-107188/AS.]

Curtis, C. D., and Torres, S. M., 2011: Adaptive range oversampling to achieve faster scanning on the National Weather Radar Testbed Phased Array Radar. J. Atmos. Oceanic Technol., 28, 1581–1597.

Emersic, C., Heinselman, P. L., MacGorman, D. R., and Bruning, E. C., 2011: Lightning activity in a hail-producing storm observed with phased-array radar. Mon. Wea. Rev., 139, 1809–1825.

Endsley, M. R., 1988: Design and evaluation for situation awareness enhancement. Proc. Human Factors Society 32nd Annual Meeting, Santa Monica, CA, Human Factors Society, 97–101.

Evans, C., Schumacher, R. S., and Galarneau, T. J., Jr., 2011: Sensitivity in the overland reintensification of Tropical Cyclone Erin (2007) to near-surface soil moisture characteristics. Mon. Wea. Rev., 139, 3848–3870.

Friday, E. W., Jr., 1994: The modernization and associated restructuring of the National Weather Service: An overview. Bull. Amer. Meteor. Soc., 75, 43–52.

Hahn, B. B., Rall, E., and Klinger, D. W., 2003: Cognitive task analysis of the warning forecaster task. Klein Associates Inc., 15 pp. [Available from Klein Associates Inc., 1750 Commerce Center Blvd., North Fairborn, OH 45324-6362.]

Heinselman, P. L., and Torres, S. M., 2011: High-temporal-resolution capabilities of the National Weather Radar Testbed Phased-Array Radar. J. Appl. Meteor. Climatol., 50, 579–593.

Heinselman, P. L., Priegnitz, D. L., Manross, K. L., Smith, T. M., and Adams, R. W., 2008: Rapid sampling of severe storms by the National Weather Radar Testbed Phased Array Radar. Wea. Forecasting, 23, 808–824.

Hoffman, R. R., Coffey, J. W., Ford, K. M., and Novak, J. D., 2006: A method for eliciting, preserving, and sharing the knowledge of forecasters. Wea. Forecasting, 21, 416–428.

Joslyn, S., and Jones, D., 2008: Strategies in naturalistic decision making: A cognitive task analysis of naval weather forecasting. Naturalistic Decision Making and Macrocognition, J. M. Schraagen et al., Eds., Ashgate Publishing, 183–201.

Klein, G., 1998: Sources of Power: How People Make Decisions. The MIT Press, 330 pp.

LaDue, D. S., Heinselman, P. L., and Newman, J. F., 2010: Strengths and limitations of current radar systems for two stakeholder groups in the southern plains. Bull. Amer. Meteor. Soc., 91, 899–910.

Lakshmanan, V., Smith, T., Stumpf, G., and Hondl, K., 2007: The Warning Decision Support System–Integrated Information. Wea. Forecasting, 22, 596–612.

Lusk, C. M., Stewart, T. R., Hammond, K. R., and Potts, R. J., 1990: Judgment and decision making in dynamic tasks: The case of forecasting the microburst. Wea. Forecasting, 5, 627–639.

Magsig, M. A., and Page, E. M., 2002: Development and implementation of the NWS warning event simulator version 1.0. Preprints, Interactive Symp. on AWIPS, Orlando, FL, Amer. Meteor. Soc., J236–J238.

Mertens, D. M., 2005: Research and Evaluation in Education and Psychology. 2nd ed. Sage Publications, 507 pp.

Moller, A. R., Doswell, C. A., III, Foster, M. P., and Woodall, G. R., 1994: The operational recognition of supercell thunderstorm environments and storm structures. Wea. Forecasting, 9, 327–347.

Monteverdi, J. P., and Edwards, R., 2010: The redevelopment of a warm core structure in Erin: A case of inland tropical storm formation. Electron. J. Severe Storms Meteor., 5, 1–18.

Morss, R. E., and Ralph, F. M., 2007: Use of information by National Weather Service forecasters and emergency managers during CALJET and PACJET-2001. Wea. Forecasting, 22, 539–555.

Morss, R. E., Wilhelmi, O. V., and Downton, M. W., 2005: Flood risk, uncertainty, and scientific information for decision making: Lessons learned from an interdisciplinary project. Bull. Amer. Meteor. Soc., 86, 1593–1601.

Newman, J. F., and Heinselman, P. L., 2011: Evolution of a quasi-linear convective system sampled by phased-array radar. Preprints, 27th Conf. on Interactive Information Processing Systems (IIPS), Seattle, WA, Amer. Meteor. Soc., 13B.5. [Available online at http://ams.confex.com/ams/91Annual/webprogram/Paper184493.html.]

Newman, J. F., and Heinselman, P. L., 2012: Evolution of a quasi-linear convective system sampled by phased-array radar. Mon. Wea. Rev., in press.

NOAA, 2006: Operational modes and volume coverage patterns. Doppler Radar Meteorological Observations: Federal Meteorological Handbook 11; Part C: WSR-88D Products and Algorithms, FCM-H11C-2006, Office of the Federal Coordinator for Meteorological Services and Supporting Research, Rockville, MD, 5-1–5-23.

OFCM, 2006: Federal research and development needs and priorities for phased array radar. Rep. FCM-R25-2006, Interdepartmental Committee for Meteorological Services and Supporting Research, Committee for Cooperative Research Joint Action Group for Phased Array Radar Project, 62 pp. [Available online at www.ofcm.gov/r25-mpar/fcm-r25.htm.]

Ortega, K. L., Smith, T. M., Manross, K. L., Kolodziej, A. G., Scharfenberg, K. A., Witt, A., and Gourley, J. J., 2009: The Severe Hazards Analysis and Verification Experiment. Bull. Amer. Meteor. Soc., 90, 1519–1530.

Pliske, R., Klinger, D. W., Hutton, R., Crandall, B., Knight, B., and Klein, G., 1997: Understanding skilled weather forecasting: Implications for training and the design of forecasting tools. Final Contractor Rep. AL/HR-CR-1997-0003, 122 pp.

Roebber, P. J., Schultz, D. M., and Romero, R., 2002: Synoptic regulation of the 3 May 1999 tornado outbreak. Wea. Forecasting, 17, 399–429.

Scharfenberg, K. A., and Coauthors, 2005: The Joint Polarization Experiment: Polarimetric radar in forecasting and warning decision making. Wea. Forecasting, 20, 775–788.

Schraagen, J. M., Militello, L. G., Ormerod, T., and Lipshitz, R., Eds., 2008: Naturalistic Decision Making and Macrocognition. Ashgate Publishing, 411 pp.

Singleton, R. A., and Straits, B., 2005: Approaches to Social Research. 4th ed. Oxford University Press, 640 pp.

Smith, T. M., and Elmore, K. L., 2004: The use of radial velocity derivatives to diagnose rotation and divergence. Preprints, 11th Conf. on Aviation, Range, and Aerospace Meteorology, Hyannis, MA, Amer. Meteor. Soc., P5.6. [Available online at http://ams.confex.com/ams/pdfpapers/81827.pdf.]

Speheger, D. A., Doswell, C. A., III, and Stumpf, G. J., 2002: The tornadoes of 3 May 1999: Event verification in central Oklahoma and related issues. Wea. Forecasting, 17, 362–381.

Steadham, R., 2008: 2008 National Weather Service field study. Part 1: Volume coverage pattern usage. Radar Operations Center, Norman, OK, 28 pp. [Available from WSR-88D Radar Operations Center, 1200 Westheimer Dr., Norman, OK 73069.]

Whiton, R. C., Smith, P. L., Bigler, S. G., Wilk, K. E., and Harbuck, A. C., 1998: History of operational use of weather radar by U.S. Weather Services. Part II: Development of operational Doppler weather radars. Wea. Forecasting, 13, 244–252.

Yu, T.-Y., Orescanin, M. B., Curtis, C. D., Zrnić, D. S., and Forsyth, D. E., 2007: Beam multiplexing using the phased-array weather radar. J. Atmos. Oceanic Technol., 24, 616–626.

Zrnić, D. S., and Coauthors, 2007: Agile-beam phased array radar for weather observations. Bull. Amer. Meteor. Soc., 88, 1753–1766.

1 The plan for this study was approved by the University of Oklahoma’s Office for Human Research Participant Protection (a.k.a. Institutional Review Board).

2 The term case refers to both PAR data and operational nonradar data.

3 The two researchers with meteorological backgrounds (DL and PH) followed standard practice and compared their codes on a section of data. Because the comparison indicated some nuanced differences in the use of similar codes, both researchers coded all data and came to a consensus. One author (HL) focused on nontechnical codes and the analysis of those factors.
